Package | Description |
---|---|
org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
org.apache.hadoop.hive.ql.exec.persistence | |
org.apache.hadoop.hive.ql.exec.spark | |
Modifier and Type | Field and Description |
---|---|
protected HiveKey | FileSinkOperator.key |
protected HiveKey | ReduceSinkOperator.keyWritable |
Modifier and Type | Field and Description |
---|---|
protected HivePartitioner<HiveKey,Object> | FileSinkOperator.prtner |
Modifier and Type | Method and Description |
---|---|
HiveKey | TopNHash.getVectorizedKeyToForward(int batchIndex). After a vectorized batch is processed, can return the key that caused a particular row to be forwarded (see the sketch after this table). |
HiveKey | PTFTopNHash.getVectorizedKeyToForward(int batchIndex) |
protected HiveKey | ReduceSinkOperator.toHiveKey(Object obj, int tag, Integer distLength) |
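TopNHash's vectorized path is two-phased: each key in a batch is first offered via tryStoreVectorizedKey (listed in the next table), and once the whole batch has been processed, getVectorizedKeyToForward(batchIndex) retrieves the key for a row that should be forwarded. A minimal sketch of that pattern, assuming an already-initialized TopNHash; buildKeyForRow and forward are hypothetical helpers, and the null check merely stands in for the operator's real per-row result bookkeeping:

```java
import org.apache.hadoop.hive.ql.exec.TopNHash;
import org.apache.hadoop.hive.ql.io.HiveKey;

public class VectorizedTopNSketch {
  // Phase 1: offer every key of the batch; phase 2: fetch the survivors.
  // Checked exceptions are collapsed to Exception for brevity.
  static void processBatch(TopNHash topNHash, int batchSize) throws Exception {
    for (int i = 0; i < batchSize; i++) {
      HiveKey key = buildKeyForRow(i);                  // hypothetical helper
      topNHash.tryStoreVectorizedKey(key, false, i);
    }
    for (int i = 0; i < batchSize; i++) {
      HiveKey toForward = topNHash.getVectorizedKeyToForward(i);
      if (toForward != null) {                          // placeholder survival check
        forward(toForward);                             // hypothetical downstream call
      }
    }
  }

  static HiveKey buildKeyForRow(int i) { return new HiveKey(); } // stub
  static void forward(HiveKey key) { }                           // stub
}
```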
Modifier and Type | Method and Description |
---|---|
int | PTFTopNHash._tryStoreKey(HiveKey key, boolean partColsIsNull, int batchIndex) |
void | PartitionKeySampler.collect(HiveKey key, Object value) |
protected int | ReduceSinkOperator.computeMurmurHash(HiveKey firstKey) |
int | HiveTotalOrderPartitioner.getPartition(HiveKey key, Object value, int numPartitions) |
int | TopNHash.tryStoreKey(HiveKey key, boolean partColsIsNull). Tries to store the non-vectorized key (see the sketch after this table). |
int | PTFTopNHash.tryStoreKey(HiveKey key, boolean partColsIsNull) |
void | TopNHash.tryStoreVectorizedKey(HiveKey key, boolean partColsIsNull, int batchIndex). Tries to put the key from the current vectorized batch into the heap. |
void | PTFTopNHash.tryStoreVectorizedKey(HiveKey key, boolean partColsIsNull, int batchIndex) |
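On the row-at-a-time path, tryStoreKey returns an int verdict instead. A minimal sketch of acting on it, assuming the FORWARD and EXCLUDE result-code constants from Hive's TopNHash (those constants and the store-the-value-afterwards convention are assumptions, not shown on this page):

```java
import org.apache.hadoop.hive.ql.exec.TopNHash;
import org.apache.hadoop.hive.ql.io.HiveKey;

public class RowTopNSketch {
  // Checked exceptions are collapsed to Exception for brevity.
  static void offerRow(TopNHash topNHash, HiveKey key) throws Exception {
    int result = topNHash.tryStoreKey(key, false);
    if (result == TopNHash.FORWARD) {
      forward(key);                 // hypothetical downstream call
    } else if (result == TopNHash.EXCLUDE) {
      // The row did not make the top N; drop it.
    } else {
      // A non-negative result is assumed to be the slot index where the
      // key was stored, so the matching value should be stored as well.
      storeValue(result);           // hypothetical
    }
  }

  static void forward(HiveKey key) { }  // stub
  static void storeValue(int slot) { }  // stub
}
```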
Modifier and Type | Method and Description |
---|---|
ObjectPair<HiveKey,org.apache.hadoop.io.BytesWritable> | KeyValueContainer.next() |
Modifier and Type | Method and Description |
---|---|
void | KeyValueContainer.add(HiveKey key, org.apache.hadoop.io.BytesWritable value) |
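Together, add and next give KeyValueContainer a simple write-then-read protocol for buffering HiveKey/value pairs. A round-trip sketch; the spill-directory constructor argument and the hasNext() pairing are assumptions, since neither appears on this page:

```java
import org.apache.hadoop.hive.common.ObjectPair;
import org.apache.hadoop.hive.ql.exec.persistence.KeyValueContainer;
import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.hadoop.io.BytesWritable;

public class KeyValueContainerSketch {
  static void roundTrip() {
    KeyValueContainer container = new KeyValueContainer("/tmp/hive-spill"); // assumed ctor

    byte[] keyBytes = {1, 2, 3};
    HiveKey key = new HiveKey();            // HiveKey extends BytesWritable
    key.set(keyBytes, 0, keyBytes.length);
    container.add(key, new BytesWritable(new byte[]{4, 5}));

    while (container.hasNext()) {           // hasNext() assumed to pair with next()
      ObjectPair<HiveKey, BytesWritable> pair = container.next();
      consume(pair.getFirst(), pair.getSecond());  // hypothetical consumer
    }
  }

  static void consume(HiveKey k, BytesWritable v) { }  // stub
}
```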
Modifier and Type | Method and Description |
---|---|
static HiveKey | SparkUtilities.copyHiveKey(HiveKey key) |
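copyHiveKey exists because Hadoop Writable instances are commonly reused across records: a key that must outlive the current iteration has to be deep-copied, not aliased. A sketch of that buffering pattern (the surrounding loop is illustrative, not from this page):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.hive.ql.exec.spark.SparkUtilities;
import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.hadoop.io.BytesWritable;
import scala.Tuple2;

public class CopyKeySketch {
  static List<HiveKey> bufferKeys(Iterator<Tuple2<HiveKey, BytesWritable>> records) {
    List<HiveKey> keys = new ArrayList<>();
    while (records.hasNext()) {
      // The producer may overwrite the same HiveKey instance on the next
      // call to next(), so store a copy rather than the live object.
      keys.add(SparkUtilities.copyHiveKey(records.next()._1()));
    }
    return keys;
  }
}
```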
Modifier and Type | Method and Description |
---|---|
Iterable<scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable>> | HiveMapFunction.call(Iterator<scala.Tuple2<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable>> it) |
Iterable<scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable>> | HiveReduceFunction.call(Iterator<scala.Tuple2<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>>> it) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | SparkPlan.generateGraph() |
scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable> | HiveBaseFunctionResultList.ResultIterator.next() |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | SparkShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | SortByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | GroupByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | MapTran.transform(org.apache.spark.api.java.JavaPairRDD<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> input) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | ShuffleTran.transform(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | ReduceTran.transform(org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> input) |
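The signatures above show how the Tran stages compose by key type: MapTran turns raw BytesWritable pairs into HiveKey-keyed pairs, ShuffleTran (backed by a SparkShuffler such as SortByShuffler or GroupByShuffler) groups values by HiveKey, and ReduceTran collapses each group back to single pairs. A type-level sketch of that chain, assuming already-configured stage instances (their construction is not part of this page):

```java
import org.apache.hadoop.hive.ql.exec.spark.MapTran;
import org.apache.hadoop.hive.ql.exec.spark.ReduceTran;
import org.apache.hadoop.hive.ql.exec.spark.ShuffleTran;
import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.hadoop.io.BytesWritable;
import org.apache.spark.api.java.JavaPairRDD;

public class TranChainSketch {
  // Each transform's output type is exactly the next stage's input type.
  static JavaPairRDD<HiveKey, BytesWritable> runStages(
      JavaPairRDD<BytesWritable, BytesWritable> input,
      MapTran map, ShuffleTran shuffle, ReduceTran reduce) {
    JavaPairRDD<HiveKey, BytesWritable> mapped = map.transform(input);
    JavaPairRDD<HiveKey, Iterable<BytesWritable>> grouped = shuffle.transform(mapped);
    return reduce.transform(grouped);
  }
}
```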
Modifier and Type | Method and Description |
---|---|
void | HiveBaseFunctionResultList.collect(HiveKey key, org.apache.hadoop.io.BytesWritable value) |
static HiveKey | SparkUtilities.copyHiveKey(HiveKey key) |
Iterable<scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable>> | HiveReduceFunction.call(Iterator<scala.Tuple2<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>>> it) |
void | HiveVoidFunction.call(scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable> t) |
protected void | HiveReduceFunctionResultList.processNextRecord(scala.Tuple2<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> inputRecord) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | SparkShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | SortByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | GroupByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | ShuffleTran.transform(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | ReduceTran.transform(org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> input) |
Constructor and Description |
---|
HiveReduceFunctionResultList(Iterator<scala.Tuple2<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>>> inputIterator, SparkReduceRecordHandler reducer). Instantiates the result-set Iterable for Reduce function output (see the sketch after this table). |
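Per the description above, the constructor wraps the reduce-side input iterator so that consuming the resulting Iterable lazily drives each (key, values) group through the record handler. A minimal sketch, assuming the SparkReduceRecordHandler has been initialized elsewhere:

```java
import java.util.Iterator;
import org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList;
import org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler;
import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.hadoop.io.BytesWritable;
import scala.Tuple2;

public class ReduceResultListSketch {
  // The returned object serves as the Iterable that HiveReduceFunction.call
  // hands back to Spark, per the method tables above.
  static HiveReduceFunctionResultList wrapReduceInput(
      Iterator<Tuple2<HiveKey, Iterable<BytesWritable>>> input,
      SparkReduceRecordHandler reducer) {
    return new HiveReduceFunctionResultList(input, reducer);
  }
}
```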