Package | Description |
---|---|
org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
org.apache.hadoop.hive.ql.exec.persistence | |
org.apache.hadoop.hive.ql.exec.spark | |
org.apache.hadoop.hive.ql.exec.vector.reducesink | |
Modifier and Type | Field and Description |
---|---|
protected HiveKey | FileSinkOperator.key |
protected HiveKey | ReduceSinkOperator.keyWritable |
Modifier and Type | Field and Description |
---|---|
protected HivePartitioner<HiveKey,Object> | FileSinkOperator.prtner |
Modifier and Type | Method and Description |
---|---|
HiveKey | TopNHash.getVectorizedKeyToForward(int batchIndex): after the vectorized batch is processed, can return the key that caused a particular row to be forwarded. |
HiveKey | PTFTopNHash.getVectorizedKeyToForward(int batchIndex) |
protected HiveKey | ReduceSinkOperator.toHiveKey(Object obj, int tag, Integer distLength) |
Modifier and Type | Method and Description |
---|---|
int | PTFTopNHash._tryStoreKey(HiveKey key, boolean partColsIsNull, int batchIndex) |
void | PartitionKeySampler.collect(HiveKey key, Object value) |
protected int | ReduceSinkOperator.computeMurmurHash(HiveKey firstKey) |
int | HiveTotalOrderPartitioner.getPartition(HiveKey key, Object value, int numPartitions) |
int | TopNHash.tryStoreKey(HiveKey key, boolean partColsIsNull): tries to store the non-vectorized key. |
int | PTFTopNHash.tryStoreKey(HiveKey key, boolean partColsIsNull) |
void | TopNHash.tryStoreVectorizedKey(HiveKey key, boolean partColsIsNull, int batchIndex): tries to put the key from the current vectorized batch into the heap. |
void | PTFTopNHash.tryStoreVectorizedKey(HiveKey key, boolean partColsIsNull, int batchIndex) |
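As the method names above suggest, TopNHash maintains a bounded heap of keys: tryStoreKey reports whether a candidate key currently belongs among the smallest N, so the caller can drop non-qualifying rows early. The following is a rough, self-contained sketch of that idea only; the TopNSketch class, plain int keys, and return codes are hypothetical stand-ins, not Hive's actual implementation.

```java
import java.util.PriorityQueue;

/**
 * Hypothetical sketch of a top-N key filter in the spirit of TopNHash:
 * keep at most n smallest keys; tryStoreKey reports whether the caller
 * should forward the row. Not Hive code.
 */
class TopNSketch {
    static final int FORWARD = 0;   // key accepted, forward the row
    static final int EXCLUDE = -1;  // key too large, drop the row

    private final int n;
    // Max-heap, so the largest retained key is cheap to inspect and evict.
    private final PriorityQueue<Integer> heap =
        new PriorityQueue<>((a, b) -> Integer.compare(b, a));

    TopNSketch(int n) { this.n = n; }

    int tryStoreKey(int key) {
        if (heap.size() < n) {      // heap not yet full: always keep
            heap.add(key);
            return FORWARD;
        }
        if (key < heap.peek()) {    // smaller than current largest: swap in
            heap.poll();
            heap.add(key);
            return FORWARD;
        }
        return EXCLUDE;             // not in the top n
    }
}
```

The real class additionally handles vectorized batches (tryStoreVectorizedKey / getVectorizedKeyToForward), where the accept/reject decision is recorded per batch index instead of being returned immediately.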
Modifier and Type | Method and Description |
---|---|
ObjectPair<HiveKey,org.apache.hadoop.io.BytesWritable> | KeyValueContainer.next() |
Modifier and Type | Method and Description |
---|---|
void | KeyValueContainer.add(HiveKey key, org.apache.hadoop.io.BytesWritable value) |
Modifier and Type | Method and Description |
---|---|
static HiveKey | SparkUtilities.copyHiveKey(HiveKey key) |
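copyHiveKey returns a fresh copy of a key. Such copies are typically needed because Hadoop record readers reuse Writable buffers between records, so any key held past the current record must be deep-copied. A minimal sketch of such a deep copy, using a hypothetical KeySketch stand-in rather than the real HiveKey:

```java
import java.util.Arrays;

/** Hypothetical stand-in for a byte-backed shuffle key; not the real HiveKey. */
class KeySketch {
    byte[] bytes;   // serialized key content
    int hash;       // cached partitioning hash

    KeySketch(byte[] bytes, int hash) {
        this.bytes = bytes;
        this.hash = hash;
    }

    /** Deep copy: the source buffer may be overwritten by the reader later. */
    static KeySketch copy(KeySketch src) {
        return new KeySketch(Arrays.copyOf(src.bytes, src.bytes.length), src.hash);
    }
}
```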
Modifier and Type | Method and Description |
---|---|
Iterator<scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable>> | HiveMapFunction.call(Iterator<scala.Tuple2<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable>> it) |
Iterator<scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable>> | HiveReduceFunction.call(Iterator<scala.Tuple2<HiveKey,V>> it) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | MapTran.doTransform(org.apache.spark.api.java.JavaPairRDD<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> input) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | ReduceTran.doTransform(org.apache.spark.api.java.JavaPairRDD<HiveKey,V> input) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | SparkPlan.generateGraph() |
scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable> | HiveBaseFunctionResultList.next() |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | RepartitionShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,V> | SparkShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | GroupByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | SortByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | ShuffleTran.transform(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input) |
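The Shuffler return types above show what each strategy produces from the same (HiveKey, BytesWritable) input: GroupByShuffler yields one Iterable of values per key, while SortByShuffler keeps individual key/value pairs but returns them ordered by key. A Spark-free sketch of that distinction using plain Java collections (the ShuffleSketch class and String/Integer pairs are hypothetical illustrations, not Hive code):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Hypothetical illustration of group-by-key vs. sort-by-key shuffles. */
class ShuffleSketch {
    /** Group values per key, as a groupByKey-style shuffle would. */
    static Map<String, List<Integer>> groupBy(List<Map.Entry<String, Integer>> input) {
        return input.stream().collect(Collectors.groupingBy(
            Map.Entry::getKey,
            Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
    }

    /** Keep pairs intact but order them by key, as a sortByKey-style shuffle would. */
    static List<Map.Entry<String, Integer>> sortBy(List<Map.Entry<String, Integer>> input) {
        return input.stream()
            .sorted(Map.Entry.comparingByKey())
            .collect(Collectors.toList());
    }
}
```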
Modifier and Type | Method and Description |
---|---|
void | HiveBaseFunctionResultList.collect(HiveKey key, org.apache.hadoop.io.BytesWritable value) |
static HiveKey | SparkUtilities.copyHiveKey(HiveKey key) |
Modifier and Type | Method and Description |
---|---|
Iterator<scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable>> | HiveReduceFunction.call(Iterator<scala.Tuple2<HiveKey,V>> it) |
void | HiveVoidFunction.call(scala.Tuple2<HiveKey,org.apache.hadoop.io.BytesWritable> t) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | ReduceTran.doTransform(org.apache.spark.api.java.JavaPairRDD<HiveKey,V> input) |
protected void | HiveReduceFunctionResultList.processNextRecord(scala.Tuple2<HiveKey,V> inputRecord) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | RepartitionShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,V> | SparkShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,Iterable<org.apache.hadoop.io.BytesWritable>> | GroupByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | SortByShuffler.shuffle(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input, int numPartitions) |
org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> | ShuffleTran.transform(org.apache.spark.api.java.JavaPairRDD<HiveKey,org.apache.hadoop.io.BytesWritable> input) |
Constructor and Description |
---|
HiveReduceFunctionResultList(Iterator<scala.Tuple2<HiveKey,V>> inputIterator, SparkReduceRecordHandler reducer): instantiates a result-set Iterable for Reduce function output. |
Modifier and Type | Field and Description |
---|---|
protected HiveKey | VectorReduceSinkCommonOperator.keyWritable |
Copyright © 2021 The Apache Software Foundation. All rights reserved.