Package | Description
---|---
org.apache.hadoop.hive.ql.optimizer.spark |
org.apache.hadoop.hive.ql.parse.spark |
Modifier and Type | Method and Description
---|---
static void | SparkSortMergeJoinFactory.annotateMapWork(GenSparkProcContext context, MapWork mapWork, SMBMapJoinOperator smbMapJoinOp, TableScanOperator ts, boolean local). Annotates the MapWork; the inputs are an SMBMapJoinOperator that is part of the MapWork and its root TableScanOperator.
Modifier and Type | Method and Description
---|---
void | GenSparkUtils.annotateMapWork(GenSparkProcContext context). Fills the MapWork with 'local' work and bucket information for SMB join.
MapWork | GenSparkUtils.createMapWork(GenSparkProcContext context, Operator<?> root, SparkWork sparkWork, PrunedPartitionList partitions)
MapWork | GenSparkUtils.createMapWork(GenSparkProcContext context, Operator<?> root, SparkWork sparkWork, PrunedPartitionList partitions, boolean deferSetup)
ReduceWork | GenSparkUtils.createReduceWork(GenSparkProcContext context, Operator<?> root, SparkWork sparkWork)
void | GenSparkUtils.processFileSink(GenSparkProcContext context, FileSinkOperator fileSink)
void | GenSparkUtils.processPartitionPruningSink(GenSparkProcContext context, SparkPartitionPruningSinkOperator pruningSink). Populates partition pruning information from the pruning sink operator into the target MapWork (the MapWork for the big-table side).
void | GenSparkUtils.removeUnionOperators(GenSparkProcContext context, BaseWork work)
protected void | GenSparkUtils.setupMapWork(MapWork mapWork, GenSparkProcContext context, PrunedPartitionList partitions, TableScanOperator root, String alias_id)
protected void | GenSparkUtils.setupReduceSink(GenSparkProcContext context, ReduceWork reduceWork, ReduceSinkOperator reduceSink)
Constructor and Description
---
GenSparkWorkWalker(Dispatcher disp, GenSparkProcContext ctx). Constructor of the walker; the dispatcher is passed in.
Copyright © 2022 The Apache Software Foundation. All rights reserved.