Modifier and Type | Method and Description |
---|---|
Object |
AccumuloRangeGenerator.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected Object |
AccumuloRangeGenerator.processExpression(ExprNodeGenericFuncDesc func,
Object[] nodeOutputs) |
Modifier and Type | Class and Description |
---|---|
class |
AmbiguousMethodException
Exception thrown by the UDF and UDAF method resolvers when a unique method
cannot be found.
|
class |
NoMatchingMethodException
Exception thrown by the UDF and UDAF method resolvers when no matching method
is found.
|
class |
UDFArgumentException
Exception thrown when a UDF argument is invalid.
|
class |
UDFArgumentLengthException
Exception thrown when a UDF is invoked with the wrong number of arguments.
|
class |
UDFArgumentTypeException
Exception thrown when UDF arguments have the wrong types.
|
Modifier and Type | Method and Description |
---|---|
static String |
Utilities.getDatabaseName(String dbTableName)
Accepts a qualified name of the form dbname.tablename and returns the dbname part.
|
static String[] |
Utilities.getDbTableName(String dbtable)
Extracts the db and table name from a dbtable string, where db and table are separated by "."
If there is no db name part, the current session's default db is used.
|
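The qualified-name splitting described above can be sketched as follows. This is a hypothetical re-implementation for illustration only, not Hive's actual `Utilities` code; the exact fallback behavior for names without a db part is an assumption based on the description.

```java
// Illustrative sketch of splitting "dbname.tablename"; not Hive's Utilities code.
public class QualifiedNameSketch {

    /** Splits "db.table" into {db, table}; falls back to defaultDb when no db part exists. */
    static String[] getDbTableName(String defaultDb, String dbtable) {
        String[] parts = dbtable.split("\\.");
        if (parts.length == 1) {
            // No db part: assume the session's default db is used (hedged assumption).
            return new String[] { defaultDb, dbtable };
        }
        if (parts.length == 2) {
            return parts;
        }
        throw new IllegalArgumentException("Invalid table name: " + dbtable);
    }
}
```

`getDatabaseName` and `getTableName` then correspond to taking element 0 or 1 of this pair.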
static String[] |
Utilities.getDbTableName(String defaultDb,
String dbtable) |
FunctionInfo |
Registry.getFunctionInfo(String functionName)
Looks up the function name in the registry.
|
static FunctionInfo |
FunctionRegistry.getFunctionInfo(String functionName) |
static Set<String> |
FunctionRegistry.getFunctionSynonyms(String funcName)
Returns the set of synonyms of the supplied function.
|
void |
Registry.getFunctionSynonyms(String funcName,
FunctionInfo funcInfo,
Set<String> synonyms)
Adds to the set of synonyms of the supplied function.
|
GenericUDAFEvaluator |
Registry.getGenericUDAFEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns)
Get the GenericUDAF evaluator for the name and argumentClasses.
|
static GenericUDAFEvaluator |
FunctionRegistry.getGenericUDAFEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns)
Get the GenericUDAF evaluator for the name and argumentClasses.
|
GenericUDAFResolver |
Registry.getGenericUDAFResolver(String functionName) |
static GenericUDAFResolver |
FunctionRegistry.getGenericUDAFResolver(String functionName) |
GenericUDAFEvaluator |
Registry.getGenericWindowingEvaluator(String functionName,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns) |
static GenericUDAFEvaluator |
FunctionRegistry.getGenericWindowingEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns) |
static String |
FunctionRegistry.getNormalizedFunctionName(String fn) |
static TableFunctionResolver |
FunctionRegistry.getTableFunctionResolver(String functionName) |
static String |
Utilities.getTableName(String dbTableName)
Accepts a qualified name of the form dbname.tablename and returns the tablename part.
|
static FunctionInfo |
FunctionRegistry.getTemporaryFunctionInfo(String functionName) |
WindowFunctionInfo |
Registry.getWindowFunctionInfo(String functionName) |
static WindowFunctionInfo |
FunctionRegistry.getWindowFunctionInfo(String functionName) |
static TableFunctionResolver |
FunctionRegistry.getWindowingTableFunction() |
static boolean |
FunctionRegistry.impliesOrder(String functionName)
Both UDF and UDAF functions can imply order for analytical functions.
|
static boolean |
FunctionRegistry.isRankingFunction(String name)
Checks whether the given function is a ranking function.
|
static boolean |
FunctionRegistry.isTableFunction(String functionName) |
static boolean |
FunctionRegistry.pivotResult(String functionName) |
void |
Operator.removeChildAndAdoptItsChildren(Operator<? extends OperatorDesc> child)
Removes a child operator and splices all of the child's children into its place.
|
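The adopt-the-grandchildren splice that `removeChildAndAdoptItsChildren` describes can be illustrated on a toy tree. The `Node` class below is an assumption for illustration, not Hive's `Operator` API.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of "remove a child and adopt its children"; not Hive's Operator class.
public class SpliceSketch {

    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    /** Removes child from parent and inserts the child's children at its old position. */
    static void removeChildAndAdopt(Node parent, Node child) {
        int pos = parent.children.indexOf(child);
        if (pos < 0) {
            throw new IllegalArgumentException("not a child of this parent");
        }
        parent.children.remove(pos);
        parent.children.addAll(pos, child.children); // grandchildren take the child's slot
    }
}
```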
static void |
Utilities.reworkMapRedWork(Task<? extends Serializable> task,
boolean reworkMapredWork,
HiveConf conf)
The check performed here is not clean.
|
static void |
Utilities.validateColumnNames(List<String> colNames,
List<String> checkCols) |
Modifier and Type | Method and Description |
---|---|
int |
TypeRule.cost(Stack<Node> stack) |
int |
RuleRegExp.cost(Stack<Node> stack)
This function returns the cost of the rule for the specified stack.
|
int |
RuleExactMatch.cost(Stack<Node> stack)
This function returns the cost of the rule for the specified stack.
|
int |
Rule.cost(Stack<Node> stack) |
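The `cost` methods above drive rule dispatch: for the current stack, each rule reports a cost, and the dispatcher fires the cheapest matching rule. A minimal sketch of that selection follows, with illustrative names rather than Hive's `Rule`/`Dispatcher` interfaces, and assuming a negative cost means "no match".

```java
import java.util.Deque;
import java.util.List;

// Illustrative rule-cost dispatch: lowest non-negative cost wins.
public class RuleCostSketch {

    interface Rule {
        int cost(Deque<String> stack); // negative = rule does not match this stack
    }

    /** Returns the matching rule with the lowest cost, or null if none match. */
    static Rule cheapest(List<Rule> rules, Deque<String> stack) {
        Rule best = null;
        int bestCost = Integer.MAX_VALUE;
        for (Rule r : rules) {
            int c = r.cost(stack);
            if (c >= 0 && c < bestCost) {
                bestCost = c;
                best = r;
            }
        }
        return best;
    }
}
```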
void |
DefaultGraphWalker.dispatch(Node nd,
Stack<Node> ndStack)
Dispatch the current operator.
|
Object |
Dispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs)
Dispatcher function.
|
Object |
DefaultRuleDispatcher.dispatch(Node nd,
Stack<Node> ndStack,
Object... nodeOutputs)
Dispatcher function.
|
void |
TaskGraphWalker.dispatch(Node nd,
Stack<Node> ndStack,
TaskGraphWalker.TaskGraphWalkerContext walkerCtx)
Dispatch the current operator.
|
<T> T |
DefaultGraphWalker.dispatchAndReturn(Node nd,
Stack<Node> ndStack)
Returns the dispatch result.
|
Object |
NodeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Generic process for all ops that don't have specific implementations.
|
Object |
CompositeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
void |
TaskGraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
LevelOrderWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
GraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
DefaultGraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
TaskGraphWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
PreOrderWalker.walk(Node nd)
Walk the current operator and its descendants.
|
void |
PreOrderOnceWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
ForwardWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
DefaultGraphWalker.walk(Node nd)
Walk the current operator and its descendants.
|
Modifier and Type | Class and Description |
---|---|
class |
Table.ValidationFailureSemanticException
Marker SemanticException, so that processing which tolerates table validation failures
can detect and recover from this type of SemanticException.
|
Modifier and Type | Method and Description |
---|---|
void |
Table.validatePartColumnNames(Map<String,String> spec,
boolean shouldBeFull) |
Modifier and Type | Method and Description |
---|---|
protected boolean |
AbstractSMBJoinProc.canConvertBucketMapJoinToSMBJoin(MapJoinOperator mapJoinOp,
Stack<Node> stack,
SortBucketJoinProcCtx smbJoinContext,
Object... nodeOutputs) |
protected boolean |
AbstractSMBJoinProc.canConvertJoinToBucketMapJoin(JoinOperator joinOp,
SortBucketJoinProcCtx context) |
protected boolean |
AbstractSMBJoinProc.canConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext) |
protected boolean |
AbstractBucketJoinProc.canConvertMapJoinToBucketMapJoin(MapJoinOperator mapJoinOp,
BucketJoinProcCtx context) |
static void |
BucketMapjoinProc.checkAndConvertBucketMapJoin(ParseContext pGraphContext,
MapJoinOperator mapJoinOp,
String baseBigAlias,
List<String> joinAliases)
Check if a mapjoin can be converted to a bucket mapjoin,
and do the conversion if possible.
|
protected boolean |
AbstractBucketJoinProc.checkConvertBucketMapJoin(BucketJoinProcCtx context,
Map<String,Operator<? extends OperatorDesc>> aliasToOpInfo,
Map<Byte,List<ExprNodeDesc>> keysMap,
String baseBigAlias,
List<String> joinAliases) |
protected boolean |
AbstractSMBJoinProc.checkConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext) |
protected GroupByOptimizer.GroupByOptimizerSortMatch |
GroupByOptimizer.SortGroupByProcessor.checkSortGroupBy(Stack<Node> stack,
GroupByOperator groupByOp) |
MapJoinOperator |
ConvertJoinMapJoin.convertJoinMapJoin(JoinOperator joinOp,
OptimizeTezProcContext context,
int bigTablePosition,
boolean removeReduceSink) |
static MapJoinOperator |
MapJoinProcessor.convertJoinOpMapJoinOp(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin) |
static MapJoinOperator |
MapJoinProcessor.convertJoinOpMapJoinOp(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean adjustParentsChildren) |
protected MapJoinOperator |
AbstractSMBJoinProc.convertJoinToBucketMapJoin(JoinOperator joinOp,
SortBucketJoinProcCtx joinContext) |
protected void |
AbstractSMBJoinProc.convertJoinToSMBJoin(JoinOperator joinOp,
SortBucketJoinProcCtx smbJoinContext) |
MapJoinOperator |
SparkMapJoinProcessor.convertMapJoin(HiveConf conf,
JoinOperator op,
boolean leftSrc,
String[] baseSrc,
List<String> mapAliases,
int bigTablePos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
Converts a regular join to a map-side join.
|
MapJoinOperator |
MapJoinProcessor.convertMapJoin(HiveConf conf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
Converts a regular join to a map-side join.
|
protected void |
AbstractBucketJoinProc.convertMapJoinToBucketMapJoin(MapJoinOperator mapJoinOp,
BucketJoinProcCtx context) |
static MapJoinOperator |
MapJoinProcessor.convertSMBJoinToMapJoin(HiveConf hconf,
SMBMapJoinOperator smbJoinOp,
int bigTablePos,
boolean noCheckOuterJoin)
Converts a sort-merge join to a map-side join.
|
static MapWork |
GenMapRedUtils.createMergeTask(FileSinkDesc fsInputDesc,
org.apache.hadoop.fs.Path finalName,
boolean hasDynamicPartitions,
CompilationOpContext ctx)
Creates a block-level merge task for RCFile or a stripe-level merge task for
ORC files.
|
static void |
GenMapRedUtils.createMRWorkForMergingFiles(FileSinkOperator fsInput,
org.apache.hadoop.fs.Path finalName,
DependencyCollectionTask dependencyTask,
List<Task<MoveWork>> mvTasks,
HiveConf conf,
Task<? extends Serializable> currTask) |
List<String> |
ColumnPrunerProcCtx.genColLists(Operator<? extends OperatorDesc> curOp)
Creates the list of internal column names (these are used in the
RowResolver and differ from the external column names) that are
needed in the subtree.
|
List<String> |
ColumnPrunerProcCtx.genColLists(Operator<? extends OperatorDesc> curOp,
Operator<? extends OperatorDesc> child)
Creates the list of internal column names (these are used in the
RowResolver and differ from the external column names) that are
needed in the subtree.
|
MapJoinOperator |
MapJoinProcessor.generateMapJoinOperator(ParseContext pctx,
JoinOperator op,
int mapJoinPos) |
protected abstract void |
PrunerOperatorFactory.FilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top)
Generate predicate.
|
protected void |
FixedBucketPruningOptimizer.FixedBucketPartitionWalker.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
protected void |
FixedBucketPruningOptimizer.BucketBitsetGenerator.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
static void |
MapJoinProcessor.genLocalWorkForMapJoin(MapredWork newWork,
MapJoinOperator newMapJoinOp,
int mapJoinPos) |
static void |
MapJoinProcessor.genMapJoinOpAndLocalWork(HiveConf conf,
MapredWork newWork,
JoinOperator op,
int mapJoinPos)
Convert the join to a map-join and also generate any local work needed.
|
protected void |
MapJoinProcessor.genSelectPlan(ParseContext pctx,
MapJoinOperator input) |
int |
TableSizeBasedBigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseCtx,
JoinOperator joinOp,
Set<Integer> bigTableCandidates) |
int |
BigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseContext,
JoinOperator joinOp,
Set<Integer> joinCandidates) |
int |
AvgPartitionSizeBasedBigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseCtx,
JoinOperator joinOp,
Set<Integer> bigTableCandidates) |
static List<String> |
AbstractBucketJoinProc.getBucketFilePathsOfPartition(org.apache.hadoop.fs.Path location,
ParseContext pGraphContext) |
static List<Index> |
IndexUtils.getIndexes(Table baseTableMetaData,
List<String> matchIndexTypes)
Get a list of indexes on a table that match given types.
|
static List<org.apache.hadoop.fs.Path> |
GenMapRedUtils.getInputPathsForPartialScan(TableScanOperator tableScanOp,
Appendable aggregationKey) |
int |
ConvertJoinMapJoin.getMapJoinConversionPos(JoinOperator joinOp,
OptimizeTezProcContext context,
int buckets,
boolean skipJoinTypeChecks,
long maxSize) |
static MapJoinDesc |
MapJoinProcessor.getMapJoinDesc(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin) |
static MapJoinDesc |
MapJoinProcessor.getMapJoinDesc(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean adjustParentsChildren) |
List<String> |
ColumnPrunerProcCtx.getSelectColsFromLVJoin(RowSchema rs,
List<String> colList)
Creates the list of internal columns for the select tag of a lateral view.
|
void |
ColumnPrunerProcCtx.handleFilterUnionChildren(Operator<? extends OperatorDesc> curOp)
If the input filter operator has direct children that are union operators,
and the filter's columns differ from the union's,
creates a select operator between them.
|
static void |
GenMapRedUtils.initPlan(ReduceSinkOperator op,
GenMRProcContext opProcCtx)
Initialize the current plan by adding it to root tasks.
|
static void |
GenMapRedUtils.initUnionPlan(GenMRProcContext opProcCtx,
UnionOperator currUnionOp,
Task<? extends Serializable> currTask,
boolean local) |
static void |
GenMapRedUtils.initUnionPlan(ReduceSinkOperator op,
UnionOperator currUnionOp,
GenMRProcContext opProcCtx,
Task<? extends Serializable> unionTask)
Initialize the current union plan.
|
static void |
GenMapRedUtils.joinPlan(Task<? extends Serializable> currTask,
Task<? extends Serializable> oldTask,
GenMRProcContext opProcCtx)
Merges the current task into the old task for the reducer.
|
static void |
GenMapRedUtils.joinUnionPlan(GenMRProcContext opProcCtx,
UnionOperator currUnionOp,
Task<? extends Serializable> currentUnionTask,
Task<? extends Serializable> existingTask,
boolean local) |
static SamplePruner.LimitPruneRetStatus |
SamplePruner.limitPrune(Partition part,
long sizeLimit,
int fileLimit,
Collection<org.apache.hadoop.fs.Path> retPathList)
Tries to generate a subset of the partition's files that reaches the size
limit while keeping the number of files under fileLimit.
|
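The file-subset selection behind `limitPrune` can be sketched greedily: accumulate files until the size limit is reached, and fail if that would require too many files. The method below is an illustrative stand-in; the selection order and the null-on-failure return are assumptions, not Hive's actual `SamplePruner.LimitPruneRetStatus` semantics.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Greedy sketch of picking partition files up to a size limit; not Hive's SamplePruner.
public class LimitPruneSketch {

    /** Returns files whose cumulative size reaches sizeLimit, or null if more than
     *  fileLimit files would be needed. Iteration order is the map's insertion order. */
    static List<String> prune(LinkedHashMap<String, Long> fileSizes, long sizeLimit, int fileLimit) {
        List<String> picked = new ArrayList<>();
        long total = 0;
        for (Map.Entry<String, Long> e : fileSizes.entrySet()) {
            if (total >= sizeLimit) {
                break; // limit already reached
            }
            if (picked.size() == fileLimit) {
                return null; // would need more than fileLimit files
            }
            picked.add(e.getKey());
            total += e.getValue();
        }
        return picked;
    }
}
```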
ParseContext |
Optimizer.optimize()
Invoke all the transformations one-by-one, and alter the query plan.
|
Object |
SparkRemoveDynamicPruningBySize.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SortedMergeJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SortedMergeBucketMapjoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SkewJoinOptimizer.SkewJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SetReducerParallelism.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SamplePruner.FilterPPR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SamplePruner.DefaultPPR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
RemoveDynamicPruningBySize.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
ReduceSinkMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
PrunerOperatorFactory.FilterPruner.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerOperatorFactory.DefaultPruner.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
MergeJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
MapJoinProcessor.CurrentMapJoin.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the current mapjoin in the context.
|
Object |
MapJoinProcessor.MapJoinFS.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the current mapjoin in a list of mapjoins followed by a filesink.
|
Object |
MapJoinProcessor.MapJoinDefault.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the mapjoin in a rejected list.
|
Object |
MapJoinProcessor.Default.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Nothing to do.
|
Object |
GroupByOptimizer.SortGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GroupByOptimizer.SortGroupBySkewProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenMRUnion1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Union Operator encountered.
|
Object |
GenMRTableScan1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Table Sink encountered.
|
Object |
GenMRRedSink3.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Scan encountered.
|
Object |
GenMRRedSink2.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Scan encountered.
|
Object |
GenMRRedSink1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Sink encountered.
|
Object |
GenMROperator.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Reduce Scan encountered.
|
Object |
GenMRFileSink1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
File Sink Operator encountered.
|
Object |
FixedBucketPruningOptimizer.NoopWalker.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
DynamicPartitionPruningOptimization.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ConvertJoinMapJoin.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateFilterProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateGroupByProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateDefaultProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateSelectProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateFileSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateStopProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateReduceSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateTableScanProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerFilterProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerGroupByProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerScriptProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerLimitProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerPTFProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerDefaultProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerTableScanProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerReduceSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerLateralViewJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerLateralViewForwardProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerSelectProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerUnionProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
BucketMapjoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingReduceSinkOptimizer.BucketSortReduceSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
abstract Object |
AbstractSMBJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
abstract Object |
AbstractBucketJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected void |
GroupByOptimizer.SortGroupByProcessor.processGroupBy(GroupByOptimizer.GroupByOptimizerContext ctx,
Stack<Node> stack,
GroupByOperator groupByOp,
int depth) |
static Object |
ReduceSinkMapJoinProc.processReduceSinkToHashJoin(ReduceSinkOperator parentRS,
MapJoinOperator mapJoinOp,
GenTezProcContext context) |
static org.apache.hadoop.fs.Path[] |
SamplePruner.prune(Partition part,
FilterDesc.SampleDesc sampleDescr)
Prunes to get all the files in the partition that satisfy the TABLESAMPLE
clause.
|
static void |
GenMapRedUtils.setMapWork(MapWork plan,
ParseContext parseCtx,
Set<ReadEntity> inputs,
PrunedPartitionList partsList,
TableScanOperator tsOp,
String alias_id,
HiveConf conf,
boolean local)
Initializes MapWork.
|
static void |
GenMapRedUtils.setTaskPlan(String path,
String alias,
Operator<? extends OperatorDesc> topOp,
MapWork plan,
boolean local,
TableDesc tt_desc)
Set the current task in the mapredWork.
|
static void |
GenMapRedUtils.setTaskPlan(String alias_id,
TableScanOperator topOp,
Task<?> task,
boolean local,
GenMRProcContext opProcCtx)
Set the current task in the mapredWork.
|
static void |
GenMapRedUtils.setTaskPlan(String alias_id,
TableScanOperator topOp,
Task<?> task,
boolean local,
GenMRProcContext opProcCtx,
PrunedPartitionList pList)
Set the current task in the mapredWork.
|
static void |
ColumnPrunerProcFactory.setupNeededColumns(TableScanOperator scanOp,
RowSchema inputRS,
List<String> cols)
Sets up needed columns for TSOP.
|
abstract ParseContext |
Transform.transform(ParseContext pctx)
All transformation steps implement this interface.
|
ParseContext |
StatsOptimizer.transform(ParseContext pctx) |
ParseContext |
SortedMergeBucketMapJoinOptimizer.transform(ParseContext pctx) |
ParseContext |
SortedDynPartitionOptimizer.transform(ParseContext pCtx) |
ParseContext |
SkewJoinOptimizer.transform(ParseContext pctx) |
ParseContext |
SimpleFetchOptimizer.transform(ParseContext pctx) |
ParseContext |
SimpleFetchAggregation.transform(ParseContext pctx) |
ParseContext |
SamplePruner.transform(ParseContext pctx) |
ParseContext |
RedundantDynamicPruningConditionsRemoval.transform(ParseContext pctx)
Transform the query tree.
|
ParseContext |
PointLookupOptimizer.transform(ParseContext pctx) |
ParseContext |
PartitionColumnsSeparator.transform(ParseContext pctx) |
ParseContext |
NonBlockingOpDeDupProc.transform(ParseContext pctx) |
ParseContext |
MapJoinProcessor.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
LimitPushdownOptimizer.transform(ParseContext pctx) |
ParseContext |
JoinReorder.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
IdentityProjectRemover.transform(ParseContext pctx) |
ParseContext |
GroupByOptimizer.transform(ParseContext pctx) |
ParseContext |
GlobalLimitOptimizer.transform(ParseContext pctx) |
ParseContext |
FixedBucketPruningOptimizer.transform(ParseContext pctx) |
ParseContext |
ConstantPropagate.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
ColumnPruner.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
BucketMapJoinOptimizer.transform(ParseContext pctx) |
ParseContext |
BucketingSortingReduceSinkOptimizer.transform(ParseContext pctx) |
protected void |
ConstantPropagate.ConstantPropagateWalker.walk(Node nd) |
protected void |
ColumnPruner.ColumnPrunerWalker.walk(Node nd)
Walk the given operator.
|
static Map<Node,Object> |
PrunerUtils.walkExprTree(ExprNodeDesc pred,
NodeProcessorCtx ctx,
NodeProcessor colProc,
NodeProcessor fieldProc,
NodeProcessor genFuncProc,
NodeProcessor defProc)
Walk expression tree for pruner generation.
|
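`walkExprTree` wires up one processor per kind of expression node (column, field, generic function, plus a default). A schematic of that per-kind dispatch follows; the `Kind` enum and `Function`-typed processors are illustrative stand-ins for Hive's `ExprNodeDesc` subclasses and `NodeProcessor` interface.

```java
import java.util.function.Function;

// Schematic per-node-kind dispatch, mirroring the colProc/fieldProc/genFuncProc/defProc
// split in walkExprTree; not Hive's actual NodeProcessor machinery.
public class ExprDispatchSketch {

    enum Kind { COLUMN, FIELD, GENERIC_FUNC, OTHER }

    /** Routes a node to the processor registered for its kind. */
    static String dispatch(Kind kind,
                           Function<Kind, String> colProc,
                           Function<Kind, String> fieldProc,
                           Function<Kind, String> genFuncProc,
                           Function<Kind, String> defProc) {
        switch (kind) {
            case COLUMN:       return colProc.apply(kind);
            case FIELD:        return fieldProc.apply(kind);
            case GENERIC_FUNC: return genFuncProc.apply(kind);
            default:           return defProc.apply(kind);
        }
    }
}
```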
static void |
PrunerUtils.walkOperatorTree(ParseContext pctx,
NodeProcessorCtx opWalkerCtx,
NodeProcessor filterProc,
NodeProcessor defaultProc)
Walk operator tree for pruner generation.
|
Modifier and Type | Class and Description |
---|---|
class |
CalciteSemanticException
Exception from SemanticAnalyzer.
|
Modifier and Type | Method and Description |
---|---|
protected org.apache.calcite.rex.RexNode |
RexNodeConverter.convert(ExprNodeColumnDesc col) |
org.apache.calcite.rex.RexNode |
RexNodeConverter.convert(ExprNodeDesc expr) |
Operator |
HiveOpConverter.convert(org.apache.calcite.rel.RelNode root) |
static org.apache.calcite.rex.RexNode |
RexNodeConverter.convert(org.apache.calcite.plan.RelOptCluster cluster,
ExprNodeDesc joinCondnExprNode,
List<org.apache.calcite.rel.RelNode> inputRels,
LinkedHashMap<org.apache.calcite.rel.RelNode,RowResolver> relToHiveRR,
Map<org.apache.calcite.rel.RelNode,com.google.common.collect.ImmutableMap<String,Integer>> relToHiveColNameCalcitePosMap,
boolean flattenExpr) |
static Map<ASTNode,ExprNodeDesc> |
JoinCondTypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx) |
static org.apache.calcite.sql.SqlOperator |
SqlFunctionConverter.getCalciteOperator(String funcTextName,
GenericUDF hiveUDF,
com.google.common.collect.ImmutableList<org.apache.calcite.rel.type.RelDataType> calciteArgTypes,
org.apache.calcite.rel.type.RelDataType retType) |
Object |
JoinCondTypeCheckProcFactory.JoinCondColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected ExprNodeColumnDesc |
JoinCondTypeCheckProcFactory.JoinCondDefaultExprProcessor.processQualifiedColRef(TypeCheckCtx ctx,
ASTNode expr,
Object... nodeOutputs) |
ParseContext |
HiveOpConverterPostProc.transform(ParseContext pctx) |
protected void |
JoinCondTypeCheckProcFactory.JoinCondDefaultExprProcessor.validateUDF(ASTNode expr,
boolean isFunction,
TypeCheckCtx ctx,
FunctionInfo fi,
List<ExprNodeDesc> children,
GenericUDF genericUDF) |
Constructor and Description |
---|
JoinTypeCheckCtx(RowResolver leftRR,
RowResolver rightRR,
JoinType hiveJoinType) |
Modifier and Type | Method and Description |
---|---|
protected boolean |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.aggressiveDedup(ReduceSinkOperator cRS,
ReduceSinkOperator pRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected static void |
QueryPlanTreeTransformation.applyCorrelation(ParseContext pCtx,
CorrelationOptimizer.CorrelationNodeProcCtx corrCtx,
IntraQueryCorrelation correlation)
Based on the correlation, we transform the query plan tree (operator tree).
|
protected static <T extends Operator<?>> |
CorrelationUtilities.findParents(JoinOperator join,
Class<T> target) |
protected static <T extends Operator<?>> |
CorrelationUtilities.findPossibleParent(Operator<?> start,
Class<T> target,
boolean trustScript) |
protected static <T extends Operator<?>> |
CorrelationUtilities.findPossibleParents(Operator<?> start,
Class<T> target,
boolean trustScript) |
static List<Operator<? extends OperatorDesc>> |
CorrelationUtilities.findSiblingOperators(Operator<? extends OperatorDesc> op)
Find all sibling operators of op, i.e., operators that share the same child
operator as op (op itself included).
|
static List<ReduceSinkOperator> |
CorrelationUtilities.findSiblingReduceSinkOperators(ReduceSinkOperator op)
Find all sibling ReduceSinkOperators of op, i.e., ReduceSinkOperators that share
the same child operator as op (op itself included).
|
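The sibling lookup described above reduces to: the siblings of op are all parents of op's (shared) child, op included. A toy illustration, with an assumed `Op` class rather than Hive's `Operator`:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of sibling lookup via the shared child; not Hive's Operator class.
public class SiblingSketch {

    static class Op {
        final String name;
        final List<Op> parents = new ArrayList<>();
        final List<Op> children = new ArrayList<>();
        Op(String name) { this.name = name; }
        void addChild(Op c) { children.add(c); c.parents.add(this); }
    }

    /** Siblings = all parents of op's first child, op itself included. */
    static List<Op> findSiblings(Op op) {
        if (op.children.isEmpty()) {
            return List.of(op); // no child: op is its only "sibling"
        }
        return op.children.get(0).parents;
    }
}
```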
protected static Operator<?> |
CorrelationUtilities.getSingleChild(Operator<?> operator) |
protected static Operator<?> |
CorrelationUtilities.getSingleChild(Operator<?> operator,
boolean throwException) |
protected static <T> T |
CorrelationUtilities.getSingleChild(Operator<?> operator,
Class<T> type) |
protected static Operator<?> |
CorrelationUtilities.getSingleParent(Operator<?> operator) |
protected static Operator<?> |
CorrelationUtilities.getSingleParent(Operator<?> operator,
boolean throwException) |
protected static <T> T |
CorrelationUtilities.getSingleParent(Operator<?> operator,
Class<T> type) |
protected static Operator<?> |
CorrelationUtilities.getStartForGroupBy(ReduceSinkOperator cRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected static boolean |
CorrelationUtilities.hasGroupingSet(ReduceSinkOperator cRS) |
protected static int |
CorrelationUtilities.indexOf(ExprNodeDesc cexpr,
ExprNodeDesc[] pexprs,
Operator child,
Operator[] parents,
boolean[] sorted) |
protected static void |
CorrelationUtilities.insertOperatorBetween(Operator<?> newOperator,
Operator<?> parent,
Operator<?> child) |
protected static void |
CorrelationUtilities.isNullOperator(Operator<?> operator)
Throws an exception if the input operator is null.
|
protected boolean |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.merge(ReduceSinkOperator cRS,
JoinOperator pJoin,
int minReducer) |
protected boolean |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.merge(ReduceSinkOperator cRS,
ReduceSinkOperator pRS,
int minReducer)
The current RSDedup removes or replaces the child RS.
|
Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected abstract Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(ReduceSinkOperator cRS,
GroupByOperator cGBY,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected abstract Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(ReduceSinkOperator cRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected static void |
CorrelationUtilities.removeReduceSinkForGroupBy(ReduceSinkOperator cRS,
GroupByOperator cGBYr,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx) |
protected static SelectOperator |
CorrelationUtilities.replaceReduceSinkWithSelectOperator(ReduceSinkOperator childRS,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx) |
protected Integer |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.sameKeys(List<ExprNodeDesc> cexprs,
List<ExprNodeDesc> pexprs,
Operator<?> child,
Operator<?> parent) |
ParseContext |
ReduceSinkDeDuplication.transform(ParseContext pctx) |
ParseContext |
CorrelationOptimizer.transform(ParseContext pctx)
Detect correlations and transform the query tree.
|
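The sibling-lookup helpers above (findSiblingOperators, findSiblingReduceSinkOperators) identify operators that share a common child in the operator DAG. A minimal self-contained sketch of that idea, using a hypothetical Op class rather than Hive's Operator:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for Hive's Operator: each node tracks parents and children.
class Op {
    final String name;
    final List<Op> parents = new ArrayList<>();
    final List<Op> children = new ArrayList<>();

    Op(String name) { this.name = name; }

    static void link(Op parent, Op child) {
        parent.children.add(child);
        child.parents.add(parent);
    }

    // Sketch of findSiblingOperators: all parents of op's single child, op included.
    static List<Op> findSiblings(Op op) {
        if (op.children.size() != 1) {
            return List.of(op); // no unique child: op is its only "sibling"
        }
        return op.children.get(0).parents;
    }
}
```

For two ReduceSink-style parents feeding the same join operator, calling findSiblings on either parent returns both of them.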
Modifier and Type | Method and Description |
---|---|
static Operator<? extends OperatorDesc> |
RewriteParseContextGenerator.generateOperatorTree(QueryState queryState,
String command)
Parse the input String command and generate an operator tree. |
void |
RewriteQueryUsingAggregateIndexCtx.invokeRewriteQueryProc() |
ParseContext |
RewriteGBUsingIndex.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
static LineageInfo.Dependency |
ExprProcFactory.getExprDependency(LineageCtx lctx,
Operator<? extends OperatorDesc> inpOp,
ExprNodeDesc expr)
Gets the expression dependencies for the expression.
|
Object |
OpProcFactory.TransformLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.TableScanLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.JoinLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.LateralViewJoinLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.SelectLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.GroupByLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.UnionLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.ReduceSinkLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.FilterLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.DefaultLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprProcFactory.GenericExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
Generator.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
static List<List<String>> |
ListBucketingPruner.DynamicMultiDimensionalCollection.flat(List<List<String>> uniqSkewedElements)
Flatten a dynamic multi-dimensional collection.
|
static List<List<String>> |
ListBucketingPruner.DynamicMultiDimensionalCollection.generateCollection(List<List<String>> values)
Find the complete skewed-element collection.
|
protected void |
LBProcFactory.LBPRFilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
protected void |
LBPartitionProcFactory.LBPRPartitionFilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
static ExprNodeDesc |
LBExprProcFactory.genPruner(String tabAlias,
ExprNodeDesc pred,
Partition part)
Generates the list bucketing pruner for the expression tree.
|
ParseContext |
ListBucketingPruner.transform(ParseContext pctx) |
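ListBucketingPruner.DynamicMultiDimensionalCollection.generateCollection above expands the per-column skewed-value lists into every combination of values, i.e. a Cartesian product. A simplified sketch of that expansion (an illustration, not the actual Hive implementation):

```java
import java.util.ArrayList;
import java.util.List;

class SkewedValueExpander {
    // Cartesian product of per-column value lists, e.g.
    // [[a, b], [1, 2]] -> [[a, 1], [a, 2], [b, 1], [b, 2]]
    static List<List<String>> generateCollection(List<List<String>> values) {
        List<List<String>> result = new ArrayList<>();
        result.add(new ArrayList<>()); // start from the empty combination
        for (List<String> column : values) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> prefix : result) {
                for (String v : column) {
                    List<String> extended = new ArrayList<>(prefix);
                    extended.add(v);
                    next.add(extended);
                }
            }
            result = next;
        }
        return result;
    }
}
```

Two skewed columns with two values each yield four combinations, which is the complete collection the pruner then matches against partition directories.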
Modifier and Type | Method and Description |
---|---|
boolean |
OpTraitsRulesProcFactory.TableScanRule.checkBucketedTable(Table tbl,
ParseContext pGraphContext,
PrunedPartitionList prunedParts) |
Object |
OpTraitsRulesProcFactory.DefaultRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.ReduceSinkRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.TableScanRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.GroupByRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.SelectRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.JoinRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.MultiParentRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
AnnotateWithOpTraits.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
Object |
PcrOpProcFactory.FilterPCR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrOpProcFactory.DefaultPCR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
PartitionConditionRemover.transform(ParseContext pctx) |
static PcrExprProcFactory.NodeInfoWrapper |
PcrExprProcFactory.walkExprTree(String tabAlias,
ArrayList<Partition> parts,
List<VirtualColumn> vcs,
ExprNodeDesc pred)
Remove partition conditions when necessary from the expression tree.
|
Modifier and Type | Method and Description |
---|---|
Object |
SparkCrossProductCheck.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
SerializeFilter.Serializer.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
NullScanTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
MemoryDecider.MemoryCalculator.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
CrossProductCheck.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
AbstractJoinTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
long |
AbstractJoinTaskDispatcher.getTotalKnownInputSize(Context context,
MapWork currWork,
Map<String,ArrayList<String>> pathToAliases,
HashMap<String,Long> aliasToSize) |
PhysicalContext |
PhysicalOptimizer.optimize()
Invoke all the resolvers one by one and alter the physical plan.
|
Object |
SkewJoinProcFactory.SkewJoinJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
SkewJoinProcFactory.SkewJoinDefaultProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
SerializeFilter.Serializer.DefaultRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
MemoryDecider.MemoryCalculator.DefaultRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
LocalMapJoinProcFactory.MapJoinFollowedByGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
LocalMapJoinProcFactory.LocalMapJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
CrossProductCheck.MapJoinCheck.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
CrossProductCheck.ExtractReduceSinkInfo.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.DefaultInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.JoinInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.SelectInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.FileSinkInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.MultiGroupByInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.GroupByInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.ForwardingInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Task<? extends Serializable> |
SortMergeJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context) |
Task<? extends Serializable> |
CommonJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context) |
abstract Task<? extends Serializable> |
AbstractJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context) |
static void |
GenMRSkewJoinProcessor.processSkewJoin(JoinOperator joinOp,
Task<? extends Serializable> currTask,
ParseContext parseCtx)
Create tasks for processing skew joins.
|
static void |
GenSparkSkewJoinProcessor.processSkewJoin(JoinOperator joinOp,
Task<? extends Serializable> currTask,
ReduceWork reduceWork,
ParseContext parseCtx) |
PhysicalContext |
Vectorizer.resolve(PhysicalContext physicalContext) |
PhysicalContext |
StageIDsRearranger.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkMapJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkCrossProductCheck.resolve(PhysicalContext pctx) |
PhysicalContext |
SortMergeJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SkewJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SerializeFilter.resolve(PhysicalContext pctx) |
PhysicalContext |
SamplingOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
PhysicalPlanResolver.resolve(PhysicalContext pctx)
All physical plan resolvers have to implement this entry method.
|
PhysicalContext |
NullScanOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
MetadataOnlyOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
MemoryDecider.resolve(PhysicalContext pctx) |
PhysicalContext |
MapJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
LlapDecider.resolve(PhysicalContext pctx) |
PhysicalContext |
IndexWhereResolver.resolve(PhysicalContext physicalContext) |
PhysicalContext |
CrossProductCheck.resolve(PhysicalContext pctx) |
PhysicalContext |
CommonJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
BucketingSortingInferenceOptimizer.resolve(PhysicalContext pctx) |
Modifier and Type | Method and Description |
---|---|
Object |
IndexWhereTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
IndexWhereProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Modifier and Type | Method and Description |
---|---|
protected void |
OpProcFactory.FilterPPR.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
static ExprNodeDesc |
ExprProcFactory.genPruner(String tabAlias,
ExprNodeDesc pred)
Generates the partition pruner for the expression tree.
|
static PrunedPartitionList |
PartitionPruner.prune(Table tab,
ExprNodeDesc prunerExpr,
HiveConf conf,
String alias,
Map<String,PrunedPartitionList> prunedPartitionsMap)
Get the partition list for the table that satisfies the partition pruner
condition.
|
static PrunedPartitionList |
PartitionPruner.prune(TableScanOperator ts,
ParseContext parseCtx,
String alias)
Get the partition list for the TS operator that satisfies the partition pruner
condition.
|
ParseContext |
PartitionPruner.transform(ParseContext pctx) |
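PartitionPruner.prune above keeps only the partitions that can satisfy the pruner predicate built by ExprProcFactory.genPruner. The core filtering idea, sketched with a hypothetical Part class instead of Hive's metastore Partition:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

class SimplePruner {
    // Hypothetical partition: only its key/value spec, e.g. {ds=2024-01-01}.
    static class Part {
        final Map<String, String> spec;
        Part(Map<String, String> spec) { this.spec = spec; }
    }

    // Keep only partitions whose spec satisfies the predicate; the predicate
    // stands in for the compiled pruner expression over partition columns.
    static List<Part> prune(List<Part> parts, Predicate<Map<String, String>> pred) {
        return parts.stream()
            .filter(p -> pred.test(p.spec))
            .collect(Collectors.toList());
    }
}
```

A predicate like ds = '2024-01-01' then reduces the candidate partition list before any data is read, which is the point of partition pruning.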
Modifier and Type | Method and Description |
---|---|
static void |
SparkSortMergeJoinFactory.annotateMapWork(GenSparkProcContext context,
MapWork mapWork,
SMBMapJoinOperator smbMapJoinOp,
TableScanOperator ts,
boolean local)
Annotate a MapWork: the input is an SMBJoinOp that is part of a MapWork, together with its root TS operator.
|
protected boolean |
SparkSortMergeJoinOptimizer.canConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext,
ParseContext pGraphContext,
Stack<Node> stack) |
MapJoinOperator |
SparkMapJoinOptimizer.convertJoinMapJoin(JoinOperator joinOp,
OptimizeSparkProcContext context,
int bigTablePosition) |
protected SMBMapJoinOperator |
SparkSortMergeJoinOptimizer.convertJoinToSMBJoinAndReturn(JoinOperator joinOp,
SortBucketJoinProcCtx smbJoinContext) |
Object |
SparkSortMergeJoinOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkSMBJoinHintOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkSkewJoinProcFactory.SparkSkewJoinJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkReduceSinkMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SparkReduceSinkMapJoinProc.SparkMapJoinFollowedByGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkMapJoinOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkJoinOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkJoinHintOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SetSparkReducerParallelism.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
PhysicalContext |
SplitSparkWorkResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkSkewJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
CombineEquivalentWorkResolver.resolve(PhysicalContext pctx) |
Modifier and Type | Method and Description |
---|---|
Object |
StatsRulesProcFactory.TableScanStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.SelectStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.FilterStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.GroupByStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.JoinStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.LimitStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.ReduceSinkStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.DefaultStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
AnnotateWithStatistics.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
Object |
UnionProcFactory.MapRedUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.MapUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.UnknownUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.UnionNoProcessFile.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.NoUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
UnionProcessor.transform(ParseContext pCtx)
Transform the query tree.
|
Modifier and Type | Method and Description |
---|---|
static boolean |
RowResolver.add(RowResolver rrToAddTo,
RowResolver rrToAddFrom) |
static boolean |
RowResolver.add(RowResolver rrToAddTo,
RowResolver rrToAddFrom,
int numColumns) |
protected static ArrayList<PTFInvocationSpec.OrderExpression> |
PTFTranslator.addPartitionExpressionsToOrderList(ArrayList<PTFInvocationSpec.PartitionExpression> partCols,
ArrayList<PTFInvocationSpec.OrderExpression> orderCols) |
void |
ColumnStatsSemanticAnalyzer.analyze(ASTNode ast,
Context origCtx) |
void |
BaseSemanticAnalyzer.analyze(ASTNode ast,
Context ctx) |
ColumnAccessInfo |
ColumnAccessAnalyzer.analyzeColumnAccess(ColumnAccessInfo columnAccessInfo) |
protected void |
BaseSemanticAnalyzer.analyzeDDLSkewedValues(List<List<String>> skewedValues,
ASTNode child)
Handle skewed values in DDL.
|
void |
UpdateDeleteSemanticAnalyzer.analyzeInternal(ASTNode tree) |
void |
SemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
MacroSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
LoadSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ImportSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
FunctionSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ExportSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ExplainSQRewriteSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ExplainSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
DDLSemanticAnalyzer.analyzeInternal(ASTNode input) |
void |
CalcitePlanner.analyzeInternal(ASTNode ast) |
abstract void |
BaseSemanticAnalyzer.analyzeInternal(ASTNode ast) |
protected List<String> |
BaseSemanticAnalyzer.analyzeSkewedTablDDLColNames(List<String> skewedColNames,
ASTNode child)
Analyze list-bucketing column names.
|
TableAccessInfo |
TableAccessAnalyzer.analyzeTableAccess() |
List<HivePrivilegeObject> |
TableMask.applyRowFilterAndColumnMasking(List<HivePrivilegeObject> privObjs) |
protected static RowResolver |
PTFTranslator.buildRowResolverForNoop(String tabAlias,
StructObjectInspector rowObjectInspector,
RowResolver inputRowResolver) |
protected static RowResolver |
PTFTranslator.buildRowResolverForPTF(String tbFnName,
String tabAlias,
StructObjectInspector rowObjectInspector,
List<String> outputColNames,
RowResolver inputRR) |
protected RowResolver |
PTFTranslator.buildRowResolverForWindowing(WindowTableFunctionDef def) |
static String |
BaseSemanticAnalyzer.charSetString(String charSetName,
String charSetString) |
protected void |
SemanticAnalyzer.checkAcidTxnManager(Table table) |
void |
TaskCompiler.compile(ParseContext pCtx,
List<Task<? extends Serializable>> rootTasks,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
static ArrayList<PTFInvocationSpec> |
PTFTranslator.componentize(PTFInvocationSpec ptfInvocation) |
String |
TableMask.create(HivePrivilegeObject privObject,
MaskAndFilterInfo maskAndFilterInfo) |
static ExprNodeDesc |
ParseUtils.createConversionCast(ExprNodeDesc column,
PrimitiveTypeInfo tableFieldTypeInfo) |
static void |
EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath,
Table tableHandle,
Iterable<Partition> partitions,
ReplicationSpec replicationSpec) |
protected static Hive |
BaseSemanticAnalyzer.createHiveDB(HiveConf conf) |
MapWork |
GenTezUtils.createMapWork(GenTezProcContext context,
Operator<?> root,
TezWork tezWork,
PrunedPartitionList partitions) |
protected void |
TezCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected abstract void |
TaskCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected void |
MapReduceCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
static void |
EximUtil.doCheckCompatibility(String currVersion,
String version,
String fcVersion) |
boolean |
SemanticAnalyzer.doPhase1(ASTNode ast,
QB qb,
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.Phase1Ctx ctx_1,
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.PlannerContext plannerCtx)
Phase 1: (including, but not limited to):
1.
|
void |
SemanticAnalyzer.doPhase1QBExpr(ASTNode ast,
QBExpr qbexpr,
String id,
String alias) |
void |
SemanticAnalyzer.doPhase1QBExpr(ASTNode ast,
QBExpr qbexpr,
String id,
String alias,
boolean insideView) |
static String |
ParseUtils.ensureClassExists(String className) |
protected void |
WindowingSpec.WindowSpec.ensureOrderSpec(WindowingSpec.WindowFunctionSpec wFn) |
protected void |
StorageFormat.fillDefaultStorageFormat(boolean isExternal) |
boolean |
StorageFormat.fillStorageFormat(ASTNode child)
Returns true if the passed token was a storage format token
and thus was processed accordingly.
|
Map<ASTNode,ExprNodeDesc> |
SemanticAnalyzer.genAllExprNodeDesc(ASTNode expr,
RowResolver input)
Generates expression node descriptors for the expression and its children using the default TypeCheckCtx.
|
Map<ASTNode,ExprNodeDesc> |
SemanticAnalyzer.genAllExprNodeDesc(ASTNode expr,
RowResolver input,
TypeCheckCtx tcCtx)
Generates all of the expression node descriptors for the expression and its children, using the arguments passed in.
|
protected void |
TezCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected abstract void |
TaskCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
MapReduceCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
static Map<ASTNode,ExprNodeDesc> |
TypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx) |
protected static Map<ASTNode,ExprNodeDesc> |
TypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx,
TypeCheckProcFactory tf) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input)
Generates an expression node descriptor for the expression with TypeCheckCtx.
|
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
boolean useCaching) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
boolean useCaching,
boolean foldExpr) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
TypeCheckCtx tcCtx)
Returns expression node descriptor for the expression.
|
protected Operator |
SemanticAnalyzer.genFileSinkPlan(String dest,
QB qb,
Operator input) |
Operator |
SemanticAnalyzer.genPlan(QB qb) |
Operator |
SemanticAnalyzer.genPlan(QB qb,
boolean skipAmbiguityCheck) |
static QBSubQuery.SubQueryType |
QBSubQuery.SubQueryType.get(ASTNode opNode) |
static BaseSemanticAnalyzer |
SemanticAnalyzerFactory.get(QueryState queryState,
ASTNode tree) |
ColumnInfo |
RowResolver.get(String tab_alias,
String col_alias)
Gets the ColumnInfo for a tab_alias.col_alias style column reference.
|
static ASTNode |
PTFTranslator.getASTNode(ColumnInfo cInfo,
RowResolver rr) |
static CharTypeInfo |
ParseUtils.getCharTypeInfo(ASTNode node) |
protected List<Order> |
BaseSemanticAnalyzer.getColumnNamesOrder(ASTNode ast) |
protected List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast) |
static List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast,
boolean lowerCase)
Get the list of FieldSchema out of the ASTNode.
|
static List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast,
boolean lowerCase,
List<SQLPrimaryKey> primaryKeys,
List<SQLForeignKey> foreignKeys)
Get the list of FieldSchema out of the ASTNode.
|
static RowResolver |
RowResolver.getCombinedRR(RowResolver leftRR,
RowResolver rightRR)
Return a new row resolver that is a combination of the left RR and right RR.
|
protected Database |
BaseSemanticAnalyzer.getDatabase(String dbName) |
protected Database |
BaseSemanticAnalyzer.getDatabase(String dbName,
boolean throwException) |
static DecimalTypeInfo |
ParseUtils.getDecimalTypeTypeInfo(ASTNode node) |
static String |
BaseSemanticAnalyzer.getDotName(String[] qname) |
ColumnInfo |
RowResolver.getExpression(ASTNode node)
Retrieves the ColumnInfo corresponding to a source expression which
exactly matches the string rendering of the given ASTNode.
|
static GenericUDAFEvaluator |
SemanticAnalyzer.getGenericUDAFEvaluator(String aggName,
ArrayList<ExprNodeDesc> aggParameters,
ASTNode aggTree,
boolean isDistinct,
boolean isAllColumns)
Returns the GenericUDAFEvaluator for the aggregation.
|
static SemanticAnalyzer.GenericUDAFInfo |
SemanticAnalyzer.getGenericUDAFInfo(GenericUDAFEvaluator evaluator,
GenericUDAFEvaluator.Mode emode,
ArrayList<ExprNodeDesc> aggParameters)
Returns the GenericUDAFInfo struct for the aggregation.
|
protected List<Integer> |
SemanticAnalyzer.getGroupingSets(List<ASTNode> groupByExpr,
QBParseInfo parseInfo,
String dest) |
void |
SemanticAnalyzer.getMaterializationMetadata(QB qb) |
void |
SemanticAnalyzer.getMetaData(QB qb) |
void |
SemanticAnalyzer.getMetaData(QB qb,
boolean enableMaterialization) |
protected Partition |
BaseSemanticAnalyzer.getPartition(Table table,
Map<String,String> partSpec,
boolean throwException) |
protected List<Partition> |
BaseSemanticAnalyzer.getPartitions(Table table,
Map<String,String> partSpec,
boolean throwException) |
static Map<String,String> |
AnalyzeCommandUtils.getPartKeyValuePairsFromAST(Table tbl,
ASTNode tree,
HiveConf hiveConf) |
static HashMap<String,String> |
DDLSemanticAnalyzer.getPartSpec(ASTNode partspec) |
PrunedPartitionList |
ParseContext.getPrunedPartitions(String alias,
TableScanOperator ts) |
PrunedPartitionList |
ParseContext.getPrunedPartitions(TableScanOperator ts) |
static String[] |
BaseSemanticAnalyzer.getQualifiedTableName(ASTNode tabNameNode) |
protected List<String> |
BaseSemanticAnalyzer.getSkewedValuesFromASTNode(Node node)
Retrieve skewed values from ASTNode.
|
static Table |
AnalyzeCommandUtils.getTable(ASTNode tree,
BaseSemanticAnalyzer sa) |
protected Table |
BaseSemanticAnalyzer.getTable(String tblName) |
protected Table |
BaseSemanticAnalyzer.getTable(String[] qualified) |
protected Table |
BaseSemanticAnalyzer.getTable(String[] qualified,
boolean throwException) |
protected Table |
BaseSemanticAnalyzer.getTable(String tblName,
boolean throwException) |
protected Table |
BaseSemanticAnalyzer.getTable(String database,
String tblName,
boolean throwException) |
static String |
DDLSemanticAnalyzer.getTypeName(ASTNode node) |
protected static String |
BaseSemanticAnalyzer.getTypeStringFromAST(ASTNode typeNode) |
static HashMap<String,String> |
DDLSemanticAnalyzer.getValidatedPartSpec(Table table,
ASTNode astNode,
HiveConf conf,
boolean shouldBeFull) |
static VarcharTypeInfo |
ParseUtils.getVarcharTypeInfo(ASTNode node) |
protected ExprNodeDesc |
TypeCheckProcFactory.DefaultExprProcessor.getXpathOrFuncExprNodeDesc(ASTNode expr,
boolean isFunction,
ArrayList<ExprNodeDesc> children,
TypeCheckCtx ctx) |
RowResolver |
SemanticAnalyzer.handleInsertStatementSpec(List<ExprNodeDesc> col_list,
String dest,
RowResolver outputRR,
RowResolver inputRR,
QB qb,
ASTNode selExprList)
This modifies the Select projections when the Select is part of an insert statement and
the insert statement specifies a column list for the target table, e.g.
|
void |
ColumnStatsAutoGatherContext.insertAnalyzePipeline() |
boolean |
TableMask.isEnabled() |
boolean |
TableMask.needTransform() |
WindowingSpec |
WindowingComponentizer.next(HiveConf hCfg,
SemanticAnalyzer semAly,
UnparseTranslator unparseT,
RowResolver inputRR) |
protected void |
TezCompiler.optimizeOperatorPlan(ParseContext pCtx,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
TaskCompiler.optimizeOperatorPlan(ParseContext pCtxSet,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
TezCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
protected abstract void |
TaskCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
protected void |
MapReduceCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
static ArrayList<WindowingSpec.WindowExpressionSpec> |
SemanticAnalyzer.parseSelect(String selectExprStr) |
void |
HiveSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
Invoked after Hive performs its own semantic analysis on a
statement (including optimization).
|
void |
AbstractSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks) |
ASTNode |
HiveSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
Invoked before Hive performs its own semantic analysis on
a statement.
|
ASTNode |
AbstractSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast) |
Object |
UnionProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.NullExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.NumExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.StrExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.BoolExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.DateTimeExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.IntervalExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.SubQueryExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ProcessAnalyzeTable.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
PrintOpTreeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
GenTezWork.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
FileSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
AppMasterEventProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
static void |
GenTezUtils.processFileSink(GenTezProcContext context,
FileSinkOperator fileSink) |
protected static void |
BaseSemanticAnalyzer.processForeignKeys(ASTNode parent,
ASTNode child,
List<SQLForeignKey> foreignKeys)
Processes the foreign keys from the AST and populates the SQLForeignKey list.
|
static ExprNodeDesc |
TypeCheckProcFactory.processGByExpr(Node nd,
Object procCtx)
Performs GROUP BY subexpression elimination.
|
protected void |
SemanticAnalyzer.processNoScanCommand(ASTNode tree)
process analyze ...
|
protected void |
SemanticAnalyzer.processPartialScanCommand(ASTNode tree)
process analyze ...
|
protected static void |
BaseSemanticAnalyzer.processPrimaryKeys(ASTNode parent,
ASTNode child,
List<SQLPrimaryKey> primaryKeys)
Processes the primary keys from the AST nodes and populates the SQLPrimaryKey list.
|
protected ExprNodeDesc |
TypeCheckProcFactory.DefaultExprProcessor.processQualifiedColRef(TypeCheckCtx ctx,
ASTNode expr,
Object... nodeOutputs) |
protected void |
StorageFormat.processStorageFormat(String name) |
boolean |
RowResolver.putWithCheck(String tabAlias,
String colAlias,
String internalName,
ColumnInfo newCI)
Adds a column to the RowResolver, checking for duplicate columns.
|
static EximUtil.ReadMetaData |
EximUtil.readMetaData(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath) |
static String |
EximUtil.relativeToAbsolutePath(HiveConf conf,
String location) |
static void |
GenTezUtils.removeUnionOperators(GenTezProcContext context,
BaseWork work) |
ASTNode |
ColumnStatsSemanticAnalyzer.rewriteAST(ASTNode ast,
ColumnStatsAutoGatherContext context) |
ASTNode |
SemanticAnalyzer.rewriteASTWithMaskAndFilter(ASTNode ast) |
protected void |
GenTezUtils.setupMapWork(MapWork mapWork,
GenTezProcContext context,
PrunedPartitionList partitions,
TableScanOperator root,
String alias) |
void |
GenTezWorkWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
protected ReadEntity |
BaseSemanticAnalyzer.toReadEntity(org.apache.hadoop.fs.Path location) |
protected ReadEntity |
BaseSemanticAnalyzer.toReadEntity(String location) |
protected WriteEntity |
BaseSemanticAnalyzer.toWriteEntity(org.apache.hadoop.fs.Path location) |
protected WriteEntity |
BaseSemanticAnalyzer.toWriteEntity(String location) |
PTFDesc |
PTFTranslator.translate(PTFInvocationSpec qSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
UnparseTranslator unparseT) |
PTFDesc |
PTFTranslator.translate(WindowingSpec wdwSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
UnparseTranslator unparseT) |
void |
SemanticAnalyzer.validate() |
void |
BaseSemanticAnalyzer.validate() |
void |
WindowingSpec.validateAndMakeEffective() |
static List<String> |
ParseUtils.validateColumnNameUniqueness(List<FieldSchema> fieldSchemas) |
protected static void |
PTFTranslator.validateComparable(ObjectInspector OI,
String errMsg) |
static void |
PTFTranslator.validateNoLeadLagInValueBoundarySpec(ASTNode node) |
static void |
BaseSemanticAnalyzer.validatePartColumnType(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf) |
static void |
BaseSemanticAnalyzer.validatePartSpec(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf,
boolean shouldBeFull) |
protected void |
TypeCheckProcFactory.DefaultExprProcessor.validateUDF(ASTNode expr,
boolean isFunction,
TypeCheckCtx ctx,
FunctionInfo fi,
List<ExprNodeDesc> children,
GenericUDF genericUDF) |
protected void |
TezWalker.walk(Node nd)
Walk the given operator.
|
protected void |
GenTezWorkWalker.walk(Node nd)
Walk the given operator.
|
protected void |
GenMapRedWalker.walk(Node nd)
Walk the given operator.
|
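`RowResolver.putWithCheck` above adds a column only after checking for duplicates and reports the outcome via its boolean return. A minimal, self-contained sketch of that contract (hypothetical names, not the Hive implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a row resolver: maps "tabAlias.colAlias" to an internal name.
public class RowResolverSketch {
    private final Map<String, String> columns = new HashMap<>();

    // Mirrors the putWithCheck contract: returns false instead of
    // silently overwriting when the qualified column already exists.
    public boolean putWithCheck(String tabAlias, String colAlias, String internalName) {
        String key = tabAlias + "." + colAlias;
        if (columns.containsKey(key)) {
            return false; // duplicate column, reject
        }
        columns.put(key, internalName);
        return true;
    }

    public String get(String tabAlias, String colAlias) {
        return columns.get(tabAlias + "." + colAlias);
    }
}
```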
Modifier and Type | Method and Description |
---|---|
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createCreateRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createDropRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createGrantRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createGrantTask(ASTNode ast,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createGrantTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createRevokeRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createRevokeTask(ASTNode ast,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createRevokeTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createSetRoleTask(String roleName,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createSetRoleTask(String roleName,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowCurrentRoleTask(HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs,
org.apache.hadoop.fs.Path resFile) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowCurrentRoleTask(HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs,
org.apache.hadoop.fs.Path resFile) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowGrantTask(ASTNode ast,
org.apache.hadoop.fs.Path resultFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowGrantTask(ASTNode node,
org.apache.hadoop.fs.Path resultFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowRoleGrantTask(ASTNode node,
org.apache.hadoop.fs.Path resultFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowRolePrincipalsTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowRolePrincipalsTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowRolesTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowRolesTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
protected PrivilegeObjectDesc |
HiveAuthorizationTaskFactoryImpl.parsePrivObject(ASTNode ast) |
Modifier and Type | Method and Description |
---|---|
void |
GenSparkUtils.annotateMapWork(GenSparkProcContext context)
Fills the MapWork with 'local' work and bucket information for SMB join.
|
MapWork |
GenSparkUtils.createMapWork(GenSparkProcContext context,
Operator<?> root,
SparkWork sparkWork,
PrunedPartitionList partitions) |
MapWork |
GenSparkUtils.createMapWork(GenSparkProcContext context,
Operator<?> root,
SparkWork sparkWork,
PrunedPartitionList partitions,
boolean deferSetup) |
ReduceWork |
GenSparkUtils.createReduceWork(GenSparkProcContext context,
Operator<?> root,
SparkWork sparkWork) |
protected void |
SparkCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected void |
SparkCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs)
TODO: need to turn on rules that are commented out and add more if necessary.
|
static <T> T |
GenSparkUtils.getChildOperator(Operator<?> op,
Class<T> klazz) |
static SparkEdgeProperty |
GenSparkUtils.getEdgeProperty(ReduceSinkOperator reduceSink,
ReduceWork reduceWork) |
protected void |
SparkCompiler.optimizeOperatorPlan(ParseContext pCtx,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
SparkCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
Object |
SplitOpTreeForDPP.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkProcessAnalyzeTable.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SparkFileSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenSparkWork.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
void |
GenSparkUtils.processFileSink(GenSparkProcContext context,
FileSinkOperator fileSink) |
void |
GenSparkUtils.removeUnionOperators(GenSparkProcContext context,
BaseWork work) |
protected void |
GenSparkUtils.setupMapWork(MapWork mapWork,
GenSparkProcContext context,
PrunedPartitionList partitions,
TableScanOperator root,
String alias) |
void |
GenSparkWorkWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
protected void |
GenSparkWorkWalker.walk(Node nd)
Walk the given operator.
|
Modifier and Type | Method and Description |
---|---|
static ExprNodeDesc |
ExprNodeDescUtils.backtrack(ExprNodeDesc source,
Operator<?> current,
Operator<?> terminal) |
static ExprNodeDesc |
ExprNodeDescUtils.backtrack(ExprNodeDesc source,
Operator<?> current,
Operator<?> terminal,
boolean foldExpr) |
static ArrayList<ExprNodeDesc> |
ExprNodeDescUtils.backtrack(List<ExprNodeDesc> sources,
Operator<?> current,
Operator<?> terminal)
Converts expressions in the current operator to those in the terminal operator, which
is an ancestor of the current operator, or null (meaning back to the top operator).
|
static ArrayList<ExprNodeDesc> |
ExprNodeDescUtils.backtrack(List<ExprNodeDesc> sources,
Operator<?> current,
Operator<?> terminal,
boolean foldExpr) |
static ReduceSinkDesc |
PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols,
ArrayList<ExprNodeDesc> valueCols,
List<String> outputColumnNames,
boolean includeKey,
int tag,
int numPartitionFields,
int numReducers,
AcidUtils.Operation writeType)
Create the reduce sink descriptor.
|
static ReduceSinkDesc |
PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols,
int numKeys,
ArrayList<ExprNodeDesc> valueCols,
List<List<Integer>> distinctColIndices,
List<String> outputKeyColumnNames,
List<String> outputValueColumnNames,
boolean includeKey,
int tag,
int numPartitionFields,
int numReducers,
AcidUtils.Operation writeType)
Create the reduce sink descriptor.
|
static Operator<?> |
ExprNodeDescUtils.getSingleParent(Operator<?> current,
Operator<?> terminal) |
void |
AlterTableDesc.validate()
Validate alter table description.
|
void |
CreateTableDesc.validate(HiveConf conf) |
static void |
ValidationUtility.validateSkewedColNames(List<String> colNames,
List<String> skewedColNames)
Each skewed column name should be a valid defined column.
|
static void |
ValidationUtility.validateSkewedColNameValueNumberMatch(List<String> skewedColNames,
List<List<String>> skewedColValues)
The number of skewed column names and the number of values in each value list should match.
|
static void |
ValidationUtility.validateSkewedColumnNameUniqueness(List<String> names)
Detects duplicate skewed column names.
|
static void |
ValidationUtility.validateSkewedInformation(List<String> colNames,
List<String> skewedColNames,
List<List<String>> skewedColValues)
Validate skewed table information.
|
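The three ValidationUtility checks above compose into validateSkewedInformation: skewed column names must be defined columns, names must be unique, and each skewed value list must supply one value per skewed column. A self-contained sketch of that logic (illustrative only, not the Hive source):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SkewedValidationSketch {
    // Every skewed column name must be one of the table's defined columns.
    public static void validateSkewedColNames(List<String> colNames, List<String> skewedColNames) {
        for (String name : skewedColNames) {
            if (!colNames.contains(name)) {
                throw new IllegalArgumentException("Skewed column not defined: " + name);
            }
        }
    }

    // No skewed column name may appear twice.
    public static void validateUniqueness(List<String> skewedColNames) {
        Set<String> seen = new HashSet<>();
        for (String name : skewedColNames) {
            if (!seen.add(name)) {
                throw new IllegalArgumentException("Duplicate skewed column: " + name);
            }
        }
    }

    // Each skewed value tuple must supply one value per skewed column.
    public static void validateNameValueMatch(List<String> skewedColNames,
                                              List<List<String>> skewedColValues) {
        for (List<String> values : skewedColValues) {
            if (values.size() != skewedColNames.size()) {
                throw new IllegalArgumentException(
                    "Expected " + skewedColNames.size() + " values, got " + values.size());
            }
        }
    }
}
```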
Constructor and Description |
---|
DynamicPartitionCtx(Table tbl,
Map<String,String> partSpec,
String defaultPartName,
int maxParts) |
Modifier and Type | Method and Description |
---|---|
static ExprWalkerInfo |
ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext,
Operator<? extends OperatorDesc> op,
ExprNodeDesc pred) |
static ExprWalkerInfo |
ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext,
Operator<? extends OperatorDesc> op,
List<ExprNodeDesc> preds)
Extracts pushdown predicates from the given list of predicate expressions.
|
protected Set<String> |
OpProcFactory.JoinerPPD.getAliases(Node nd) |
protected Object |
OpProcFactory.JoinerPPD.handlePredicates(Node nd,
ExprWalkerInfo prunePreds,
OpWalkerInfo owi) |
protected ExprWalkerInfo |
OpProcFactory.DefaultPPD.mergeChildrenPred(Node nd,
OpWalkerInfo owi,
Set<String> excludedAliases,
boolean ignoreAliases) |
protected boolean |
OpProcFactory.DefaultPPD.mergeWithChildrenPred(Node nd,
OpWalkerInfo owi,
ExprWalkerInfo ewi,
Set<String> aliases)
Takes the current operator's pushdown predicates and merges them with
its children's pushdown predicates.
|
Object |
OpProcFactory.ScriptPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.PTFPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.UDTFPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.LateralViewForwardPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.TableScanPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.FilterPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.SimpleFilterPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.JoinerPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.ReduceSinkPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.DefaultPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprWalkerProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Converts the reference from the child row resolver to the current row resolver.
|
Object |
ExprWalkerProcFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprWalkerProcFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprWalkerProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
SyntheticJoinPredicate.transform(ParseContext pctx) |
ParseContext |
SimplePredicatePushDown.transform(ParseContext pctx) |
ParseContext |
PredicateTransitivePropagate.transform(ParseContext pctx) |
ParseContext |
PredicatePushDown.transform(ParseContext pctx) |
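mergeWithChildrenPred above combines an operator's own pushdown candidates with those collected from its children, grouped by table alias, optionally excluding aliases the predicate cannot be pushed past. A minimal sketch of that merge step (hypothetical data structures, not the Hive ExprWalkerInfo):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PushdownMergeSketch {
    // Merge child predicates into the parent's alias -> predicates map,
    // skipping any aliases the caller has excluded.
    public static void mergeWithChildrenPred(Map<String, List<String>> parent,
                                             Map<String, List<String>> child,
                                             Set<String> excludedAliases) {
        for (Map.Entry<String, List<String>> e : child.entrySet()) {
            if (excludedAliases != null && excludedAliases.contains(e.getKey())) {
                continue; // predicate cannot be pushed past this alias
            }
            parent.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).addAll(e.getValue());
        }
    }
}
```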
Modifier and Type | Method and Description |
---|---|
List<HivePrivilegeObject> |
HiveV1Authorizer.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
HiveAuthorizerImpl.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
HiveAuthorizer.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs)
applyRowFilterAndColumnMasking is called once for each table in a query.
|
List<HivePrivilegeObject> |
HiveAuthorizationValidator.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
Modifier and Type | Method and Description |
---|---|
List<HivePrivilegeObject> |
SQLStdHiveAuthorizationValidator.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
DummyHiveAuthorizationValidator.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
Modifier and Type | Method and Description |
---|---|
static int |
StatsUtils.getNumBitVectorsForNDVEstimation(HiveConf conf) |
Modifier and Type | Method and Description |
---|---|
void |
LineageInfo.getLineageInfo(String query)
Parses the given query and gets the lineage info.
|
static void |
LineageInfo.main(String[] args) |
Object |
LineageInfo.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Implements the process method for the NodeProcessor interface.
|
Modifier and Type | Method and Description |
---|---|
static ExprNodeDesc |
MatchPath.ResultExpressionParser.buildExprNode(ASTNode expr,
TypeCheckCtx typeCheckCtx) |
protected static RowResolver |
MatchPath.createSelectListRR(MatchPath evaluator,
PTFInputDef inpDef) |
abstract List<String> |
TableFunctionResolver.getOutputColumnNames() |
List<String> |
TableFunctionResolver.getRawInputColumnNames() |
ArrayList<String> |
NoopWithMap.NoopWithMapResolver.getRawInputColumnNames() |
List<String> |
TableFunctionResolver.getReferencedColumns()
Provides referenced column names to be used in the partition function.
|
List<String> |
MatchPath.MatchPathResolver.getReferencedColumns() |
void |
TableFunctionResolver.initialize(HiveConf cfg,
PTFDesc ptfDesc,
PartitionedTableFunctionDef tDef) |
void |
MatchPath.SymbolParser.parse() |
void |
WindowingTableFunction.WindowingTableFunctionResolver.setupOutputOI() |
abstract void |
TableFunctionResolver.setupOutputOI() |
void |
NoopWithMap.NoopWithMapResolver.setupOutputOI() |
void |
Noop.NoopResolver.setupOutputOI() |
void |
MatchPath.MatchPathResolver.setupOutputOI()
Checks the structure of the arguments: the first argument should be a String;
then there should be an even number of arguments forming (String, expression)
pairs, where each expression should be convertible to Boolean.
|
void |
TableFunctionResolver.setupRawInputOI() |
void |
NoopWithMap.NoopWithMapResolver.setupRawInputOI() |
void |
MatchPath.ResultExpressionParser.translate() |
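MatchPath.MatchPathResolver.setupOutputOI above validates that the first argument is the pattern string and the rest form (symbol name, boolean expression) pairs. A sketch of that shape check (illustrative only, using plain objects in place of Hive expression nodes):

```java
import java.util.List;

public class MatchPathArgCheckSketch {
    // First argument: the pattern string; then an even number of
    // arguments forming (symbol name, boolean expression) pairs.
    public static void checkArgShape(List<Object> args) {
        if (args.isEmpty() || !(args.get(0) instanceof String)) {
            throw new IllegalArgumentException("First argument must be the pattern string");
        }
        int rest = args.size() - 1;
        if (rest % 2 != 0) {
            throw new IllegalArgumentException("Expected (symbol, expression) pairs after the pattern");
        }
        for (int i = 1; i < args.size(); i += 2) {
            if (!(args.get(i) instanceof String)) {
                throw new IllegalArgumentException("Symbol name at position " + i + " must be a String");
            }
        }
    }
}
```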
Modifier and Type | Method and Description |
---|---|
protected void |
HCatSemanticAnalyzerBase.authorize(Database db,
Privilege priv) |
protected void |
HCatSemanticAnalyzerBase.authorize(Partition part,
Privilege priv) |
protected void |
HCatSemanticAnalyzerBase.authorize(Privilege[] inputPrivs,
Privilege[] outputPrivs) |
protected void |
HCatSemanticAnalyzerBase.authorize(Table table,
Privilege priv) |
protected void |
HCatSemanticAnalyzerBase.authorizeDDL(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
Checks the given rootTasks, and calls authorizeDDLWork() for each DDLWork to
be authorized.
|
void |
HCatSemanticAnalyzerBase.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks) |
void |
HCatSemanticAnalyzer.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks) |
ASTNode |
HCatSemanticAnalyzer.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast) |
Copyright © 2016 The Apache Software Foundation. All rights reserved.