Modifier and Type | Method and Description |
---|---|
protected Object |
AccumuloRangeGenerator.getIndexedRowIds(GenericUDF genericUdf,
ExprNodeDesc leftHandNode,
String columnName,
ConstantObjectInspector objInspector) |
Object |
AccumuloRangeGenerator.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected Object |
AccumuloRangeGenerator.processExpression(ExprNodeGenericFuncDesc func,
Object[] nodeOutputs) |
Modifier and Type | Class and Description |
---|---|
class |
AmbiguousMethodException
Exception thrown by the UDF and UDAF method resolvers in case a unique method
is not found.
|
class |
NoMatchingMethodException
Exception thrown by the UDF and UDAF method resolvers in case no matching method
is found.
|
class |
UDFArgumentException
Exception thrown when a UDF argument is invalid.
|
class |
UDFArgumentLengthException
Exception thrown when a UDF is called with the wrong number of arguments.
|
class |
UDFArgumentTypeException
Exception thrown when a UDF is called with arguments of the wrong types.
|
Modifier and Type | Method and Description |
---|---|
static String |
Utilities.getDatabaseName(String dbTableName)
Accepts a qualified name of the form dbname.tablename and returns the dbname part.
|
static String[] |
Utilities.getDbTableName(String dbtable)
Extract the db and table name from a dbtable string, where db and table are separated by ".";
if there is no db name part, use the current session's default db.
|
static String[] |
Utilities.getDbTableName(String defaultDb,
String dbtable) |
FunctionInfo |
Registry.getFunctionInfo(String functionName)
Looks up the function name in the registry.
|
static FunctionInfo |
FunctionRegistry.getFunctionInfo(String functionName) |
static Set<String> |
FunctionRegistry.getFunctionSynonyms(String funcName)
Returns the set of synonyms of the supplied function.
|
void |
Registry.getFunctionSynonyms(String funcName,
FunctionInfo funcInfo,
Set<String> synonyms)
Adds to the set of synonyms of the supplied function.
|
static GenericUDAFEvaluator |
FunctionRegistry.getGenericUDAFEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns)
Get the GenericUDAF evaluator for the name and argumentClasses.
|
GenericUDAFEvaluator |
Registry.getGenericUDAFEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isWindowing,
boolean isDistinct,
boolean isAllColumns)
Get the GenericUDAF evaluator for the name and argumentClasses.
|
GenericUDAFResolver |
Registry.getGenericUDAFResolver(String functionName) |
static GenericUDAFResolver |
FunctionRegistry.getGenericUDAFResolver(String functionName) |
GenericUDAFEvaluator |
Registry.getGenericWindowingEvaluator(String functionName,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns) |
static GenericUDAFEvaluator |
FunctionRegistry.getGenericWindowingEvaluator(String name,
List<ObjectInspector> argumentOIs,
boolean isDistinct,
boolean isAllColumns) |
static String |
FunctionRegistry.getNormalizedFunctionName(String fn) |
static TableFunctionResolver |
FunctionRegistry.getTableFunctionResolver(String functionName) |
static String |
Utilities.getTableName(String dbTableName)
Accepts a qualified name of the form dbname.tablename and returns the tablename part.
|
static FunctionInfo |
FunctionRegistry.getTemporaryFunctionInfo(String functionName) |
WindowFunctionInfo |
Registry.getWindowFunctionInfo(String functionName) |
static WindowFunctionInfo |
FunctionRegistry.getWindowFunctionInfo(String functionName) |
static TableFunctionResolver |
FunctionRegistry.getWindowingTableFunction() |
static boolean |
FunctionRegistry.impliesOrder(String functionName)
Both UDFs and UDAFs can imply an ordering for analytical functions.
|
static boolean |
FunctionRegistry.isRankingFunction(String name)
Checks whether the named function is a ranking function.
|
static boolean |
FunctionRegistry.isTableFunction(String functionName) |
static boolean |
FunctionRegistry.pivotResult(String functionName) |
void |
Operator.removeChildAndAdoptItsChildren(Operator<? extends OperatorDesc> child)
Remove a child operator and add all of the child's children in its place.
|
static void |
Utilities.reworkMapRedWork(Task<? extends Serializable> task,
boolean reworkMapredWork,
HiveConf conf)
The check performed here is somewhat ad hoc.
|
static void |
Utilities.validateColumnNames(List<String> colNames,
List<String> checkCols) |
Modifier and Type | Method and Description |
---|---|
static Map<Integer,List<ExprNodeGenericFuncDesc>> |
ReplUtils.genPartSpecs(Table table,
List<Map<String,String>> partitions) |
static Task<?> |
ReplUtils.getTableCheckpointTask(ImportTableDesc tableDesc,
HashMap<String,String> partSpec,
String dumpRoot,
HiveConf conf) |
static Task<?> |
ReplUtils.getTableReplLogTask(ImportTableDesc tableDesc,
ReplLogger replLogger,
HiveConf conf) |
Modifier and Type | Method and Description |
---|---|
Database |
DatabaseEvent.dbInMetadata(String dbNameToOverride) |
List<AddPartitionDesc> |
TableEvent.partitionDescriptions(ImportTableDesc tblDesc) |
List<String> |
TableEvent.partitions(ImportTableDesc tblDesc) |
ImportTableDesc |
TableEvent.tableDesc(String dbName) |
Modifier and Type | Method and Description |
---|---|
Database |
FSDatabaseEvent.dbInMetadata(String dbNameToOverride) |
List<AddPartitionDesc> |
FSTableEvent.partitionDescriptions(ImportTableDesc tblDesc) |
List<AddPartitionDesc> |
FSPartitionEvent.partitionDescriptions(ImportTableDesc tblDesc) |
List<String> |
FSTableEvent.partitions(ImportTableDesc tblDesc) |
List<String> |
FSPartitionEvent.partitions(ImportTableDesc tblDesc) |
ImportTableDesc |
FSTableEvent.tableDesc(String dbName) |
ImportTableDesc |
FSPartitionEvent.tableDesc(String dbName) |
Modifier and Type | Method and Description |
---|---|
TaskTracker |
LoadConstraint.tasks() |
TaskTracker |
LoadFunction.tasks() |
TaskTracker |
LoadDatabase.tasks() |
TaskTracker |
LoadDatabase.AlterDatabase.tasks() |
Modifier and Type | Method and Description |
---|---|
TaskTracker |
LoadPartitions.tasks() |
TaskTracker |
LoadTable.tasks() |
Constructor and Description |
---|
LoadTable(TableEvent event,
Context context,
ReplLogger replLogger,
TableContext tableContext,
TaskTracker limiter) |
Modifier and Type | Method and Description |
---|---|
int |
RuleRegExp.cost(Stack<Node> stack)
This function returns the cost of the rule for the specified stack.
|
int |
TypeRule.cost(Stack<Node> stack) |
int |
Rule.cost(Stack<Node> stack) |
int |
RuleExactMatch.cost(Stack<Node> stack)
This function returns the cost of the rule for the specified stack.
|
void |
DefaultGraphWalker.dispatch(Node nd,
Stack<Node> ndStack)
Dispatch the current operator.
|
Object |
DefaultRuleDispatcher.dispatch(Node nd,
Stack<Node> ndStack,
Object... nodeOutputs)
Dispatcher function.
|
Object |
Dispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs)
Dispatcher function.
|
void |
TaskGraphWalker.dispatch(Node nd,
Stack<Node> ndStack,
TaskGraphWalker.TaskGraphWalkerContext walkerCtx)
Dispatch the current operator.
|
<T> T |
DefaultGraphWalker.dispatchAndReturn(Node nd,
Stack<Node> ndStack)
Returns the dispatch result.
|
Object |
CompositeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
NodeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Generic process for all ops that don't have specific implementations.
|
void |
GraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
TaskGraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
LevelOrderWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
DefaultGraphWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
void |
PreOrderOnceWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
PreOrderWalker.walk(Node nd)
Walk the current operator and its descendants.
|
void |
TaskGraphWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
ExpressionWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
ForwardWalker.walk(Node nd)
Walk the current operator and its descendants.
|
protected void |
DefaultGraphWalker.walk(Node nd)
Walk the current operator and its descendants.
|
Modifier and Type | Class and Description |
---|---|
class |
Table.ValidationFailureSemanticException
Marker SemanticException, so that processing which allows for table validation failures
and handles them appropriately can recover from these types of SemanticExceptions.
|
Modifier and Type | Method and Description |
---|---|
void |
Table.validatePartColumnNames(Map<String,String> spec,
boolean shouldBeFull) |
Modifier and Type | Method and Description |
---|---|
protected boolean |
AbstractSMBJoinProc.canConvertBucketMapJoinToSMBJoin(MapJoinOperator mapJoinOp,
Stack<Node> stack,
SortBucketJoinProcCtx smbJoinContext,
Object... nodeOutputs) |
protected boolean |
AbstractSMBJoinProc.canConvertJoinToBucketMapJoin(JoinOperator joinOp,
SortBucketJoinProcCtx context) |
protected boolean |
AbstractSMBJoinProc.canConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext) |
protected boolean |
AbstractBucketJoinProc.canConvertMapJoinToBucketMapJoin(MapJoinOperator mapJoinOp,
BucketJoinProcCtx context) |
static void |
BucketMapjoinProc.checkAndConvertBucketMapJoin(ParseContext pGraphContext,
MapJoinOperator mapJoinOp,
String baseBigAlias,
List<String> joinAliases)
Check whether a mapjoin can be converted to a bucket mapjoin,
and perform the conversion if possible.
|
protected boolean |
AbstractBucketJoinProc.checkConvertBucketMapJoin(BucketJoinProcCtx context,
Map<String,Operator<? extends OperatorDesc>> aliasToOpInfo,
Map<Byte,List<ExprNodeDesc>> keysMap,
String baseBigAlias,
List<String> joinAliases) |
protected boolean |
AbstractSMBJoinProc.checkConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext) |
protected GroupByOptimizer.GroupByOptimizerSortMatch |
GroupByOptimizer.SortGroupByProcessor.checkSortGroupBy(Stack<Node> stack,
GroupByOperator groupByOp) |
MapJoinOperator |
ConvertJoinMapJoin.convertJoinMapJoin(JoinOperator joinOp,
OptimizeTezProcContext context,
int bigTablePosition,
boolean removeReduceSink) |
static MapJoinOperator |
MapJoinProcessor.convertJoinOpMapJoinOp(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin) |
static MapJoinOperator |
MapJoinProcessor.convertJoinOpMapJoinOp(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean adjustParentsChildren) |
protected MapJoinOperator |
AbstractSMBJoinProc.convertJoinToBucketMapJoin(JoinOperator joinOp,
SortBucketJoinProcCtx joinContext) |
protected void |
AbstractSMBJoinProc.convertJoinToSMBJoin(JoinOperator joinOp,
SortBucketJoinProcCtx smbJoinContext) |
MapJoinOperator |
MapJoinProcessor.convertMapJoin(HiveConf conf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
Convert a regular join to a map-side join.
|
MapJoinOperator |
SparkMapJoinProcessor.convertMapJoin(HiveConf conf,
JoinOperator op,
boolean leftSrc,
String[] baseSrc,
List<String> mapAliases,
int bigTablePos,
boolean noCheckOuterJoin,
boolean validateMapJoinTree)
Convert a regular join to a map-side join.
|
protected void |
AbstractBucketJoinProc.convertMapJoinToBucketMapJoin(MapJoinOperator mapJoinOp,
BucketJoinProcCtx context) |
static MapJoinOperator |
MapJoinProcessor.convertSMBJoinToMapJoin(HiveConf hconf,
SMBMapJoinOperator smbJoinOp,
int bigTablePos,
boolean noCheckOuterJoin)
Convert a sort-merge join to a map-side join.
|
static MapWork |
GenMapRedUtils.createMergeTask(FileSinkDesc fsInputDesc,
org.apache.hadoop.fs.Path finalName,
boolean hasDynamicPartitions,
CompilationOpContext ctx)
Create a block-level merge task for RCFiles or a stripe-level merge task for
ORC files.
|
static void |
GenMapRedUtils.createMRWorkForMergingFiles(FileSinkOperator fsInput,
org.apache.hadoop.fs.Path finalName,
DependencyCollectionTask dependencyTask,
List<Task<MoveWork>> mvTasks,
HiveConf conf,
Task<? extends Serializable> currTask,
LineageState lineageState) |
List<FieldNode> |
ColumnPrunerProcCtx.genColLists(Operator<? extends OperatorDesc> curOp)
Creates the list of internal column names (represented by field nodes;
these names are used in the RowResolver and differ from the
external column names) that are needed in the subtree.
|
List<FieldNode> |
ColumnPrunerProcCtx.genColLists(Operator<? extends OperatorDesc> curOp,
Operator<? extends OperatorDesc> child)
Creates the list of internal column names (represented by field nodes;
these names are used in the RowResolver and differ from the
external column names) that are needed in the subtree.
|
MapJoinOperator |
MapJoinProcessor.generateMapJoinOperator(ParseContext pctx,
JoinOperator op,
int mapJoinPos) |
protected abstract void |
PrunerOperatorFactory.FilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top)
Generate predicate.
|
protected void |
FixedBucketPruningOptimizer.FixedBucketPartitionWalker.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
protected void |
FixedBucketPruningOptimizer.BucketBitsetGenerator.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
static void |
MapJoinProcessor.genLocalWorkForMapJoin(MapredWork newWork,
MapJoinOperator newMapJoinOp,
int mapJoinPos) |
static void |
MapJoinProcessor.genMapJoinOpAndLocalWork(HiveConf conf,
MapredWork newWork,
JoinOperator op,
int mapJoinPos)
Convert the join to a map-join and also generate any local work needed.
|
protected void |
MapJoinProcessor.genSelectPlan(ParseContext pctx,
MapJoinOperator input) |
int |
AvgPartitionSizeBasedBigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseCtx,
JoinOperator joinOp,
Set<Integer> bigTableCandidates) |
int |
TableSizeBasedBigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseCtx,
JoinOperator joinOp,
Set<Integer> bigTableCandidates) |
int |
BigTableSelectorForAutoSMJ.getBigTablePosition(ParseContext parseContext,
JoinOperator joinOp,
Set<Integer> joinCandidates) |
static List<String> |
AbstractBucketJoinProc.getBucketFilePathsOfPartition(org.apache.hadoop.fs.Path location,
ParseContext pGraphContext) |
int |
ConvertJoinMapJoin.getMapJoinConversionPos(JoinOperator joinOp,
OptimizeTezProcContext context,
int buckets,
boolean skipJoinTypeChecks,
long maxSize,
boolean checkMapJoinThresholds)
Obtain big table position for join.
|
static MapJoinDesc |
MapJoinProcessor.getMapJoinDesc(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin) |
static MapJoinDesc |
MapJoinProcessor.getMapJoinDesc(HiveConf hconf,
JoinOperator op,
boolean leftInputJoin,
String[] baseSrc,
List<String> mapAliases,
int mapJoinPos,
boolean noCheckOuterJoin,
boolean adjustParentsChildren) |
List<FieldNode> |
ColumnPrunerProcCtx.getSelectColsFromLVJoin(RowSchema rs,
List<FieldNode> colList)
Create the list of internal columns for the select tag of a lateral view.
|
void |
ColumnPrunerProcCtx.handleFilterUnionChildren(Operator<? extends OperatorDesc> curOp)
If the input filter operator has direct child(ren) that are union operators,
and the filter's columns are not the same as the union's,
create a select operator between them.
|
static void |
GenMapRedUtils.initPlan(ReduceSinkOperator op,
GenMRProcContext opProcCtx)
Initialize the current plan by adding it to root tasks.
|
static void |
GenMapRedUtils.initUnionPlan(GenMRProcContext opProcCtx,
UnionOperator currUnionOp,
Task<? extends Serializable> currTask,
boolean local) |
static void |
GenMapRedUtils.initUnionPlan(ReduceSinkOperator op,
UnionOperator currUnionOp,
GenMRProcContext opProcCtx,
Task<? extends Serializable> unionTask)
Initialize the current union plan.
|
static void |
GenMapRedUtils.joinPlan(Task<? extends Serializable> currTask,
Task<? extends Serializable> oldTask,
GenMRProcContext opProcCtx)
Merge the current task into the old task for the reducer.
|
static void |
GenMapRedUtils.joinUnionPlan(GenMRProcContext opProcCtx,
UnionOperator currUnionOp,
Task<? extends Serializable> currentUnionTask,
Task<? extends Serializable> existingTask,
boolean local) |
static SamplePruner.LimitPruneRetStatus |
SamplePruner.limitPrune(Partition part,
long sizeLimit,
int fileLimit,
Collection<org.apache.hadoop.fs.Path> retPathList)
Try to generate a subset of files in the partition that reaches the size
limit with fewer than fileLimit files.
|
ParseContext |
Optimizer.optimize()
Invoke all the transformations one-by-one, and alter the query plan.
|
Object |
RemoveDynamicPruningBySize.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
ConvertJoinMapJoin.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenMRTableScan1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Table Sink encountered.
|
Object |
PrunerOperatorFactory.FilterPruner.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerOperatorFactory.DefaultPruner.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenMRRedSink2.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Scan encountered.
|
Object |
MergeJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GroupByOptimizer.SortGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GroupByOptimizer.SortGroupBySkewProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingReduceSinkOptimizer.BucketSortReduceSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketMapjoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenMROperator.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Reduce Scan encountered.
|
Object |
SetReducerParallelism.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateFilterProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateGroupByProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateDefaultProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateSelectProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateFileSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateStopProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateReduceSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ConstantPropagateProcFactory.ConstantPropagateTableScanProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
GenMRRedSink3.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Scan encountered.
|
Object |
SamplePruner.FilterPPR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SamplePruner.DefaultPPR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SkewJoinOptimizer.SkewJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
MapJoinProcessor.CurrentMapJoin.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the current mapjoin in the context.
|
Object |
MapJoinProcessor.MapJoinFS.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the current mapjoin in a list of mapjoins followed by a filesink.
|
Object |
MapJoinProcessor.MapJoinDefault.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Store the mapjoin in a rejected list.
|
Object |
MapJoinProcessor.Default.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Nothing to do.
|
Object |
GenMRRedSink1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Reduce Sink encountered.
|
Object |
PrunerExpressionOperatorFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PrunerExpressionOperatorFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SortedMergeBucketMapjoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenMRFileSink1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
File Sink Operator encountered.
|
Object |
GenMRUnion1.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx opProcCtx,
Object... nodeOutputs)
Union Operator encountered.
|
Object |
FixedBucketPruningOptimizer.NoopWalker.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
abstract Object |
AbstractBucketJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ReduceSinkMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SortedMergeJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
CountDistinctRewriteProc.CountDistinctProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
abstract Object |
AbstractSMBJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerFilterProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerGroupByProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerScriptProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerLimitProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerPTFProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerDefaultProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerTableScanProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerReduceSinkProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerLateralViewJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerLateralViewForwardProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerSelectProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ColumnPrunerProcFactory.ColumnPrunerUnionProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
DynamicPartitionPruningOptimization.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkRemoveDynamicPruning.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
protected void |
CountDistinctRewriteProc.CountDistinctProcessor.processGroupBy(GroupByOperator mGby,
ReduceSinkOperator rs,
GroupByOperator rGby,
int indexOfDist) |
protected void |
GroupByOptimizer.SortGroupByProcessor.processGroupBy(GroupByOptimizer.GroupByOptimizerContext ctx,
Stack<Node> stack,
GroupByOperator groupByOp,
int depth) |
static Object |
ReduceSinkMapJoinProc.processReduceSinkToHashJoin(ReduceSinkOperator parentRS,
MapJoinOperator mapJoinOp,
GenTezProcContext context) |
static org.apache.hadoop.fs.Path[] |
SamplePruner.prune(Partition part,
FilterDesc.SampleDesc sampleDescr)
Prunes to get all the files in the partition that satisfy the TABLESAMPLE
clause.
|
static void |
GenMapRedUtils.setMapWork(MapWork plan,
ParseContext parseCtx,
Set<ReadEntity> inputs,
PrunedPartitionList partsList,
TableScanOperator tsOp,
String alias_id,
HiveConf conf,
boolean local)
Initialize MapWork.
|
static void |
GenMapRedUtils.setTaskPlan(org.apache.hadoop.fs.Path path,
String alias,
Operator<? extends OperatorDesc> topOp,
MapWork plan,
boolean local,
TableDesc tt_desc)
Set the current task in the mapredWork.
|
static void |
GenMapRedUtils.setTaskPlan(String alias_id,
TableScanOperator topOp,
Task<?> task,
boolean local,
GenMRProcContext opProcCtx)
Set the current task in the mapredWork.
|
static void |
GenMapRedUtils.setTaskPlan(String alias_id,
TableScanOperator topOp,
Task<?> task,
boolean local,
GenMRProcContext opProcCtx,
PrunedPartitionList pList)
Set the current task in the mapredWork.
|
static void |
ColumnPrunerProcFactory.setupNeededColumns(TableScanOperator scanOp,
RowSchema inputRS,
List<FieldNode> cols)
Sets up needed columns for TSOP.
|
ParseContext |
GroupByOptimizer.transform(ParseContext pctx) |
ParseContext |
BucketingSortingReduceSinkOptimizer.transform(ParseContext pctx) |
ParseContext |
ColumnPruner.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
NonBlockingOpDeDupProc.transform(ParseContext pctx) |
ParseContext |
SortedDynPartitionOptimizer.transform(ParseContext pCtx) |
ParseContext |
SamplePruner.transform(ParseContext pctx) |
ParseContext |
SkewJoinOptimizer.transform(ParseContext pctx) |
ParseContext |
MapJoinProcessor.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
JoinReorder.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
IdentityProjectRemover.transform(ParseContext pctx) |
abstract ParseContext |
Transform.transform(ParseContext pctx)
All transformation steps implement this interface.
|
ParseContext |
RedundantDynamicPruningConditionsRemoval.transform(ParseContext pctx)
Transform the query tree.
|
ParseContext |
SortedDynPartitionTimeGranularityOptimizer.transform(ParseContext pCtx) |
ParseContext |
SimpleFetchAggregation.transform(ParseContext pctx) |
ParseContext |
SharedWorkOptimizer.transform(ParseContext pctx) |
ParseContext |
SimpleFetchOptimizer.transform(ParseContext pctx) |
ParseContext |
PartitionColumnsSeparator.transform(ParseContext pctx) |
ParseContext |
ConstantPropagate.transform(ParseContext pactx)
Transform the query tree.
|
ParseContext |
FixedBucketPruningOptimizer.transform(ParseContext pctx) |
ParseContext |
BucketMapJoinOptimizer.transform(ParseContext pctx) |
ParseContext |
PointLookupOptimizer.transform(ParseContext pctx) |
ParseContext |
SortedMergeBucketMapJoinOptimizer.transform(ParseContext pctx) |
ParseContext |
CountDistinctRewriteProc.transform(ParseContext pctx) |
ParseContext |
LimitPushdownOptimizer.transform(ParseContext pctx) |
ParseContext |
StatsOptimizer.transform(ParseContext pctx) |
ParseContext |
GlobalLimitOptimizer.transform(ParseContext pctx) |
protected void |
ColumnPruner.ColumnPrunerWalker.walk(Node nd)
Walk the given operator.
|
protected void |
ConstantPropagate.ConstantPropagateWalker.walk(Node nd) |
static Map<Node,Object> |
PrunerUtils.walkExprTree(ExprNodeDesc pred,
NodeProcessorCtx ctx,
NodeProcessor colProc,
NodeProcessor fieldProc,
NodeProcessor genFuncProc,
NodeProcessor defProc)
Walk expression tree for pruner generation.
|
static void |
PrunerUtils.walkOperatorTree(ParseContext pctx,
NodeProcessorCtx opWalkerCtx,
NodeProcessor filterProc,
NodeProcessor defaultProc)
Walk operator tree for pruner generation.
|
Modifier and Type | Class and Description |
---|---|
class |
CalciteSemanticException
Exception from SemanticAnalyzer.
|
class |
CalciteSubquerySemanticException
Exception from SemanticAnalyzer.
|
class |
CalciteViewSemanticException
Exception from SemanticAnalyzer.
|
Modifier and Type | Method and Description |
---|---|
static HiveTableFunctionScan |
HiveCalciteUtil.createUDTFForSetOp(org.apache.calcite.plan.RelOptCluster cluster,
org.apache.calcite.rel.RelNode input) |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.Frame |
HiveRelDecorrelator.decorrelateRel(org.apache.calcite.rel.core.Aggregate rel)
Rewrites an Aggregate. |
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.Frame |
HiveRelDecorrelator.decorrelateRel(HiveAggregate rel) |
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.Frame |
HiveRelDecorrelator.decorrelateRel(HiveFilter rel) |
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.Frame |
HiveRelDecorrelator.decorrelateRel(HiveJoin rel) |
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.Frame |
HiveRelDecorrelator.decorrelateRel(HiveProject rel) |
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.Frame |
HiveRelDecorrelator.decorrelateRel(org.apache.calcite.rel.core.Project rel)
Rewrite Project.
|
Modifier and Type | Method and Description |
---|---|
protected org.apache.calcite.rex.RexNode |
RexNodeConverter.convert(ExprNodeColumnDesc col) |
org.apache.calcite.rex.RexNode |
RexNodeConverter.convert(ExprNodeDesc expr) |
Operator |
HiveOpConverter.convert(org.apache.calcite.rel.RelNode root) |
static org.apache.calcite.rex.RexNode |
RexNodeConverter.convert(org.apache.calcite.plan.RelOptCluster cluster,
ExprNodeDesc joinCondnExprNode,
List<org.apache.calcite.rel.RelNode> inputRels,
LinkedHashMap<org.apache.calcite.rel.RelNode,RowResolver> relToHiveRR,
Map<org.apache.calcite.rel.RelNode,com.google.common.collect.ImmutableMap<String,Integer>> relToHiveColNameCalcitePosMap,
boolean flattenExpr) |
static Map<ASTNode,ExprNodeDesc> |
JoinCondTypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx) |
static org.apache.calcite.sql.SqlOperator |
SqlFunctionConverter.getCalciteOperator(String funcTextName,
GenericUDF hiveUDF,
com.google.common.collect.ImmutableList<org.apache.calcite.rel.type.RelDataType> calciteArgTypes,
org.apache.calcite.rel.type.RelDataType retType) |
static org.apache.calcite.sql.SqlOperator |
SqlFunctionConverter.getCalciteOperator(String funcTextName,
GenericUDTF hiveUDTF,
com.google.common.collect.ImmutableList<org.apache.calcite.rel.type.RelDataType> calciteArgTypes,
org.apache.calcite.rel.type.RelDataType retType) |
Object |
JoinCondTypeCheckProcFactory.JoinCondColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected ExprNodeColumnDesc |
JoinCondTypeCheckProcFactory.JoinCondDefaultExprProcessor.processQualifiedColRef(TypeCheckCtx ctx,
ASTNode expr,
Object... nodeOutputs) |
ParseContext |
HiveOpConverterPostProc.transform(ParseContext pctx) |
Constructor and Description |
---|
JoinTypeCheckCtx(RowResolver leftRR,
RowResolver rightRR,
JoinType hiveJoinType) |
Modifier and Type | Method and Description |
---|---|
protected static boolean |
ReduceSinkDeDuplicationUtils.aggressiveDedup(ReduceSinkOperator cRS,
ReduceSinkOperator pRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected static void |
QueryPlanTreeTransformation.applyCorrelation(ParseContext pCtx,
CorrelationOptimizer.CorrelationNodeProcCtx corrCtx,
IntraQueryCorrelation correlation)
Based on the correlation, we transform the query plan tree (operator tree).
|
protected static <T extends Operator<?>> |
CorrelationUtilities.findFirstPossibleParent(Operator<?> start,
Class<T> target,
boolean trustScript) |
protected static <T extends Operator<?>> |
CorrelationUtilities.findFirstPossibleParentPreserveSortOrder(Operator<?> start,
Class<T> target,
boolean trustScript) |
protected static <T extends Operator<?>> |
CorrelationUtilities.findParents(JoinOperator join,
Class<T> target) |
protected static <T extends Operator<?>> |
CorrelationUtilities.findPossibleParent(Operator<?> start,
Class<T> target,
boolean trustScript) |
protected static <T extends Operator<?>> |
CorrelationUtilities.findPossibleParents(Operator<?> start,
Class<T> target,
boolean trustScript) |
static List<Operator<? extends OperatorDesc>> |
CorrelationUtilities.findSiblingOperators(Operator<? extends OperatorDesc> op)
Find all sibling operators of op (operators that share the same child
operator as op), op included.
|
static List<ReduceSinkOperator> |
CorrelationUtilities.findSiblingReduceSinkOperators(ReduceSinkOperator op)
Find all sibling ReduceSinkOperators of op (those that share the same child
operator as op), op included.
|
protected static Operator<?> |
CorrelationUtilities.getSingleChild(Operator<?> operator) |
protected static Operator<?> |
CorrelationUtilities.getSingleChild(Operator<?> operator,
boolean throwException) |
protected static <T> T |
CorrelationUtilities.getSingleChild(Operator<?> operator,
Class<T> type) |
protected static Operator<?> |
CorrelationUtilities.getSingleParent(Operator<?> operator) |
protected static Operator<?> |
CorrelationUtilities.getSingleParent(Operator<?> operator,
boolean throwException) |
protected static <T> T |
CorrelationUtilities.getSingleParent(Operator<?> operator,
Class<T> type) |
protected static Operator<?> |
CorrelationUtilities.getStartForGroupBy(ReduceSinkOperator cRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected static boolean |
CorrelationUtilities.hasGroupingSet(ReduceSinkOperator cRS) |
protected static int |
CorrelationUtilities.indexOf(ExprNodeDesc cexpr,
ExprNodeDesc[] pexprs,
Operator child,
Operator[] parents,
boolean[] sorted) |
protected static void |
CorrelationUtilities.insertOperatorBetween(Operator<?> newOperator,
Operator<?> parent,
Operator<?> child) |
protected static void |
CorrelationUtilities.isNullOperator(Operator<?> operator)
Throws an exception if the input operator is null.
|
static boolean |
ReduceSinkDeDuplicationUtils.merge(ReduceSinkOperator cRS,
JoinOperator pJoin,
int minReducer) |
static boolean |
ReduceSinkDeDuplicationUtils.merge(ReduceSinkOperator cRS,
ReduceSinkOperator pRS,
int minReducer)
The current RSDedup removes or replaces the child RS.
|
Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected abstract Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(ReduceSinkOperator cRS,
GroupByOperator cGBY,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected abstract Object |
ReduceSinkDeDuplication.AbsctractReducerReducerProc.process(ReduceSinkOperator cRS,
ReduceSinkDeDuplication.ReduceSinkDeduplicateProcCtx dedupCtx) |
protected static void |
CorrelationUtilities.removeReduceSinkForGroupBy(ReduceSinkOperator cRS,
GroupByOperator cGBYr,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx) |
protected static SelectOperator |
CorrelationUtilities.replaceReduceSinkWithSelectOperator(ReduceSinkOperator childRS,
ParseContext context,
org.apache.hadoop.hive.ql.optimizer.correlation.AbstractCorrelationProcCtx procCtx) |
protected static Integer |
ReduceSinkDeDuplicationUtils.sameKeys(List<ExprNodeDesc> cexprs,
List<ExprNodeDesc> pexprs,
Operator<?> child,
Operator<?> parent) |
static boolean |
ReduceSinkDeDuplicationUtils.strictMerge(ReduceSinkOperator cRS,
List<ReduceSinkOperator> pRSs) |
static boolean |
ReduceSinkDeDuplicationUtils.strictMerge(ReduceSinkOperator cRS,
ReduceSinkOperator pRS)
This is a more strict version of the merge check, where:
- cRS and pRS should have exactly the same keys in the same positions, and
- cRS and pRS should have exactly the same partition columns in the same positions, and
- cRS and pRS should have exactly the same bucket columns in the same positions, and
- cRS and pRS should sort in the same direction
|
ParseContext |
CorrelationOptimizer.transform(ParseContext pctx)
Detect correlations and transform the query tree.
|
ParseContext |
ReduceSinkDeDuplication.transform(ParseContext pctx) |
ParseContext |
ReduceSinkJoinDeDuplication.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
static LineageInfo.Dependency |
ExprProcFactory.getExprDependency(LineageCtx lctx,
Operator<? extends OperatorDesc> inpOp,
ExprNodeDesc expr)
Gets the expression dependencies for the expression.
|
static LineageInfo.Dependency |
ExprProcFactory.getExprDependency(LineageCtx lctx,
Operator<? extends OperatorDesc> inpOp,
ExprNodeDesc expr,
HashMap<Node,Object> outputMap)
Gets the expression dependencies for the expression.
|
Object |
OpProcFactory.TransformLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.TableScanLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.JoinLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.LateralViewJoinLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.SelectLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.GroupByLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.UnionLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.ReduceSinkLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.FilterLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.DefaultLineage.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprProcFactory.GenericExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
Generator.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
static List<List<String>> |
ListBucketingPruner.DynamicMultiDimensionalCollection.flat(List<List<String>> uniqSkewedElements)
Flatten a dynamic multi-dimensional collection.
|
static List<List<String>> |
ListBucketingPruner.DynamicMultiDimensionalCollection.generateCollection(List<List<String>> values)
Find the complete skewed-element collection.
For example:
1.
|
protected void |
LBProcFactory.LBPRFilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
protected void |
LBPartitionProcFactory.LBPRPartitionFilterPruner.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
static ExprNodeDesc |
LBExprProcFactory.genPruner(String tabAlias,
ExprNodeDesc pred,
Partition part)
Generates the list bucketing pruner for the expression tree.
|
ParseContext |
ListBucketingPruner.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
boolean |
OpTraitsRulesProcFactory.TableScanRule.checkBucketedTable(Table tbl,
ParseContext pGraphContext,
PrunedPartitionList prunedParts) |
Object |
OpTraitsRulesProcFactory.DefaultRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.ReduceSinkRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.TableScanRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.GroupByRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.SelectRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.JoinRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpTraitsRulesProcFactory.MultiParentRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
AnnotateWithOpTraits.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
Object |
PcrOpProcFactory.FilterPCR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrOpProcFactory.DefaultPCR.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
PcrExprProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
PartitionConditionRemover.transform(ParseContext pctx) |
static PcrExprProcFactory.NodeInfoWrapper |
PcrExprProcFactory.walkExprTree(String tabAlias,
ArrayList<Partition> parts,
List<VirtualColumn> vcs,
ExprNodeDesc pred)
Remove partition conditions from the expression tree when necessary.
|
Modifier and Type | Method and Description |
---|---|
Object |
SerializeFilter.Serializer.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
NullScanTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
SparkCrossProductCheck.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
CrossProductHandler.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
MemoryDecider.MemoryCalculator.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
Object |
AbstractJoinTaskDispatcher.dispatch(Node nd,
Stack<Node> stack,
Object... nodeOutputs) |
long |
AbstractJoinTaskDispatcher.getTotalKnownInputSize(Context context,
MapWork currWork,
Map<org.apache.hadoop.fs.Path,ArrayList<String>> pathToAliases,
HashMap<String,Long> aliasToSize) |
PhysicalContext |
PhysicalOptimizer.optimize()
Invoke all the resolvers one-by-one, and alter the physical plan.
|
Object |
SerializeFilter.Serializer.DefaultRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.DefaultInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.JoinInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.SelectInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.FileSinkInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.MultiGroupByInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.GroupByInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
BucketingSortingOpProcFactory.ForwardingInferrer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
LocalMapJoinProcFactory.MapJoinFollowedByGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
LocalMapJoinProcFactory.LocalMapJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
CrossProductHandler.MapJoinCheck.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
CrossProductHandler.ExtractReduceSinkInfo.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SkewJoinProcFactory.SkewJoinJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
SkewJoinProcFactory.SkewJoinDefaultProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
MemoryDecider.MemoryCalculator.DefaultRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Task<? extends Serializable> |
CommonJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context) |
Task<? extends Serializable> |
SortMergeJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context) |
abstract Task<? extends Serializable> |
AbstractJoinTaskDispatcher.processCurrentTask(MapRedTask currTask,
ConditionalTask conditionalTask,
Context context) |
static void |
GenMRSkewJoinProcessor.processSkewJoin(JoinOperator joinOp,
Task<? extends Serializable> currTask,
ParseContext parseCtx)
Create tasks for processing skew joins.
|
static void |
GenSparkSkewJoinProcessor.processSkewJoin(JoinOperator joinOp,
Task<? extends Serializable> currTask,
ReduceWork reduceWork,
ParseContext parseCtx) |
PhysicalContext |
PhysicalPlanResolver.resolve(PhysicalContext pctx)
All physical plan resolvers have to implement this entry method.
|
PhysicalContext |
SerializeFilter.resolve(PhysicalContext pctx) |
PhysicalContext |
NullScanOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
SortMergeJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkDynamicPartitionPruningResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
LlapPreVectorizationPass.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkMapJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SkewJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
MetadataOnlyOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
SamplingOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
AnnotateRunTimeStatsOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
MapJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
LlapDecider.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkCrossProductCheck.resolve(PhysicalContext pctx) |
PhysicalContext |
CrossProductHandler.resolve(PhysicalContext pctx) |
PhysicalContext |
BucketingSortingInferenceOptimizer.resolve(PhysicalContext pctx) |
PhysicalContext |
CommonJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
MemoryDecider.resolve(PhysicalContext pctx) |
PhysicalContext |
Vectorizer.resolve(PhysicalContext physicalContext) |
PhysicalContext |
StageIDsRearranger.resolve(PhysicalContext pctx) |
void |
AnnotateRunTimeStatsOptimizer.resolve(Set<Operator<?>> opSet,
ParseContext pctx) |
static void |
AnnotateRunTimeStatsOptimizer.setOrAnnotateStats(Set<Operator<? extends OperatorDesc>> ops,
ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
protected void |
OpProcFactory.FilterPPR.generatePredicate(NodeProcessorCtx procCtx,
FilterOperator fop,
TableScanOperator top) |
static ExprNodeDesc |
ExprProcFactory.genPruner(String tabAlias,
ExprNodeDesc pred)
Generates the partition pruner for the expression tree.
|
static PrunedPartitionList |
PartitionPruner.prune(Table tab,
ExprNodeDesc prunerExpr,
HiveConf conf,
String alias,
Map<String,PrunedPartitionList> prunedPartitionsMap)
Get the partition list for the table that satisfies the partition pruner
condition.
|
static PrunedPartitionList |
PartitionPruner.prune(TableScanOperator ts,
ParseContext parseCtx,
String alias)
Get the partition list for the TS operator that satisfies the partition pruner
condition.
|
ParseContext |
PartitionPruner.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
static void |
SparkSortMergeJoinFactory.annotateMapWork(GenSparkProcContext context,
MapWork mapWork,
SMBMapJoinOperator smbMapJoinOp,
TableScanOperator ts,
boolean local)
Annotate a MapWork; the inputs are an SMBJoinOp that is part of the MapWork and its root TS operator.
|
protected boolean |
SparkSortMergeJoinOptimizer.canConvertJoinToSMBJoin(JoinOperator joinOperator,
SortBucketJoinProcCtx smbJoinContext,
ParseContext pGraphContext,
Stack<Node> stack) |
MapJoinOperator |
SparkMapJoinOptimizer.convertJoinMapJoin(JoinOperator joinOp,
OptimizeSparkProcContext context,
int bigTablePosition) |
protected SMBMapJoinOperator |
SparkSortMergeJoinOptimizer.convertJoinToSMBJoinAndReturn(JoinOperator joinOp,
SortBucketJoinProcCtx smbJoinContext) |
Object |
SparkSMBJoinHintOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SetSparkReducerParallelism.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SparkSortMergeJoinOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkSkewJoinProcFactory.SparkSkewJoinJoinProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkJoinHintOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkJoinOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkMapJoinOptimizer.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkReduceSinkMapJoinProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SparkReduceSinkMapJoinProc.SparkMapJoinFollowedByGroupByProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
PhysicalContext |
CombineEquivalentWorkResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SparkSkewJoinResolver.resolve(PhysicalContext pctx) |
PhysicalContext |
SplitSparkWorkResolver.resolve(PhysicalContext pctx) |
Modifier and Type | Method and Description |
---|---|
protected long |
StatsRulesProcFactory.FilterStatsRule.evaluateExpression(Statistics stats,
ExprNodeDesc pred,
AnnotateStatsProcCtx aspCtx,
List<String> neededCols,
Operator<?> op,
long currNumRows) |
Object |
StatsRulesProcFactory.TableScanStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.SelectStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.FilterStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.GroupByStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.JoinStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.LimitStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.ReduceSinkStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
StatsRulesProcFactory.DefaultStatsRule.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
AnnotateWithStatistics.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
Object |
UnionProcFactory.MapRedUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.MapUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.UnknownUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.UnionNoProcessFile.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcFactory.NoUnion.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
UnionProcessor.transform(ParseContext pCtx)
Transform the query tree.
|
Modifier and Type | Method and Description |
---|---|
static boolean |
RowResolver.add(RowResolver rrToAddTo,
RowResolver rrToAddFrom) |
static boolean |
RowResolver.add(RowResolver rrToAddTo,
RowResolver rrToAddFrom,
int numColumns) |
protected static ArrayList<PTFInvocationSpec.OrderExpression> |
PTFTranslator.addPartitionExpressionsToOrderList(ArrayList<PTFInvocationSpec.PartitionExpression> partCols,
ArrayList<PTFInvocationSpec.OrderExpression> orderCols) |
void |
BaseSemanticAnalyzer.analyze(ASTNode ast,
Context ctx) |
void |
ColumnStatsSemanticAnalyzer.analyze(ASTNode ast,
Context origCtx) |
ColumnAccessInfo |
ColumnAccessAnalyzer.analyzeColumnAccess(ColumnAccessInfo columnAccessInfo) |
protected ASTNode |
SemanticAnalyzer.analyzeCreateView(ASTNode ast,
QB qb,
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.PlannerContext plannerCtx) |
protected void |
BaseSemanticAnalyzer.analyzeDDLSkewedValues(List<List<String>> skewedValues,
ASTNode child)
Handle skewed values in DDL.
|
void |
MaterializedViewRebuildSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ExplainSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ExportSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ExplainSQRewriteSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
LoadSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
FunctionSemanticAnalyzer.analyzeInternal(ASTNode ast) |
abstract void |
BaseSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ReplicationSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
SemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
UpdateDeleteSemanticAnalyzer.analyzeInternal(ASTNode tree) |
void |
CalcitePlanner.analyzeInternal(ASTNode ast) |
void |
DDLSemanticAnalyzer.analyzeInternal(ASTNode input) |
void |
MacroSemanticAnalyzer.analyzeInternal(ASTNode ast) |
void |
ImportSemanticAnalyzer.analyzeInternal(ASTNode ast) |
protected List<String> |
BaseSemanticAnalyzer.analyzeSkewedTablDDLColNames(List<String> skewedColNames,
ASTNode child)
Analyze list bucketing column names.
|
TableAccessInfo |
TableAccessAnalyzer.analyzeTableAccess() |
List<HivePrivilegeObject> |
TableMask.applyRowFilterAndColumnMasking(List<HivePrivilegeObject> privObjs) |
protected static RowResolver |
PTFTranslator.buildRowResolverForNoop(String tabAlias,
StructObjectInspector rowObjectInspector,
RowResolver inputRowResolver) |
protected static RowResolver |
PTFTranslator.buildRowResolverForPTF(String tbFnName,
String tabAlias,
StructObjectInspector rowObjectInspector,
List<String> outputColNames,
RowResolver inputRR) |
protected RowResolver |
PTFTranslator.buildRowResolverForWindowing(WindowTableFunctionDef def) |
static String |
BaseSemanticAnalyzer.charSetString(String charSetName,
String charSetString) |
protected void |
SemanticAnalyzer.checkAcidTxnManager(Table table) |
static void |
SubQueryUtils.checkForTopLevelSubqueries(ASTNode selExprList) |
static void |
ImportSemanticAnalyzer.checkTargetLocationEmpty(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path targetPath,
ReplicationSpec replicationSpec,
org.slf4j.Logger logger) |
static Map<Node,Object> |
GenTezUtils.collectDynamicPruningConditions(ExprNodeDesc pred,
NodeProcessorCtx ctx) |
void |
TaskCompiler.compile(ParseContext pCtx,
List<Task<? extends Serializable>> rootTasks,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
static ArrayList<PTFInvocationSpec> |
PTFTranslator.componentize(PTFInvocationSpec ptfInvocation) |
String |
TableMask.create(HivePrivilegeObject privObject,
MaskAndFilterInfo maskAndFilterInfo) |
static ExprNodeDesc |
ParseUtils.createConversionCast(ExprNodeDesc column,
PrimitiveTypeInfo tableFieldTypeInfo) |
static void |
EximUtil.createDbExportDump(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath,
Database dbObj,
ReplicationSpec replicationSpec) |
static void |
EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath,
Table tableHandle,
Iterable<Partition> partitions,
ReplicationSpec replicationSpec,
HiveConf hiveConf) |
protected static Hive |
BaseSemanticAnalyzer.createHiveDB(HiveConf conf) |
MapWork |
GenTezUtils.createMapWork(GenTezProcContext context,
Operator<?> root,
TezWork tezWork,
PrunedPartitionList partitions) |
protected abstract void |
TaskCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected void |
TezCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected void |
MapReduceCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected List<ExprNodeDesc> |
SemanticAnalyzer.determineSprayKeys(QBParseInfo qbp,
String dest,
RowResolver inputRR) |
static void |
EximUtil.doCheckCompatibility(String currVersion,
String version,
String fcVersion) |
boolean |
SemanticAnalyzer.doPhase1(ASTNode ast,
QB qb,
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.Phase1Ctx ctx_1,
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.PlannerContext plannerCtx)
Phase 1: (including, but not limited to):
1.
|
void |
SemanticAnalyzer.doPhase1QBExpr(ASTNode ast,
QBExpr qbexpr,
String id,
String alias) |
void |
SemanticAnalyzer.doPhase1QBExpr(ASTNode ast,
QBExpr qbexpr,
String id,
String alias,
boolean insideView) |
static String |
ParseUtils.ensureClassExists(String className) |
protected void |
WindowingSpec.WindowSpec.ensureOrderSpec(WindowingSpec.WindowFunctionSpec wFn) |
protected void |
StorageFormat.fillDefaultStorageFormat(boolean isExternal,
boolean isMaterializedView) |
boolean |
StorageFormat.fillStorageFormat(ASTNode child)
Returns true if the passed token was a storage format token
and thus was processed accordingly.
|
Map<ASTNode,ExprNodeDesc> |
SemanticAnalyzer.genAllExprNodeDesc(ASTNode expr,
RowResolver input)
Generates expression node descriptors for the expression and its children
with the default TypeCheckCtx.
|
Map<ASTNode,ExprNodeDesc> |
SemanticAnalyzer.genAllExprNodeDesc(ASTNode expr,
RowResolver input,
TypeCheckCtx tcCtx)
Generates all of the expression node descriptors for the expression and its
children, using the arguments passed in.
|
protected void |
TaskCompiler.genColumnStatsTask(BaseSemanticAnalyzer.AnalyzeRewriteContext analyzeRewrite,
List<LoadFileDesc> loadFileWork,
Map<String,StatsTask> map,
int outerQueryLimit,
int numBitVector)
A helper function to generate a column stats task on top of a map-reduce task.
|
protected abstract void |
TaskCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
TezCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
MapReduceCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
static Map<ASTNode,ExprNodeDesc> |
TypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx) |
protected static Map<ASTNode,ExprNodeDesc> |
TypeCheckProcFactory.genExprNode(ASTNode expr,
TypeCheckCtx tcCtx,
TypeCheckProcFactory tf) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input)
Generates an expression node descriptor for the expression with TypeCheckCtx.
|
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
boolean useCaching) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
boolean useCaching,
boolean foldExpr) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
RowResolver outerRR,
Map<ASTNode,org.apache.calcite.rel.RelNode> subqueryToRelNode,
boolean useCaching) |
ExprNodeDesc |
SemanticAnalyzer.genExprNodeDesc(ASTNode expr,
RowResolver input,
TypeCheckCtx tcCtx)
Returns the expression node descriptor for the expression.
|
protected Operator |
SemanticAnalyzer.genFileSinkPlan(String dest,
QB qb,
Operator input) |
org.apache.calcite.rel.RelNode |
CalcitePlanner.genLogicalPlan(ASTNode ast)
This method is useful if we want to obtain the logical plan after it has been
parsed and optimized by Calcite.
|
String |
SemanticAnalyzer.genPartValueString(String partColType,
String partVal) |
Operator |
SemanticAnalyzer.genPlan(QB qb) |
Operator |
SemanticAnalyzer.genPlan(QB qb,
boolean skipAmbiguityCheck) |
static QBSubQuery.SubQueryType |
QBSubQuery.SubQueryType.get(ASTNode opNode) |
static BaseSemanticAnalyzer |
SemanticAnalyzerFactory.get(QueryState queryState,
ASTNode tree) |
ColumnInfo |
RowResolver.get(String tab_alias,
String col_alias)
Gets the ColumnInfo for a tab_alias.col_alias type of column reference.
|
static ASTNode |
PTFTranslator.getASTNode(ColumnInfo cInfo,
RowResolver rr) |
static CharTypeInfo |
ParseUtils.getCharTypeInfo(ASTNode node) |
protected List<Order> |
BaseSemanticAnalyzer.getColumnNamesOrder(ASTNode ast) |
protected List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast) |
static List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast,
boolean lowerCase,
org.apache.hadoop.conf.Configuration conf)
Get the list of FieldSchema out of the ASTNode.
|
static List<FieldSchema> |
BaseSemanticAnalyzer.getColumns(ASTNode ast,
boolean lowerCase,
org.antlr.runtime.TokenRewriteStream tokenRewriteStream,
List<SQLPrimaryKey> primaryKeys,
List<SQLForeignKey> foreignKeys,
List<SQLUniqueConstraint> uniqueConstraints,
List<SQLNotNullConstraint> notNullConstraints,
List<SQLDefaultConstraint> defaultConstraints,
List<SQLCheckConstraint> checkConstraints,
org.apache.hadoop.conf.Configuration conf)
Get the list of FieldSchema out of the ASTNode.
|
static RowResolver |
RowResolver.getCombinedRR(RowResolver leftRR,
RowResolver rightRR)
Returns a new row resolver that is a combination of the left RR and the right RR.
|
protected Database |
BaseSemanticAnalyzer.getDatabase(String dbName) |
protected Database |
BaseSemanticAnalyzer.getDatabase(String dbName,
boolean throwException) |
static DecimalTypeInfo |
ParseUtils.getDecimalTypeTypeInfo(ASTNode node) |
static String |
BaseSemanticAnalyzer.getDotName(String[] qname) |
protected Table |
SemanticAnalyzer.getDummyTable() |
ColumnInfo |
RowResolver.getExpression(ASTNode node)
Retrieves the ColumnInfo corresponding to a source expression which
exactly matches the string rendering of the given ASTNode.
|
protected String |
SemanticAnalyzer.getFullTableNameForSQL(ASTNode n) |
static GenericUDAFEvaluator |
SemanticAnalyzer.getGenericUDAFEvaluator(String aggName,
ArrayList<ExprNodeDesc> aggParameters,
ASTNode aggTree,
boolean isDistinct,
boolean isAllColumns)
Returns the GenericUDAFEvaluator for the aggregation.
|
static SemanticAnalyzer.GenericUDAFInfo |
SemanticAnalyzer.getGenericUDAFInfo(GenericUDAFEvaluator evaluator,
GenericUDAFEvaluator.Mode emode,
ArrayList<ExprNodeDesc> aggParameters)
Returns the GenericUDAFInfo struct for the aggregation.
|
protected List<Long> |
SemanticAnalyzer.getGroupingSets(List<ASTNode> groupByExpr,
QBParseInfo parseInfo,
String dest) |
void |
SemanticAnalyzer.getMaterializationMetadata(QB qb) |
void |
SemanticAnalyzer.getMetaData(QB qb) |
void |
SemanticAnalyzer.getMetaData(QB qb,
boolean enableMaterialization) |
protected Partition |
BaseSemanticAnalyzer.getPartition(Table table,
Map<String,String> partSpec,
boolean throwException) |
protected List<Partition> |
BaseSemanticAnalyzer.getPartitions(Table table,
Map<String,String> partSpec,
boolean throwException) |
static Map<String,String> |
AnalyzeCommandUtils.getPartKeyValuePairsFromAST(Table tbl,
ASTNode tree,
HiveConf hiveConf) |
static HashMap<String,String> |
DDLSemanticAnalyzer.getPartSpec(ASTNode partspec) |
PrunedPartitionList |
ParseContext.getPrunedPartitions(String alias,
TableScanOperator ts) |
PrunedPartitionList |
ParseContext.getPrunedPartitions(TableScanOperator ts) |
static String[] |
BaseSemanticAnalyzer.getQualifiedTableName(ASTNode tabNameNode) |
protected List<String> |
BaseSemanticAnalyzer.getSkewedValuesFromASTNode(Node node)
Retrieve skewed values from ASTNode.
|
static Table |
AnalyzeCommandUtils.getTable(ASTNode tree,
BaseSemanticAnalyzer sa) |
protected Table |
BaseSemanticAnalyzer.getTable(String tblName) |
protected Table |
BaseSemanticAnalyzer.getTable(String[] qualified) |
protected Table |
BaseSemanticAnalyzer.getTable(String[] qualified,
boolean throwException) |
protected Table |
BaseSemanticAnalyzer.getTable(String tblName,
boolean throwException) |
protected Table |
BaseSemanticAnalyzer.getTable(String database,
String tblName,
boolean throwException) |
static String |
DDLSemanticAnalyzer.getTypeName(ASTNode node) |
protected static String |
BaseSemanticAnalyzer.getTypeStringFromAST(ASTNode typeNode) |
static HashMap<String,String> |
DDLSemanticAnalyzer.getValidatedPartSpec(Table table,
ASTNode astNode,
HiveConf conf,
boolean shouldBeFull) |
static URI |
EximUtil.getValidatedURI(HiveConf conf,
String dcPath)
Initializes the URI where the exported data collection is
to be created for export, or is present for import.
|
static VarcharTypeInfo |
ParseUtils.getVarcharTypeInfo(ASTNode node) |
protected ExprNodeDesc |
TypeCheckProcFactory.DefaultExprProcessor.getXpathOrFuncExprNodeDesc(ASTNode expr,
boolean isFunction,
ArrayList<ExprNodeDesc> children,
TypeCheckCtx ctx) |
RowResolver |
SemanticAnalyzer.handleInsertStatementSpec(List<ExprNodeDesc> col_list,
String dest,
RowResolver outputRR,
RowResolver inputRR,
QB qb,
ASTNode selExprList)
This modifies the Select projections when the Select is part of an insert statement and
the insert statement specifies a column list for the target table, e.g.
|
void |
ColumnStatsAutoGatherContext.insertAnalyzePipeline() |
static boolean |
UpdateDeleteSemanticAnalyzer.isAcidExport(ASTNode tree)
Exporting an Acid table is more complicated than exporting a flat table.
|
boolean |
TableMask.isEnabled() |
static ExprNodeGenericFuncDesc |
DDLSemanticAnalyzer.makeBinaryPredicate(String fn,
ExprNodeDesc left,
ExprNodeDesc right) |
static ExprNodeGenericFuncDesc |
DDLSemanticAnalyzer.makeUnaryPredicate(String fn,
ExprNodeDesc arg) |
boolean |
TableMask.needTransform() |
WindowingSpec |
WindowingComponentizer.next(HiveConf hCfg,
SemanticAnalyzer semAly,
UnparseTranslator unparseT,
RowResolver inputRR) |
protected void |
TaskCompiler.optimizeOperatorPlan(ParseContext pCtxSet,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
TezCompiler.optimizeOperatorPlan(ParseContext pCtx,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected abstract void |
TaskCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
protected void |
TezCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
protected void |
MapReduceCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
static ArrayList<WindowingSpec.WindowExpressionSpec> |
SemanticAnalyzer.parseSelect(String selectExprStr) |
void |
AbstractSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks) |
void |
HiveSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
Invoked after Hive performs its own semantic analysis on a
statement (including optimization).
|
ASTNode |
AbstractSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast) |
ASTNode |
HiveSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast)
Invoked before Hive performs its own semantic analysis on
a statement.
|
Object |
FileSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenTezWork.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
PrintOpTreeProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx ctx,
Object... nodeOutputs) |
Object |
ProcessAnalyzeTable.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.NullExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.NumExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.StrExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.BoolExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.DateTimeExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.IntervalExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
TypeCheckProcFactory.SubQueryExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
GenTezUtils.DynamicPartitionPrunerProc.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
The process method simply remembers all of the dynamic partition pruning
expressions found.
|
Object |
AppMasterEventProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
UnionProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
protected static void |
BaseSemanticAnalyzer.processCheckConstraints(String catName,
String databaseName,
String tableName,
ASTNode child,
List<String> columnNames,
List<SQLCheckConstraint> checkConstraints,
ASTNode typeChild,
org.antlr.runtime.TokenRewriteStream tokenRewriteStream) |
protected static void |
BaseSemanticAnalyzer.processDefaultConstraints(String catName,
String databaseName,
String tableName,
ASTNode child,
List<String> columnNames,
List<SQLDefaultConstraint> defaultConstraints,
ASTNode typeChild,
org.antlr.runtime.TokenRewriteStream tokenRewriteStream) |
static void |
GenTezUtils.processDynamicSemiJoinPushDownOperator(GenTezProcContext procCtx,
RuntimeValuesInfo runtimeValuesInfo,
ReduceSinkOperator rs) |
static void |
GenTezUtils.processFileSink(GenTezProcContext context,
FileSinkOperator fileSink) |
protected static void |
BaseSemanticAnalyzer.processForeignKeys(String databaseName,
String tableName,
ASTNode child,
List<SQLForeignKey> foreignKeys)
Processes the foreign keys from the AST and populates the SQLForeignKey list.
|
static ExprNodeDesc |
TypeCheckProcFactory.processGByExpr(Node nd,
Object procCtx)
Function to do groupby subexpression elimination.
|
protected void |
SemanticAnalyzer.processNoScanCommand(ASTNode tree)
process analyze ...
|
protected static void |
BaseSemanticAnalyzer.processNotNullConstraints(String catName,
String databaseName,
String tableName,
ASTNode child,
List<String> columnNames,
List<SQLNotNullConstraint> notNullConstraints) |
void |
SemanticAnalyzer.processPositionAlias(ASTNode ast) |
protected static void |
BaseSemanticAnalyzer.processPrimaryKeys(String databaseName,
String tableName,
ASTNode child,
List<SQLPrimaryKey> primaryKeys)
Processes the primary keys from the AST node and populates the SQLPrimaryKey list.
|
protected static void |
BaseSemanticAnalyzer.processPrimaryKeys(String databaseName,
String tableName,
ASTNode child,
List<String> columnNames,
List<SQLPrimaryKey> primaryKeys) |
protected ExprNodeDesc |
TypeCheckProcFactory.DefaultExprProcessor.processQualifiedColRef(TypeCheckCtx ctx,
ASTNode expr,
Object... nodeOutputs) |
protected void |
StorageFormat.processStorageFormat(String name) |
protected static void |
BaseSemanticAnalyzer.processUniqueConstraints(String catName,
String databaseName,
String tableName,
ASTNode child,
List<SQLUniqueConstraint> uniqueConstraints)
Processes the unique constraints from the AST node and populates the SQLUniqueConstraint list.
|
protected static void |
BaseSemanticAnalyzer.processUniqueConstraints(String catName,
String databaseName,
String tableName,
ASTNode child,
List<String> columnNames,
List<SQLUniqueConstraint> uniqueConstraints) |
boolean |
RowResolver.putWithCheck(String tabAlias,
String colAlias,
String internalName,
ColumnInfo newCI)
Adds column to RR, checking for duplicate columns.
|
static MetaData |
EximUtil.readMetaData(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath) |
static String |
EximUtil.relativeToAbsolutePath(HiveConf conf,
String location) |
static void |
GenTezUtils.removeSemiJoinOperator(ParseContext context,
AppMasterEventOperator eventOp,
TableScanOperator ts) |
static void |
GenTezUtils.removeSemiJoinOperator(ParseContext context,
ReduceSinkOperator rs,
TableScanOperator ts) |
static void |
GenTezUtils.removeUnionOperators(GenTezProcContext context,
BaseWork work,
int indexForTezUnion) |
static String |
SemanticAnalyzer.replaceDefaultKeywordForMerge(String valueClause,
Table targetTable) |
ASTNode |
ColumnStatsSemanticAnalyzer.rewriteAST(ASTNode ast,
ColumnStatsAutoGatherContext context) |
protected static ASTNode |
SemanticAnalyzer.rewriteASTWithMaskAndFilter(TableMask tableMask,
ASTNode ast,
org.antlr.runtime.TokenRewriteStream tokenRewriteStream,
Context ctx,
Hive db,
Map<String,Table> tabNameToTabObject,
Set<Integer> ignoredTokens) |
protected static ASTNode |
SemanticAnalyzer.rewriteGroupingFunctionAST(List<ASTNode> grpByAstExprs,
ASTNode targetNode,
boolean noneSet) |
protected String |
SemanticAnalyzer.rewriteQueryWithQualifiedNames(ASTNode ast,
org.antlr.runtime.TokenRewriteStream tokenRewriteStream) |
protected void |
SemanticAnalyzer.saveViewDefinition() |
protected void |
GenTezUtils.setupMapWork(MapWork mapWork,
GenTezProcContext context,
PrunedPartitionList partitions,
TableScanOperator root,
String alias) |
void |
GenTezWorkWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
protected ReadEntity |
BaseSemanticAnalyzer.toReadEntity(org.apache.hadoop.fs.Path location) |
static ReadEntity |
BaseSemanticAnalyzer.toReadEntity(org.apache.hadoop.fs.Path location,
HiveConf conf) |
protected ReadEntity |
BaseSemanticAnalyzer.toReadEntity(String location) |
protected WriteEntity |
BaseSemanticAnalyzer.toWriteEntity(org.apache.hadoop.fs.Path location) |
static WriteEntity |
BaseSemanticAnalyzer.toWriteEntity(org.apache.hadoop.fs.Path location,
HiveConf conf) |
protected WriteEntity |
BaseSemanticAnalyzer.toWriteEntity(String location) |
PTFDesc |
PTFTranslator.translate(PTFInvocationSpec qSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
UnparseTranslator unparseT) |
PTFDesc |
PTFTranslator.translate(WindowingSpec wdwSpec,
SemanticAnalyzer semAly,
HiveConf hCfg,
RowResolver inputRR,
UnparseTranslator unparseT) |
void |
BaseSemanticAnalyzer.validate() |
void |
SemanticAnalyzer.validate() |
void |
WindowingSpec.validateAndMakeEffective() |
static void |
BaseSemanticAnalyzer.validateCheckConstraint(List<FieldSchema> cols,
List<SQLCheckConstraint> checkConstraints,
org.apache.hadoop.conf.Configuration conf) |
static List<String> |
ParseUtils.validateColumnNameUniqueness(List<FieldSchema> fieldSchemas) |
protected static void |
PTFTranslator.validateComparable(ObjectInspector OI,
String errMsg) |
static void |
PTFTranslator.validateNoLeadLagInValueBoundarySpec(ASTNode node) |
static void |
BaseSemanticAnalyzer.validatePartColumnType(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf) |
static void |
BaseSemanticAnalyzer.validatePartSpec(Table tbl,
Map<String,String> partSpec,
ASTNode astNode,
HiveConf conf,
boolean shouldBeFull) |
protected void |
TypeCheckProcFactory.DefaultExprProcessor.validateUDF(ASTNode expr,
boolean isFunction,
TypeCheckCtx ctx,
FunctionInfo fi,
List<ExprNodeDesc> children,
GenericUDF genericUDF) |
protected void |
TezWalker.walk(Node nd)
Walk the given operator.
|
protected void |
GenMapRedWalker.walk(Node nd)
Walk the given operator.
|
protected void |
GenTezWorkWalker.walk(Node nd)
Walk the given operator.
|
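
The preAnalyze/postAnalyze pair above is the extension point for semantic analyzer hooks, typically registered through the `hive.semantic.analyzer.hook` property. A minimal sketch, assuming the standard `org.apache.hadoop.hive.ql.parse` hook classes; the logging bodies are illustrative:

```java
import java.io.Serializable;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.Task;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook;
import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext;
import org.apache.hadoop.hive.ql.parse.SemanticException;

public class AuditHook extends AbstractSemanticAnalyzerHook {

  @Override
  public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
      throws SemanticException {
    // Runs before Hive's own semantic analysis; may rewrite and return the AST.
    System.out.println("about to analyze: " + ast.dump());
    return ast;
  }

  @Override
  public void postAnalyze(HiveSemanticAnalyzerHookContext context,
      List<Task<? extends Serializable>> rootTasks) throws SemanticException {
    // Runs after analysis and optimization; read/write entities are resolved.
    System.out.println("inputs: " + context.getInputs()
        + ", outputs: " + context.getOutputs());
  }
}
```
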
Modifier and Type | Method and Description |
---|---|
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createCreateRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createDropRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createGrantRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createGrantTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createGrantTask(ASTNode ast,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createRevokeRoleTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createRevokeTask(ASTNode node,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createRevokeTask(ASTNode ast,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createSetRoleTask(String roleName,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createSetRoleTask(String roleName,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowCurrentRoleTask(HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs,
org.apache.hadoop.fs.Path resFile) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowCurrentRoleTask(HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs,
org.apache.hadoop.fs.Path resFile) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowGrantTask(ASTNode node,
org.apache.hadoop.fs.Path resultFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowGrantTask(ASTNode ast,
org.apache.hadoop.fs.Path resultFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowRoleGrantTask(ASTNode node,
org.apache.hadoop.fs.Path resultFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowRolePrincipalsTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowRolePrincipalsTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactory.createShowRolesTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
Task<? extends Serializable> |
HiveAuthorizationTaskFactoryImpl.createShowRolesTask(ASTNode ast,
org.apache.hadoop.fs.Path resFile,
HashSet<ReadEntity> inputs,
HashSet<WriteEntity> outputs) |
protected PrivilegeObjectDesc |
HiveAuthorizationTaskFactoryImpl.parsePrivObject(ASTNode ast) |
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.fs.Path |
PathBuilder.fullyQualifiedHDFSUri(org.apache.hadoop.fs.Path input,
org.apache.hadoop.fs.FileSystem hdfsFileSystem) |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.fs.Path |
TableExport.Paths.exportRootDir()
Access to TableExport.Paths._exportRootDir should only go through this method,
since creation of the directory is delayed until we know whether anything
will be written. |
TableExport.AuthEntities |
TableExport.getAuthEntities() |
boolean |
TableExport.write() |
static void |
Utils.writeOutput(List<String> values,
org.apache.hadoop.fs.Path outputFile,
HiveConf hiveConf) |
Constructor and Description |
---|
Paths(String astRepresentationForErrorMsg,
org.apache.hadoop.fs.Path dbRoot,
String tblName,
HiveConf conf,
boolean shouldWriteData) |
Paths(String astRepresentationForErrorMsg,
String path,
HiveConf conf,
boolean shouldWriteData) |
Modifier and Type | Method and Description |
---|---|
void |
ConstraintsSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
JsonWriter.Serializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
ReplicationSpecSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
DBSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
FunctionSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
VersionCompatibleSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
TableSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
void |
PartitionSerializer.writeTo(JsonWriter writer,
ReplicationSpec additionalPropertiesProvider) |
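
Each writeTo implementation above contributes one JSON fragment to a replication dump. A sketch of a custom Serializer; it assumes JsonWriter exposes its Jackson JsonGenerator as a public `jsonGenerator` field, as the built-in serializers appear to, so treat that field access as an assumption:

```java
import org.apache.hadoop.hive.ql.parse.ReplicationSpec;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.parse.repl.dump.io.JsonWriter;

public class NoteSerializer implements JsonWriter.Serializer {
  private final String note;

  public NoteSerializer(String note) {
    this.note = note;
  }

  @Override
  public void writeTo(JsonWriter writer, ReplicationSpec additionalPropertiesProvider)
      throws SemanticException {
    try {
      // Emit one extra field into the dump metadata being written.
      writer.jsonGenerator.writeStringField("note", note);
    } catch (Exception e) {
      throw new SemanticException("failed to serialize note", e);
    }
  }
}
```
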
Modifier and Type | Method and Description |
---|---|
DumpType |
DumpMetaData.getDumpType() |
Long |
DumpMetaData.getEventFrom() |
Long |
DumpMetaData.getEventTo() |
String |
DumpMetaData.getPayload() |
boolean |
DumpMetaData.isIncrementalDump() |
void |
DumpMetaData.write() |
Constructor and Description |
---|
MetadataJson(String message) |
Modifier and Type | Method and Description |
---|---|
void |
GenSparkUtils.annotateMapWork(GenSparkProcContext context)
Fill MapWork with 'local' work and bucket information for SMB Join.
|
MapWork |
GenSparkUtils.createMapWork(GenSparkProcContext context,
Operator<?> root,
SparkWork sparkWork,
PrunedPartitionList partitions) |
MapWork |
GenSparkUtils.createMapWork(GenSparkProcContext context,
Operator<?> root,
SparkWork sparkWork,
PrunedPartitionList partitions,
boolean deferSetup) |
ReduceWork |
GenSparkUtils.createReduceWork(GenSparkProcContext context,
Operator<?> root,
SparkWork sparkWork) |
protected void |
SparkCompiler.decideExecMode(List<Task<? extends Serializable>> rootTasks,
Context ctx,
GlobalLimitCtx globalLimitCtx) |
protected void |
SparkCompiler.generateTaskTree(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
List<Task<MoveWork>> mvTask,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs)
TODO: need to turn on rules that are commented out and add more if necessary.
|
static <T> T |
GenSparkUtils.getChildOperator(Operator<?> root,
Class<T> klazz) |
static SparkEdgeProperty |
GenSparkUtils.getEdgeProperty(HiveConf conf,
ReduceSinkOperator reduceSink,
ReduceWork reduceWork) |
protected void |
SparkCompiler.optimizeOperatorPlan(ParseContext pCtx,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs) |
protected void |
SparkCompiler.optimizeTaskPlan(List<Task<? extends Serializable>> rootTasks,
ParseContext pCtx,
Context ctx) |
Object |
SplitOpTreeForDPP.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
SparkProcessAnalyzeTable.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
GenSparkWork.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procContext,
Object... nodeOutputs) |
Object |
SparkFileSinkProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
void |
GenSparkUtils.processFileSink(GenSparkProcContext context,
FileSinkOperator fileSink) |
void |
GenSparkUtils.removeUnionOperators(GenSparkProcContext context,
BaseWork work) |
protected void |
GenSparkUtils.setupMapWork(MapWork mapWork,
GenSparkProcContext context,
PrunedPartitionList partitions,
TableScanOperator root,
String alias_id) |
void |
GenSparkWorkWalker.startWalking(Collection<Node> startNodes,
HashMap<Node,Object> nodeOutput)
Starting point for walking.
|
protected void |
GenSparkWorkWalker.walk(Node nd)
Walk the given operator.
|
Modifier and Type | Method and Description |
---|---|
static ExprNodeDesc |
ExprNodeDescUtils.backtrack(ExprNodeDesc source,
Operator<?> current,
Operator<?> terminal) |
static ExprNodeDesc |
ExprNodeDescUtils.backtrack(ExprNodeDesc source,
Operator<?> current,
Operator<?> terminal,
boolean foldExpr) |
static ArrayList<ExprNodeDesc> |
ExprNodeDescUtils.backtrack(List<ExprNodeDesc> sources,
Operator<?> current,
Operator<?> terminal)
Converts expressions in the current operator to those in the terminal operator,
which is an ancestor of the current operator or null (back to the top operator).
|
static ArrayList<ExprNodeDesc> |
ExprNodeDescUtils.backtrack(List<ExprNodeDesc> sources,
Operator<?> current,
Operator<?> terminal,
boolean foldExpr) |
static boolean |
ExprNodeDescUtils.checkPrefixKeys(List<ExprNodeDesc> childKeys,
List<ExprNodeDesc> parentKeys,
Operator<? extends OperatorDesc> childOp,
Operator<? extends OperatorDesc> parentOp)
Checks whether the keys of a parent operator are a prefix of the keys of a
child operator.
|
static boolean |
ExprNodeDescUtils.checkPrefixKeysUpstream(List<ExprNodeDesc> childKeys,
List<ExprNodeDesc> parentKeys,
Operator<? extends OperatorDesc> childOp,
Operator<? extends OperatorDesc> parentOp)
Checks whether the keys of a child operator are a prefix of the keys of a
parent operator.
|
static ReduceSinkDesc |
PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols,
ArrayList<ExprNodeDesc> valueCols,
List<String> outputColumnNames,
boolean includeKey,
int tag,
int numPartitionFields,
int numReducers,
AcidUtils.Operation writeType)
Create the reduce sink descriptor.
|
static ReduceSinkDesc |
PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols,
int numKeys,
ArrayList<ExprNodeDesc> valueCols,
List<List<Integer>> distinctColIndices,
List<String> outputKeyColumnNames,
List<String> outputValueColumnNames,
boolean includeKey,
int tag,
int numPartitionFields,
int numReducers,
AcidUtils.Operation writeType)
Create the reduce sink descriptor.
|
static Operator<?> |
ExprNodeDescUtils.getSingleParent(Operator<?> current,
Operator<?> terminal) |
String |
ImportTableDesc.getTableName() |
static void |
ExprNodeDescUtils.replaceEqualDefaultPartition(ExprNodeDesc origin,
String defaultPartitionName) |
void |
AlterTableDesc.setOldName(String oldName) |
void |
ImportTableDesc.setTableName(String tableName) |
void |
AlterTableDesc.validate()
Validate alter table description.
|
void |
CreateTableDesc.validate(HiveConf conf) |
static void |
ValidationUtility.validateSkewedColNames(List<String> colNames,
List<String> skewedColNames)
Skewed column names must be valid, defined columns.
|
static void |
ValidationUtility.validateSkewedColNameValueNumberMatch(List<String> skewedColNames,
List<List<String>> skewedColValues)
The number of skewed column names and skewed column values must match.
|
static void |
ValidationUtility.validateSkewedColumnNameUniqueness(List<String> names)
Finds duplicate names.
|
static void |
ValidationUtility.validateSkewedInformation(List<String> colNames,
List<String> skewedColNames,
List<List<String>> skewedColValues)
Validate skewed table information.
|
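
The ValidationUtility checks above encode three simple structural rules for skewed tables. An illustrative re-implementation of the same rules in plain Java (names and exception type are placeholders, not Hive's actual code):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class SkewedInfoChecks {

  public static void validate(List<String> colNames,
      List<String> skewedColNames, List<List<String>> skewedColValues) {
    // Rule 1: every skewed column must be a defined table column.
    for (String skewed : skewedColNames) {
      if (!colNames.contains(skewed)) {
        throw new IllegalArgumentException("unknown skewed column: " + skewed);
      }
    }
    // Rule 2: each skewed value tuple must have one value per skewed column.
    for (List<String> values : skewedColValues) {
      if (values.size() != skewedColNames.size()) {
        throw new IllegalArgumentException("value count " + values.size()
            + " does not match " + skewedColNames.size() + " skewed columns");
      }
    }
    // Rule 3: skewed column names must be unique.
    Set<String> seen = new HashSet<>();
    for (String skewed : skewedColNames) {
      if (!seen.add(skewed)) {
        throw new IllegalArgumentException("duplicate skewed column: " + skewed);
      }
    }
  }
}
```
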
Constructor and Description |
---|
AlterTableDesc(String tableName,
boolean sortingOff,
HashMap<String,String> partSpec) |
AlterTableDesc(String tableName,
boolean turnOffSkewed,
List<String> skewedColNames,
List<List<String>> skewedColValues) |
AlterTableDesc(String tableName,
HashMap<String,String> partSpec,
int numBuckets) |
AlterTableDesc(String name,
HashMap<String,String> partSpec,
List<FieldSchema> newCols,
AlterTableDesc.AlterTableTypes alterType,
boolean isCascade) |
AlterTableDesc(String tblName,
HashMap<String,String> partSpec,
String oldColName,
String newColName,
String newType,
String newComment,
boolean first,
String afterCol,
boolean isCascade) |
AlterTableDesc(String tblName,
HashMap<String,String> partSpec,
String oldColName,
String newColName,
String newType,
String newComment,
boolean first,
String afterCol,
boolean isCascade,
List<SQLPrimaryKey> primaryKeyCols,
List<SQLForeignKey> foreignKeyCols,
List<SQLUniqueConstraint> uniqueConstraintCols,
List<SQLNotNullConstraint> notNullConstraintCols,
List<SQLDefaultConstraint> defaultConstraints,
List<SQLCheckConstraint> checkConstraints) |
AlterTableDesc(String tableName,
int numBuckets,
List<String> bucketCols,
List<Order> sortCols,
HashMap<String,String> partSpec) |
AlterTableDesc(String tableName,
List<SQLPrimaryKey> primaryKeyCols,
List<SQLForeignKey> foreignKeyCols,
List<SQLUniqueConstraint> uniqueConstraintCols,
List<SQLNotNullConstraint> notNullConstraintCols,
List<SQLDefaultConstraint> defaultConstraints,
List<SQLCheckConstraint> checkConstraints,
ReplicationSpec replicationSpec) |
AlterTableDesc(String tableName,
List<SQLPrimaryKey> primaryKeyCols,
List<SQLForeignKey> foreignKeyCols,
List<SQLUniqueConstraint> uniqueConstraintCols,
ReplicationSpec replicationSpec) |
AlterTableDesc(String tableName,
Map<List<String>,String> locations,
HashMap<String,String> partSpec) |
AlterTableDesc(String oldName,
String newName,
boolean expectView,
ReplicationSpec replicationSpec) |
AlterTableDesc(String tableName,
String newLocation,
HashMap<String,String> partSpec) |
AlterTableDesc(String tableName,
String dropConstraintName,
ReplicationSpec replicationSpec) |
AlterTableDesc(String name,
String inputFormat,
String outputFormat,
String serdeName,
String storageHandler,
HashMap<String,String> partSpec) |
DynamicPartitionCtx(Table tbl,
Map<String,String> partSpec,
String defaultPartName,
int maxParts) |
Modifier and Type | Method and Description |
---|---|
protected static Object |
OpProcFactory.createFilter(Operator op,
ExprWalkerInfo pushDownPreds,
OpWalkerInfo owi) |
protected static Object |
OpProcFactory.createFilter(Operator op,
Map<String,List<ExprNodeDesc>> predicates,
OpWalkerInfo owi) |
static ExprWalkerInfo |
ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext,
Operator<? extends OperatorDesc> op,
ExprNodeDesc pred) |
static ExprWalkerInfo |
ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext,
Operator<? extends OperatorDesc> op,
List<ExprNodeDesc> preds)
Extracts pushdown predicates from the given list of predicate expressions.
|
protected Set<String> |
OpProcFactory.JoinerPPD.getAliases(Node nd) |
protected Object |
OpProcFactory.JoinerPPD.handlePredicates(Node nd,
ExprWalkerInfo prunePreds,
OpWalkerInfo owi) |
protected ExprWalkerInfo |
OpProcFactory.DefaultPPD.mergeChildrenPred(Node nd,
OpWalkerInfo owi,
Set<String> excludedAliases,
boolean ignoreAliases) |
protected boolean |
OpProcFactory.DefaultPPD.mergeWithChildrenPred(Node nd,
OpWalkerInfo owi,
ExprWalkerInfo ewi,
Set<String> aliases)
Takes the current operator's pushdown predicates and merges them with its
children's pushdown predicates.
|
Object |
OpProcFactory.ScriptPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.PTFPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.UDTFPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.LateralViewForwardPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.TableScanPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.FilterPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.SimpleFilterPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.JoinerPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.ReduceSinkPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
OpProcFactory.DefaultPPD.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprWalkerProcFactory.ColumnExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Converts the reference from the child row resolver to the current row resolver.
|
Object |
ExprWalkerProcFactory.FieldExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprWalkerProcFactory.GenericFuncExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
Object |
ExprWalkerProcFactory.DefaultExprProcessor.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs) |
ParseContext |
PredicatePushDown.transform(ParseContext pctx) |
ParseContext |
SyntheticJoinPredicate.transform(ParseContext pctx) |
ParseContext |
SimplePredicatePushDown.transform(ParseContext pctx) |
ParseContext |
PredicateTransitivePropagate.transform(ParseContext pctx) |
Modifier and Type | Method and Description |
---|---|
List<HivePrivilegeObject> |
HiveAuthorizationValidator.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
HiveV1Authorizer.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
HiveAuthorizerImpl.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
HiveAuthorizer.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs)
applyRowFilterAndColumnMasking is called once for each table in a query.
|
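
Since applyRowFilterAndColumnMasking is called once per table, an authorizer typically inspects each HivePrivilegeObject and attaches a row filter and per-column transformers to those that need rewriting. A sketch, assuming the setRowFilterExpression/setCellValueTransformers mutators used by plugin authorizers; the table name, predicate, and mask expression are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAuthzContext;
import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject;

public class MaskingSketch {

  // Mirrors the HiveAuthorizer method listed above; not an @Override here.
  public List<HivePrivilegeObject> applyRowFilterAndColumnMasking(
      HiveAuthzContext context, List<HivePrivilegeObject> privObjs) {
    List<HivePrivilegeObject> needRewrite = new ArrayList<>();
    for (HivePrivilegeObject obj : privObjs) { // one object per table in the query
      if ("sensitive_table".equals(obj.getObjectName()) && obj.getColumns() != null) {
        // Row filter: only rows matching this predicate become visible.
        obj.setRowFilterExpression("region = 'EU'");
        // Cell transformers: one entry per column, in column order;
        // "ssn" is masked, every other column passes through unchanged.
        List<String> transformers = new ArrayList<>();
        for (String col : obj.getColumns()) {
          transformers.add("ssn".equals(col) ? "mask(ssn)" : col);
        }
        obj.setCellValueTransformers(transformers);
        needRewrite.add(obj);
      }
    }
    return needRewrite; // objects whose queries Hive must rewrite
  }
}
```
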
Modifier and Type | Method and Description |
---|---|
List<HivePrivilegeObject> |
FallbackHiveAuthorizer.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
Modifier and Type | Method and Description |
---|---|
List<HivePrivilegeObject> |
DummyHiveAuthorizationValidator.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
List<HivePrivilegeObject> |
SQLStdHiveAuthorizationValidator.applyRowFilterAndColumnMasking(HiveAuthzContext context,
List<HivePrivilegeObject> privObjs) |
Modifier and Type | Method and Description |
---|---|
void |
LineageInfo.getLineageInfo(String query)
Parses the given query and gets the lineage info.
|
static void |
LineageInfo.main(String[] args) |
Object |
LineageInfo.process(Node nd,
Stack<Node> stack,
NodeProcessorCtx procCtx,
Object... nodeOutputs)
Implements the process method for the NodeProcessor interface.
|
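
LineageInfo can be used standalone to extract input and output tables from a query string. A sketch, assuming the getInputTableList/getOutputTableList accessors on org.apache.hadoop.hive.ql.tools.LineageInfo; the query is illustrative:

```java
import org.apache.hadoop.hive.ql.tools.LineageInfo;

public class LineageDemo {
  public static void main(String[] args) throws Exception {
    String query = "INSERT OVERWRITE TABLE dest SELECT a.key, a.value FROM src a";
    LineageInfo lep = new LineageInfo();
    lep.getLineageInfo(query); // parses the query; nothing is executed
    System.out.println("inputs:  " + lep.getInputTableList());
    System.out.println("outputs: " + lep.getOutputTableList());
  }
}
```
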
Modifier and Type | Method and Description |
---|---|
GenericUDAFEvaluator |
GenericUDAFSum.getEvaluator(GenericUDAFParameterInfo info) |
GenericUDAFEvaluator |
GenericUDAFPercentileApprox.getEvaluator(GenericUDAFParameterInfo info) |
GenericUDAFEvaluator |
GenericUDAFCount.getEvaluator(GenericUDAFParameterInfo paramInfo) |
GenericUDAFEvaluator |
GenericUDAFBloomFilter.getEvaluator(GenericUDAFParameterInfo info) |
GenericUDAFEvaluator |
GenericUDAFAverage.getEvaluator(GenericUDAFParameterInfo paramInfo) |
GenericUDAFEvaluator |
GenericUDAFResolver2.getEvaluator(GenericUDAFParameterInfo info)
Deprecated.
Get the evaluator for the parameter types.
|
GenericUDAFEvaluator |
GenericUDAFLeadLag.getEvaluator(GenericUDAFParameterInfo parameters) |
GenericUDAFEvaluator |
AbstractGenericUDAFResolver.getEvaluator(GenericUDAFParameterInfo info)
Deprecated.
|
GenericUDAFEvaluator |
GenericUDAFFirstValue.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFCollectSet.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFHistogramNumeric.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFCovariance.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFCollectList.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFStd.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFContextNGrams.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFComputeStats.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFSum.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFSumEmptyIsZero.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFStdSample.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFMin.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFRowNumber.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFVariance.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFVarianceSample.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFNTile.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFCount.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBloomFilter.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFCorrelation.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFAverage.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBridge.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFMax.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFResolver.getEvaluator(TypeInfo[] parameters)
Deprecated.
Get the evaluator for the parameter types.
|
GenericUDAFEvaluator |
GenericUDAFRank.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFCovarianceSample.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
AbstractGenericUDAFResolver.getEvaluator(TypeInfo[] info)
Deprecated.
|
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrCount.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrSXX.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrSYY.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrAvgX.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrAvgY.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrSlope.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrR2.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrSXY.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFBinarySetFunctions.RegrIntercept.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFLastValue.getEvaluator(TypeInfo[] parameters) |
GenericUDAFEvaluator |
GenericUDAFnGrams.getEvaluator(TypeInfo[] parameters) |
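
Each getEvaluator overload above maps argument types to a concrete evaluator. A minimal resolver sketch using the TypeInfo[] entry point; for brevity it delegates to Hive's built-in GenericUDAFSum.GenericUDAFSumLong evaluator (assumed public here), whereas a real UDAF would supply its own:

```java
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.udf.generic.AbstractGenericUDAFResolver;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;

public class GenericUDAFMySum extends AbstractGenericUDAFResolver {

  @Override
  public GenericUDAFEvaluator getEvaluator(TypeInfo[] parameters)
      throws SemanticException {
    // Validate arity first, then the argument's category, and only then
    // hand back an evaluator specialized for that type.
    if (parameters.length != 1) {
      throw new UDFArgumentTypeException(parameters.length - 1,
          "Exactly one argument is expected.");
    }
    if (parameters[0].getCategory() != ObjectInspector.Category.PRIMITIVE) {
      throw new UDFArgumentTypeException(0,
          "Only primitive arguments are accepted, got "
              + parameters[0].getTypeName());
    }
    switch (((PrimitiveTypeInfo) parameters[0]).getPrimitiveCategory()) {
      case BYTE: case SHORT: case INT: case LONG:
        return new GenericUDAFSum.GenericUDAFSumLong(); // built-in evaluator
      default:
        throw new UDFArgumentTypeException(0,
            "Integral argument expected, got " + parameters[0].getTypeName());
    }
  }
}
```
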
Modifier and Type | Method and Description |
---|---|
static ExprNodeDesc |
MatchPath.ResultExpressionParser.buildExprNode(ASTNode expr,
TypeCheckCtx typeCheckCtx) |
protected static RowResolver |
MatchPath.createSelectListRR(MatchPath evaluator,
PTFInputDef inpDef) |
abstract List<String> |
TableFunctionResolver.getOutputColumnNames() |
List<String> |
TableFunctionResolver.getRawInputColumnNames() |
ArrayList<String> |
NoopWithMap.NoopWithMapResolver.getRawInputColumnNames() |
List<String> |
MatchPath.MatchPathResolver.getReferencedColumns() |
List<String> |
TableFunctionResolver.getReferencedColumns()
Provides referenced column names to be used in the partition function.
|
void |
TableFunctionResolver.initialize(HiveConf cfg,
PTFDesc ptfDesc,
PartitionedTableFunctionDef tDef) |
void |
MatchPath.SymbolParser.parse() |
void |
MatchPath.MatchPathResolver.setupOutputOI()
Checks the structure of the arguments: the first argument should be a String;
then there should be an even number of arguments alternating String and
expression, where each expression should be convertible to Boolean.
|
void |
Noop.NoopResolver.setupOutputOI() |
abstract void |
TableFunctionResolver.setupOutputOI() |
void |
WindowingTableFunction.WindowingTableFunctionResolver.setupOutputOI() |
void |
NoopWithMap.NoopWithMapResolver.setupOutputOI() |
void |
TableFunctionResolver.setupRawInputOI() |
void |
NoopWithMap.NoopWithMapResolver.setupRawInputOI() |
void |
MatchPath.ResultExpressionParser.translate() |
Modifier and Type | Method and Description |
---|---|
protected void |
HCatSemanticAnalyzerBase.authorize(Database db,
Privilege priv) |
protected void |
HCatSemanticAnalyzerBase.authorize(Partition part,
Privilege priv) |
protected void |
HCatSemanticAnalyzerBase.authorize(Privilege[] inputPrivs,
Privilege[] outputPrivs) |
protected void |
HCatSemanticAnalyzerBase.authorize(Table table,
Privilege priv) |
protected void |
HCatSemanticAnalyzerBase.authorizeDDL(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks)
Checks the given rootTasks and calls authorizeDDLWork() for each DDLWork to
be authorized.
|
void |
HCatSemanticAnalyzerBase.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks) |
void |
HCatSemanticAnalyzer.postAnalyze(HiveSemanticAnalyzerHookContext context,
List<Task<? extends Serializable>> rootTasks) |
ASTNode |
HCatSemanticAnalyzer.preAnalyze(HiveSemanticAnalyzerHookContext context,
ASTNode ast) |
Copyright © 2022 The Apache Software Foundation. All rights reserved.