public class VectorSparkPartitionPruningSinkOperator extends SparkPartitionPruningSinkOperator

Nested classes/interfaces inherited from class Operator: Operator.OperatorFunc, Operator.State
Modifier and Type | Field and Description
---|---
protected boolean | firstBatch
protected Object[] | singleRow
protected VectorExtractRow | vectorExtractRow
Fields inherited from class SparkPartitionPruningSinkOperator: buffer, LOG, serializer
Fields inherited from class Operator: abortOp, alias, asyncInitOperations, cContext, childOperators, childOperatorsArray, childOperatorsTag, colExprMap, conf, CONTEXT_NAME_KEY, done, groupKeyObject, HIVECOUNTERCREATEDFILES, HIVECOUNTERFATAL, id, inputObjInspectors, isLogDebugEnabled, isLogInfoEnabled, isLogTraceEnabled, operatorId, out, outputObjInspector, parentOperators, PLOG, reporter, state, statsMap
Constructor and Description
---
VectorSparkPartitionPruningSinkOperator(): Kryo ctor.
VectorSparkPartitionPruningSinkOperator(CompilationOpContext ctx)
VectorSparkPartitionPruningSinkOperator(CompilationOpContext ctx, VectorizationContext context, OperatorDesc conf)
Modifier and Type | Method and Description
---|---
void | initializeOp(org.apache.hadoop.conf.Configuration hconf): Operator specific initialization.
void | process(Object data, int tag): Process the row.
Methods inherited from class SparkPartitionPruningSinkOperator: closeOp, getName, getOperatorName, getType
Methods inherited from class Operator: abort, acceptLimitPushdown, allInitializedParentsAreClosed, areAllParentsInitialized, augmentPlan, cleanUpInputFileChanged, cleanUpInputFileChangedOp, clone, cloneOp, cloneRecursiveChildren, close, columnNamesRowResolvedCanBeObtained, completeInitializationOp, createDummy, defaultEndGroup, defaultStartGroup, dump, dump, endGroup, flush, forward, getAdditionalCounters, getChildOperators, getChildren, getColumnExprMap, getCompilationOpContext, getConf, getConfiguration, getDone, getExecContext, getGroupKeyObject, getIdentifier, getInputObjInspectors, getIsReduceSink, getNextCntr, getNumChild, getNumParent, getOperatorId, getOpTraits, getOutputObjInspector, getParentOperators, getReduceOutputName, getSchema, getStatistics, getStats, initEvaluators, initEvaluators, initEvaluatorsAndReturnStruct, initialize, initialize, initializeChildren, initializeLocalWork, initOperatorId, isUseBucketizedHiveInputFormat, jobClose, jobCloseOp, logStats, opAllowedAfterMapJoin, opAllowedBeforeMapJoin, opAllowedBeforeSortMergeJoin, opAllowedConvertMapJoin, passExecContext, preorderMap, processGroup, removeChild, removeChildAndAdoptItsChildren, removeParent, removeParents, replaceChild, replaceParent, reset, resetStats, setAlias, setChildOperators, setColumnExprMap, setCompilationOpContext, setConf, setDone, setExecContext, setGroupKeyObject, setId, setInputContext, setInputObjInspectors, setOperatorId, setOpTraits, setOutputCollector, setParentOperators, setReporter, setSchema, setStatistics, setUseBucketizedHiveInputFormat, startGroup, supportAutomaticSortMergeJoin, supportSkewJoinOptimization, supportUnionRemoveOptimization, toString, toString
protected transient boolean firstBatch
protected transient VectorExtractRow vectorExtractRow
protected transient Object[] singleRow
public VectorSparkPartitionPruningSinkOperator(CompilationOpContext ctx, VectorizationContext context, OperatorDesc conf)
public VectorSparkPartitionPruningSinkOperator()
public VectorSparkPartitionPruningSinkOperator(CompilationOpContext ctx)
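The no-arg constructor is documented only as "Kryo ctor.": Kryo-style serialization frameworks instantiate a class reflectively and then populate its fields, so the class must be constructible without a CompilationOpContext. A minimal hypothetical sketch of that pattern (stand-in `Op` class, not the real Hive operator):

```java
// Hypothetical stand-in class, not Hive code: illustrates why a
// serialization framework needs a no-arg constructor alongside the
// normal planner-side constructors.
class Op {
    String name = "unset";

    Op() { }                               // no-arg ctor, used reflectively by the serializer
    Op(String name) { this.name = name; }  // normal planner-side ctor

    // Simulates what a Kryo-style framework does on deserialization:
    // create the object via the no-arg constructor, then fill in fields.
    static Op deserialize(String storedName) {
        try {
            Op op = Op.class.getDeclaredConstructor().newInstance();
            op.name = storedName;
            return op;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Normal code would call the two-or-three-argument constructors; only the deserialization path goes through the no-arg one.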
public void initializeOp(org.apache.hadoop.conf.Configuration hconf) throws HiveException

Operator specific initialization.
Overrides: initializeOp in class SparkPartitionPruningSinkOperator
Throws: HiveException
public void process(Object data, int tag) throws HiveException

Process the row.
Overrides: process in class SparkPartitionPruningSinkOperator
Parameters:
data - The object representing the row.
tag - The tag of the row, which usually indicates which parent the row comes from. Rows with the same tag should always have exactly the same rowInspector.
Throws: HiveException
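In the vectorized variant, `data` is a batch of rows rather than a single row, and the protected fields above (firstBatch, vectorExtractRow, singleRow) suggest the usual pattern: lazily set up a row extractor on the first batch, then materialize each batch row into singleRow before handing it to the row-mode sink logic. A simplified, self-contained sketch of that pattern (Batch and Extractor are hypothetical stand-ins for VectorizedRowBatch and VectorExtractRow, not the real Hive classes):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the batch-to-row pattern implied by the fields
// firstBatch, vectorExtractRow and singleRow. All types here are
// hypothetical stand-ins, not Hive classes.
class RowExtractionSketch {

    // Stand-in for VectorizedRowBatch: long columns, rows addressed by index.
    static class Batch {
        final long[][] columns;
        final int size;
        Batch(long[][] columns, int size) {
            this.columns = columns;
            this.size = size;
        }
    }

    // Stand-in for VectorExtractRow: copies one batch row into an Object[].
    static class Extractor {
        void extract(Batch batch, int rowIndex, Object[] out) {
            for (int c = 0; c < batch.columns.length; c++) {
                out[c] = batch.columns[c][rowIndex];
            }
        }
    }

    private boolean firstBatch = true;
    private Extractor extractor;
    private Object[] singleRow;
    final List<Object[]> forwarded = new ArrayList<>();

    // Mirrors the shape of process(Object data, int tag): lazily initialize
    // the extractor on the first batch, then emit one row at a time.
    void process(Batch batch) {
        if (firstBatch) {
            extractor = new Extractor();
            singleRow = new Object[batch.columns.length];
            firstBatch = false;
        }
        for (int r = 0; r < batch.size; r++) {
            extractor.extract(batch, r, singleRow);
            forwarded.add(singleRow.clone()); // stand-in for forwarding the row downstream
        }
    }
}
```

Reusing a single scratch row (singleRow) across all rows of a batch avoids a per-row allocation, which is why the real operator keeps it as a transient field.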
Copyright © 2016 The Apache Software Foundation. All rights reserved.