Modifier and Type | Method and Description |
---|---|
HiveAuthorizationProvider |
AccumuloStorageHandler.getAuthorizationProvider() |
Modifier and Type | Method and Description |
---|---|
Object |
GenericUDFAdd10.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFDBOutput.evaluate(GenericUDF.DeferredObject[] arguments) |
Modifier and Type | Method and Description |
---|---|
void |
GenericUDTFExplode2.close() |
void |
GenericUDTFCount2.close() |
void |
GenericUDTFExplode2.process(Object[] o) |
void |
GenericUDTFCount2.process(Object[] args) |
Modifier and Type | Method and Description |
---|---|
void |
FunctionLocalizer.startLocalizeAllFunctions() |
Modifier and Type | Method and Description |
---|---|
boolean |
DriverContext.addToRunnable(Task<? extends Serializable> tsk) |
static void |
Driver.doAuthorization(HiveOperation op,
BaseSemanticAnalyzer sem,
String command)
Do authorization using post-semantic-analysis information in the semantic analyzer.
The original command is also passed so that the authorization interface can provide
more useful information in logs.
|
Task<? extends Serializable> |
DriverContext.getRunnable(int maxthreads) |
void |
DriverContext.launching(TaskRunner runner) |
void |
HookRunner.runFailureHooks(HookContext hookContext) |
static void |
DriverUtils.runOnDriver(HiveConf conf,
String user,
SessionState sessionState,
String query,
org.apache.hadoop.hive.common.ValidWriteIdList writeIds) |
void |
HookRunner.runPostAnalyzeHooks(HiveSemanticAnalyzerHookContext hookCtx,
List<Task<? extends Serializable>> allRootTasks) |
void |
HookRunner.runPostDriverHooks(HiveDriverRunHookContext hookContext) |
void |
HookRunner.runPostExecHooks(HookContext hookContext) |
ASTNode |
HookRunner.runPreAnalyzeHooks(HiveSemanticAnalyzerHookContext hookCtx,
ASTNode tree) |
void |
HookRunner.runPreDriverHooks(HiveDriverRunHookContext hookContext) |
void |
HookRunner.runPreHooks(HookContext hookContext) |
Modifier and Type | Class and Description |
---|---|
class |
AmbiguousMethodException
Exception thrown by the UDF and UDAF method resolvers in case a unique method
is not found.
|
class |
NoMatchingMethodException
Exception thrown by the UDF and UDAF method resolvers in case no matching method
is found.
|
class |
UDFArgumentException
Exception class thrown when a UDF argument is invalid.
|
class |
UDFArgumentLengthException
Exception class thrown when a UDF receives the wrong number of arguments.
|
class |
UDFArgumentTypeException
Exception class thrown when UDF arguments have the wrong types.
|
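The exception classes above form the hierarchy that UDF/UDAF method resolvers throw during argument checking. A minimal, self-contained sketch of that pattern, using hypothetical stand-in classes (the real ones live in `org.apache.hadoop.hive.ql.exec` and are not redefined here):

```java
// Hypothetical stand-ins for the Hive exception classes listed above;
// the real classes live in org.apache.hadoop.hive.ql.exec.
class UDFArgumentException extends Exception {
    UDFArgumentException(String msg) { super(msg); }
}

class UDFArgumentLengthException extends UDFArgumentException {
    UDFArgumentLengthException(String msg) { super(msg); }
}

public class ArgCheckSketch {
    // A resolver-style check: reject calls with the wrong argument count.
    static void checkArgCount(Object[] args, int expected) throws UDFArgumentException {
        if (args.length != expected) {
            throw new UDFArgumentLengthException(
                "expected " + expected + " arguments, got " + args.length);
        }
    }

    public static void main(String[] args) throws Exception {
        checkArgCount(new Object[]{1, 2}, 2);   // correct arity: passes
        try {
            checkArgCount(new Object[]{1}, 2);  // wrong arity: throws
        } catch (UDFArgumentLengthException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Because `UDFArgumentLengthException` extends `UDFArgumentException`, callers can catch the broad type while resolvers throw the most specific one.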
Modifier and Type | Method and Description |
---|---|
protected Object |
ExprNodeEvaluatorRef._evaluate(Object row,
int version) |
protected Object |
ExprNodeEvaluatorHead._evaluate(Object row,
int version) |
protected Object |
ExprNodeConstantEvaluator._evaluate(Object row,
int version) |
protected Object |
ExprNodeColumnEvaluator._evaluate(Object row,
int version) |
protected Object |
ExprNodeFieldEvaluator._evaluate(Object row,
int version) |
protected abstract Object |
ExprNodeEvaluator._evaluate(Object row,
int version)
Evaluate value
|
protected Object |
ExprNodeGenericFuncEvaluator._evaluate(Object row,
int version) |
protected Object |
ExprNodeDynamicValueEvaluator._evaluate(Object row,
int version) |
int |
PTFTopNHash._tryStoreKey(HiveKey key,
boolean partColsIsNull,
int batchIndex) |
void |
FileSinkOperator.FSPaths.abortWriters(org.apache.hadoop.fs.FileSystem fs,
boolean abort,
boolean delete) |
static void |
FunctionTask.addFunctionResources(FunctionInfo.FunctionResource[] resources) |
static URI |
ArchiveUtils.addSlash(URI u)
Ensures the URI points to a directory by appending a trailing slash.
|
void |
PTFRollingPartition.append(Object o) |
void |
PTFPartition.append(Object o) |
protected void |
CommonJoinOperator.checkAndGenObject() |
void |
Operator.cleanUpInputFileChanged() |
void |
TableScanOperator.cleanUpInputFileChangedOp() |
void |
SMBMapJoinOperator.cleanUpInputFileChangedOp() |
void |
MapOperator.cleanUpInputFileChangedOp() |
void |
MapJoinOperator.cleanUpInputFileChangedOp() |
void |
Operator.cleanUpInputFileChangedOp() |
void |
FetchTask.clearFetch()
Clear the Fetch Operator.
|
void |
FetchOperator.clearFetchContext()
Clear the context, if anything needs to be done.
|
void |
CommonMergeJoinOperator.close(boolean abort) |
void |
Operator.close(boolean abort) |
void |
ScriptOperator.close(boolean abort) |
void |
SkewJoinHandler.close(boolean abort) |
void |
TableScanOperator.closeOp(boolean abort) |
protected void |
UDTFOperator.closeOp(boolean abort) |
void |
SMBMapJoinOperator.closeOp(boolean abort) |
void |
AbstractFileMergeOperator.closeOp(boolean abort) |
protected void |
DemuxOperator.closeOp(boolean abort) |
void |
AbstractMapOperator.closeOp(boolean abort) |
void |
AbstractMapJoinOperator.closeOp(boolean abort) |
void |
LimitOperator.closeOp(boolean abort) |
protected void |
MuxOperator.closeOp(boolean abort) |
void |
FileSinkOperator.closeOp(boolean abort) |
void |
AppMasterEventOperator.closeOp(boolean abort) |
void |
CommonMergeJoinOperator.closeOp(boolean abort) |
void |
MapJoinOperator.closeOp(boolean abort) |
void |
HashTableDummyOperator.closeOp(boolean abort) |
void |
CommonJoinOperator.closeOp(boolean abort)
All done.
|
void |
HashTableSinkOperator.closeOp(boolean abort) |
protected void |
Operator.closeOp(boolean abort)
Operator specific close routine.
|
protected void |
ReduceSinkOperator.closeOp(boolean abort) |
void |
RCFileMergeOperator.closeOp(boolean abort) |
protected void |
PTFOperator.closeOp(boolean abort) |
void |
GroupByOperator.closeOp(boolean abort)
We need to forward all the aggregations to children.
|
void |
SparkHashTableSinkOperator.closeOp(boolean abort) |
void |
OrcFileMergeOperator.closeOp(boolean abort) |
void |
JoinOperator.closeOp(boolean abort)
All done.
|
void |
FetchOperator.closeOperator() |
void |
FileSinkOperator.FSPaths.closeWriters(boolean abort) |
Integer |
ExprNodeGenericFuncEvaluator.compare(Object row)
If the genericUDF is a base comparison, it returns an integer based on the result of comparing
the two sides of the UDF, like the compareTo method in Comparable.
|
protected void |
MapJoinOperator.completeInitializationOp(Object[] os) |
protected void |
Operator.completeInitializationOp(Object[] os)
This method can be used to retrieve the results from async operations
started at init time - before the operator pipeline is started.
|
static ArrayList<Object> |
JoinUtil.computeKeys(Object row,
List<ExprNodeEvaluator> keyFields,
List<ObjectInspector> keyFieldsOI)
Return the key as a standard object.
|
static Object[] |
JoinUtil.computeMapJoinValues(Object row,
List<ExprNodeEvaluator> valueFields,
List<ObjectInspector> valueFieldsOI,
List<ExprNodeEvaluator> filters,
List<ObjectInspector> filtersOI,
int[] filterMap)
Return the value as a standard object.
|
static List<Object> |
JoinUtil.computeValues(Object row,
List<ExprNodeEvaluator> valueFields,
List<ObjectInspector> valueFieldsOI,
boolean hasFilter)
Return the value as a standard object.
|
static String |
ArchiveUtils.conflictingArchiveNameOrNull(Hive db,
Table tbl,
LinkedHashMap<String,String> partSpec)
Determines if one can insert into partition(s), or there's a conflict with
archive.
|
static void |
PTFOperator.connectLeadLagFunctionsToPartition(LeadLagInfo leadLagInfo,
PTFPartition.PTFPartitionIterator<Object> pItr) |
static StandardStructObjectInspector |
Utilities.constructVectorizedReduceRowOI(StructObjectInspector keyInspector,
StructObjectInspector valueInspector)
Create row key and value object inspectors for reduce vectorization.
|
static void |
Utilities.copyTableJobPropertiesToConf(TableDesc tbl,
org.apache.hadoop.mapred.JobConf job)
Copies the storage handler properties configured for a table descriptor to a runtime job
configuration.
|
static void |
Utilities.copyTablePropertiesToConf(TableDesc tbl,
org.apache.hadoop.mapred.JobConf job)
Copies the storage handler properties configured for a table descriptor to a runtime job
configuration.
|
static PTFPartition |
PTFPartition.create(org.apache.hadoop.conf.Configuration cfg,
AbstractSerDe serDe,
StructObjectInspector inputOI,
StructObjectInspector outputOI) |
static ArchiveUtils.PartSpecInfo |
ArchiveUtils.PartSpecInfo.create(Table tbl,
Map<String,String> partSpec)
Extract the partial prefix specification from the table and key-value map.
|
protected void |
FileSinkOperator.createBucketFiles(FileSinkOperator.FSPaths fsp) |
protected void |
FileSinkOperator.createBucketForFileIdx(FileSinkOperator.FSPaths fsp,
int filesIdx) |
org.apache.hadoop.fs.Path |
ArchiveUtils.PartSpecInfo.createPath(Table tbl)
Creates the path where partitions matching the prefix should lie in the filesystem.
|
static PTFRollingPartition |
PTFPartition.createRolling(org.apache.hadoop.conf.Configuration cfg,
AbstractSerDe serDe,
StructObjectInspector inputOI,
StructObjectInspector outputOI,
int precedingSpan,
int followingSpan) |
static FetchOperator |
PartitionKeySampler.createSampler(FetchWork work,
org.apache.hadoop.mapred.JobConf job,
Operator<?> operator) |
protected void |
Operator.defaultEndGroup() |
protected void |
Operator.defaultStartGroup() |
void |
DemuxOperator.endGroup() |
void |
MuxOperator.endGroup() |
void |
CommonMergeJoinOperator.endGroup() |
void |
MapJoinOperator.endGroup() |
void |
CommonJoinOperator.endGroup()
Forward a record of join results.
|
void |
Operator.endGroup() |
void |
JoinOperator.endGroup()
Forward a record of join results.
|
Object |
ExprNodeEvaluator.evaluate(Object row) |
protected Object |
ExprNodeEvaluator.evaluate(Object row,
int version)
Evaluate the expression given the row.
|
void |
TopNHash.flush()
Flushes all the rows cached in the heap.
|
void |
Operator.flush() |
void |
GroupByOperator.flush()
Forward all aggregations to children.
|
void |
PTFTopNHash.flush() |
void |
Operator.flushRecursive() |
protected void |
FetchOperator.flushRow() |
protected void |
TemporaryHashSinkOperator.flushToFile() |
protected void |
HashTableSinkOperator.flushToFile() |
boolean |
MapOperator.MapOpCtx.forward(Object row) |
void |
DemuxOperator.forward(Object row,
ObjectInspector rowInspector) |
void |
MuxOperator.forward(Object row,
ObjectInspector rowInspector) |
protected void |
Operator.forward(Object row,
ObjectInspector rowInspector) |
protected void |
Operator.forward(Object row,
ObjectInspector rowInspector,
boolean isVectorized) |
protected void |
Operator.forward(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch vrg,
ObjectInspector rowInspector) |
void |
UDTFOperator.forwardUDTFOutput(Object o)
forwardUDTFOutput is typically called indirectly by the GenericUDTF when
the GenericUDTF has generated output rows that should be passed on to the
next operator(s) in the DAG.
|
void |
MapJoinOperator.generateMapMetaData() |
static ExprNodeEvaluator |
ExprNodeEvaluatorFactory.get(ExprNodeDesc desc) |
static ExprNodeEvaluator |
ExprNodeEvaluatorFactory.get(ExprNodeDesc desc,
org.apache.hadoop.conf.Configuration conf) |
static int |
ArchiveUtils.getArchivingLevel(Partition p)
Returns the archiving level, i.e., how many fields were set in the partial
specification that ARCHIVE was run for.
|
Object |
PTFRollingPartition.getAt(int i) |
Object |
PTFPartition.getAt(int i) |
protected FileSinkOperator.FSPaths |
FileSinkOperator.getDynOutPaths(List<String> row,
String lbDir) |
protected List<Object> |
CommonJoinOperator.getFilteredValue(byte alias,
Object row) |
static List<LinkedHashMap<String,String>> |
Utilities.getFullDPSpecs(org.apache.hadoop.conf.Configuration conf,
DynamicPartitionCtx dpCtx)
Construct a list of full partition specs from the Dynamic Partition Context and the directory names
corresponding to these dynamic partitions.
|
String |
ArchiveUtils.PartSpecInfo.getName()
Generates a name for the prefix partial partition specification.
|
abstract void |
KeyWrapper.getNewKey(Object row,
ObjectInspector rowInspector) |
static List<ObjectInspector>[] |
JoinUtil.getObjectInspectorsFromEvaluators(List<ExprNodeEvaluator>[] exprEntries,
ObjectInspector[] inputObjInspector,
int posBigTableAlias,
int tagLen) |
static String |
ArchiveUtils.getPartialName(Partition p,
int level)
Get a prefix of the given partition's string representation.
|
static PartitionDesc |
Utilities.getPartitionDesc(Partition part) |
static PartitionDesc |
Utilities.getPartitionDesc(Partition part,
TableDesc tableDesc) |
static PartitionDesc |
Utilities.getPartitionDescFromTableDesc(TableDesc tblDesc,
Partition part,
boolean usePartSchemaProperties) |
static String[] |
FunctionUtils.getQualifiedFunctionNameParts(String name) |
static String |
Utilities.getQualifiedPath(HiveConf conf,
org.apache.hadoop.fs.Path path)
Convert path to qualified path.
|
static RowContainer<List<Object>> |
JoinUtil.getRowContainer(org.apache.hadoop.conf.Configuration hconf,
List<ObjectInspector> structFieldObjectInspectors,
Byte alias,
int containerSize,
TableDesc[] spillTableDesc,
JoinDesc conf,
boolean noFilter,
org.apache.hadoop.mapred.Reporter reporter) |
static <T extends OperatorDesc> |
OperatorFactory.getVectorOperator(Class<? extends Operator<?>> opClass,
CompilationOpContext cContext,
T conf,
VectorizationContext vContext,
VectorDesc vectorDesc) |
static <T extends OperatorDesc> |
OperatorFactory.getVectorOperator(CompilationOpContext cContext,
T conf,
VectorizationContext vContext,
VectorDesc vectorDesc) |
static void |
Utilities.handleMmTableFinalPath(org.apache.hadoop.fs.Path specPath,
String unionSuffix,
org.apache.hadoop.conf.Configuration hconf,
boolean success,
int dpLevels,
int lbLevels,
Utilities.MissingBucketsContext mbc,
long writeId,
int stmtId,
org.apache.hadoop.mapred.Reporter reporter,
boolean isMmTable,
boolean isMmCtas,
boolean isInsertOverwrite) |
void |
SkewJoinHandler.handleSkew(int tag) |
protected void |
AppMasterEventOperator.initDataBuffer(boolean skipPruning) |
protected static ObjectInspector[] |
Operator.initEvaluators(ExprNodeEvaluator<?>[] evals,
int start,
int length,
ObjectInspector rowInspector)
Initialize an array of ExprNodeEvaluator from start, for specified length
and return the result ObjectInspectors.
|
protected static ObjectInspector[] |
Operator.initEvaluators(ExprNodeEvaluator<?>[] evals,
ObjectInspector rowInspector)
Initialize an array of ExprNodeEvaluator and return the result
ObjectInspectors.
|
protected static StructObjectInspector |
ReduceSinkOperator.initEvaluatorsAndReturnStruct(ExprNodeEvaluator[] evals,
List<List<Integer>> distinctColIndices,
List<String> outputColNames,
int length,
ObjectInspector rowInspector)
Initializes array of ExprNodeEvaluator.
|
protected static StructObjectInspector |
Operator.initEvaluatorsAndReturnStruct(ExprNodeEvaluator<?>[] evals,
List<String> outputColName,
ObjectInspector rowInspector)
Initialize an array of ExprNodeEvaluator and put the return values into a
StructObjectInspector with integer field names.
|
void |
Operator.initialize(org.apache.hadoop.conf.Configuration hconf,
ObjectInspector[] inputOIs)
Initializes operators only if all parents have been initialized.
|
protected void |
Operator.initialize(org.apache.hadoop.conf.Configuration hconf,
ObjectInspector inputOI,
int parentId)
Collects all the parent's output object inspectors and calls actual
initialization method.
|
ObjectInspector |
ExprNodeEvaluatorRef.initialize(ObjectInspector rowInspector) |
ObjectInspector |
ExprNodeEvaluatorHead.initialize(ObjectInspector rowInspector) |
ObjectInspector |
ExprNodeConstantEvaluator.initialize(ObjectInspector rowInspector) |
ObjectInspector |
ExprNodeColumnEvaluator.initialize(ObjectInspector rowInspector) |
ObjectInspector |
ExprNodeFieldEvaluator.initialize(ObjectInspector rowInspector) |
abstract ObjectInspector |
ExprNodeEvaluator.initialize(ObjectInspector rowInspector)
Initialize should be called once and only once.
|
ObjectInspector |
ExprNodeGenericFuncEvaluator.initialize(ObjectInspector rowInspector) |
ObjectInspector |
ExprNodeDynamicValueEvaluator.initialize(ObjectInspector rowInspector) |
protected void |
DemuxOperator.initializeChildren(org.apache.hadoop.conf.Configuration hconf) |
protected void |
MuxOperator.initializeChildren(org.apache.hadoop.conf.Configuration hconf)
Calls initialize on each of the children with outputObjectInspector as the
output row format.
|
protected void |
Operator.initializeChildren(org.apache.hadoop.conf.Configuration hconf)
Calls initialize on each of the children with outputObjectInspector as the
output row format.
|
abstract void |
AbstractMapOperator.initializeContexts() |
void |
SMBMapJoinOperator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf) |
void |
CommonMergeJoinOperator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf) |
void |
Operator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf) |
void |
MapOperator.initializeMapOperator(org.apache.hadoop.conf.Configuration hconf) |
void |
AbstractMapOperator.initializeMapOperator(org.apache.hadoop.conf.Configuration hconf) |
void |
SMBMapJoinOperator.initializeMapredLocalWork(MapJoinDesc mjConf,
org.apache.hadoop.conf.Configuration hconf,
MapredLocalWork localWork,
org.slf4j.Logger l4j) |
protected void |
TableScanOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
UDTFOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
SMBMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
MapOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
ForwardOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
AbstractFileMergeOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
DemuxOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
AbstractMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
LimitOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
MuxOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
FileSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
AppMasterEventOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
CommonMergeJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
MapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
DummyStoreOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
LateralViewJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
HashTableDummyOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
CommonJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
HashTableSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
LateralViewForwardOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
Operator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
Operator specific initialization.
|
protected void |
ListSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
ReduceSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
SelectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
PTFOperator.initializeOp(org.apache.hadoop.conf.Configuration jobConf) |
protected void |
ScriptOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
GroupByOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
UnionOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
UnionOperator will transform the input rows if the inputObjInspectors from
different parents are different.
|
protected void |
SparkHashTableSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
CollectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
FilterOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
JoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
CommonJoinOperator.internalForward(Object row,
ObjectInspector outputOI) |
static Object |
FunctionRegistry.invoke(Method m,
Object thisObject,
Object... arguments) |
protected static boolean |
JoinUtil.isFiltered(Object row,
List<ExprNodeEvaluator> filters,
List<ObjectInspector> filtersOIs)
Returns true if the row does not pass through filters.
|
protected static short |
JoinUtil.isFiltered(Object row,
List<ExprNodeEvaluator> filters,
List<ObjectInspector> ois,
int[] filterMap)
Returns true if the row does not pass through filters.
|
PTFPartition.PTFPartitionIterator<Object> |
PTFRollingPartition.iterator() |
PTFPartition.PTFPartitionIterator<Object> |
PTFPartition.iterator() |
void |
Operator.jobClose(org.apache.hadoop.conf.Configuration conf,
boolean success)
Unlike other operator interfaces which are called from map or reduce task,
jobClose is called from the jobclient side once the job has completed.
|
void |
AbstractFileMergeOperator.jobCloseOp(org.apache.hadoop.conf.Configuration hconf,
boolean success) |
void |
FileSinkOperator.jobCloseOp(org.apache.hadoop.conf.Configuration hconf,
boolean success) |
void |
Operator.jobCloseOp(org.apache.hadoop.conf.Configuration conf,
boolean success) |
void |
JoinOperator.jobCloseOp(org.apache.hadoop.conf.Configuration hconf,
boolean success) |
T |
PTFPartition.PTFPartitionIterator.lag(int amt) |
T |
PTFPartition.PTFPartitionIterator.lead(int amt) |
void |
HashTableLoader.load(MapJoinTableContainer[] mapJoinTables,
MapJoinTableContainerSerDe[] mapJoinTableSerdes) |
protected org.apache.commons.lang3.tuple.Pair<MapJoinTableContainer[],MapJoinTableContainerSerDe[]> |
MapJoinOperator.loadHashTable(ExecMapperContext mapContext,
MapredContext mrContext) |
static void |
DDLTask.makeLocationQualified(String databaseName,
Table table,
HiveConf conf)
Make the location in the specified storage descriptor (sd) qualified.
|
static void |
Utilities.mvFileToFinalPath(org.apache.hadoop.fs.Path specPath,
org.apache.hadoop.conf.Configuration hconf,
boolean success,
org.slf4j.Logger log,
DynamicPartitionCtx dpCtx,
FileSinkDesc conf,
org.apache.hadoop.mapred.Reporter reporter) |
protected GenericUDAFEvaluator.AggregationBuffer[] |
GroupByOperator.newAggregations() |
Object |
PTFRollingPartition.nextOutputRow() |
static int |
JoinUtil.populateJoinKeyValue(List<ExprNodeEvaluator>[] outMap,
Map<Byte,List<ExprNodeDesc>> inputMap,
Byte[] order,
int posBigTableAlias,
org.apache.hadoop.conf.Configuration conf) |
static int |
JoinUtil.populateJoinKeyValue(List<ExprNodeEvaluator>[] outMap,
Map<Byte,List<ExprNodeDesc>> inputMap,
int posBigTableAlias,
org.apache.hadoop.conf.Configuration conf) |
Object |
MuxOperator.Handler.process(Object row) |
void |
TableScanOperator.process(Object row,
int tag)
Other than gathering statistics for the ANALYZE command, the table scan operator
does not do anything special other than just forwarding the row.
|
void |
UDTFOperator.process(Object row,
int tag) |
void |
SMBMapJoinOperator.process(Object row,
int tag) |
void |
MapOperator.process(Object row,
int tag) |
void |
TezDummyStoreOperator.process(Object row,
int tag)
Unlike the MR counterpart, on Tez we want processOp to forward
the records.
|
void |
ForwardOperator.process(Object row,
int tag) |
void |
DemuxOperator.process(Object row,
int tag) |
void |
LimitOperator.process(Object row,
int tag) |
void |
MuxOperator.process(Object row,
int tag) |
void |
FileSinkOperator.process(Object row,
int tag) |
void |
AppMasterEventOperator.process(Object row,
int tag) |
void |
CommonMergeJoinOperator.process(Object row,
int tag) |
void |
MapJoinOperator.process(Object row,
int tag) |
void |
DummyStoreOperator.process(Object row,
int tag) |
void |
LateralViewJoinOperator.process(Object row,
int tag)
An important assumption for processOp() is that for a given row from the
TS, the LVJ will first get the row from the left select operator, followed
by all the corresponding rows from the UDTF operator.
|
void |
HashTableDummyOperator.process(Object row,
int tag) |
void |
HashTableSinkOperator.process(Object row,
int tag) |
void |
LateralViewForwardOperator.process(Object row,
int tag) |
abstract void |
Operator.process(Object row,
int tag)
Process the row.
|
void |
ListSinkOperator.process(Object row,
int tag) |
void |
ReduceSinkOperator.process(Object row,
int tag) |
void |
SelectOperator.process(Object row,
int tag) |
void |
RCFileMergeOperator.process(Object row,
int tag) |
void |
PTFOperator.process(Object row,
int tag) |
void |
ScriptOperator.process(Object row,
int tag) |
void |
GroupByOperator.process(Object row,
int tag) |
void |
UnionOperator.process(Object row,
int tag) |
void |
SparkHashTableSinkOperator.process(Object row,
int tag) |
void |
CollectOperator.process(Object row,
int tag) |
void |
OrcFileMergeOperator.process(Object row,
int tag) |
void |
FilterOperator.process(Object row,
int tag) |
void |
JoinOperator.process(Object row,
int tag) |
void |
MapOperator.process(org.apache.hadoop.io.Writable value) |
abstract void |
AbstractMapOperator.process(org.apache.hadoop.io.Writable value) |
void |
MuxOperator.processGroup(int tag) |
void |
Operator.processGroup(int tag) |
boolean |
FetchOperator.pushRow()
Get the next row and push it down to the operator tree.
|
protected void |
FetchOperator.pushRow(InspectableObject row) |
protected void |
PTFOperator.reconstructQueryDef(org.apache.hadoop.conf.Configuration hiveConf)
Initialize the visitor to use the QueryDefDeserializer. Use the order
defined in QueryDefWalker to visit the QueryDef.
|
protected void |
MapJoinOperator.reloadHashTable(byte pos,
int partitionId)
Reload hashtable from the hash partition.
|
static void |
Utilities.rename(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path src,
org.apache.hadoop.fs.Path dst)
Rename src to dst, or in the case dst already exists, move files in src to dst.
|
static void |
Utilities.renameOrMoveFiles(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path src,
org.apache.hadoop.fs.Path dst)
Rename src to dst, or in the case dst already exists, move files in src to dst.
|
protected void |
MapJoinOperator.reProcessBigTable(int partitionId)
Iterate over the big table row container and feed process() with leftover rows.
|
void |
PTFRollingPartition.reset() |
void |
PTFPartition.reset() |
void |
PTFPartition.PTFPartitionIterator.reset() |
protected void |
GroupByOperator.resetAggregations(GenericUDAFEvaluator.AggregationBuffer[] aggs) |
Object |
PTFPartition.PTFPartitionIterator.resetToIndex(int idx) |
<T> T |
ObjectCacheWrapper.retrieve(String key) |
<T> T |
ObjectCache.retrieve(String key)
Retrieve object from cache.
|
<T> T |
ObjectCacheWrapper.retrieve(String key,
Callable<T> fn) |
<T> T |
ObjectCache.retrieve(String key,
Callable<T> fn)
Retrieve object from cache.
|
<T> Future<T> |
ObjectCacheWrapper.retrieveAsync(String key,
Callable<T> fn) |
<T> Future<T> |
ObjectCache.retrieveAsync(String key,
Callable<T> fn)
Retrieve object from cache asynchronously.
|
protected JoinUtil.JoinResult |
MapJoinOperator.setMapJoinKey(MapJoinTableContainer.ReusableGetAdaptor dest,
Object row,
byte alias) |
void |
Operator.setNextVectorBatchGroupStatus(boolean isLastGroupBatch) |
protected void |
PTFOperator.setupKeysWrapper(ObjectInspector inputOI) |
int |
DDLTask.showColumns(Hive db,
ShowColumnsDesc showCols)
Write a list of the columns in the table to a file.
|
protected List<Object> |
SMBMapJoinOperator.smbJoinComputeKeys(Object row,
byte alias) |
protected void |
MapJoinOperator.spillBigTableRow(MapJoinTableContainer hybridHtContainer,
Object row)
Postpone processing the big table row temporarily by spilling it to a row container.
|
static String[] |
FunctionUtils.splitQualifiedFunctionName(String functionName)
Splits a qualified function name into an array containing the database name and function name.
|
void |
DemuxOperator.startGroup() |
void |
MuxOperator.startGroup() |
void |
CommonMergeJoinOperator.startGroup() |
void |
MapJoinOperator.startGroup() |
void |
CommonJoinOperator.startGroup() |
void |
Operator.startGroup() |
int |
TopNHash.startVectorizedBatch(int size)
Perform basic checks and initialize TopNHash for the new vectorized row batch.
|
int |
PTFTopNHash.startVectorizedBatch(int size) |
static FunctionInfo.FunctionResource[] |
FunctionTask.toFunctionResource(List<ResourceUri> resources) |
int |
TopNHash.tryStoreKey(HiveKey key,
boolean partColsIsNull)
Try to store the non-vectorized key.
|
int |
PTFTopNHash.tryStoreKey(HiveKey key,
boolean partColsIsNull) |
void |
TopNHash.tryStoreVectorizedKey(HiveKey key,
boolean partColsIsNull,
int batchIndex)
Try to put the key from the current vectorized batch into the heap.
|
void |
PTFTopNHash.tryStoreVectorizedKey(HiveKey key,
boolean partColsIsNull,
int batchIndex) |
void |
Registry.unregisterFunction(String functionName) |
void |
Registry.unregisterFunctions(String dbName)
Unregisters all the functions belonging to the specified database.
|
static void |
FunctionRegistry.unregisterPermanentFunction(String functionName) |
static void |
FunctionRegistry.unregisterPermanentFunctions(String dbName)
Unregisters all the functions under the database dbName.
|
static void |
FunctionRegistry.unregisterTemporaryUDF(String functionName) |
protected void |
GroupByOperator.updateAggregations(GenericUDAFEvaluator.AggregationBuffer[] aggs,
Object row,
ObjectInspector rowInspector,
boolean hashAggr,
boolean newEntryForHashAggr,
Object[][] lastInvoke) |
static void |
DDLTask.validateSerDe(String serdeName,
HiveConf conf)
Check if the given serde is valid.
|
static void |
Utilities.writeMmCommitManifest(List<org.apache.hadoop.fs.Path> commitPaths,
org.apache.hadoop.fs.Path specPath,
org.apache.hadoop.fs.FileSystem fs,
String taskId,
Long writeId,
int stmtId,
String unionSuffix,
boolean isInsertOverwrite) |
Constructor and Description |
---|
ExprNodeFieldEvaluator(ExprNodeFieldDesc desc,
org.apache.hadoop.conf.Configuration conf) |
ExprNodeGenericFuncEvaluator(ExprNodeGenericFuncDesc expr,
org.apache.hadoop.conf.Configuration conf) |
FetchOperator(FetchWork work,
org.apache.hadoop.mapred.JobConf job) |
FetchOperator(FetchWork work,
org.apache.hadoop.mapred.JobConf job,
Operator<?> operator,
List<VirtualColumn> vcCols) |
Handler(ObjectInspector inputObjInspector,
List<ExprNodeDesc> keyCols,
List<ExprNodeDesc> valueCols,
List<String> outputKeyColumnNames,
List<String> outputValueColumnNames,
Integer tag) |
HarPathHelper(HiveConf hconf,
URI archive,
URI originalBase)
Creates helper for archive.
|
PTFPartition(org.apache.hadoop.conf.Configuration cfg,
AbstractSerDe serDe,
StructObjectInspector inputOI,
StructObjectInspector outputOI) |
PTFPartition(org.apache.hadoop.conf.Configuration cfg,
AbstractSerDe serDe,
StructObjectInspector inputOI,
StructObjectInspector outputOI,
boolean createElemContainer) |
PTFRollingPartition(org.apache.hadoop.conf.Configuration cfg,
AbstractSerDe serDe,
StructObjectInspector inputOI,
StructObjectInspector outputOI,
int startPos,
int endPos) |
SecureCmdDoAs(HiveConf conf) |
Modifier and Type | Method and Description |
---|---|
void |
HashTableLoader.load(MapJoinTableContainer[] mapJoinTables,
MapJoinTableContainerSerDe[] mapJoinTableSerdes) |
static void |
ExecDriver.main(String[] args) |
<T> T |
ObjectCache.retrieve(String key) |
<T> T |
ObjectCache.retrieve(String key,
Callable<T> fn) |
<T> Future<T> |
ObjectCache.retrieveAsync(String key,
Callable<T> fn) |
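The `ObjectCache.retrieve(String key, Callable<T> fn)` entries above follow a compute-if-absent contract: return the cached object for the key, or run the Callable once to produce and cache it. A minimal sketch of that contract, using a hypothetical `SimpleObjectCache` (not Hive's implementation, which is per-execution-engine):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// A hypothetical, minimal version of the ObjectCache.retrieve(key, fn)
// contract listed above: return the cached value for the key, or invoke
// the Callable to produce and cache it. Not Hive's implementation.
public class SimpleObjectCache {
    private final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T retrieve(String key, Callable<T> fn) {
        // computeIfAbsent runs the producer at most once per absent key.
        return (T) cache.computeIfAbsent(key, k -> {
            try {
                return fn.call();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
    }

    public static void main(String[] args) {
        SimpleObjectCache cache = new SimpleObjectCache();
        // First call runs the Callable; the second returns the cached value
        // and never invokes its Callable.
        String a = cache.retrieve("plan", () -> "computed");
        String b = cache.retrieve("plan", () -> "recomputed");
        System.out.println(a + " " + b);
    }
}
```

The `retrieveAsync` variant in the table has the same shape but returns a `Future<T>` so the producer can run off the calling thread.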
Constructor and Description |
---|
ExecDriver(MapredWork plan,
org.apache.hadoop.mapred.JobConf job,
boolean isSilent)
Constructor/Initialization for invocation as independent utility.
|
MapredLocalTask(MapredLocalWork plan,
org.apache.hadoop.mapred.JobConf job,
boolean isSilent) |
Modifier and Type | Method and Description |
---|---|
void |
FlatRowContainer.add(MapJoinObjectSerDeContext context,
org.apache.hadoop.io.BytesWritable value)
Called when loading the hashtable.
|
void |
FlatRowContainer.addRow(List<Object> t) |
void |
UnwrapRowContainer.addRow(List<Object> t) |
void |
FlatRowContainer.addRow(Object[] value) |
void |
UnwrapRowContainer.addRow(Object[] value) |
void |
MapJoinRowContainer.addRow(Object[] value) |
void |
PTFRowContainer.addRow(Row t) |
void |
RowContainer.addRow(ROW t) |
void |
AbstractRowContainer.addRow(ROW t)
Add a row into the RowContainer.
|
void |
RowContainer.clearRows()
Remove all elements in the RowContainer.
|
void |
UnwrapRowContainer.clearRows() |
void |
PTFRowContainer.clearRows() |
void |
AbstractRowContainer.clearRows()
Remove all elements in the RowContainer.
|
protected void |
RowContainer.close() |
void |
PTFRowContainer.close() |
MapJoinRowContainer |
FlatRowContainer.copy() |
MapJoinRowContainer |
UnwrapRowContainer.copy() |
MapJoinRowContainer |
MapJoinRowContainer.copy() |
void |
RowContainer.copyToDFSDirecory(org.apache.hadoop.fs.FileSystem destFs,
org.apache.hadoop.fs.Path destPath) |
ROW |
RowContainer.first() |
List<Object> |
FlatRowContainer.first() |
List<Object> |
UnwrapRowContainer.first() |
Row |
PTFRowContainer.first() |
ROW |
AbstractRowContainer.RowIterator.first() |
byte |
FlatRowContainer.getAliasFilter() |
byte |
UnwrapRowContainer.getAliasFilter() |
byte |
MapJoinRowContainer.getAliasFilter() |
Row |
PTFRowContainer.getAt(int rowIdx) |
boolean |
FlatRowContainer.hasRows() |
boolean |
UnwrapRowContainer.hasRows() |
boolean |
AbstractRowContainer.hasRows() |
boolean |
FlatRowContainer.isSingleRow() |
boolean |
UnwrapRowContainer.isSingleRow() |
boolean |
AbstractRowContainer.isSingleRow() |
MapJoinTableContainer |
MapJoinTableContainerSerDe.load(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path folder,
org.apache.hadoop.conf.Configuration hconf)
Loads the table container from a folder.
|
MapJoinPersistableTableContainer |
MapJoinTableContainerSerDe.load(ObjectInputStream in) |
MapJoinTableContainer |
MapJoinTableContainerSerDe.loadFastContainer(MapJoinDesc mapJoinDesc,
org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path folder,
org.apache.hadoop.conf.Configuration hconf)
Loads the small table into a VectorMapJoinFastTableContainer.
|
ROW |
RowContainer.next() |
List<Object> |
UnwrapRowContainer.next() |
Row |
PTFRowContainer.next() |
ROW |
AbstractRowContainer.RowIterator.next() |
protected boolean |
RowContainer.nextBlock(int readIntoOffset) |
void |
MapJoinTableContainerSerDe.persist(ObjectOutputStream out,
MapJoinPersistableTableContainer tableContainer) |
MapJoinKey |
HybridHashTableContainer.putRow(org.apache.hadoop.io.Writable currentKey,
org.apache.hadoop.io.Writable currentValue) |
MapJoinKey |
MapJoinTableContainer.putRow(org.apache.hadoop.io.Writable currentKey,
org.apache.hadoop.io.Writable currentValue)
Adds a row from the input to the table.
|
MapJoinKey |
HashMapWrapper.putRow(org.apache.hadoop.io.Writable currentKey,
org.apache.hadoop.io.Writable currentValue) |
static MapJoinKey |
MapJoinKey.read(ByteStream.Output output,
MapJoinObjectSerDeContext context,
org.apache.hadoop.io.Writable writable) |
static MapJoinKey |
MapJoinKey.readFromRow(ByteStream.Output output,
MapJoinKey key,
Object[] keyObject,
List<ObjectInspector> keyFieldsOI,
boolean mayReuseKey) |
void |
MapJoinKeyObject.readFromRow(Object[] fieldObjs,
List<ObjectInspector> keyFieldsOI) |
static MapJoinKey |
MapJoinKey.readFromVector(ByteStream.Output output,
MapJoinKey key,
Object[] keyObject,
List<ObjectInspector> keyOIs,
boolean mayReuseKey) |
void |
MapJoinKeyObject.readFromVector(VectorHashKeyWrapper kw,
VectorExpressionWriter[] keyOutputWriters,
VectorHashKeyWrapperBatch keyWrapperBatch) |
int |
FlatRowContainer.rowCount() |
int |
UnwrapRowContainer.rowCount() |
int |
AbstractRowContainer.rowCount() |
AbstractRowContainer.RowIterator<List<Object>> |
FlatRowContainer.rowIter() |
AbstractRowContainer.RowIterator<List<Object>> |
UnwrapRowContainer.rowIter() |
AbstractRowContainer.RowIterator<ROW> |
AbstractRowContainer.rowIter() |
static ByteStream.Output |
MapJoinKey.serializeRow(ByteStream.Output byteStream,
Object[] fieldData,
List<ObjectInspector> fieldOis,
boolean[] sortableSortOrders,
byte[] nullMarkers,
byte[] notNullMarkers)
Serializes row to output.
|
static ByteStream.Output |
MapJoinKey.serializeVector(ByteStream.Output byteStream,
VectorHashKeyWrapper kw,
VectorExpressionWriter[] keyOutputWriters,
VectorHashKeyWrapperBatch keyWrapperBatch,
boolean[] nulls,
boolean[] sortableSortOrders,
byte[] nullMarkers,
byte[] notNullMarkers)
Serializes row to output for vectorized path.
|
JoinUtil.JoinResult |
MapJoinTableContainer.ReusableGetAdaptor.setFromOther(MapJoinTableContainer.ReusableGetAdaptor other)
Changes the current rows to which the adaptor refers to the rows corresponding to
the key that another adaptor has already deserialized via setFromVector/setFromRow.
|
JoinUtil.JoinResult |
MapJoinTableContainer.ReusableGetAdaptor.setFromRow(Object row,
List<ExprNodeEvaluator> fields,
List<ObjectInspector> ois)
Changes the current rows to which the adaptor refers to the rows corresponding to
the key represented by a row object, using the given fields and OIs to interpret it.
|
JoinUtil.JoinResult |
MapJoinTableContainer.ReusableGetAdaptor.setFromVector(VectorHashKeyWrapper kw,
VectorExpressionWriter[] keyOutputWriters,
VectorHashKeyWrapperBatch keyWrapperBatch)
Changes the current rows to which the adaptor refers to the rows corresponding to
the key represented by a VectorHashKeyWrapper object, using the given writers and batch to interpret it.
|
protected void |
RowContainer.setupWriter() |
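The `addRow`/`rowCount`/`first`/`next`/`clearRows` methods above form the AbstractRowContainer contract. A minimal in-memory sketch of that contract (illustrative only; the real RowContainer can spill row blocks to disk, which this analogue omits):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory sketch of the AbstractRowContainer contract:
// addRow appends, rowCount reports the size, first()/next() iterate,
// and clearRows empties the container.
public class TinyRowContainer<ROW> {
    private final List<ROW> rows = new ArrayList<>();
    private int cursor;

    public void addRow(ROW row) { rows.add(row); }
    public int rowCount() { return rows.size(); }
    public boolean hasRows() { return !rows.isEmpty(); }
    public void clearRows() { rows.clear(); cursor = 0; }

    // first() resets iteration; next() returns null when exhausted,
    // mirroring the RowIterator usage pattern.
    public ROW first() { cursor = 0; return next(); }
    public ROW next() { return cursor < rows.size() ? rows.get(cursor++) : null; }
}
```

Typical iteration follows the RowIterator pattern: `for (ROW r = c.first(); r != null; r = c.next()) { ... }`.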
Constructor and Description |
---|
PTFRowContainer(int bs,
org.apache.hadoop.conf.Configuration jc,
org.apache.hadoop.mapred.Reporter reporter) |
RowContainer(org.apache.hadoop.conf.Configuration jc,
org.apache.hadoop.mapred.Reporter reporter) |
RowContainer(int bs,
org.apache.hadoop.conf.Configuration jc,
org.apache.hadoop.mapred.Reporter reporter) |
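`MapJoinTableContainer.putRow` and the `ReusableGetAdaptor.setFrom*` methods in the table above describe a build/probe hash-join interface: small-table rows are added under their join key, then looked up per big-table key. A standalone build/probe sketch (all types here are illustrative, not Hive's):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Build/probe sketch of the map-join container idea: putRow builds a
// small-table hash map keyed by join key; probe returns the matching rows.
// The JoinResult MATCH/NOMATCH distinction is modeled as non-empty vs empty.
public class TinyMapJoinTable {
    private final Map<String, List<String[]>> table = new HashMap<>();

    // Build side: add a small-table row under its join key.
    public void putRow(String key, String[] value) {
        table.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    // Probe side: fetch all small-table rows matching a big-table key.
    public List<String[]> probe(String key) {
        return table.getOrDefault(key, List.of());
    }
}
```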
Constructor and Description |
---|
LoadPartitions(Context context,
ReplLogger replLogger,
TableContext tableContext,
TaskTracker limiter,
TableEvent event,
String dbNameToLoadIn,
AddPartitionDesc lastReplicatedPartition) |
LoadPartitions(Context context,
ReplLogger replLogger,
TaskTracker tableTracker,
TableEvent event,
String dbNameToLoadIn,
TableContext tableContext) |
Modifier and Type | Method and Description |
---|---|
static SparkSession |
SparkUtilities.getSparkSession(HiveConf conf,
SparkSessionManager sparkSessionManager) |
void |
HashTableLoader.load(MapJoinTableContainer[] mapJoinTables,
MapJoinTableContainerSerDe[] mapJoinTableSerdes) |
void |
SparkDynamicPartitionPruner.prune(MapWork work,
org.apache.hadoop.mapred.JobConf jobConf) |
Modifier and Type | Method and Description |
---|---|
void |
SparkSessionManagerImpl.closeSession(SparkSession sparkSession) |
void |
SparkSessionManager.closeSession(SparkSession sparkSession)
Close the given session and return it to the pool.
|
static SparkSessionManagerImpl |
SparkSessionManagerImpl.getInstance() |
SparkSession |
SparkSessionManagerImpl.getSession(SparkSession existingSession,
HiveConf conf,
boolean doOpen)
If the existingSession can be reused, return it.
|
SparkSession |
SparkSessionManager.getSession(SparkSession existingSession,
HiveConf conf,
boolean doOpen)
Get a valid SparkSession.
|
void |
SparkSession.open(HiveConf conf)
Initializes a Spark session for DAG execution.
|
void |
SparkSessionImpl.open(HiveConf conf) |
void |
SparkSessionManagerImpl.returnSession(SparkSession sparkSession) |
void |
SparkSessionManager.returnSession(SparkSession sparkSession)
Return the given sparkSession to the pool.
|
void |
SparkSessionManagerImpl.setup(HiveConf hiveConf) |
void |
SparkSessionManager.setup(HiveConf hiveConf)
Initialize based on the given configuration.
|
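The SparkSessionManager methods above define a pooled session lifecycle: `setup` once, `getSession` (reusing an existing session when possible), `returnSession`, and `closeSession`. A standalone sketch of that lifecycle with a toy session type (everything other than the four lifecycle methods is hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative pooled-session lifecycle matching the SparkSessionManager
// contract: setup() once, getSession() reuses a still-valid session or pulls
// one from the pool, returnSession() gives it back, closeSession() retires it.
public class TinySessionPool {
    static class Session { boolean open = true; }

    private final Deque<Session> pool = new ArrayDeque<>();
    private boolean initialized;

    public void setup() { initialized = true; }

    public Session getSession(Session existing) {
        if (existing != null && existing.open) {
            return existing;                  // reuse when still valid
        }
        Session s = pool.poll();              // else take one from the pool
        return s != null ? s : new Session(); // or create a fresh session
    }

    public void returnSession(Session s) { pool.push(s); }

    public void closeSession(Session s) { s.open = false; }
}
```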
Modifier and Type | Method and Description |
---|---|
Map<String,SparkStageProgress> |
SparkJobStatus.getSparkStageProgress() |
int[] |
SparkJobStatus.getStageIds() |
org.apache.spark.JobExecutionStatus |
SparkJobStatus.getState() |
Modifier and Type | Method and Description |
---|---|
Map<String,SparkStageProgress> |
RemoteSparkJobStatus.getSparkStageProgress() |
int[] |
RemoteSparkJobStatus.getStageIds() |
org.apache.spark.JobExecutionStatus |
RemoteSparkJobStatus.getState() |
Modifier and Type | Method and Description |
---|---|
void |
YarnQueueHelper.checkQueueAccess(String queueName,
String userName) |
List<BaseWork> |
RecordProcessor.getMergeWorkList(org.apache.hadoop.mapred.JobConf jconf,
String key,
String queryId,
ObjectCache cache,
List<String> cacheKeys) |
void |
HashTableLoader.load(MapJoinTableContainer[] mapJoinTables,
MapJoinTableContainerSerDe[] mapJoinTableSerdes) |
void |
DynamicPartitionPruner.prune() |
protected void |
DynamicPartitionPruner.prunePartitionSingleSource(String source,
org.apache.hadoop.hive.ql.exec.tez.DynamicPartitionPruner.SourceInfo si) |
boolean |
MapRecordSource.pushRecord() |
boolean |
ReduceRecordSource.pushRecord() |
boolean |
RecordSource.pushRecord() |
<T> T |
LlapObjectCache.retrieve(String key) |
<T> T |
ObjectCache.retrieve(String key) |
<T> T |
LlapObjectCache.retrieve(String key,
Callable<T> fn) |
<T> T |
ObjectCache.retrieve(String key,
Callable<T> fn) |
<T> Future<T> |
LlapObjectCache.retrieveAsync(String key,
Callable<T> fn) |
<T> Future<T> |
ObjectCache.retrieveAsync(String key,
Callable<T> fn) |
Constructor and Description |
---|
LlapObjectSubCache(ObjectCache cache,
String subCacheKey,
int numEntries) |
Modifier and Type | Method and Description |
---|---|
static void |
VectorizedBatchUtil.acidAddRowToBatch(Object row,
StructObjectInspector oi,
int rowIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorizedRowBatchCtx context,
org.apache.hadoop.io.DataOutputBuffer buffer)
Iterates through all the columns in a given row and populates the batch
from a given offset.
|
protected void |
VectorColumnSetInfo.addKey(TypeInfo typeInfo) |
static void |
VectorizedBatchUtil.addProjectedRowToBatchFrom(Object row,
StructObjectInspector oi,
int rowIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
org.apache.hadoop.io.DataOutputBuffer buffer)
Add only the projected column of a regular row to the specified vectorized row batch
|
static void |
VectorizedBatchUtil.addRowToBatchFrom(Object row,
StructObjectInspector oi,
int rowIndex,
int colOffset,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
org.apache.hadoop.io.DataOutputBuffer buffer)
Iterates through all the columns in a given row and populates the batch
from a given offset.
|
T |
VectorUtilBatchObjectPool.IAllocator.alloc() |
int |
VectorizationContext.allocateScratchColumn(TypeInfo typeInfo) |
void |
VectorColumnAssign.assignObjectValue(Object val,
int destIndex) |
void |
VectorHashKeyWrapperBatch.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int keyIndex,
VectorHashKeyWrapper kw) |
void |
VectorColumnAssign.assignVectorValue(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch inBatch,
int batchIndex,
int valueColumn,
int destIndex) |
static VectorColumnAssign[] |
VectorColumnAssignFactory.buildAssigners(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch outputBatch) |
static VectorColumnAssign[] |
VectorColumnAssignFactory.buildAssigners(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch outputBatch,
ObjectInspector outputOI,
Map<String,Integer> columnMap,
List<String> outputColumnNames)
Builds the assigners from an object inspector and from a list of columns.
|
static VectorColumnAssign[] |
VectorColumnAssignFactory.buildAssigners(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch outputBatch,
org.apache.hadoop.io.Writable[] writables) |
static VectorColumnAssign |
VectorColumnAssignFactory.buildObjectAssign(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch outputBatch,
int outColIndex,
ObjectInspector objInspector) |
static VectorColumnAssign |
VectorColumnAssignFactory.buildObjectAssign(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch outputBatch,
int outColIndex,
PrimitiveObjectInspector.PrimitiveCategory category) |
void |
VectorMapOperator.cleanUpInputFileChangedOp() |
void |
VectorMapJoinBaseOperator.closeOp(boolean aborted) |
void |
VectorGroupByOperator.closeOp(boolean aborted) |
void |
VectorMapOperator.closeOp(boolean abort) |
void |
VectorSMBMapJoinOperator.closeOp(boolean aborted) |
static String[] |
VectorizedBatchUtil.columnNamesFromStructObjectInspector(StructObjectInspector structObjectInspector) |
static VectorHashKeyWrapperBatch |
VectorHashKeyWrapperBatch.compileKeyWrapperBatch(VectorExpression[] keyExpressions) |
static VectorHashKeyWrapperBatch |
VectorHashKeyWrapperBatch.compileKeyWrapperBatch(VectorExpression[] keyExpressions,
TypeInfo[] typeInfos)
Prepares a VectorHashKeyWrapperBatch to work for a specific set of keys.
|
static StandardStructObjectInspector |
VectorizedBatchUtil.convertToStandardStructObjectInspector(StructObjectInspector structObjectInspector) |
void |
VectorGroupKeyHelper.copyGroupKey(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch inputBatch,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch outputBatch,
org.apache.hadoop.io.DataOutputBuffer buffer) |
void |
VectorGroupByOperator.endGroup() |
void |
VectorHashKeyWrapperBatch.evaluateBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Processes a batch: evaluates each key vector expression, copies each key's
primitive values into the key wrappers, and computes the hash code of each key wrapper.
|
void |
VectorHashKeyWrapperBatch.evaluateBatchGroupingSets(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean[] groupingSetsOverrideIsNulls) |
protected void |
VectorColumnSetInfo.finishAdding() |
TypeInfo[] |
VectorizationContext.getAllTypeInfos() |
static org.apache.hadoop.hive.ql.exec.vector.ColumnVector.Type |
VectorizationContext.getColumnVectorTypeFromTypeInfo(TypeInfo typeInfo) |
static org.apache.hadoop.hive.ql.exec.vector.ColumnVector.Type |
VectorizationContext.getColumnVectorTypeFromTypeInfo(TypeInfo typeInfo,
org.apache.hadoop.hive.common.type.DataTypePhysicalVariation dataTypePhysicalVariation) |
org.apache.hadoop.hive.common.type.DataTypePhysicalVariation |
VectorizationContext.getDataTypePhysicalVariation(int columnNum) |
T |
VectorUtilBatchObjectPool.getFromPool() |
static GenericUDF |
VectorizationContext.getGenericUDFForCast(TypeInfo castType) |
protected int |
VectorizationContext.getInputColumnIndex(ExprNodeColumnDesc colExpr) |
int |
VectorizationContext.getInputColumnIndex(String name) |
void |
VectorHashKeyWrapper.getNewKey(Object row,
ObjectInspector rowInspector) |
TypeInfo |
VectorizationContext.getTypeInfo(int columnNum) |
VectorExpression |
VectorizationContext.getVectorExpression(ExprNodeDesc exprDesc) |
VectorExpression |
VectorizationContext.getVectorExpression(ExprNodeDesc exprDesc,
VectorExpressionDescriptor.Mode mode)
Returns a vector expression for a given expression
description.
|
Class<?> |
VectorExpressionDescriptor.getVectorExpressionClass(Class<?> udf,
VectorExpressionDescriptor.Descriptor descriptor,
boolean useCheckedExpressionIfAvailable) |
VectorExpression[] |
VectorizationContext.getVectorExpressions(List<ExprNodeDesc> exprNodes) |
VectorExpression[] |
VectorizationContext.getVectorExpressions(List<ExprNodeDesc> exprNodes,
VectorExpressionDescriptor.Mode mode) |
VectorExpression[] |
VectorizationContext.getVectorExpressionsUpConvertDecimal64(List<ExprNodeDesc> exprNodes) |
Object |
VectorHashKeyWrapperBatch.getWritableKeyValue(VectorHashKeyWrapper kw,
int keyIndex,
VectorExpressionWriter keyOutputWriter)
Get the row-mode writable object value of a key from a key wrapper
|
boolean |
VectorizationContext.haveCandidateForDecimal64VectorExpression(int numChildren,
List<ExprNodeDesc> childExpr,
TypeInfo returnType) |
void |
VectorDeserializeRow.init() |
void |
VectorDeserializeRow.init(boolean[] columnsToIncludeTruncated) |
void |
VectorMapOperator.VectorDeserializePartitionContext.init(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorDeserializeRow.init(int startColumn) |
void |
VectorDeserializeRow.init(int[] outputColumns) |
void |
VectorDeserializeRow.init(List<Integer> outputColumns) |
void |
VectorSerializeRow.init(List<String> typeNames) |
void |
VectorAssignRow.init(List<String> typeNames) |
void |
VectorSerializeRow.init(List<String> typeNames,
int[] columnMap) |
void |
VectorAssignRow.init(StructObjectInspector structObjectInspector) |
void |
VectorAssignRow.init(StructObjectInspector structObjectInspector,
List<Integer> projectedColumns) |
void |
VectorExtractRow.init(StructObjectInspector structObjectInspector,
List<Integer> projectedColumns) |
void |
VectorizedRowBatchCtx.init(StructObjectInspector structObjectInspector,
String[] scratchColumnTypeNames)
Initializes the VectorizedRowBatch context based on the scratch column type names and
object inspector.
|
void |
VectorizedRowBatchCtx.init(StructObjectInspector structObjectInspector,
String[] scratchColumnTypeNames,
org.apache.hadoop.hive.common.type.DataTypePhysicalVariation[] scratchDataTypePhysicalVariations)
Initializes the VectorizedRowBatch context based on the scratch column type names and
object inspector.
|
void |
VectorSerializeRow.init(TypeInfo[] typeInfos) |
void |
VectorExtractRow.init(TypeInfo[] typeInfos) |
void |
VectorSerializeRow.init(TypeInfo[] typeInfos,
int[] columnMap) |
void |
VectorExtractRow.init(TypeInfo[] typeInfos,
int[] projectedColumns) |
void |
VectorAssignRow.init(TypeInfo typeInfo,
int outputColumnNum) |
void |
VectorCopyRow.init(VectorColumnMapping columnMapping) |
void |
VectorDeserializeRow.initConversion(TypeInfo[] targetTypeInfos,
boolean[] columnsToIncludeTruncated)
Initialize for converting the source data types that will be read with the
DeserializeRead interface passed to the constructor into the target data types desired in
the VectorizedRowBatch.
|
void |
VectorMapOperator.initializeContexts() |
void |
VectorMapOperator.initializeMapOperator(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorSparkHashTableSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorFilterOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorMapJoinBaseOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorAppMasterEventOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorSparkPartitionPruningSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorSelectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorGroupByOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorFileSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorMapJoinOuterFilteredOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorSMBMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
VectorExpression |
VectorizationContext.instantiateExpression(Class<?> vclass,
TypeInfo returnTypeInfo,
org.apache.hadoop.hive.common.type.DataTypePhysicalVariation returnDataTypePhysicalVariation,
Object... args) |
protected void |
VectorMapJoinBaseOperator.internalForward(Object row,
ObjectInspector outputOI)
'forwards' the (row-mode) record into the (vectorized) output batch
|
protected void |
VectorSMBMapJoinOperator.internalForward(Object row,
ObjectInspector outputOI) |
static org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch |
VectorizedBatchUtil.makeLike(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Make a new (scratch) batch exactly "like" the provided batch, except that it is empty.
|
static org.apache.hadoop.hive.ql.exec.vector.ColumnVector |
VectorizedBatchUtil.makeLikeColumnVector(org.apache.hadoop.hive.ql.exec.vector.ColumnVector source) |
void |
VectorSparkHashTableSinkOperator.process(Object row,
int tag) |
void |
VectorFilterOperator.process(Object row,
int tag) |
void |
VectorAppMasterEventOperator.process(Object data,
int tag) |
void |
VectorReduceSinkOperator.process(Object data,
int tag) |
void |
VectorSparkPartitionPruningSinkOperator.process(Object data,
int tag) |
void |
VectorSelectOperator.process(Object row,
int tag) |
void |
VectorGroupByOperator.process(Object row,
int tag) |
void |
VectorFileSinkOperator.process(Object data,
int tag) |
void |
VectorMapOperator.process(Object row,
int tag) |
void |
VectorMapJoinOuterFilteredOperator.process(Object data,
int tag) |
void |
VectorSMBMapJoinOperator.process(Object row,
int tag) |
void |
VectorMapJoinOperator.process(Object row,
int tag) |
void |
VectorLimitOperator.process(Object row,
int tag) |
void |
VectorMapOperator.process(org.apache.hadoop.io.Writable value) |
protected void |
VectorMapJoinBaseOperator.reProcessBigTable(int partitionId)
Re-processes, as a vectorized row batch, the rows fed from the parent MapJoinOperator.
|
void |
VectorHashKeyWrapperBatch.setLongValue(VectorHashKeyWrapper kw,
int keyIndex,
Long value) |
protected JoinUtil.JoinResult |
VectorMapJoinOperator.setMapJoinKey(MapJoinTableContainer.ReusableGetAdaptor dest,
Object row,
byte alias) |
void |
VectorSelectOperator.setNextVectorBatchGroupStatus(boolean isLastGroupBatch) |
void |
VectorGroupByOperator.setNextVectorBatchGroupStatus(boolean isLastGroupBatch) |
protected List<Object> |
VectorSMBMapJoinOperator.smbJoinComputeKeys(Object row,
byte alias) |
protected void |
VectorMapJoinOperator.spillBigTableRow(MapJoinTableContainer hybridHtContainer,
Object row) |
void |
VectorGroupByOperator.startGroup() |
static TypeInfo[] |
VectorizedBatchUtil.typeInfosFromTypeNames(String[] typeNames) |
VectorExpression |
VectorizationContext.wrapWithDecimal64ToDecimalConversion(VectorExpression inputExpression) |
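A `VectorExpression` produced by `VectorizationContext.getVectorExpression` implements its logic in `evaluate(VectorizedRowBatch)`, processing a whole column of values per call rather than one row at a time. A toy, Hive-free analogue of that columnar pattern (batch and expression types here are illustrative, not Hive's classes):

```java
// Toy analogue of the VectorExpression.evaluate(batch) pattern: an
// expression reads input columns of a row batch and writes an output
// column in one call, in a tight per-column loop.
public class ColumnarDemo {
    static class LongBatch {
        final long[][] cols;
        final int size;
        LongBatch(int numCols, int size) {
            cols = new long[numCols][size];
            this.size = size;
        }
    }

    interface VectorExpr { void evaluate(LongBatch batch); }

    // col2 = col0 + col1, computed over the whole batch per evaluate() call.
    static final VectorExpr ADD = batch -> {
        long[] a = batch.cols[0], b = batch.cols[1], out = batch.cols[2];
        for (int i = 0; i < batch.size; i++) {
            out[i] = a[i] + b[i];
        }
    };
}
```

The per-column loop is what makes the vectorized path fast: one virtual call per batch instead of one per row.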
Modifier and Type | Method and Description |
---|---|
void |
IfExprCondExprBase.conditionalEvaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorExpression condVecExpr,
int[] condSelected,
int condSize) |
static void |
VectorExpression.doTransientInit(VectorExpression vecExpr) |
static void |
VectorExpression.doTransientInit(VectorExpression[] vecExprs) |
void |
FilterLongColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Cast decimal(p1, s1) to decimal(p2, s2).
|
void |
VectorUDFDateDiffColCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFDateDiffColScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFDateAddColScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
SelectColumnIsNotNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLongToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFTimestampFieldString.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprColumnCondExpr.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupConcatColCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColConcatStringScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StructColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastTimestampToBoolean.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncDoubleToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterScalarAndColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncStringToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongColumnLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprStringGroupColumnStringScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColNotEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IdentityExpression.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFTimestampFieldTimestamp.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRand.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFStructField.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
SelectStringColLikeStringScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastDateToTimestamp.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRandNoSeed.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastTimestampToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleToStringUnaryUDF.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncTimestampToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastLongToDate.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFDateAddColCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprColumnNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprCondExprNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFMapIndexBaseScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastStringToDate.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastStringToIntervalDayTime.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprCondExprBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncDecimalToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterExprAndExpr.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractDateScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprStringScalarStringScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastStringToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongToStringUnaryUDF.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
NotCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringSubstrColStartLen.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprCondExprColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprNullColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprCondExprCondExpr.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IsNotNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncDecimalToTimestamp.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFMapIndexBaseCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarDivideLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampToStringUnaryUDF.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
MathFuncLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
MathFuncLongToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprStringGroupColumnStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastStringToIntervalYearMonth.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringScalarConcatStringGroupCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastStringToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
abstract void |
VectorExpression.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
This is the primary method to implement expression logic.
|
void |
VectorCoalesce.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
SelectColumnIsTrue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastStringToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IsNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorInBloomFilterColDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterScalarOrColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterColAndScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastTimestampToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateScalarSubtractDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLongToString.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
ColAndCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColDivideLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleColumnDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringSubstrColStart.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
AbstractFilterStringColLikeStringScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalToStringUnaryUDF.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
ListIndexColColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterColOrScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFDateAddScalarCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringUnaryUDF.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
NullVectorExpression.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRoundWithNumDigitsDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprNullCondExpr.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastDoubleToTimestamp.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFDateDiffScalarCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
MathFuncDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterExprOrExpr.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
ColOrCol.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprNullNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
OctetLength.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColDivideLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprStringScalarStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringUnaryUDFDirect.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastMillisecondsLongToTimestamp.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
SelectColumnIsFalse.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncDecimalToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
ListIndexColScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringLength.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStructColumnInList.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncTimestampToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFTimestampFieldDate.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
SelectColumnIsNull.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastLongToTimestamp.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorElt.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorExpression.evaluateChildren(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch vrg)
Evaluate the child expressions on the given input batch.
|
static VectorExpressionWriter |
VectorExpressionWriterFactory.genVectorExpressionWritable(ExprNodeDesc nodeDesc)
Compiles the appropriate vector expression writer based on an expression descriptor (ExprNodeDesc).
|
static VectorExpressionWriter |
VectorExpressionWriterFactory.genVectorExpressionWritable(ObjectInspector fieldObjInspector)
Compiles the appropriate vector expression writer based on a field object inspector.
|
static VectorExpressionWriter[] |
VectorExpressionWriterFactory.genVectorStructExpressionWritables(StructObjectInspector oi)
Compiles the appropriate vector expression writers based on a struct object inspector.
|
static VectorExpressionWriter[] |
VectorExpressionWriterFactory.getExpressionWriters(List<ExprNodeDesc> nodesDesc)
Helper function to create an array of writers from a list of expression descriptors.
|
static VectorExpressionWriter[] |
VectorExpressionWriterFactory.getExpressionWriters(StructObjectInspector objInspector)
Returns VectorExpressionWriter objects for the fields in the given object inspector. |
org.apache.hadoop.hive.ql.exec.vector.ColumnVector.Type |
VectorExpression.getOutputColumnVectorType() |
static VectorExpressionWriter[] |
VectorExpressionWriterFactory.getSettableExpressionWriters(SettableStructObjectInspector objInspector) |
Object |
VectorExpressionWriter.initValue(Object ost) |
static void |
VectorExpressionWriterFactory.processVectorExpressions(List<ExprNodeDesc> nodesDesc,
List<String> columnNames,
VectorExpressionWriterFactory.SingleOIDClosure closure)
Creates the value writers for a column vector expression list.
|
static void |
VectorExpressionWriterFactory.processVectorExpressions(List<ExprNodeDesc> nodesDesc,
VectorExpressionWriterFactory.ListOIDClosure closure)
Creates the value writers for a column vector expression list.
|
static void |
VectorExpressionWriterFactory.processVectorInspector(StructObjectInspector structObjInspector,
VectorExpressionWriterFactory.SingleOIDClosure closure)
Creates the value writers for a struct object inspector.
|
void |
StructColumnInList.setStructColumnExprs(VectorizationContext vContext,
List<ExprNodeDesc> structColumnExprs,
org.apache.hadoop.hive.ql.exec.vector.ColumnVector.Type[] fieldVectorColumnTypes) |
void |
IStructInExpr.setStructColumnExprs(VectorizationContext vContext,
List<ExprNodeDesc> structColumnExprs,
org.apache.hadoop.hive.ql.exec.vector.ColumnVector.Type[] fieldVectorColumnTypes) |
void |
FilterStructColumnInList.setStructColumnExprs(VectorizationContext vContext,
List<ExprNodeDesc> structColumnExprs,
org.apache.hadoop.hive.ql.exec.vector.ColumnVector.Type[] fieldVectorColumnTypes) |
Object |
VectorExpressionWriter.setValue(Object row,
org.apache.hadoop.hive.ql.exec.vector.ColumnVector column,
int columnRow) |
void |
FilterLongColumnInList.transientInit() |
void |
VectorUDFDateDiffColCol.transientInit() |
void |
VectorUDFDateAddColScalar.transientInit() |
void |
VectorUDFTimestampFieldString.transientInit() |
void |
VectorUDFTimestampFieldTimestamp.transientInit() |
void |
CastDecimalToString.transientInit() |
void |
VectorUDFDateAddColCol.transientInit() |
void |
FilterDecimalColumnInList.transientInit() |
void |
CastStringToLong.transientInit() |
void |
FilterDoubleColumnInList.transientInit() |
void |
VectorExpression.transientInit() |
void |
VectorInBloomFilterColDynamicValue.transientInit() |
void |
CastTimestampToLong.transientInit() |
void |
FuncLongToString.transientInit() |
void |
AbstractFilterStringColLikeStringScalar.transientInit() |
void |
DecimalColumnInList.transientInit() |
void |
VectorUDFDateAddScalarCol.transientInit() |
void |
CastLongToString.transientInit() |
void |
FilterTimestampColumnInList.transientInit() |
void |
VectorUDFTimestampFieldDate.transientInit() |
Object |
VectorExpressionWriter.writeValue(byte[] value,
int start,
int length) |
Object |
VectorExpressionWriter.writeValue(org.apache.hadoop.hive.ql.exec.vector.ColumnVector column,
int row) |
Object |
VectorExpressionWriter.writeValue(double value) |
Object |
VectorExpressionWriter.writeValue(org.apache.hadoop.hive.common.type.HiveDecimal value) |
Object |
VectorExpressionWriter.writeValue(org.apache.hadoop.hive.serde2.io.HiveDecimalWritable value) |
Object |
VectorExpressionWriter.writeValue(org.apache.hadoop.hive.common.type.HiveIntervalDayTime value) |
Object |
VectorExpressionWriter.writeValue(HiveIntervalDayTimeWritable value) |
Object |
VectorExpressionWriter.writeValue(long value) |
Object |
VectorExpressionWriter.writeValue(Timestamp value) |
Object |
VectorExpressionWriter.writeValue(TimestampWritableV2 value) |
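The `evaluate(VectorizedRowBatch batch)` contract indexed above processes a whole column batch per call rather than one row at a time. The following is a standalone sketch of that pattern; `Batch` and `AddLongScalar` are hypothetical stand-ins, not Hive classes, and a real `VectorExpression` subclass must additionally honor the batch's selection vector, null flags, and `isRepeating` optimization.

```java
// Minimal analogue of VectorExpression.evaluate(VectorizedRowBatch):
// columns are primitive arrays, and an expression transforms an input
// column into an output column in one tight loop.
final class Batch {
    final long[][] cols;  // cols[c][r]: column-major storage, as in VectorizedRowBatch
    final int size;       // number of valid rows in this batch
    Batch(int numCols, int size) {
        this.cols = new long[numCols][size];
        this.size = size;
    }
}

final class AddLongScalar {
    private final int inputCol, outputCol;
    private final long scalar;
    AddLongScalar(int inputCol, long scalar, int outputCol) {
        this.inputCol = inputCol;
        this.scalar = scalar;
        this.outputCol = outputCol;
    }
    // Analogous to evaluate(batch): write the result column for the whole batch.
    void evaluate(Batch batch) {
        long[] in = batch.cols[inputCol];
        long[] out = batch.cols[outputCol];
        for (int i = 0; i < batch.size; i++) {
            out[i] = in[i] + scalar;  // per-column loop is the point of vectorization
        }
    }
}
```

The `evaluateChildren(vrg)` helper listed above corresponds to first calling `evaluate` on each child expression so their output columns are populated before the parent runs.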
Constructor and Description |
---|
ConstantVectorExpression(int outputColumnNum,
byte[] value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
double value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
HiveChar value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
org.apache.hadoop.hive.common.type.HiveDecimal value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
org.apache.hadoop.hive.common.type.HiveIntervalDayTime value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
HiveVarchar value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
long value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
Timestamp value,
TypeInfo outputTypeInfo) |
ConstantVectorExpression(int outputColumnNum,
TypeInfo outputTypeInfo,
boolean isNull) |
DynamicValueVectorExpression(int outputColumnNum,
TypeInfo typeInfo,
DynamicValue dynamicValue) |
FilterConstantBooleanVectorExpression(long value) |
FilterStringColRegExpStringScalar(int colNum,
byte[] regExpPattern) |
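The `VectorAggregateExpression` methods indexed in the tables that follow (`getNewAggregationBuffer`, `aggregateInput`, `reset`) form a buffer lifecycle: allocate a buffer, fold batches into it, read the result, then reset for reuse. A standalone analogue, assuming a long-sum aggregate; `SumBuffer` and `LongSumAggregator` are hypothetical stand-ins, not Hive classes.

```java
// Analogue of AggregationBuffer: mutable accumulated state per group.
final class SumBuffer {
    long sum;
}

final class LongSumAggregator {
    private final int inputCol;
    LongSumAggregator(int inputCol) { this.inputCol = inputCol; }

    // Analogous to getNewAggregationBuffer(): fresh, zeroed state.
    SumBuffer getNewAggregationBuffer() { return new SumBuffer(); }

    // Analogous to aggregateInput(agg, batch): fold one whole batch into the buffer.
    void aggregateInput(SumBuffer agg, long[][] batchCols, int batchSize) {
        long[] col = batchCols[inputCol];
        for (int i = 0; i < batchSize; i++) {
            agg.sum += col[i];
        }
    }

    // Analogous to reset(agg): clear state so the buffer can be reused.
    void reset(SumBuffer agg) { agg.sum = 0; }
}
```

The `aggregateInputSelection` variants differ in that they route each selected row of the batch to one of several buffers (one per group-by key), and `assignRowColumn` writes a buffer's final value back into an output batch column.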
Modifier and Type | Method and Description |
---|---|
void |
VectorUDAFBloomFilter.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumTimestamp.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFCount.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFBloomFilterMerge.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFCountStar.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDecimal64ToDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDecimal64.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFCountMerge.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
abstract void |
VectorAggregateExpression.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch unit) |
void |
VectorUDAFBloomFilter.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumTimestamp.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFCount.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFBloomFilterMerge.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFCountStar.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDecimal64ToDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDecimal64.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFCountMerge.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
abstract void |
VectorAggregateExpression.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch vrg) |
void |
VectorUDAFBloomFilter.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumTimestamp.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFCount.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFBloomFilterMerge.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFCountStar.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDecimal64ToDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDecimal64.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFCountMerge.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
abstract void |
VectorAggregateExpression.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFBloomFilter.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFSumTimestamp.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFCount.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFBloomFilterMerge.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFCountStar.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFSumDecimal64ToDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFSumDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFSumDecimal64.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFCountMerge.getNewAggregationBuffer() |
abstract VectorAggregateExpression.AggregationBuffer |
VectorAggregateExpression.getNewAggregationBuffer() |
void |
VectorUDAFBloomFilter.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumTimestamp.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFCount.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFBloomFilterMerge.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFCountStar.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDecimal64ToDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDecimal64.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFCountMerge.reset(VectorAggregateExpression.AggregationBuffer agg) |
abstract void |
VectorAggregateExpression.reset(VectorAggregateExpression.AggregationBuffer agg) |
Modifier and Type | Method and Description |
---|---|
void |
VectorUDAFAvgDecimal64ToDecimalComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDecimalComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimal64ToDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxString.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinIntervalDayTime.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDoubleComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarLongComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimalFinal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarPartial2.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxLong.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgTimestamp.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgPartial2.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDouble.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarFinal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinTimestamp.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDouble.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxDouble.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgTimestampComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxIntervalDayTime.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimalComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarTimestamp.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDoubleComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinString.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgLongComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgFinal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxTimestamp.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxDecimal.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarLong.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarTimestampComplete.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDouble.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinLong.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumLong.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgLong.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinDouble.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimalPartial2.aggregateInput(VectorAggregateExpression.AggregationBuffer agg,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimal64ToDecimalComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDecimalComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimal64ToDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxString.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinIntervalDayTime.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDoubleComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarLongComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimalFinal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarPartial2.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxLong.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgTimestamp.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgPartial2.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDouble.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarFinal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinTimestamp.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDouble.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxDouble.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgTimestampComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxIntervalDayTime.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimalComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarTimestamp.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarDoubleComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinString.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgLongComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgFinal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxTimestamp.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMaxDecimal.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarLong.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFVarTimestampComplete.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumDouble.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinLong.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFSumLong.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgLong.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFMinDouble.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int aggregrateIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimalPartial2.aggregateInputSelection(VectorAggregationBufferRow[] aggregationBufferSets,
int bufferIndex,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDAFAvgDecimal64ToDecimalComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDecimalComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimal64ToDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxString.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinIntervalDayTime.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDoubleComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarLongComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimalFinal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarPartial2.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxLong.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgTimestamp.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgPartial2.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDouble.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarFinal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinTimestamp.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDouble.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxDouble.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgTimestampComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxIntervalDayTime.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimalComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarTimestamp.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDoubleComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinString.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgLongComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgFinal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxTimestamp.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxDecimal.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarLong.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarTimestampComplete.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDouble.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinLong.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumLong.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgLong.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinDouble.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimalPartial2.assignRowColumn(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
int columnNum,
VectorAggregateExpression.AggregationBuffer agg) |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDecimal64ToDecimalComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarDecimalComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDecimal64ToDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMaxString.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMinIntervalDayTime.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDoubleComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarLongComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDecimalFinal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarPartial2.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMaxLong.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgTimestamp.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgPartial2.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDouble.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarFinal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMinTimestamp.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarDouble.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMaxDouble.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgTimestampComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMaxIntervalDayTime.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDecimalComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarTimestamp.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarDoubleComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMinString.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgLongComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgFinal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMaxTimestamp.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMinDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMaxDecimal.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarLong.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFVarTimestampComplete.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFSumDouble.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMinLong.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFSumLong.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgLong.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFMinDouble.getNewAggregationBuffer() |
VectorAggregateExpression.AggregationBuffer |
VectorUDAFAvgDecimalPartial2.getNewAggregationBuffer() |
void |
VectorUDAFAvgDecimal64ToDecimalComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDecimalComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimal64ToDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxString.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinIntervalDayTime.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDoubleComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarLongComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimalFinal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarPartial2.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxLong.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgTimestamp.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgPartial2.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDouble.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarFinal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinTimestamp.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDouble.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxDouble.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgTimestampComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxIntervalDayTime.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimalComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarTimestamp.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarDoubleComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinString.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgLongComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgFinal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxTimestamp.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMaxDecimal.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarLong.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFVarTimestampComplete.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumDouble.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinLong.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFSumLong.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgLong.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFMinDouble.reset(VectorAggregateExpression.AggregationBuffer agg) |
void |
VectorUDAFAvgDecimalPartial2.reset(VectorAggregateExpression.AggregationBuffer agg) |
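Taken together, the methods listed above form the vectorized UDAF lifecycle: `getNewAggregationBuffer()` allocates per-group state, the `aggregateInputSelection` variants fold a `VectorizedRowBatch` into that state, `assignRowColumn` writes the result back into an output batch, and `reset` clears the buffer for reuse. The following is a minimal self-contained analogue of that pattern; every type here is an illustrative stand-in, not Hive's actual classes:

```java
// Minimal analogue of the vectorized UDAF lifecycle sketched by the
// method tables above. All names are illustrative stand-ins.
import java.util.Arrays;

public class UdafLifecycleSketch {

    // Stand-in for VectorAggregateExpression.AggregationBuffer.
    interface AggregationBuffer {
        void reset();
    }

    // Stand-in for an aggregator like VectorUDAFSumLong.
    static class SumLongAggregator {
        static class Buffer implements AggregationBuffer {
            long sum;
            @Override public void reset() { sum = 0; }
        }

        AggregationBuffer getNewAggregationBuffer() { return new Buffer(); }

        // Analogue of aggregateInput: fold one "batch" of values into the buffer.
        void aggregateInput(AggregationBuffer agg, long[] batch) {
            ((Buffer) agg).sum += Arrays.stream(batch).sum();
        }

        long finish(AggregationBuffer agg) { return ((Buffer) agg).sum; }
    }

    public static void main(String[] args) {
        SumLongAggregator sum = new SumLongAggregator();
        AggregationBuffer buf = sum.getNewAggregationBuffer();
        sum.aggregateInput(buf, new long[] {1, 2, 3});
        sum.aggregateInput(buf, new long[] {4, 5});
        System.out.println(sum.finish(buf)); // prints 15
        buf.reset();                         // buffer is reusable after reset()
        System.out.println(sum.finish(buf)); // prints 0
    }
}
```

In Hive itself the batch-level variants (`aggregateInputSelection` with a `VectorAggregationBufferRow[]`) additionally route each row of the batch to the aggregation buffer of its group-by key; the sketch above collapses that to a single group for clarity.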
Modifier and Type | Method and Description |
---|---|
void |
FilterLongColGreaterEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColGreaterStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSignLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupScalarGreaterStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarSubtractLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColNotEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleScalarDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarAddDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncCosDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalYearMonthColAddTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColAddTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColLessEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongScalarDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColLessEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalYearMonthScalarAddDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalScalarDivideDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarAddLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprIntervalDayTimeColumnScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDateColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarSubtractLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRadiansLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarSubtractDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColAddIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarModuloDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarSubtractIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColLessIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarSubtractIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarModuloDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColNotEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColLessIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRadiansDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSinDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterCharColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColGreaterEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarLessEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSqrtDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColGreaterEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColNotEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColNotEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarDivideDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColLessEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalScalarNotEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprIntervalDayTimeScalarColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColNotEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleScalarLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColLessEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColNotEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractDateScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalScalarLessEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColLessEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampScalarLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalYearMonthColAddDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColSubtractDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColUnaryMinusChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColSubtractIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColAddDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarMultiplyDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColDivideDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarAddDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColLessEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColAddIntervalYearMonthScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarAddDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupScalarGreaterStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongScalarDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColLessEqualDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarAddLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeScalarNotEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarDivideLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColGreaterEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColAddTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalYearMonthColAddTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColGreaterEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColNotEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarModuloLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncTanDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColNotEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncNegateDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColNotEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarMultiplyLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColLessStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleScalarDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColNotEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColNotEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColNotEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncBRoundDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterCharColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColLessIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColLessStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColAddIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColGreaterStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarMultiplyDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColLessIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastLongToFloatViaLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprIntervalDayTimeColumnColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalScalarAddDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncBRoundDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarAddIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastDoubleToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastLongToBooleanViaLongToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRoundDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampScalarGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColLessStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncFloorDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleScalarLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprIntervalDayTimeScalarScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalScalarMultiplyDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLog2LongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColNotEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncAbsDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarMultiplyLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongColumnLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColGreaterEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColLessStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeScalarLessEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarModuloLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColModuloDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColGreaterEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarSubtractDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalScalarModuloDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterCharColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColDivideDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColNotEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeScalarGreaterEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColAddIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateScalarSubtractTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColLessEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColLessEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarAddLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampScalarGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColAddDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColSubtractIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarAddLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupScalarGreaterEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColSubtractDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalYearMonthColAddDateScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncFloorDoubleToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarAddIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColNotEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupScalarLessStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarGreaterIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongScalarLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
Decimal64ScalarAddDecimal64Column.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarSubtractTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncExpLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColAddIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarModuloDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprTimestampColumnColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColGreaterEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColEqualDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarSubtractIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarModuloLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColNotEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColGreaterIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLnDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleColumnLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalScalarGreaterEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncCeilDoubleToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupScalarNotEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterVarCharColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncAbsDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColNotEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
Decimal64ColSubtractDecimal64Column.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalScalarEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeScalarEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterVarCharColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColMultiplyDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeScalarLessIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColNotEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColNotEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLnLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarAddDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarAddTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColDivideDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupScalarLessEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarMultiplyDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLog2DoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSignDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColDivideDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampScalarNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarSubtractLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractIntervalYearMonthScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLog10LongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncRoundDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColEqualStringGroupColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColDivideLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColGreaterDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColAddIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
Decimal64ColAddDecimal64Scalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncFloorLongToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncASinDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalScalarGreaterDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarMultiplyLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColLessDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColEqualIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncAbsLongToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColAddIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColGreaterIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColUnaryMinus.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastDoubleToBooleanViaDoubleToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncASinLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncDegreesLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
CastLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncACosLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColNotEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncACosDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarLessIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColAddDateScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColModuloDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSignDecimalToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupScalarNotEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupScalarLessStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateScalarSubtractIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSqrtLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncExpDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarModuloLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarDivideDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncTanLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractIntervalYearMonthScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarSubtractDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprTimestampScalarScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColGreaterEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateScalarAddIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColGreaterIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeScalarGreaterIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddDoubleScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncCeilDecimalToDecimal.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncSinLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalScalarSubtractDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColNotEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongScalarLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarAddIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColNotEqualDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarAddDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColEqualDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColSubtractDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampScalarEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColAddIntervalYearMonthScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprTimestampColumnScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarGreaterEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColDivideDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarSubtractDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprDoubleColumnDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColGreaterStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarGreaterDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncATanLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColDivideDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarNotEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColGreaterEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncCosLongToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColSubtractTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprLongColumnDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColMultiplyDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
Decimal64ScalarSubtractDecimal64Column.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarGreaterTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColNotEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
Decimal64ColSubtractDecimal64Scalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampScalarLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColLessTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColLessEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColAddIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColAddLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterVarCharColumnBetweenDynamicValue.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColNotEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalScalarLessDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupColGreaterEqualStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupColGreaterStringGroupScalarBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColGreaterIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColumnBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarNotEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarLessDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupScalarGreaterEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
Decimal64ColAddDecimal64Column.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
StringGroupScalarEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColGreaterDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColAddIntervalDayTimeScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColDivideLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterIntervalDayTimeColEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColLessDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupScalarEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColLessEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateScalarAddIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarLessEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColUnaryMinusChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColNotEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleScalarGreaterEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarModuloDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColModuloDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncATanDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarMultiplyDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColGreaterEqualDecimalScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampColGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarLessTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IfExprTimestampScalarColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColGreaterDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarMultiplyLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
TimestampScalarSubtractDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncDegreesDoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateColSubtractIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColLessEqualDoubleColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeScalarEqualIntervalDayTimeColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColGreaterEqualLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongScalarLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColAddLongScalarChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColGreaterEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalDayTimeColAddDateColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColNotEqualTimestampScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncCeilLongToLong.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterStringGroupScalarLessEqualStringGroupColumnBase.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColSubtractLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongScalarSubtractLongColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
IntervalYearMonthScalarAddTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDoubleColNotEqualLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarLessLongColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterLongColLessDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
LongColMultiplyDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DateScalarSubtractIntervalYearMonthColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColMultiplyDoubleColumnChecked.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleScalarLessEqualTimestampColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FuncLog10DoubleToDouble.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColEqualDoubleScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterTimestampColLessLongScalar.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DoubleColUnaryMinus.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
DecimalColModuloDecimalColumn.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDecimalColumnNotBetween.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
FilterDateColumnBetweenDynamicValue.transientInit() |
void |
FilterCharColumnBetweenDynamicValue.transientInit() |
void |
FilterDecimalColumnBetweenDynamicValue.transientInit() |
void |
FilterLongColumnBetweenDynamicValue.transientInit() |
void |
FilterStringColumnBetweenDynamicValue.transientInit() |
void |
FilterDoubleColumnBetweenDynamicValue.transientInit() |
void |
FilterTimestampColumnBetweenDynamicValue.transientInit() |
void |
FilterVarCharColumnBetweenDynamicValue.transientInit() |
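Every `evaluate(VectorizedRowBatch batch)` entry above follows the same columnar pattern: the expression reads whole input columns, writes a whole output column, and honors the batch's `selected` vector so only live rows are touched. The sketch below is a simplified stand-in for that pattern; the `Batch` type and `evaluateUnaryMinus` method here are illustrative analogs, not Hive's actual `VectorizedRowBatch` or `LongColUnaryMinus` API.

```java
// Illustrative sketch of the columnar evaluate(...) pattern used by the
// expressions listed above. Batch is a simplified stand-in, not Hive's
// VectorizedRowBatch.
public class ColumnarEvalSketch {

    // Minimal stand-in for a batch of rows stored column-wise.
    static class Batch {
        int size;              // number of rows in the batch
        boolean selectedInUse; // if true, only indices in 'selected' are live
        int[] selected;        // indices of live rows when selectedInUse
        long[] col0;           // input long column
        long[] col1;           // output long column
    }

    // Analog of LongColUnaryMinus.evaluate(batch): negate col0 into col1
    // for every live row, with no per-row virtual dispatch.
    static void evaluateUnaryMinus(Batch batch) {
        if (batch.selectedInUse) {
            for (int j = 0; j < batch.size; j++) {
                int i = batch.selected[j];
                batch.col1[i] = -batch.col0[i];
            }
        } else {
            for (int i = 0; i < batch.size; i++) {
                batch.col1[i] = -batch.col0[i];
            }
        }
    }

    public static void main(String[] args) {
        Batch b = new Batch();
        b.size = 3;
        b.selectedInUse = false;
        b.col0 = new long[] {1, -2, 3};
        b.col1 = new long[3];
        evaluateUnaryMinus(b);
        System.out.println(b.col1[0] + "," + b.col1[1] + "," + b.col1[2]);
    }
}
```

The tight per-column loops are why these expressions are generated as hundreds of specialized classes (one per type/operator/operand-kind combination) rather than one generic interpreter.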
Modifier and Type | Method and Description |
---|---|
void |
VectorKeySeriesMultiSerialized.init(TypeInfo[] typeInfos,
int[] columnNums) |
Modifier and Type | Method and Description |
---|---|
void |
VectorMapJoinGenerateResultOperator.closeOp(boolean aborted)
On close, make sure a partially filled overflow batch gets forwarded.
|
protected void |
VectorMapJoinCommonOperator.commonSetup(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorMapJoinInnerBigOnlyGenerateResultOperator.commonSetup(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorMapJoinInnerGenerateResultOperator.commonSetup(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorMapJoinGenerateResultOperator.commonSetup(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorMapJoinLeftSemiGenerateResultOperator.commonSetup(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorMapJoinOuterGenerateResultOperator.commonSetup(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorMapJoinCommonOperator.completeInitializationOp(Object[] os) |
protected void |
VectorMapJoinCommonOperator.determineCommonInfo(boolean isOuter) |
protected void |
VectorMapJoinGenerateResultOperator.doSmallTableDeserializeRow(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int batchIndex,
WriteBuffers.ByteSegmentRef byteSegmentRef,
VectorMapJoinHashMapResult hashMapResult) |
protected void |
VectorMapJoinInnerGenerateResultOperator.finishInner(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int allMatchCount,
int equalKeySeriesCount,
int spillCount,
int hashMapResultCount)
Generate the inner join output results for one vectorized row batch.
|
protected void |
VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnly(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int allMatchCount,
int equalKeySeriesCount,
int spillCount,
VectorMapJoinHashTableResult[] hashTableResults,
int hashMapResultCount)
Generate the inner big table only join output results for one vectorized row batch.
|
protected void |
VectorMapJoinInnerBigOnlyGenerateResultOperator.finishInnerBigOnlyRepeated(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
JoinUtil.JoinResult joinResult,
VectorMapJoinHashMultiSetResult hashMultiSetResult) |
protected void |
VectorMapJoinInnerGenerateResultOperator.finishInnerRepeated(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
JoinUtil.JoinResult joinResult,
VectorMapJoinHashTableResult hashMapResult) |
protected void |
VectorMapJoinLeftSemiGenerateResultOperator.finishLeftSemi(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int allMatchCount,
int spillCount,
VectorMapJoinHashTableResult[] hashTableResults)
Generate the left semi join output results for one vectorized row batch.
|
protected void |
VectorMapJoinLeftSemiGenerateResultOperator.finishLeftSemiRepeated(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
JoinUtil.JoinResult joinResult,
VectorMapJoinHashTableResult hashSetResult) |
void |
VectorMapJoinOuterGenerateResultOperator.finishOuter(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int allMatchCount,
int equalKeySeriesCount,
boolean atLeastOneNonMatch,
boolean inputSelectedInUse,
int inputLogicalSize,
int spillCount,
int hashMapResultCount)
Generate the outer join output results for one vectorized row batch.
|
void |
VectorMapJoinOuterGenerateResultOperator.finishOuterRepeated(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
JoinUtil.JoinResult joinResult,
VectorMapJoinHashMapResult hashMapResult,
boolean someRowsFilteredOut,
boolean inputSelectedInUse,
int inputLogicalSize)
Generate the outer join output results for one vectorized row batch with a repeated key.
|
void |
VectorMapJoinGenerateResultOperator.forwardBigTableBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Forward the big table batch to the children.
|
protected void |
VectorMapJoinGenerateResultOperator.forwardOverflow()
Forward the overflow batch and reset the batch.
|
protected void |
VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorMapJoinHashMapResult hashMapResult,
int[] allMatchs,
int allMatchesIndex,
int duplicateCount)
Generate results for an N x M cross product.
|
protected void |
VectorMapJoinGenerateResultOperator.generateHashMapResultRepeatedAll(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorMapJoinHashMapResult hashMapResult)
Generate optimized results when the entire batch key is repeated and it matched the hash map.
|
protected int |
VectorMapJoinGenerateResultOperator.generateHashMapResultSingleValue(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorMapJoinHashMapResult hashMapResult,
int[] allMatchs,
int allMatchesIndex,
int duplicateCount,
int numSel)
Generate join results for a single small table value match.
|
protected int |
VectorMapJoinInnerBigOnlyGenerateResultOperator.generateHashMultiSetResultRepeatedAll(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorMapJoinHashMultiSetResult hashMultiSetResult)
Generate the inner big table only join output results for one vectorized row batch with
a repeated key.
|
protected int |
VectorMapJoinLeftSemiGenerateResultOperator.generateHashSetResultRepeatedAll(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Generate the left semi join output results for one vectorized row batch with a repeated key.
|
protected void |
VectorMapJoinOuterGenerateResultOperator.generateOuterNulls(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int[] noMatchs,
int noMatchSize)
Generate the non-matching outer join output results for one vectorized row batch.
|
protected void |
VectorMapJoinOuterGenerateResultOperator.generateOuterNullsRepeatedAll(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Generate the non-matching outer join output results for the whole repeating vectorized
row batch.
|
protected void |
VectorMapJoinCommonOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorMapJoinGenerateResultOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorMapJoinGenerateResultOperator.performValueExpressions(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
int[] allMatchs,
int allMatchCount) |
void |
VectorMapJoinInnerMultiKeyOperator.process(Object row,
int tag) |
void |
VectorMapJoinOuterMultiKeyOperator.process(Object row,
int tag) |
void |
VectorMapJoinLeftSemiLongOperator.process(Object row,
int tag) |
void |
VectorMapJoinInnerBigOnlyMultiKeyOperator.process(Object row,
int tag) |
void |
VectorMapJoinInnerStringOperator.process(Object row,
int tag) |
void |
VectorMapJoinInnerBigOnlyLongOperator.process(Object row,
int tag) |
void |
VectorMapJoinInnerLongOperator.process(Object row,
int tag) |
void |
VectorMapJoinOuterLongOperator.process(Object row,
int tag) |
void |
VectorMapJoinLeftSemiStringOperator.process(Object row,
int tag) |
void |
VectorMapJoinInnerBigOnlyStringOperator.process(Object row,
int tag) |
void |
VectorMapJoinOuterStringOperator.process(Object row,
int tag) |
void |
VectorMapJoinLeftSemiMultiKeyOperator.process(Object row,
int tag) |
protected void |
VectorMapJoinGenerateResultOperator.reloadHashTable(byte pos,
int partitionId) |
protected void |
VectorMapJoinGenerateResultOperator.reProcessBigTable(int partitionId) |
protected org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch |
VectorMapJoinCommonOperator.setupOverflowBatch() |
protected void |
VectorMapJoinGenerateResultOperator.spillBatchRepeated(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorMapJoinHashTableResult hashTableResult) |
protected void |
VectorMapJoinGenerateResultOperator.spillHashMapBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
VectorMapJoinHashTableResult[] hashTableResults,
int[] spills,
int[] spillHashTableResultIndices,
int spillCount) |
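Several methods in the table above (`forwardOverflow`, `closeOp`) describe one discipline: join results accumulate in a reusable overflow batch, which is forwarded to the child operators whenever it fills and flushed one final time on close so a partially filled batch is not lost. The sketch below illustrates that discipline with simplified stand-in types, not Hive's actual operator classes.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified sketch of the overflow-batch discipline described above.
// A List<long[]> stands in for the downstream child operators.
public class OverflowBatchSketch {
    static final int MAX_SIZE = 4; // illustrative; Hive batches are larger

    long[] overflow = new long[MAX_SIZE];
    int overflowSize = 0;
    List<long[]> forwarded = new ArrayList<>();

    // Append one join result row; flush automatically when the batch fills.
    void addResultRow(long value) {
        overflow[overflowSize++] = value;
        if (overflowSize == MAX_SIZE) {
            forwardOverflow();
        }
    }

    // Analog of VectorMapJoinGenerateResultOperator.forwardOverflow():
    // forward the batch downstream and reset it for reuse.
    void forwardOverflow() {
        forwarded.add(Arrays.copyOf(overflow, overflowSize));
        overflowSize = 0;
    }

    // Analog of closeOp(aborted): on a clean close, make sure a partially
    // filled overflow batch gets forwarded.
    void close(boolean aborted) {
        if (!aborted && overflowSize > 0) {
            forwardOverflow();
        }
    }
}
```

Reusing one batch object this way keeps the operators allocation-free on the hot path; only the final flush in `close` handles the leftover rows.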
Modifier and Type | Method and Description |
---|---|
void |
VectorMapJoinFastStringCommon.adaptPutRow(VectorMapJoinFastBytesHashTable hashTable,
org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
void |
VectorMapJoinFastHashTableLoader.load(MapJoinTableContainer[] mapJoinTables,
MapJoinTableContainerSerDe[] mapJoinTableSerdes) |
void |
VectorMapJoinFastStringHashMultiSet.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
void |
VectorMapJoinFastStringHashMap.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
void |
VectorMapJoinFastBytesHashTable.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
void |
VectorMapJoinFastStringHashSet.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
void |
VectorMapJoinFastLongHashTable.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
MapJoinKey |
VectorMapJoinFastTableContainer.putRow(org.apache.hadoop.io.Writable currentKey,
org.apache.hadoop.io.Writable currentValue) |
void |
VectorMapJoinFastMultiKeyHashSet.testPutRow(byte[] currentKey) |
void |
VectorMapJoinFastMultiKeyHashMultiSet.testPutRow(byte[] currentKey) |
void |
VectorMapJoinFastMultiKeyHashMap.testPutRow(byte[] currentKey,
byte[] currentValue) |
void |
VectorMapJoinFastLongHashSet.testPutRow(long currentKey) |
void |
VectorMapJoinFastLongHashMultiSet.testPutRow(long currentKey) |
void |
VectorMapJoinFastLongHashMap.testPutRow(long currentKey,
byte[] currentValue) |
Modifier and Type | Method and Description |
---|---|
void |
VectorMapJoinHashTable.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
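The putRow variants listed above and below share one contract: the hash-table loader feeds each serialized small-table key/value pair into the hash structure, and each implementation (hash map, hash set, hash multi-set) interprets the pair according to its own semantics. A minimal sketch of such a load loop, assuming an already-constructed VectorMapJoinHashTable instance and a hypothetical kvReader helper (kvReader is not part of the API shown here):

```java
// Hypothetical load loop over a small-table key/value stream.
// `hashTable` is any VectorMapJoinHashTable implementation;
// `kvReader` is an assumed iterator-style reader, not a real Hive class.
BytesWritable key = new BytesWritable();
BytesWritable value = new BytesWritable();
while (kvReader.next(key, value)) {
  // putRow copies the serialized bytes into the hash structure;
  // the writables can therefore be reused across iterations.
  hashTable.putRow(key, value);
}
```

This is a sketch of the calling pattern only; in Hive the loop is driven by VectorMapJoinFastHashTableLoader.load, not by user code.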
Modifier and Type | Method and Description |
---|---|
void |
VectorMapJoinOptimizedHashTable.putRow(org.apache.hadoop.io.BytesWritable currentKey,
org.apache.hadoop.io.BytesWritable currentValue) |
protected void |
VectorMapJoinOptimizedHashTable.putRowInternal(org.apache.hadoop.io.BytesWritable key,
org.apache.hadoop.io.BytesWritable value) |
Modifier and Type | Method and Description |
---|---|
void |
VectorPTFGroupBatches.bufferGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
protected void |
VectorPTFOperator.closeOp(boolean abort) |
void |
VectorPTFEvaluatorLongSum.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorLongFirstValue.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDecimalMin.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDoubleAvg.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorRank.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDoubleMax.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDenseRank.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorLongMin.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorLongLastValue.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDoubleLastValue.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDecimalSum.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDecimalAvg.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDoubleMin.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDecimalFirstValue.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorLongMax.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDecimalLastValue.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDoubleFirstValue.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDoubleSum.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorCount.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
abstract void |
VectorPTFEvaluatorBase.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorDecimalMax.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFGroupBatches.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorLongAvg.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorRowNumber.evaluateGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFEvaluatorBase.evaluateInputExpr(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorPTFGroupBatches.evaluateStreamingGroupBatch(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch,
boolean isLastGroupBatch) |
void |
VectorPTFGroupBatches.fillGroupResultsAndForward(VectorPTFOperator vecPTFOperator,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch lastBatch) |
void |
VectorPTFOperator.forward(Object row,
ObjectInspector rowInspector) |
protected void |
VectorPTFOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorPTFOperator.process(Object row,
int tag)
Processes a batch from the reduce processor that contains rows for only one reducer key or PTF group.
|
void |
VectorPTFOperator.setNextVectorBatchGroupStatus(boolean isLastGroupBatch) |
protected org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch |
VectorPTFOperator.setupOverflowBatch() |
Constructor and Description |
---|
VectorPTFOperator(CompilationOpContext ctx,
OperatorDesc conf,
VectorizationContext vContext,
VectorDesc vectorDesc) |
Modifier and Type | Method and Description |
---|---|
protected void |
VectorReduceSinkCommonOperator.closeOp(boolean abort) |
protected void |
VectorReduceSinkCommonOperator.collect(HiveKey keyWritable,
org.apache.hadoop.io.BytesWritable valueWritable) |
protected void |
VectorReduceSinkLongOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkUniformHashOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkMultiKeyOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkStringOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkObjectHashOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkEmptyKeyOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
protected void |
VectorReduceSinkCommonOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
VectorReduceSinkUniformHashOperator.process(Object row,
int tag) |
void |
VectorReduceSinkObjectHashOperator.process(Object row,
int tag) |
void |
VectorReduceSinkEmptyKeyOperator.process(Object row,
int tag) |
Modifier and Type | Method and Description |
---|---|
void |
VectorUDFAdaptor.evaluate(org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
void |
VectorUDFAdaptor.init() |
Constructor and Description |
---|
VectorUDFAdaptor(ExprNodeGenericFuncDesc expr,
int outputColumnNum,
String resultType,
VectorUDFArgDesc[] argDescs) |
Modifier and Type | Method and Description |
---|---|
static boolean |
HiveFileFormatUtils.checkInputFormat(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
Class<? extends org.apache.hadoop.mapred.InputFormat> inputFormatCls,
List<org.apache.hadoop.fs.FileStatus> files)
Checks whether the given files are in the same format as the given input format.
|
static RecordUpdater |
HiveFileFormatUtils.getAcidRecordUpdater(org.apache.hadoop.mapred.JobConf jc,
TableDesc tableInfo,
int bucket,
FileSinkDesc conf,
org.apache.hadoop.fs.Path outPath,
ObjectInspector inspector,
org.apache.hadoop.mapred.Reporter reporter,
int rowIdColNum) |
static HiveOutputFormat<?,?> |
HiveFileFormatUtils.getHiveOutputFormat(org.apache.hadoop.conf.Configuration conf,
PartitionDesc partDesc) |
static HiveOutputFormat<?,?> |
HiveFileFormatUtils.getHiveOutputFormat(org.apache.hadoop.conf.Configuration conf,
TableDesc tableDesc) |
static FileSinkOperator.RecordWriter |
HiveFileFormatUtils.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc,
TableDesc tableInfo,
Class<? extends org.apache.hadoop.io.Writable> outputClass,
FileSinkDesc conf,
org.apache.hadoop.fs.Path outPath,
org.apache.hadoop.mapred.Reporter reporter) |
static FileSinkOperator.RecordWriter |
HiveFileFormatUtils.getRecordWriter(org.apache.hadoop.mapred.JobConf jc,
org.apache.hadoop.mapred.OutputFormat<?,?> outputFormat,
Class<? extends org.apache.hadoop.io.Writable> valueClass,
boolean isCompressed,
Properties tableProp,
org.apache.hadoop.fs.Path outPath,
org.apache.hadoop.mapred.Reporter reporter) |
static org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.WritableComparable,org.apache.hadoop.io.Writable> |
HiveInputFormat.wrapForLlap(org.apache.hadoop.mapred.InputFormat<org.apache.hadoop.io.WritableComparable,org.apache.hadoop.io.Writable> inputFormat,
org.apache.hadoop.conf.Configuration conf,
PartitionDesc part) |
Modifier and Type | Method and Description |
---|---|
void |
ExternalCache.ExternalFooterCachesByConf.Cache.clearFileMetadata(List<Long> fileIds) |
void |
MetastoreExternalCachesByConf.HBaseCache.clearFileMetadata(List<Long> fileIds) |
void |
OrcInputFormat.FooterCache.getAndValidate(List<HadoopShims.HdfsFileStatusWithId> files,
boolean isOriginal,
org.apache.orc.impl.OrcTail[] result,
ByteBuffer[] ppdResult) |
void |
ExternalCache.getAndValidate(List<HadoopShims.HdfsFileStatusWithId> files,
boolean isOriginal,
org.apache.orc.impl.OrcTail[] result,
ByteBuffer[] ppdResult) |
Iterator<Map.Entry<Long,ByteBuffer>> |
ExternalCache.ExternalFooterCachesByConf.Cache.getFileMetadata(List<Long> fileIds) |
Iterator<Map.Entry<Long,ByteBuffer>> |
MetastoreExternalCachesByConf.HBaseCache.getFileMetadata(List<Long> fileIds) |
Iterator<Map.Entry<Long,MetadataPpdResult>> |
ExternalCache.ExternalFooterCachesByConf.Cache.getFileMetadataByExpr(List<Long> fileIds,
ByteBuffer serializedSarg,
boolean doGetFooters) |
Iterator<Map.Entry<Long,MetadataPpdResult>> |
MetastoreExternalCachesByConf.HBaseCache.getFileMetadataByExpr(List<Long> fileIds,
ByteBuffer sarg,
boolean doGetFooters) |
void |
ExternalCache.ExternalFooterCachesByConf.Cache.putFileMetadata(ArrayList<Long> keys,
ArrayList<ByteBuffer> values) |
void |
MetastoreExternalCachesByConf.HBaseCache.putFileMetadata(ArrayList<Long> fileIds,
ArrayList<ByteBuffer> metadata) |
Modifier and Type | Method and Description |
---|---|
FilterPredicateLeafBuilder |
LeafFilterFactory.getLeafFilterBuilderByType(org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf.Type type,
org.apache.parquet.schema.Type parquetType)
Gets a leaf filter builder by FilterPredicateType; date, decimal, and timestamp are not yet
supported.
|
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.fs.Path |
ColumnTruncateMapper.backupOutputPath(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path outpath,
org.apache.hadoop.mapred.JobConf job) |
static void |
ColumnTruncateMapper.jobClose(org.apache.hadoop.fs.Path outputPath,
boolean success,
org.apache.hadoop.mapred.JobConf job,
SessionState.LogHelper console,
DynamicPartitionCtx dynPartCtx,
org.apache.hadoop.mapred.Reporter reporter) |
Modifier and Type | Class and Description |
---|---|
class |
LockException
Exception from lock manager.
|
Modifier and Type | Method and Description |
---|---|
static HiveLockObject |
HiveLockObject.createFrom(Hive hiveDB,
String tableName,
Map<String,String> partSpec)
Creates a locking object for a table (when no partition spec is provided)
or for a table partition.
|
int |
DbTxnManager.lockDatabase(Hive hiveDB,
LockDatabaseDesc lockDb) |
int |
HiveTxnManager.lockDatabase(Hive hiveDB,
LockDatabaseDesc lockDb)
This function is called to lock the database when an explicit lock command is
issued on a database.
|
int |
DbTxnManager.lockTable(Hive db,
LockTableDesc lockTbl) |
int |
HiveTxnManager.lockTable(Hive hiveDB,
LockTableDesc lockTbl)
This function is called to lock the table when an explicit lock command is
issued on a table.
|
int |
DbTxnManager.unlockDatabase(Hive hiveDB,
UnlockDatabaseDesc unlockDb) |
int |
HiveTxnManager.unlockDatabase(Hive hiveDB,
UnlockDatabaseDesc unlockDb)
This function is called to unlock the database when an explicit unlock command
is issued on a database.
|
int |
DbTxnManager.unlockTable(Hive hiveDB,
UnlockTableDesc unlockTbl) |
int |
HiveTxnManager.unlockTable(Hive hiveDB,
UnlockTableDesc unlockTbl)
This function is called to unlock the table when an explicit unlock command is
issued on a table.
|
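The explicit lock/unlock entry points above back the LOCK TABLE and UNLOCK TABLE commands. A hedged sketch of how a transaction manager might be driven, assuming an initialized Hive db handle and pre-built descriptors (txnMgr, lockTableDesc, and unlockTableDesc are assumed to come from the session state and the parsed command; they are not constructed here):

```java
// Hypothetical driver-side flow for an explicit LOCK TABLE command.
// `txnMgr` is a HiveTxnManager obtained from the session state;
// the *Desc objects carry the parsed table name and lock mode.
int rc = txnMgr.lockTable(db, lockTableDesc);  // returns 0 on success
if (rc == 0) {
  try {
    // ... run the statements that need the explicit lock ...
  } finally {
    txnMgr.unlockTable(db, unlockTableDesc);
  }
}
```

A sketch of the calling pattern only; DbTxnManager delegates the actual lock bookkeeping to the metastore.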
Modifier and Type | Class and Description |
---|---|
class |
HiveFatalException |
class |
InvalidTableException
Thrown when the specified table does not exist.
|
class |
Table.ValidationFailureSemanticException
Marker SemanticException, so that processing which allows for table validation failures
and handles them appropriately can recover from these types of SemanticExceptions.
|
Modifier and Type | Method and Description |
---|---|
void |
Hive.abortTransactions(List<Long> txnids) |
void |
Hive.addCheckConstraint(List<SQLCheckConstraint> checkConstraints) |
void |
Hive.addDefaultConstraint(List<SQLDefaultConstraint> defaultConstraints) |
void |
Hive.addForeignKey(List<SQLForeignKey> foreignKeyCols) |
void |
Hive.addNotNullConstraint(List<SQLNotNullConstraint> notNullConstraintCols) |
void |
Hive.addPrimaryKey(List<SQLPrimaryKey> primaryKeyCols) |
void |
Hive.addUniqueConstraint(List<SQLUniqueConstraint> uniqueConstraintCols) |
void |
Hive.alterDatabase(String dbName,
Database db) |
void |
Hive.alterFunction(String dbName,
String funcName,
Function newFunction) |
void |
Hive.alterPartition(String tblName,
Partition newPart,
EnvironmentContext environmentContext)
Updates the existing partition metadata with the new metadata.
|
void |
Hive.alterPartition(String dbName,
String tblName,
Partition newPart,
EnvironmentContext environmentContext)
Updates the existing partition metadata with the new metadata.
|
void |
Hive.alterPartitions(String tblName,
List<Partition> newParts,
EnvironmentContext environmentContext)
Updates the metadata of the existing partitions with the new metadata.
|
WMFullResourcePlan |
Hive.alterResourcePlan(String rpName,
WMNullableResourcePlan resourcePlan,
boolean canActivateDisabled,
boolean isForceDeactivate,
boolean isReplace) |
void |
Hive.alterTable(String dbName,
String tblName,
Table newTbl,
boolean cascade,
EnvironmentContext environmentContext) |
void |
Hive.alterTable(String fullyQlfdTblName,
Table newTbl,
boolean cascade,
EnvironmentContext environmentContext) |
void |
Hive.alterTable(String fullyQlfdTblName,
Table newTbl,
EnvironmentContext environmentContext)
Updates the existing table metadata with the new metadata.
|
void |
Hive.alterTable(Table newTbl,
EnvironmentContext environmentContext) |
void |
Hive.alterWMPool(WMNullablePool pool,
String poolPath) |
void |
Hive.alterWMTrigger(WMTrigger trigger) |
void |
Hive.cacheFileMetadata(String dbName,
String tableName,
String partName,
boolean allParts) |
void |
Hive.cancelDelegationToken(String tokenStrForm) |
void |
HiveMetaStoreChecker.checkMetastore(String dbName,
String tableName,
List<? extends Map<String,String>> partitions,
CheckResult result)
Check the metastore for inconsistencies: data missing in either the
metastore or on the DFS.
|
void |
Partition.checkValidity() |
void |
Table.checkValidity(org.apache.hadoop.conf.Configuration conf) |
void |
Hive.clearFileMetadata(List<Long> fileIds) |
void |
Hive.compact(String dbname,
String tableName,
String partName,
String compactType,
Map<String,String> tblproperties)
Deprecated.
|
CompactionResponse |
Hive.compact2(String dbname,
String tableName,
String partName,
String compactType,
Map<String,String> tblproperties)
Enqueue a compaction request.
|
static Partition |
Hive.convertAddSpecToMetaPartition(Table tbl,
AddPartitionDesc.OnePartitionDesc addSpec,
HiveConf conf) |
Table |
Table.copy() |
protected static void |
Hive.copyFiles(HiveConf conf,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
org.apache.hadoop.fs.FileSystem fs,
boolean isSrcLocal,
boolean isAcidIUD,
boolean isOverwrite,
List<org.apache.hadoop.fs.Path> newFiles,
boolean isBucketed,
boolean isFullAcidTable,
boolean isManaged)
Copy files.
|
void |
Hive.createDatabase(Database db)
Create a Database.
|
void |
Hive.createDatabase(Database db,
boolean ifNotExist)
Create a database.
|
void |
Hive.createFunction(Function func) |
static Partition |
Partition.createMetaPartitionObject(Table tbl,
Map<String,String> partSpec,
org.apache.hadoop.fs.Path location) |
void |
Hive.createOrDropTriggerToPoolMapping(String resourcePlanName,
String triggerName,
String poolPath,
boolean shouldDrop) |
void |
Hive.createOrUpdateWMMapping(WMMapping mapping,
boolean isUpdate) |
Partition |
Hive.createPartition(Table tbl,
Map<String,String> partSpec)
Creates a partition.
|
List<Partition> |
Hive.createPartitions(AddPartitionDesc addPartitionDesc) |
void |
Hive.createResourcePlan(WMResourcePlan resourcePlan,
String copyFromName) |
void |
Hive.createRole(String roleName,
String ownerName) |
void |
Hive.createTable(String tableName,
List<String> columns,
List<String> partCols,
Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat,
Class<?> fileOutputFormat)
Creates the table metadata and the directory for the table data.
|
void |
Hive.createTable(String tableName,
List<String> columns,
List<String> partCols,
Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat,
Class<?> fileOutputFormat,
int bucketCount,
List<String> bucketCols)
Creates the table metadata and the directory for the table data.
|
void |
Hive.createTable(String tableName,
List<String> columns,
List<String> partCols,
Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat,
Class<?> fileOutputFormat,
int bucketCount,
List<String> bucketCols,
Map<String,String> parameters)
Creates the table metadata and the directory for the table data.
|
void |
Hive.createTable(Table tbl)
Creates the table with the given objects.
|
void |
Hive.createTable(Table tbl,
boolean ifNotExists) |
void |
Hive.createTable(Table tbl,
boolean ifNotExists,
List<SQLPrimaryKey> primaryKeys,
List<SQLForeignKey> foreignKeys,
List<SQLUniqueConstraint> uniqueConstraints,
List<SQLNotNullConstraint> notNullConstraints,
List<SQLDefaultConstraint> defaultConstraints,
List<SQLCheckConstraint> checkConstraints)
Creates the table with the given objects.
|
void |
Hive.createWMPool(WMPool pool) |
void |
Hive.createWMTrigger(WMTrigger trigger) |
boolean |
Hive.databaseExists(String dbName)
Query metadata to see if a database with the given name already exists.
|
boolean |
Hive.deletePartitionColumnStatistics(String dbName,
String tableName,
String partName,
String colName) |
boolean |
Hive.deleteTableColumnStatistics(String dbName,
String tableName,
String colName) |
void |
Hive.dropConstraint(String dbName,
String tableName,
String constraintName) |
void |
Hive.dropDatabase(String name)
Drop a database.
|
void |
Hive.dropDatabase(String name,
boolean deleteData,
boolean ignoreUnknownDb)
Drop a database.
|
void |
Hive.dropDatabase(String name,
boolean deleteData,
boolean ignoreUnknownDb,
boolean cascade)
Drop a database.
|
void |
Hive.dropFunction(String dbName,
String funcName) |
boolean |
Hive.dropPartition(String tblName,
List<String> part_vals,
boolean deleteData) |
boolean |
Hive.dropPartition(String db_name,
String tbl_name,
List<String> part_vals,
boolean deleteData) |
boolean |
Hive.dropPartition(String dbName,
String tableName,
List<String> partVals,
PartitionDropOptions options) |
List<Partition> |
Hive.dropPartitions(String tblName,
List<DropTableDesc.PartSpec> partSpecs,
boolean deleteData,
boolean ifExists) |
List<Partition> |
Hive.dropPartitions(String tblName,
List<DropTableDesc.PartSpec> partSpecs,
PartitionDropOptions dropOptions) |
List<Partition> |
Hive.dropPartitions(String dbName,
String tblName,
List<DropTableDesc.PartSpec> partSpecs,
boolean deleteData,
boolean ifExists) |
List<Partition> |
Hive.dropPartitions(String dbName,
String tblName,
List<DropTableDesc.PartSpec> partSpecs,
PartitionDropOptions dropOptions) |
List<Partition> |
Hive.dropPartitions(Table table,
List<String> partDirNames,
boolean deleteData,
boolean ifExists)
Drops the partitions specified as directory names associated with the table.
|
void |
Hive.dropResourcePlan(String rpName) |
void |
Hive.dropRole(String roleName) |
void |
Hive.dropTable(String tableName)
Drops table along with the data in it.
|
void |
Hive.dropTable(String tableName,
boolean ifPurge)
Drops table along with the data in it.
|
void |
Hive.dropTable(String dbName,
String tableName)
Drops table along with the data in it.
|
void |
Hive.dropTable(String dbName,
String tableName,
boolean deleteData,
boolean ignoreUnknownTab)
Drops the table.
|
void |
Hive.dropTable(String dbName,
String tableName,
boolean deleteData,
boolean ignoreUnknownTab,
boolean ifPurge)
Drops the table.
|
void |
Hive.dropWMMapping(WMMapping mapping) |
void |
Hive.dropWMPool(String resourcePlanName,
String poolPath) |
void |
Hive.dropWMTrigger(String rpName,
String triggerName) |
InputEstimator.Estimation |
InputEstimator.estimate(org.apache.hadoop.mapred.JobConf job,
TableScanOperator ts,
long remaining)
Estimates the input size based on the filter and projection on the table scan operator.
|
List<Partition> |
Hive.exchangeTablePartitions(Map<String,String> partitionSpecs,
String sourceDb,
String sourceTable,
String destDb,
String destinationTableName) |
PrincipalPrivilegeSet |
Hive.get_privilege_set(HiveObjectType objectType,
String db_name,
String table_name,
List<String> part_values,
String column_name,
String user_name,
List<String> group_names) |
static Hive |
Hive.get() |
static Hive |
Hive.get(boolean doRegisterAllFns) |
static Hive |
Hive.get(org.apache.hadoop.conf.Configuration c,
Class<?> clazz) |
static Hive |
Hive.get(HiveConf c)
Gets the Hive object for the current thread.
|
static Hive |
Hive.get(HiveConf c,
boolean needsRefresh)
Gets a connection to the metastore.
|
WMFullResourcePlan |
Hive.getActiveResourcePlan() |
List<String> |
Hive.getAllDatabases()
Get all existing database names.
|
List<Function> |
Hive.getAllFunctions() |
List<Table> |
Hive.getAllMaterializedViewObjects(String dbName)
Get all materialized views for the specified database.
|
List<String> |
Hive.getAllMaterializedViews(String dbName)
Get all materialized view names for the specified database.
|
Set<Partition> |
Hive.getAllPartitionsOf(Table tbl)
Get all the partitions; unlike
Hive.getPartitions(Table), does not include auth. |
List<WMResourcePlan> |
Hive.getAllResourcePlans() |
List<String> |
Hive.getAllRoleNames()
Get all existing role names.
|
List<Table> |
Hive.getAllTableObjects(String dbName)
Get all tables for the specified database.
|
List<String> |
Hive.getAllTables()
Get all table names for the current database.
|
List<String> |
Hive.getAllTables(String dbName)
Get all table names for the specified database.
|
List<org.apache.calcite.plan.RelOptMaterialization> |
Hive.getAllValidMaterializedViews(List<String> tablesUsed,
boolean forceMVContentsUpToDate)
Get the materialized views that have been enabled for rewriting from the
metastore.
|
static HiveAuthenticationProvider |
HiveUtils.getAuthenticator(org.apache.hadoop.conf.Configuration conf,
HiveConf.ConfVars authenticatorConfKey) |
HiveAuthorizationProvider |
DefaultStorageHandler.getAuthorizationProvider() |
HiveAuthorizationProvider |
HiveStorageHandler.getAuthorizationProvider()
Returns the implementation-specific authorization provider.
|
static HiveAuthorizationProvider |
HiveUtils.getAuthorizeProviderManager(org.apache.hadoop.conf.Configuration conf,
String authzClassName,
HiveAuthenticationProvider authenticator,
boolean nullIfOtherClass)
Creates a new instance of HiveAuthorizationProvider.
|
static HiveAuthorizerFactory |
HiveUtils.getAuthorizerFactory(org.apache.hadoop.conf.Configuration conf,
HiveConf.ConfVars authorizationProviderConfKey)
Returns the HiveAuthorizerFactory used by the new authorization plugin interface.
|
List<SQLCheckConstraint> |
Hive.getCheckConstraintList(String dbName,
String tblName) |
CheckConstraint |
Hive.getCheckConstraints(String dbName,
String tblName) |
Database |
Hive.getDatabase(String dbName)
Get the database by name.
|
Database |
Hive.getDatabase(String catName,
String dbName)
Get the database by name.
|
Database |
Hive.getDatabaseCurrent()
Get the Database object for the current database.
|
List<String> |
Hive.getDatabasesByPattern(String databasePattern)
Get all existing databases that match the given
pattern.
|
List<SQLDefaultConstraint> |
Hive.getDefaultConstraintList(String dbName,
String tblName) |
DefaultConstraint |
Hive.getDefaultConstraints(String dbName,
String tblName) |
String |
Hive.getDelegationToken(String owner,
String renewer) |
CheckConstraint |
Hive.getEnabledCheckConstraints(String dbName,
String tblName)
Get CHECK constraints associated with the table that are enabled.
|
DefaultConstraint |
Hive.getEnabledDefaultConstraints(String dbName,
String tblName)
Get Default constraints associated with the table that are enabled.
|
NotNullConstraint |
Hive.getEnabledNotNullConstraints(String dbName,
String tblName)
Get not null constraints associated with the table that are enabled/enforced.
|
static List<FieldSchema> |
Hive.getFieldsFromDeserializer(String name,
Deserializer serde) |
Iterable<Map.Entry<Long,ByteBuffer>> |
Hive.getFileMetadata(List<Long> fileIds) |
Iterable<Map.Entry<Long,MetadataPpdResult>> |
Hive.getFileMetadataByExpr(List<Long> fileIds,
ByteBuffer sarg,
boolean doGetFooters) |
List<SQLForeignKey> |
Hive.getForeignKeyList(String dbName,
String tblName) |
ForeignKeyInfo |
Hive.getForeignKeys(String dbName,
String tblName)
Get all foreign keys associated with the table.
|
Function |
Hive.getFunction(String dbName,
String funcName) |
List<String> |
Hive.getFunctions(String dbName,
String pattern) |
Class<? extends org.apache.hadoop.mapred.InputFormat> |
Partition.getInputFormatClass() |
String |
Hive.getMetaConf(String propName) |
static List<HiveMetastoreAuthorizationProvider> |
HiveUtils.getMetaStoreAuthorizeProviderManagers(org.apache.hadoop.conf.Configuration conf,
HiveConf.ConfVars authorizationProviderConfKey,
HiveAuthenticationProvider authenticator) |
List<SQLNotNullConstraint> |
Hive.getNotNullConstraintList(String dbName,
String tblName) |
NotNullConstraint |
Hive.getNotNullConstraints(String dbName,
String tblName)
Get all not null constraints associated with the table.
|
int |
Hive.getNumPartitionsByFilter(Table tbl,
String filter)
Get a number of Partitions by filter.
|
Class<? extends org.apache.hadoop.mapred.OutputFormat> |
Partition.getOutputFormatClass() |
Partition |
Hive.getPartition(Table tbl,
Map<String,String> partSpec,
boolean forceCreate) |
Partition |
Hive.getPartition(Table tbl,
Map<String,String> partSpec,
boolean forceCreate,
String partPath,
boolean inheritTableSpecs)
Returns the partition metadata.
|
Map<String,List<ColumnStatisticsObj>> |
Hive.getPartitionColumnStatistics(String dbName,
String tableName,
List<String> partNames,
List<String> colNames) |
List<String> |
Hive.getPartitionNames(String tblName,
short max) |
List<String> |
Hive.getPartitionNames(String dbName,
String tblName,
Map<String,String> partSpec,
short max) |
List<String> |
Hive.getPartitionNames(String dbName,
String tblName,
short max) |
List<Partition> |
Hive.getPartitions(Table tbl)
Gets all the partitions that the table has.
|
List<Partition> |
Hive.getPartitions(Table tbl,
Map<String,String> partialPartSpec)
Gets all the partitions of the table that match the given partial
specification.
|
List<Partition> |
Hive.getPartitions(Table tbl,
Map<String,String> partialPartSpec,
short limit)
Gets all the partitions of the table that match the given partial
specification.
|
boolean |
Hive.getPartitionsByExpr(Table tbl,
ExprNodeGenericFuncDesc expr,
HiveConf conf,
List<Partition> result)
Get a list of Partitions by expr.
|
List<Partition> |
Hive.getPartitionsByFilter(Table tbl,
String filter)
Get a list of Partitions by filter.
|
List<Partition> |
Hive.getPartitionsByNames(Table tbl,
List<String> partNames)
Gets all partitions of the table that match the given list of partition names.
|
List<Partition> |
Hive.getPartitionsByNames(Table tbl,
Map<String,String> partialPartSpec)
Gets all the partitions of the table that match the given partial
specification.
|
org.apache.hadoop.fs.Path[] |
Partition.getPath(Sample s) |
List<SQLPrimaryKey> |
Hive.getPrimaryKeyList(String dbName,
String tblName) |
PrimaryKeyInfo |
Hive.getPrimaryKeys(String dbName,
String tblName)
Get all primary key columns associated with the table.
|
ForeignKeyInfo |
Hive.getReliableForeignKeys(String dbName,
String tblName)
Get foreign keys associated with the table that are available for optimization.
|
NotNullConstraint |
Hive.getReliableNotNullConstraints(String dbName,
String tblName)
Get not null constraints associated with the table that are available for optimization.
|
PrimaryKeyInfo |
Hive.getReliablePrimaryKeys(String dbName,
String tblName)
Get primary key columns associated with the table that are available for optimization.
|
UniqueConstraint |
Hive.getReliableUniqueConstraints(String dbName,
String tblName)
Get unique constraints associated with the table that are available for optimization.
|
WMFullResourcePlan |
Hive.getResourcePlan(String rpName) |
List<RolePrincipalGrant> |
Hive.getRoleGrantInfoForPrincipal(String principalName,
PrincipalType principalType) |
static HiveStorageHandler |
HiveUtils.getStorageHandler(org.apache.hadoop.conf.Configuration conf,
String className) |
StorageHandlerInfo |
Hive.getStorageHandlerInfo(Table table) |
Table |
Hive.getTable(String tableName)
Returns metadata for the table named tableName.
|
Table |
Hive.getTable(String tableName,
boolean throwException)
Returns metadata for the table named tableName.
|
Table |
Hive.getTable(String dbName,
String tableName)
Returns metadata of the table.
|
Table |
Hive.getTable(String dbName,
String tableName,
boolean throwException)
Returns metadata of the table.
|
List<ColumnStatisticsObj> |
Hive.getTableColumnStatistics(String dbName,
String tableName,
List<String> colNames) |
List<String> |
Hive.getTablesByPattern(String tablePattern)
Returns all existing tables from the default database which match the given
pattern.
|
List<String> |
Hive.getTablesByPattern(String dbName,
String tablePattern)
Returns all existing tables from the specified database which match the given
pattern.
|
List<String> |
Hive.getTablesByType(String dbName,
String pattern,
TableType type)
Returns all existing tables of a type (VIRTUAL_VIEW|EXTERNAL_TABLE|MANAGED_TABLE) from the specified
database which match the given pattern.
|
List<String> |
Hive.getTablesForDb(String database,
String tablePattern)
Returns all existing tables from the given database which match the given
pattern.
|
List<SQLUniqueConstraint> |
Hive.getUniqueConstraintList(String dbName,
String tblName) |
UniqueConstraint |
Hive.getUniqueConstraints(String dbName,
String tblName)
Get all unique constraints associated with the table.
|
List<org.apache.calcite.plan.RelOptMaterialization> |
Hive.getValidMaterializedView(String dbName,
String materializedViewName,
List<String> tablesUsed,
boolean forceMVContentsUpToDate) |
static Hive |
Hive.getWithFastCheck(HiveConf c)
Same as
Hive.get(HiveConf) , except that it checks only the object identity of existing
MS client, assuming the relevant settings would be unchanged within the same conf object. |
static Hive |
Hive.getWithFastCheck(HiveConf c,
boolean doRegisterAllFns)
Same as
Hive.get(HiveConf) , except that it checks only the object identity of existing
MS client, assuming the relevant settings would be unchanged within the same conf object. |
static Hive |
Hive.getWithoutRegisterFns(HiveConf c)
Same as
Hive.get(HiveConf) , except that it does not register all functions. |
boolean |
Hive.grantPrivileges(PrivilegeBag privileges) |
boolean |
Hive.grantRole(String roleName,
String userName,
PrincipalType principalType,
String grantor,
PrincipalType grantorType,
boolean grantOption) |
protected void |
Partition.initialize(Table table,
Partition tPartition)
Initializes this object with the given variables
|
boolean |
Table.isEmpty() |
static void |
Hive.listNewFilesRecursively(org.apache.hadoop.fs.FileSystem destFs,
org.apache.hadoop.fs.Path dest,
List<org.apache.hadoop.fs.Path> newFiles) |
List<Role> |
Hive.listRoles(String userName,
PrincipalType principalType) |
Map<Map<String,String>,Partition> |
Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path loadPath,
String tableName,
Map<String,String> partSpec,
LoadTableDesc.LoadFileType loadFileType,
int numDP,
int numLB,
boolean isAcid,
long writeId,
int stmtId,
boolean hasFollowingStatsTask,
AcidUtils.Operation operation,
boolean isInsertOverwrite)
Given a source directory name of the load path, load all dynamically generated partitions
into the specified table and return a list of strings that represent the dynamic partition
paths.
|
Partition |
Hive.loadPartition(org.apache.hadoop.fs.Path loadPath,
Table tbl,
Map<String,String> partSpec,
LoadTableDesc.LoadFileType loadFileType,
boolean inheritTableSpecs,
boolean isSkewedStoreAsSubdir,
boolean isSrcLocal,
boolean isAcidIUDoperation,
boolean hasFollowingStatsTask,
Long writeId,
int stmtId,
boolean isInsertOverwrite)
Load a directory into a Hive table partition, altering the existing content of
the partition with the contents of loadPath.
|
void |
Hive.loadTable(org.apache.hadoop.fs.Path loadPath,
String tableName,
LoadTableDesc.LoadFileType loadFileType,
boolean isSrcLocal,
boolean isSkewedStoreAsSubdir,
boolean isAcidIUDoperation,
boolean hasFollowingStatsTask,
Long writeId,
int stmtId,
boolean isInsertOverwrite)
Load a directory into a Hive Table.
|
static void |
Hive.moveAcidFiles(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.FileStatus[] stats,
org.apache.hadoop.fs.Path dst,
List<org.apache.hadoop.fs.Path> newFiles) |
static boolean |
Hive.moveFile(HiveConf conf,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
boolean replace,
boolean isSrcLocal,
boolean isManaged) |
Table |
Hive.newTable(String tableName) |
void |
Hive.putFileMetadata(List<Long> fileIds,
List<ByteBuffer> metadata) |
void |
Hive.recycleDirToCmPath(org.apache.hadoop.fs.Path dataPath,
boolean isPurge)
Recycles the files recursively from the input path to the cmroot directory, either by copying or moving them.
|
void |
Hive.reloadFunctions() |
void |
Hive.renamePartition(Table tbl,
Map<String,String> oldPartSpec,
Partition newPart)
Rename an old partition to a new partition.
|
protected void |
Hive.replaceFiles(org.apache.hadoop.fs.Path tablePath,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
org.apache.hadoop.fs.Path oldPath,
HiveConf conf,
boolean isSrcLocal,
boolean purge,
List<org.apache.hadoop.fs.Path> newFiles,
org.apache.hadoop.fs.PathFilter deletePathFilter,
boolean isNeedRecycle,
boolean isManaged)
Replaces files in the partition with new data set specified by srcf.
|
boolean |
Hive.revokePrivileges(PrivilegeBag privileges,
boolean grantOption) |
boolean |
Hive.revokeRole(String roleName,
String userName,
PrincipalType principalType,
boolean grantOption) |
void |
Table.setBucketCols(List<String> bucketCols) |
void |
Table.setInputFormatClass(String name) |
void |
Hive.setMetaConf(String propName,
String propValue) |
void |
Table.setOutputFormatClass(String name) |
boolean |
Hive.setPartitionColumnStatistics(SetPartitionsStatsRequest request) |
void |
Table.setSkewedColNames(List<String> skewedColNames) |
void |
Table.setSkewedColValues(List<List<String>> skewedValues) |
void |
Table.setSkewedInfo(SkewedInfo skewedInfo) |
void |
Partition.setSkewedValueLocationMap(List<String> valList,
String dirName) |
void |
Table.setSkewedValueLocationMap(List<String> valList,
String dirName) |
void |
Table.setSortCols(List<Order> sortOrder) |
void |
Table.setStoredAsSubDirectories(boolean storedAsSubDirectories) |
void |
Partition.setValues(Map<String,String> partSpec)
Set Partition's values
|
ShowCompactResponse |
Hive.showCompactions() |
List<HiveObjectPrivilege> |
Hive.showPrivilegeGrant(HiveObjectType objectType,
String principalName,
PrincipalType principalType,
String dbName,
String tableName,
List<String> partValues,
String columnName) |
GetOpenTxnsInfoResponse |
Hive.showTransactions() |
void |
Hive.truncateTable(String dbDotTableName,
Map<String,String> partSpec)
Truncates the table/partition as per specifications.
|
void |
Hive.updateCreationMetadata(String dbName,
String tableName,
CreationMetadata cm) |
static void |
Table.validateColumns(List<FieldSchema> columns,
List<FieldSchema> partCols) |
void |
Hive.validatePartitionNameCharacters(List<String> partVals) |
WMValidateResourcePlanResponse |
Hive.validateResourcePlan(String rpName) |
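Several methods in the table above (Hive.getTablesByPattern, Hive.getTablesByType, Hive.getTablesForDb) accept a metastore-style table pattern. As an illustrative sketch (not Hive's actual implementation, which also normalizes case and handles escaping), such patterns are commonly interpreted with `*` as a wildcard and `|` separating alternatives:

```java
import java.util.regex.Pattern;

public class TablePatternMatch {

    /** Returns true if tableName matches the metastore-style pattern. */
    public static boolean matches(String pattern, String tableName) {
        for (String alt : pattern.split("\\|")) {
            // Quote the alternative literally, then re-open the quote
            // around each '*' so it becomes the regex wildcard '.*'.
            String regex = Pattern.quote(alt).replace("*", "\\E.*\\Q");
            if (Pattern.matches(regex, tableName)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("src*|dest*", "src_part")); // true
        System.out.println(matches("src*", "other"));          // false
    }
}
```

The split-then-quote approach keeps each alternative literal except for the `*` wildcard, which avoids accidentally treating other regex metacharacters in a table name pattern as special.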
Constructor and Description |
---|
DummyPartition(Table tbl,
String name) |
DummyPartition(Table tbl,
String name,
Map<String,String> partSpec) |
Partition(Table tbl)
create an empty partition.
|
Partition(Table tbl,
Map<String,String> partSpec,
org.apache.hadoop.fs.Path location)
Create partition object with the given info.
|
Partition(Table tbl,
Partition tp) |
PartitionIterable(Hive db,
Table table,
Map<String,String> partialPartitionSpec,
int batch_size)
Primary constructor that fetches all partitions of a given table, given
a Hive object, a Table object, and a partial partition spec.
|
Sample(int num,
int fraction,
Dimension d) |
Modifier and Type | Method and Description |
---|---|
void |
MetaDataFormatter.describeTable(DataOutputStream out,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt,
boolean isOutputPadded,
List<ColumnStatisticsObj> colStats,
PrimaryKeyInfo pkInfo,
ForeignKeyInfo fkInfo,
UniqueConstraint ukInfo,
NotNullConstraint nnInfo,
DefaultConstraint dInfo,
CheckConstraint cInfo,
StorageHandlerInfo storageHandlerInfo)
Describe table.
|
void |
JsonMetaDataFormatter.describeTable(DataOutputStream out,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt,
boolean isOutputPadded,
List<ColumnStatisticsObj> colStats,
PrimaryKeyInfo pkInfo,
ForeignKeyInfo fkInfo,
UniqueConstraint ukInfo,
NotNullConstraint nnInfo,
DefaultConstraint dInfo,
CheckConstraint cInfo,
StorageHandlerInfo storageHandlerInfo)
Describe table.
|
void |
MetaDataFormatter.error(OutputStream out,
String msg,
int errorCode,
String sqlState)
Write an error message.
|
void |
JsonMetaDataFormatter.error(OutputStream out,
String msg,
int errorCode,
String sqlState)
Write an error message.
|
void |
MetaDataFormatter.error(OutputStream out,
String errorMessage,
int errorCode,
String sqlState,
String errorDetail) |
void |
JsonMetaDataFormatter.error(OutputStream out,
String errorMessage,
int errorCode,
String sqlState,
String errorDetail) |
static void |
MetaDataFormatUtils.formatFullRP(MetaDataFormatUtils.RPFormatter rpFormatter,
WMFullResourcePlan fullRp) |
void |
MetaDataFormatter.showDatabaseDescription(DataOutputStream out,
String database,
String comment,
String location,
String ownerName,
String ownerType,
Map<String,String> params)
Describe a database.
|
void |
JsonMetaDataFormatter.showDatabaseDescription(DataOutputStream out,
String database,
String comment,
String location,
String ownerName,
String ownerType,
Map<String,String> params)
Show the description of a database
|
void |
MetaDataFormatter.showDatabases(DataOutputStream out,
List<String> databases)
Show the databases
|
void |
JsonMetaDataFormatter.showDatabases(DataOutputStream out,
List<String> databases)
Show a list of databases
|
void |
MetaDataFormatter.showErrors(DataOutputStream out,
WMValidateResourcePlanResponse errors) |
void |
JsonMetaDataFormatter.showErrors(DataOutputStream out,
WMValidateResourcePlanResponse response) |
void |
MetaDataFormatter.showFullResourcePlan(DataOutputStream out,
WMFullResourcePlan resourcePlan) |
void |
JsonMetaDataFormatter.showFullResourcePlan(DataOutputStream out,
WMFullResourcePlan resourcePlan) |
void |
MetaDataFormatter.showResourcePlans(DataOutputStream out,
List<WMResourcePlan> resourcePlans) |
void |
JsonMetaDataFormatter.showResourcePlans(DataOutputStream out,
List<WMResourcePlan> resourcePlans) |
void |
MetaDataFormatter.showTablePartitions(DataOutputStream out,
List<String> parts)
Show the table partitions.
|
void |
JsonMetaDataFormatter.showTablePartitions(DataOutputStream out,
List<String> parts)
Show the table partitions.
|
void |
MetaDataFormatter.showTables(DataOutputStream out,
Set<String> tables)
Show a list of tables.
|
void |
JsonMetaDataFormatter.showTables(DataOutputStream out,
Set<String> tables)
Show a list of tables.
|
void |
MetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
Show the table status.
|
void |
JsonMetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par) |
Modifier and Type | Class and Description |
---|---|
class |
CalciteSemanticException
Exception from SemanticAnalyzer.
|
class |
CalciteSubquerySemanticException
Exception from SemanticAnalyzer.
|
class |
CalciteViewSemanticException
Exception from SemanticAnalyzer.
|
Modifier and Type | Method and Description |
---|---|
Operator<? extends OperatorDesc> |
Vectorizer.validateAndVectorizeOperator(Operator<? extends OperatorDesc> op,
VectorizationContext vContext,
boolean isReduce,
boolean isTezOrSpark,
org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.VectorTaskColumnInfo vectorTaskColumnInfo) |
static Operator<? extends OperatorDesc> |
Vectorizer.vectorizeFilterOperator(Operator<? extends OperatorDesc> filterOp,
VectorizationContext vContext,
VectorFilterDesc vectorFilterDesc) |
static Operator<? extends OperatorDesc> |
Vectorizer.vectorizeGroupByOperator(Operator<? extends OperatorDesc> groupByOp,
VectorizationContext vContext,
VectorGroupByDesc vectorGroupByDesc) |
Operator<? extends OperatorDesc> |
Vectorizer.vectorizeOperator(Operator<? extends OperatorDesc> op,
VectorizationContext vContext,
boolean isReduce,
boolean isTezOrSpark,
org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.VectorTaskColumnInfo vectorTaskColumnInfo) |
static Operator<? extends OperatorDesc> |
Vectorizer.vectorizePTFOperator(Operator<? extends OperatorDesc> ptfOp,
VectorizationContext vContext,
VectorPTFDesc vectorPTFDesc) |
static Operator<? extends OperatorDesc> |
Vectorizer.vectorizeSelectOperator(Operator<? extends OperatorDesc> selectOp,
VectorizationContext vContext,
VectorSelectDesc vectorSelectDesc) |
Modifier and Type | Method and Description |
---|---|
static Object |
PartExprEvalUtils.evalExprWithPart(ExprNodeDesc expr,
Partition p,
List<VirtualColumn> vcs,
StructObjectInspector rowObjectInspector)
Evaluate expression with partition columns
|
static Object |
PartExprEvalUtils.evaluateExprOnPart(ObjectPair<PrimitiveObjectInspector,ExprNodeEvaluator> pair,
Object partColValues) |
static ObjectPair<PrimitiveObjectInspector,ExprNodeEvaluator> |
PartExprEvalUtils.prepareExpr(ExprNodeGenericFuncDesc expr,
List<String> partColumnNames,
List<PrimitiveTypeInfo> partColumnTypeInfos) |
static boolean |
PartitionPruner.prunePartitionNames(List<String> partColumnNames,
List<PrimitiveTypeInfo> partColumnTypeInfos,
ExprNodeGenericFuncDesc prunerExpr,
String defaultPartitionName,
List<String> partNames)
Prunes partition names to see if they match the prune expression.
|
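PartitionPruner.prunePartitionNames above operates on partition names of the form `ds=2008-04-08/hr=12`. A minimal sketch of turning such a name back into an ordered column-to-value spec map (ignoring the path-escaping Hive actually applies to partition values):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PartNameParse {

    /** Parses a partition name like "ds=2008-04-08/hr=12" into an ordered spec map. */
    public static Map<String, String> parse(String partName) {
        Map<String, String> spec = new LinkedHashMap<>();
        for (String kv : partName.split("/")) {
            int eq = kv.indexOf('=');
            spec.put(kv.substring(0, eq), kv.substring(eq + 1));
        }
        return spec;
    }

    public static void main(String[] args) {
        System.out.println(parse("ds=2008-04-08/hr=12")); // {ds=2008-04-08, hr=12}
    }
}
```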
Modifier and Type | Class and Description |
---|---|
class |
SemanticException
Exception from SemanticAnalyzer.
|
Modifier and Type | Method and Description |
---|---|
PTFExpressionDef |
PTFTranslator.buildExpressionDef(ShapeDetails inpShape,
ASTNode arg) |
static ExprNodeEvaluator |
WindowingExprNodeEvaluatorFactory.get(LeadLagInfo llInfo,
ExprNodeDesc desc) |
Hive |
HiveSemanticAnalyzerHookContextImpl.getHive() |
Hive |
HiveSemanticAnalyzerHookContext.getHive() |
protected Table |
SemanticAnalyzer.getTableObjectByName(String tableName,
boolean throwException) |
boolean |
BaseSemanticAnalyzer.isValidPrefixSpec(Table tTable,
Map<String,String> spec)
Checks whether the given specification is a proper specification for a prefix of the
partition columns. For a table partitioned by ds, hr, min, valid specs are
(ds='2008-04-08'), (ds='2008-04-08', hr='12'), and (ds='2008-04-08', hr='12', min='30');
an invalid one is, for example, (ds='2008-04-08', min='30').
|
static boolean |
ImportSemanticAnalyzer.prepareImport(boolean isImportCmd,
boolean isLocationSet,
boolean isExternalSet,
boolean isPartSpecSet,
boolean waitOnPrecursor,
String parsedLocation,
String parsedTableName,
String overrideDBName,
LinkedHashMap<String,String> parsedPartSpec,
String fromLocn,
EximUtil.SemanticAnalyzerWrapperContext x,
UpdatedMetaDataTracker updatedMetadata,
HiveTxnManager txnMgr)
The same code is used by both "repl load" and "import".
|
static Table |
ImportSemanticAnalyzer.tableIfExists(ImportTableDesc tblDesc,
Hive db)
Utility method that returns a table if one corresponding to the destination
tblDesc is found.
|
void |
WindowingExprNodeEvaluatorFactory.FindLeadLagFuncExprs.visit(ExprNodeGenericFuncDesc fnExpr) |
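The prefix rule described for BaseSemanticAnalyzer.isValidPrefixSpec above can be sketched as follows. This is an illustrative re-implementation under stated assumptions, not Hive's code (which validates against the actual Table object): a spec is valid when it fixes the partition columns in declaration order with no gaps.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PrefixSpecCheck {

    /**
     * True if the spec fixes a prefix of the partition columns in declaration
     * order, with no gaps.
     */
    public static boolean isValidPrefix(List<String> partCols, Map<String, String> spec) {
        if (spec.size() > partCols.size()) {
            return false;
        }
        // The spec keys must be exactly the first spec.size() partition columns.
        for (int i = 0; i < spec.size(); i++) {
            if (!spec.containsKey(partCols.get(i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("ds", "hr", "min");

        Map<String, String> ok = new LinkedHashMap<>();
        ok.put("ds", "2008-04-08");
        ok.put("hr", "12");
        System.out.println(isValidPrefix(cols, ok));  // true

        Map<String, String> gap = new LinkedHashMap<>();
        gap.put("ds", "2008-04-08");
        gap.put("min", "30");
        System.out.println(isValidPrefix(cols, gap)); // false
    }
}
```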
Constructor and Description |
---|
TableSpec(Hive db,
String tableName,
Map<String,String> partSpec) |
TableSpec(Hive db,
String tableName,
Map<String,String> partSpec,
boolean allowPartialPartitionsSpec) |
TableSpec(Table tableHandle,
List<Partition> partitions) |
Modifier and Type | Method and Description |
---|---|
HiveWrapper.Tuple<Database> |
HiveWrapper.database() |
HiveWrapper.Tuple<Function> |
HiveWrapper.function(String name) |
static Collection<String> |
Utils.getAllTables(Hive db,
String dbName) |
static boolean |
Utils.isBootstrapDumpInProgress(Hive hiveDb,
String dbName) |
static Iterable<? extends String> |
Utils.matchesDb(Hive db,
String dbPattern) |
static Iterable<? extends String> |
Utils.matchesTbl(Hive db,
String dbName,
String tblPattern) |
static void |
Utils.resetDbBootstrapDumpState(Hive hiveDb,
String dbName,
String uniqueKey) |
static String |
Utils.setDbBootstrapDumpState(Hive hiveDb,
String dbName) |
HiveWrapper.Tuple<Table> |
HiveWrapper.table(String tableName) |
Modifier and Type | Method and Description |
---|---|
void |
SparkPartitionPruningSinkOperator.closeOp(boolean abort) |
void |
SparkPartitionPruningSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf) |
void |
SparkPartitionPruningSinkOperator.process(Object row,
int tag) |
Modifier and Type | Method and Description |
---|---|
void |
ExportWork.acidPostProcess(Hive db)
For exporting an ACID table, change the "pointer" to the temp table.
|
static void |
PlanUtils.addInputsForView(ParseContext parseCtx) |
protected void |
PTFDeserializer.initialize(PartitionedTableFunctionDef def) |
protected void |
PTFDeserializer.initialize(PTFExpressionDef eDef,
ShapeDetails inpShape) |
protected void |
PTFDeserializer.initialize(PTFQueryInputDef def,
StructObjectInspector OI) |
protected void |
PTFDeserializer.initialize(ShapeDetails shp,
StructObjectInspector OI) |
protected void |
PTFDeserializer.initialize(WindowFrameDef winFrame,
ShapeDetails inpShape) |
void |
PTFDeserializer.initializePTFChain(PartitionedTableFunctionDef tblFnDef) |
void |
PTFDeserializer.initializeWindowing(WindowTableFunctionDef def) |
Table |
CreateTableDesc.toTable(HiveConf conf) |
Table |
CreateViewDesc.toTable(HiveConf conf) |
Constructor and Description |
---|
PartitionDesc(Partition part) |
PartitionDesc(Partition part,
TableDesc tableDesc) |
PartitionDesc(Partition part,
TableDesc tblDesc,
boolean usePartSchemaProperties) |
Modifier and Type | Method and Description |
---|---|
OrderDef |
WindowFrameDef.getOrderDef() |
Modifier and Type | Method and Description |
---|---|
void |
SessionStateUserAuthenticator.destroy() |
void |
HiveAuthenticationProvider.destroy() |
void |
HadoopDefaultAuthenticator.destroy() |
void |
SessionStateConfigUserAuthenticator.destroy() |
Modifier and Type | Method and Description |
---|---|
void |
BitSetCheckedAuthorizationProvider.authorize(Database db,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv) |
void |
MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Database db,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
StorageBasedAuthorizationProvider.authorize(Database db,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
HiveAuthorizationProvider.authorize(Database db,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a database object.
|
void |
BitSetCheckedAuthorizationProvider.authorize(Partition part,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv) |
void |
MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Partition part,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
StorageBasedAuthorizationProvider.authorize(Partition part,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
HiveAuthorizationProvider.authorize(Partition part,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a Hive partition object.
|
void |
StorageBasedAuthorizationProvider.authorize(org.apache.hadoop.fs.Path path,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a path.
|
void |
BitSetCheckedAuthorizationProvider.authorize(Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv) |
void |
MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
StorageBasedAuthorizationProvider.authorize(Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
HiveAuthorizationProvider.authorize(Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize user-level privileges.
|
abstract void |
HiveMultiPartitionAuthorizationProviderBase.authorize(Table table,
Iterable<Partition> partitions,
Privilege[] requiredReadPrivileges,
Privilege[] requiredWritePrivileges)
Authorization method for partition sets.
|
void |
BitSetCheckedAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv) |
void |
MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
StorageBasedAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
HiveAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a list of columns.
|
void |
BitSetCheckedAuthorizationProvider.authorize(Table table,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv) |
void |
MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
StorageBasedAuthorizationProvider.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv) |
void |
HiveAuthorizationProvider.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a Hive table object.
|
void |
HiveMetastoreAuthorizationProvider.authorizeAuthorizationApiInvocation()
Authorize a metastore authorization API call.
|
void |
DefaultHiveMetastoreAuthorizationProvider.authorizeAuthorizationApiInvocation() |
void |
StorageBasedAuthorizationProvider.authorizeAuthorizationApiInvocation() |
protected boolean |
BitSetCheckedAuthorizationProvider.authorizePrivileges(PrincipalPrivilegeSet privileges,
Privilege[] inputPriv,
boolean[] inputCheck,
Privilege[] outputPriv,
boolean[] outputCheck) |
protected boolean |
BitSetCheckedAuthorizationProvider.authorizeUserPriv(Privilege[] inputRequiredPriv,
boolean[] inputCheck,
Privilege[] outputRequiredPriv,
boolean[] outputCheck) |
protected void |
StorageBasedAuthorizationProvider.checkPermissions(org.apache.hadoop.conf.Configuration conf,
org.apache.hadoop.fs.Path path,
EnumSet<org.apache.hadoop.fs.permission.FsAction> actions)
Checks the permissions for the given path and current user on Hadoop FS.
|
protected static void |
StorageBasedAuthorizationProvider.checkPermissions(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.FileStatus stat,
EnumSet<org.apache.hadoop.fs.permission.FsAction> actions,
String user)
Checks the permissions for the given path and current user on Hadoop FS.
|
PrincipalPrivilegeSet |
HiveAuthorizationProviderBase.HiveProxy.get_privilege_set(HiveObjectType column,
String dbName,
String tableName,
List<String> partValues,
String col,
String userName,
List<String> groupNames) |
Database |
HiveAuthorizationProviderBase.HiveProxy.getDatabase(String catName,
String dbName)
Get the database object
|
protected org.apache.hadoop.fs.Path |
StorageBasedAuthorizationProvider.getDbLocation(Database db) |
static HivePrivilegeObject |
AuthorizationUtils.getHiveObjectRef(HiveObjectRef privObj) |
HivePrincipal |
DefaultHiveAuthorizationTranslator.getHivePrincipal(PrincipalDesc principal) |
static HivePrincipal |
AuthorizationUtils.getHivePrincipal(String name,
PrincipalType type) |
static List<HivePrincipal> |
AuthorizationUtils.getHivePrincipals(List<PrincipalDesc> principals,
HiveAuthorizationTranslator trans) |
static HivePrincipal.HivePrincipalType |
AuthorizationUtils.getHivePrincipalType(PrincipalType type)
Convert thrift principal type to authorization plugin principal type
|
HivePrivilegeObject |
DefaultHiveAuthorizationTranslator.getHivePrivilegeObject(PrivilegeObjectDesc privSubjectDesc) |
static List<HivePrivilegeInfo> |
AuthorizationUtils.getPrivilegeInfos(List<HiveObjectPrivilege> privs) |
static HiveObjectRef |
AuthorizationUtils.getThriftHiveObjectRef(HivePrivilegeObject privObj)
Convert thrift HiveObjectRef to plugin HivePrivilegeObject
|
static HiveObjectType |
AuthorizationUtils.getThriftHiveObjType(HivePrivilegeObject.HivePrivilegeObjectType type)
Convert plugin privilege object type to thrift type
|
static PrivilegeGrantInfo |
AuthorizationUtils.getThriftPrivilegeGrantInfo(HivePrivilege privilege,
HivePrincipal grantorPrincipal,
boolean grantOption,
int grantTime)
Get thrift privilege grant info
|
void |
DefaultHiveMetastoreAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf) |
void |
MetaStoreAuthzAPIAuthorizerEmbedOnly.init(org.apache.hadoop.conf.Configuration conf) |
void |
StorageBasedAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf) |
void |
DefaultHiveAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf) |
void |
HiveAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf) |
Constructor and Description |
---|
AuthorizationPreEventListener(org.apache.hadoop.conf.Configuration config) |
PartitionWrapper(Partition mapiPart,
PreEventContext context) |
PartitionWrapper(Table table,
Partition mapiPart) |
Modifier and Type | Class and Description |
---|---|
class |
HiveAccessControlException
Exception thrown by the Authorization plugin API (v2).
|
class |
HiveAuthzPluginException
Exception thrown by the Authorization plugin API (v2).
|
Modifier and Type | Method and Description |
---|---|
HivePrincipal |
HiveAuthorizationTranslator.getHivePrincipal(PrincipalDesc principal) |
HivePrivilegeObject |
HiveAuthorizationTranslator.getHivePrivilegeObject(PrivilegeObjectDesc privObject) |
Modifier and Type | Method and Description |
---|---|
void |
SessionState.applyAuthorizationPolicy()
If authorization mode is v2, then pass it through authorizer so that it can apply
any security configuration changes.
|
static CreateTableAutomaticGrant |
CreateTableAutomaticGrant.create(HiveConf conf) |
HadoopShims.HdfsEncryptionShim |
SessionState.getHdfsEncryptionShim() |
HadoopShims.HdfsEncryptionShim |
SessionState.getHdfsEncryptionShim(org.apache.hadoop.fs.FileSystem fs) |
void |
NullKillQuery.killQuery(String queryId,
String errMsg) |
void |
KillQuery.killQuery(String queryId,
String errMsg) |
Modifier and Type | Method and Description |
---|---|
static Statistics |
StatsUtils.collectStatistics(HiveConf conf,
PrunedPartitionList partList,
ColumnStatsList colStatsCache,
Table table,
TableScanOperator tableScanOperator)
Collect table, partition and column level statistics
|
static Statistics |
StatsUtils.collectStatistics(HiveConf conf,
PrunedPartitionList partList,
Table table,
List<ColumnInfo> schema,
List<String> neededColumns,
ColumnStatsList colStatsCache,
List<String> referencedColumns,
boolean fetchColStats) |
abstract Class<? extends org.apache.hadoop.mapred.InputFormat> |
Partish.getInputFormatClass() |
abstract Object |
Partish.getOutput() |
abstract Class<? extends org.apache.hadoop.mapred.OutputFormat> |
Partish.getOutputFormatClass() |
int |
ColStatsProcessor.persistColumnStats(Hive db,
Table tbl) |
static ColumnStatisticsObj |
ColumnStatisticsObjTranslator.readHiveStruct(String columnName,
String columnType,
StructField structField,
Object values) |
Modifier and Type | Method and Description |
---|---|
Object |
UDFHour.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
UDFMinute.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
UDFMonth.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
UDFYear.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
UDFSecond.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
UDFDayOfMonth.evaluate(GenericUDF.DeferredObject[] arguments) |
Modifier and Type | Method and Description |
---|---|
void |
NGramEstimator.add(ArrayList<String> ng)
Adds a new n-gram to the estimation.
|
void |
GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
This function will be called by GroupByOperator when it sees a new input
row.
|
Object |
GenericUDFConcat.binaryEvaluate(GenericUDF.DeferredObject[] arguments) |
void |
GenericUDTFParseUrlTuple.close() |
void |
GenericUDTFReplicateRows.close() |
void |
GenericUDTFInline.close() |
abstract void |
GenericUDTF.close()
Called to notify the UDTF that there are no more rows to process.
|
void |
GenericUDTFPosExplode.close() |
void |
GenericUDTFStack.close() |
void |
GenericUDTFJSONTuple.close() |
void |
GenericUDTFGetSplits.close() |
void |
GenericUDTFExplode.close() |
void |
UDTFCollector.collect(Object input) |
void |
Collector.collect(Object input)
Other classes will call collect() with the data that it has.
|
Integer |
GenericUDFBaseCompare.compare(GenericUDF.DeferredObject[] arguments) |
GenericUDTFGetSplits.PlanFragment |
GenericUDTFGetSplits.createPlanFragment(String query,
int num,
org.apache.hadoop.yarn.api.records.ApplicationId splitsAppId) |
void |
GenericUDAFAverage.GenericUDAFAverageEvaluatorDouble.doReset(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage.AverageAggregationBuffer<Double> aggregation) |
void |
GenericUDAFAverage.GenericUDAFAverageEvaluatorDecimal.doReset(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage.AverageAggregationBuffer<org.apache.hadoop.hive.common.type.HiveDecimal> aggregation) |
protected abstract void |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.doReset(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage.AverageAggregationBuffer<TYPE> aggregation) |
Object |
GenericUDAFEvaluator.evaluate(GenericUDAFEvaluator.AggregationBuffer agg)
This function will be called by GroupByOperator when it sees a new input
row.
|
Object |
GenericUDFSplit.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFTrunc.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPPositive.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCurrentDate.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSentences.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
BaseMaskUDF.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSubstringIndex.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFUnixTimeStamp.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFArrayContains.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFWhen.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFUpper.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFReflect.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFTimestamp.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPAnd.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFStruct.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFHash.evaluate(GenericUDF.DeferredObject[] arguments)
Deprecated.
|
Object |
GenericUDFDate.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFInstr.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLoggedInUser.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFInBloomFilter.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCurrentAuthorizer.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFNvl.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFElt.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFInitCap.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPLessThan.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToUnixTimeStamp.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFTranslate.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNotNull.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFField.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBaseNumeric.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToIntervalYearMonth.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFInternalInterval.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCurrentTimestamp.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFMurmurHash.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFIn.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFFloorCeilBase.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNotTrue.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFGrouping.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToChar.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFNamedStruct.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFEnforceConstraint.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFDecode.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCardinalityViolation.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLikeAny.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToDate.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSortArray.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFPower.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFFromUtcTimestamp.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFArray.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBaseTrim.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFInFile.evaluate(GenericUDF.DeferredObject[] arguments) |
abstract Object |
GenericUDF.evaluate(GenericUDF.DeferredObject[] arguments)
Evaluate the GenericUDF with the arguments.
|
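The `DeferredObject[]` parameter that every `evaluate` overload above receives is Hive's lazy-argument wrapper: an argument's value is only materialized when `get()` is called, which is what lets short-circuiting UDFs such as `IF` and `CASE` skip unused branches. The sketch below is a simplified stand-in using only the JDK, not the real Hive classes; the `Deferred` interface and `ifEvaluate` method are illustrative analogues of `GenericUDF.DeferredObject` and `GenericUDFIf.evaluate`.

```java
import java.util.function.Supplier;

// Simplified analogue (NOT the Hive API) of the DeferredObject pattern
// behind GenericUDF.evaluate(): each argument is wrapped so its value is
// computed only when get() is called.
public class DeferredDemo {
    // Analogue of GenericUDF.DeferredObject
    public interface Deferred { Object get(); }

    public static Deferred deferred(Supplier<Object> s) {
        return s::get;
    }

    // Analogue of GenericUDFIf.evaluate(): only the taken branch is
    // ever materialized; the other Deferred's get() is never called.
    public static Object ifEvaluate(Deferred cond, Deferred whenTrue, Deferred whenFalse) {
        Boolean c = (Boolean) cond.get();
        return (c != null && c) ? whenTrue.get() : whenFalse.get();
    }

    public static void main(String[] args) {
        Object r = ifEvaluate(
            deferred(() -> Boolean.TRUE),
            deferred(() -> "yes"),
            // Never evaluated, so the exception is never thrown.
            deferred(() -> { throw new IllegalStateException("unused branch"); }));
        System.out.println(r);  // yes
    }
}
```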
Object |
GenericUDFDatetimeLegacyHybridCalendar.evaluate(GenericUDF.DeferredObject[] arguments) |
org.apache.hadoop.io.IntWritable |
GenericUDFDateDiff.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFMapKeys.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFRegExp.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCurrentUser.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFEpochMilli.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNotEqualNS.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFQuarter.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFAssertTrueOOM.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFIndex.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToBinary.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFFormatNumber.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOctetLength.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPGreaterThan.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
UDFCurrentDB.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCharacterLength.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLeadLag.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLikeAll.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPEqualNS.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNot.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCbrt.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFAesBase.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFIf.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFNullif.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCase.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFRestrictInformationSchema.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFAddMonths.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFStructField.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFFactorial.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFMonthsBetween.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNull.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBaseNwayCompare.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFNextDay.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPEqualOrLessThan.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFMap.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPOr.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSize.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFAbs.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSortArrayByField.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFPrintf.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLastDay.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBridge.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFDateFormat.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFAssertTrue.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFStringToMap.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToTimestampLocalTZ.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSoundex.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNegative.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBasePad.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBetween.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFEncode.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToVarchar.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFBaseArithmetic.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPFalse.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFDateAdd.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFConcatWS.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFReflect2.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFMacro.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPEqual.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPTrue.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSQCountCheck.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFRound.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCurrentGroups.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToIntervalDayTime.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFToDecimal.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLocate.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLower.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFUnion.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPDTIMinus.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFConcat.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPDTIPlus.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNotEqual.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFSha2.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLength.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFOPNotFalse.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFLevenshtein.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFWidthBucket.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFMapValues.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFExtractUnion.evaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDFCoalesce.evaluate(GenericUDF.DeferredObject[] arguments) |
protected void |
GenericUDTF.forward(Object o)
Passes an output row to the collector.
|
Object |
GenericUDF.DeferredObject.get() |
Object |
GenericUDF.DeferredJavaObject.get() |
static org.apache.hadoop.io.BytesWritable |
GenericUDFParamUtils.getBinaryValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
protected abstract T2 |
GenericUDAFStreamingEvaluator.SumAvgEnhancer.getCurrentIntermediateResult(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFStreamingEvaluator.SumAvgEnhancer.SumAvgStreamingState ss) |
protected Date |
GenericUDF.getDateValue(GenericUDF.DeferredObject[] arguments,
int i,
PrimitiveObjectInspector.PrimitiveCategory[] inputTypes,
ObjectInspectorConverters.Converter[] converters) |
protected Double |
GenericUDF.getDoubleValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
protected org.apache.hadoop.hive.common.type.HiveIntervalDayTime |
GenericUDF.getIntervalDayTimeValue(GenericUDF.DeferredObject[] arguments,
int i,
PrimitiveObjectInspector.PrimitiveCategory[] inputTypes,
ObjectInspectorConverters.Converter[] converters) |
protected HiveIntervalYearMonth |
GenericUDF.getIntervalYearMonthValue(GenericUDF.DeferredObject[] arguments,
int i,
PrimitiveObjectInspector.PrimitiveCategory[] inputTypes,
ObjectInspectorConverters.Converter[] converters) |
protected Integer |
GenericUDF.getIntValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
protected Long |
GenericUDF.getLongValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFDecimalStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFDateStatsEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFSum.GenericUDAFSumHiveDecimal.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFSum.GenericUDAFSumDouble.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFSum.GenericUDAFSumLong.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFMin.GenericUDAFMinEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFMkCollectionEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFNTile.GenericUDAFNTileEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFStreamingEvaluator.SumAvgEnhancer.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCount.GenericUDAFCountEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFAverage.GenericUDAFAverageEvaluatorDouble.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFAverage.GenericUDAFAverageEvaluatorDecimal.getNewAggregationBuffer() |
abstract GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFEvaluator.getNewAggregationBuffer()
Get a new aggregation object.
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFMax.GenericUDAFMaxEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.getNewAggregationBuffer() |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.getNewAggregationBuffer() |
protected org.apache.hadoop.hive.ql.udf.generic.LeadLagBuffer |
GenericUDAFLead.GenericUDAFLeadEvaluator.getNewLLBuffer() |
protected abstract org.apache.hadoop.hive.ql.udf.generic.LeadLagBuffer |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.getNewLLBuffer() |
protected org.apache.hadoop.hive.ql.udf.generic.LeadLagBuffer |
GenericUDAFLag.GenericUDAFLagEvaluator.getNewLLBuffer() |
Object |
GenericUDAFRowNumber.GenericUDAFRowNumberEvaluator.getNextResult(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFStreamingEvaluator.getNextResult(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
ISupportStreamingModeForWindowing.getNextResult(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFRank.GenericUDAFRankEvaluator.getNextResult(GenericUDAFEvaluator.AggregationBuffer agg) |
protected abstract T1 |
GenericUDAFStreamingEvaluator.SumAvgEnhancer.getNextResult(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFStreamingEvaluator.SumAvgEnhancer.SumAvgStreamingState ss) |
ArrayList<Object[]> |
NGramEstimator.getNGrams()
Returns the final top-k n-grams in a format suitable for returning to Hive.
|
protected double[] |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.getQuantileArray(ConstantObjectInspector quantileOI) |
protected Object |
GenericUDFLag.getRow(int amt) |
protected abstract Object |
GenericUDFLeadLag.getRow(int amt) |
protected Object |
GenericUDFLead.getRow(int amt) |
int |
GenericUDAFRowNumber.GenericUDAFRowNumberEvaluator.getRowsRemainingAfterTerminate() |
int |
GenericUDAFStreamingEvaluator.SumAvgEnhancer.getRowsRemainingAfterTerminate() |
int |
ISupportStreamingModeForWindowing.getRowsRemainingAfterTerminate() |
int |
GenericUDAFRank.GenericUDAFRankEvaluator.getRowsRemainingAfterTerminate() |
protected String |
GenericUDF.getStringValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
static org.apache.hadoop.io.Text |
GenericUDFParamUtils.getTextValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
protected Timestamp |
GenericUDF.getTimestampValue(GenericUDF.DeferredObject[] arguments,
int i,
ObjectInspectorConverters.Converter[] converters) |
ObjectInspector |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFNumericStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFSum.GenericUDAFSumHiveDecimal.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFSum.GenericUDAFSumDouble.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFSum.GenericUDAFSumLong.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFPercentileApprox.GenericUDAFSinglePercentileApproxEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFPercentileApprox.GenericUDAFMultiplePercentileApproxEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFMin.GenericUDAFMinEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFMkCollectionEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFNTile.GenericUDAFNTileEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFStreamingEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFCount.GenericUDAFCountEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
Initialize the evaluator.
|
ObjectInspector |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFMax.GenericUDAFMaxEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFCumeDist.GenericUDAFCumeDistEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFPercentRank.GenericUDAFPercentRankEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
ObjectInspector |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters) |
void |
NGramEstimator.initialize(int pk,
int ppf,
int pn)
Sets the 'k' and 'pf' parameters.
|
void |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFComputeStats.GenericUDAFNumericStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFSum.GenericUDAFSumHiveDecimal.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFSum.GenericUDAFSumDouble.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFSum.GenericUDAFSumLong.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFMin.GenericUDAFMinEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFMkCollectionEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFNTile.GenericUDAFNTileEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFStreamingEvaluator.SumAvgEnhancer.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFCount.GenericUDAFCountEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer aggregation,
Object[] parameters) |
abstract void |
GenericUDAFEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
Iterate through original data.
|
void |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFMax.GenericUDAFMaxEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters) |
void |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object obj) |
void |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFComputeStats.GenericUDAFNumericStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFSum.GenericUDAFSumHiveDecimal.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFSum.GenericUDAFSumDouble.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFSum.GenericUDAFSumLong.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFMin.GenericUDAFMinEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFMkCollectionEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFNTile.GenericUDAFNTileEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFStreamingEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFCount.GenericUDAFCountEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer aggregation,
Object partial) |
abstract void |
GenericUDAFEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
Merge with partial aggregation result.
|
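The `getNewAggregationBuffer`, `iterate`, `merge`, and `terminate` families listed above form one lifecycle: a buffer holds per-group state, `iterate` consumes raw rows on map-side tasks, `merge` folds in partial aggregations produced elsewhere, and `terminate` emits the final value. A minimal JDK-only sketch of that flow for an average, using illustrative stand-ins rather than the real `GenericUDAFEvaluator` classes:

```java
// Simplified analogue (NOT the Hive API) of the GenericUDAFEvaluator
// lifecycle for computing an average across distributed partial results.
public class AvgLifecycleDemo {
    // Analogue of GenericUDAFEvaluator.AggregationBuffer
    public static final class AvgBuffer {
        public long count;
        public double sum;
    }

    public static AvgBuffer getNewAggregationBuffer() { return new AvgBuffer(); }

    // Consume one raw input value (map side).
    public static void iterate(AvgBuffer agg, double value) {
        agg.count++;
        agg.sum += value;
    }

    // Fold a partial aggregation from another task into this buffer.
    public static void merge(AvgBuffer agg, AvgBuffer partial) {
        agg.count += partial.count;
        agg.sum += partial.sum;
    }

    // Produce the final result; null for an empty group.
    public static Double terminate(AvgBuffer agg) {
        return agg.count == 0 ? null : agg.sum / agg.count;
    }

    public static void main(String[] args) {
        AvgBuffer a = getNewAggregationBuffer();   // "map task 1"
        iterate(a, 1.0);
        iterate(a, 2.0);
        AvgBuffer b = getNewAggregationBuffer();   // "map task 2"
        iterate(b, 6.0);
        AvgBuffer fin = getNewAggregationBuffer(); // "reducer"
        merge(fin, a);
        merge(fin, b);
        System.out.println(terminate(fin));        // 3.0
    }
}
```

The same shape explains why `reset` exists in the table above: an execution engine reuses one buffer across groups, clearing it between them instead of allocating a fresh buffer per group.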
void |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFMax.GenericUDAFMaxEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial) |
void |
NGramEstimator.merge(List other)
Takes a serialized n-gram estimator object created by the serialize() method and merges
it with the current n-gram object.
|
void |
GenericUDF.DeferredObject.prepare(int version) |
void |
GenericUDF.DeferredJavaObject.prepare(int version) |
void |
GenericUDTFParseUrlTuple.process(Object[] o) |
void |
GenericUDTFReplicateRows.process(Object[] args) |
void |
GenericUDTFInline.process(Object[] os) |
abstract void |
GenericUDTF.process(Object[] args)
Give a set of arguments for the UDTF to process.
|
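The `process` implementations above pair with the `GenericUDTF.forward` row earlier in this table: `process` receives one input row and may call `forward` zero or more times, so a single row can fan out into many output rows, as `explode()` does for an array. A JDK-only sketch of that contract, with `ExplodeDemo` as an illustrative analogue rather than the real `GenericUDTFExplode`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified analogue (NOT the Hive API) of the GenericUDTF contract:
// process() consumes one input row and forwards zero or more output rows
// to a collector.
public class ExplodeDemo {
    private final Consumer<Object> collector;

    public ExplodeDemo(Consumer<Object> collector) {
        this.collector = collector;
    }

    // Analogue of GenericUDTF.forward(Object o): pass one output row on.
    private void forward(Object row) {
        collector.accept(row);
    }

    // Analogue of GenericUDTFExplode.process(Object[] o): the single
    // argument is a list whose elements each become one output row.
    public void process(Object[] args) {
        for (Object element : (List<?>) args[0]) {
            forward(element);
        }
    }

    public static void main(String[] args) {
        List<Object> out = new ArrayList<>();
        new ExplodeDemo(out::add).process(new Object[] { List.of("a", "b", "c") });
        System.out.println(out);  // [a, b, c]
    }
}
```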
void |
GenericUDTFPosExplode.process(Object[] o) |
void |
GenericUDTFStack.process(Object[] args) |
void |
GenericUDTFJSONTuple.process(Object[] o) |
void |
GenericUDTFGetSplits.process(Object[] arguments) |
void |
GenericUDTFExplode.process(Object[] o) |
void |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFDecimalStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFDateStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFSum.GenericUDAFSumHiveDecimal.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFSum.GenericUDAFSumDouble.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFSum.GenericUDAFSumLong.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFMin.GenericUDAFMinEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFMkCollectionEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFNTile.GenericUDAFNTileEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFStreamingEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFCount.GenericUDAFCountEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer aggregation) |
abstract void |
GenericUDAFEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
Reset the aggregation.
|
void |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFMax.GenericUDAFMaxEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg) |
void |
GenericUDAFComputeStats.GenericUDAFNumericStatsEvaluator.NumericStatsAgg.reset(String type) |
ArrayList<org.apache.hadoop.io.Text> |
NGramEstimator.serialize()
In preparation for a Hive merge() call, serializes the current n-gram estimator object into an
ArrayList of Text objects.
|
String |
GenericUDFConcat.stringEvaluate(GenericUDF.DeferredObject[] arguments) |
Object |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFStd.GenericUDAFStdEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFNumericStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSum.GenericUDAFSumHiveDecimal.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSum.GenericUDAFSumDouble.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSum.GenericUDAFSumLong.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFPercentileApprox.GenericUDAFSinglePercentileApproxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFPercentileApprox.GenericUDAFMultiplePercentileApproxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSumEmptyIsZero.SumLongZeroIfEmpty.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSumEmptyIsZero.SumDoubleZeroIfEmpty.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSumEmptyIsZero.SumHiveDecimalZeroIfEmpty.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFStdSample.GenericUDAFStdSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFMin.GenericUDAFMinEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFMkCollectionEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFVarianceSample.GenericUDAFVarianceSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFNTile.GenericUDAFNTileEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFStreamingEvaluator.SumAvgEnhancer.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCount.GenericUDAFCountEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer aggregation) |
abstract Object |
GenericUDAFEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
Get final aggregation result.
|
Object |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFMax.GenericUDAFMaxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCumeDist.GenericUDAFCumeDistEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCovarianceSample.GenericUDAFCovarianceSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFPercentRank.GenericUDAFPercentRankEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFFirstValue.GenericUDAFFirstValueEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFNumericStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFSum.GenericUDAFSumEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFMin.GenericUDAFMinEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFRowNumber.GenericUDAFAbstractRowNumberEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFMkCollectionEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFNTile.GenericUDAFNTileEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFStreamingEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCount.GenericUDAFCountEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFBloomFilter.GenericUDAFBloomFilterEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFAverage.AbstractGenericUDAFAverageEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer aggregation) |
abstract Object |
GenericUDAFEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
Get partial aggregation result.
|
Object |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFLeadLag.GenericUDAFLeadLagEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFMax.GenericUDAFMaxEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFRank.GenericUDAFAbstractRankEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFLastValue.GenericUDAFLastValueEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
Object |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg) |
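The `reset`, `terminatePartial`, and `terminate` methods listed above are stages of the `GenericUDAFEvaluator` lifecycle (together with `iterate` and `merge`): a map-side task folds rows into an `AggregationBuffer`, emits a partial result, and a reduce-side task merges partials into the final value. The sketch below is a dependency-free analogue of that call order using a plain `long[]` buffer in place of Hive's `AggregationBuffer` and `ObjectInspector` types; the class and method bodies are illustrative assumptions, not Hive's implementation.

```java
// Simplified analogue of the GenericUDAFEvaluator lifecycle for a COUNT-style
// aggregate: reset -> iterate -> terminatePartial -> merge -> terminate.
public class CountLifecycleSketch {
    // reset(agg): clear the aggregation buffer before (re)use
    static void reset(long[] agg) { agg[0] = 0; }

    // iterate(agg, row): fold one input row into the buffer (COUNT skips NULLs)
    static void iterate(long[] agg, Object row) { if (row != null) agg[0]++; }

    // terminatePartial(agg): emit a partial result for the shuffle stage
    static long terminatePartial(long[] agg) { return agg[0]; }

    // merge(agg, partial): combine a partial result produced by another task
    static void merge(long[] agg, long partial) { agg[0] += partial; }

    // terminate(agg): produce the final aggregation result
    static long terminate(long[] agg) { return agg[0]; }

    public static void main(String[] args) {
        long[] mapSide = new long[1];
        reset(mapSide);
        for (Object row : new Object[]{"a", null, "b"}) iterate(mapSide, row);
        long partial = terminatePartial(mapSide); // 2 non-null rows in this task

        long[] reduceSide = new long[1];
        reset(reduceSide);
        merge(reduceSide, partial);
        merge(reduceSide, 3); // partial from a hypothetical second map task
        System.out.println(terminate(reduceSide)); // prints 5
    }
}
```

The real evaluators above follow the same ordering, but operate on evaluator-specific `AggregationBuffer` subclasses and serialize partials through `ObjectInspector`s.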
Modifier and Type | Method and Description |
---|---|
protected PTFPartition |
TableFunctionEvaluator._transformRawInput(PTFPartition iPart) |
protected PTFPartition |
NoopWithMap._transformRawInput(PTFPartition iPart) |
protected Object |
BasePartitionEvaluator.calcFunctionValue(PTFPartition.PTFPartitionIterator<Object> pItr,
LeadLagInfo leadLagInfo)
Given a partition iterator, calculate the function value.
|
abstract int |
ValueBoundaryScanner.computeEnd(int rowIdx,
PTFPartition p) |
abstract int |
ValueBoundaryScanner.computeStart(int rowIdx,
PTFPartition p) |
void |
MatchPath.execute(PTFPartition.PTFPartitionIterator<Object> pItr,
PTFPartition outP) |
protected abstract void |
TableFunctionEvaluator.execute(PTFPartition.PTFPartitionIterator<Object> pItr,
PTFPartition oPart) |
void |
WindowingTableFunction.execute(PTFPartition.PTFPartitionIterator<Object> pItr,
PTFPartition outP) |
PTFPartition |
Noop.execute(PTFPartition iPart) |
PTFPartition |
TableFunctionEvaluator.execute(PTFPartition iPart) |
List<Object> |
TableFunctionEvaluator.finishPartition() |
List<Object> |
WindowingTableFunction.finishPartition() |
Object |
BasePartitionEvaluator.getPartitionAgg()
Get the aggregation for the whole partition.
|
static ArrayList<Object> |
MatchPath.getPath(Object currRow,
ObjectInspector rowOI,
PTFPartition.PTFPartitionIterator<Object> pItr,
int sz) |
protected static BasePartitionEvaluator.Range |
BasePartitionEvaluator.getRange(WindowFrameDef winFrame,
int currRow,
PTFPartition p) |
static ValueBoundaryScanner |
ValueBoundaryScanner.getScanner(WindowFrameDef winFrameDef) |
static Object |
MatchPath.getSelectListInput(Object currRow,
ObjectInspector rowOI,
PTFPartition.PTFPartitionIterator<Object> pItr,
int sz) |
void |
TableFunctionResolver.initialize(PTFDesc ptfDesc,
PartitionedTableFunctionDef tDef,
TableFunctionEvaluator evaluator) |
void |
MatchPath.MatchPathResolver.initializeOutputOI() |
void |
Noop.NoopResolver.initializeOutputOI() |
abstract void |
TableFunctionResolver.initializeOutputOI()
This method is invoked at runtime (during deserialization of the QueryDef).
|
void |
WindowingTableFunction.WindowingTableFunctionResolver.initializeOutputOI() |
void |
NoopWithMap.NoopWithMapResolver.initializeOutputOI() |
void |
TableFunctionResolver.initializeRawInputOI() |
void |
NoopWithMap.NoopWithMapResolver.initializeRawInputOI() |
void |
NoopWithMapStreaming.initializeStreaming(org.apache.hadoop.conf.Configuration cfg,
StructObjectInspector inputOI,
boolean isMapSide) |
void |
NoopStreaming.initializeStreaming(org.apache.hadoop.conf.Configuration cfg,
StructObjectInspector inputOI,
boolean isMapSide) |
void |
TableFunctionEvaluator.initializeStreaming(org.apache.hadoop.conf.Configuration cfg,
StructObjectInspector inputOI,
boolean isMapSide) |
void |
WindowingTableFunction.initializeStreaming(org.apache.hadoop.conf.Configuration cfg,
StructObjectInspector inputOI,
boolean isMapSide) |
Object |
BasePartitionEvaluator.iterate(int currentRow,
LeadLagInfo leadLagInfo)
Given the current row, get the aggregation for the window.
|
Object |
BasePartitionEvaluator.SumPartitionEvaluator.iterate(int currentRow,
LeadLagInfo leadLagInfo) |
Object |
BasePartitionEvaluator.AvgPartitionEvaluator.iterate(int currentRow,
LeadLagInfo leadLagInfo) |
Iterator<Object> |
TableFunctionEvaluator.iterator(PTFPartition.PTFPartitionIterator<Object> pItr) |
Iterator<Object> |
WindowingTableFunction.iterator(PTFPartition.PTFPartitionIterator<Object> pItr) |
static MatchPath.SymbolFunctionResult |
MatchPath.SymbolFunction.match(MatchPath.SymbolFunction syFn,
Object row,
PTFPartition.PTFPartitionIterator<Object> pItr) |
protected abstract MatchPath.SymbolFunctionResult |
MatchPath.SymbolFunction.match(Object row,
PTFPartition.PTFPartitionIterator<Object> pItr) |
protected MatchPath.SymbolFunctionResult |
MatchPath.Symbol.match(Object row,
PTFPartition.PTFPartitionIterator<Object> pItr) |
protected MatchPath.SymbolFunctionResult |
MatchPath.Star.match(Object row,
PTFPartition.PTFPartitionIterator<Object> pItr) |
protected MatchPath.SymbolFunctionResult |
MatchPath.Plus.match(Object row,
PTFPartition.PTFPartitionIterator<Object> pItr) |
protected MatchPath.SymbolFunctionResult |
MatchPath.Chain.match(Object row,
PTFPartition.PTFPartitionIterator<Object> pItr) |
List<Object> |
NoopWithMapStreaming.processRow(Object row) |
List<Object> |
NoopStreaming.processRow(Object row) |
List<Object> |
TableFunctionEvaluator.processRow(Object row) |
List<Object> |
WindowingTableFunction.processRow(Object row) |
void |
TableFunctionEvaluator.startPartition() |
void |
WindowingTableFunction.startPartition() |
protected PTFPartition |
TableFunctionEvaluator.transformRawInput(PTFPartition iPart) |
protected Iterator<Object> |
TableFunctionEvaluator.transformRawInputIterator(PTFPartition.PTFPartitionIterator<Object> pItr) |
void |
MatchPath.ResultExpressionParser.translate() |
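`BasePartitionEvaluator.getRange` above computes which slice of a `PTFPartition` a window frame covers for the current row, so that `iterate` can aggregate just that slice. The sketch below is a dependency-free analogue for a ROWS-based frame: the `preceding`/`following` counts and the clamping logic are illustrative assumptions standing in for Hive's `WindowFrameDef` and `PTFPartition` handling.

```java
// Simplified analogue of a ROWS-frame range computation: clamp
// [currRow - preceding, currRow + following] to the partition bounds.
public class WindowRangeSketch {
    // Returns {start, endExclusive} for the frame around currRow in a
    // partition of `size` rows.
    static int[] getRange(int preceding, int following, int currRow, int size) {
        int start = Math.max(0, currRow - preceding);
        int end = Math.min(size, currRow + following + 1); // exclusive bound
        return new int[]{start, end};
    }

    public static void main(String[] args) {
        // 2 PRECEDING .. CURRENT ROW evaluated at row 1 of a 10-row partition:
        // the frame is clamped at the partition start.
        int[] r = getRange(2, 0, 1, 10);
        System.out.println(r[0] + ".." + r[1]); // prints 0..2
    }
}
```

The streaming evaluators in this table (`SumPartitionEvaluator`, `AvgPartitionEvaluator`) reuse such a range per row instead of re-aggregating the whole partition.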
Constructor and Description |
---|
AvgPartitionDoubleEvaluator(GenericUDAFEvaluator wrappedEvaluator,
WindowFrameDef winFrame,
PTFPartition partition,
List<PTFExpressionDef> parameters,
ObjectInspector inputOI,
ObjectInspector outputOI) |
AvgPartitionHiveDecimalEvaluator(GenericUDAFEvaluator wrappedEvaluator,
WindowFrameDef winFrame,
PTFPartition partition,
List<PTFExpressionDef> parameters,
ObjectInspector inputOI,
ObjectInspector outputOI) |
Modifier and Type | Method and Description |
---|---|
Object |
GenericUDFXPath.evaluate(GenericUDF.DeferredObject[] arguments) |
Modifier and Type | Method and Description |
---|---|
protected void |
HCatSemanticAnalyzerBase.authorizeDDLWork(HiveSemanticAnalyzerHookContext context,
Hive hive,
DDLWork work)
Authorizes the given DDLWork.
|
protected void |
HCatSemanticAnalyzer.authorizeDDLWork(HiveSemanticAnalyzerHookContext cntxt,
Hive hive,
DDLWork work) |
protected void |
HCatSemanticAnalyzerBase.authorizeTable(Hive hive,
String tableName,
Privilege priv) |
Modifier and Type | Method and Description |
---|---|
HiveAuthorizationProvider |
FosterStorageHandler.getAuthorizationProvider() |
Modifier and Type | Method and Description |
---|---|
Object |
Udf.evaluate(GenericUDF.DeferredObject[] arguments)
Execute the UDF.
|
void |
Udf.initExec(GenericUDF.DeferredObject[] arguments)
Initialize execution.
|
Modifier and Type | Method and Description |
---|---|
void |
KillQueryImpl.killQuery(String queryId,
String errMsg) |
Modifier and Type | Method and Description |
---|---|
HiveAuthorizationProvider |
JdbcStorageHandler.getAuthorizationProvider() |
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.conf.Configuration |
JdbcStorageConfigManager.convertPropertiesToConfiguration(Properties props) |
static void |
JdbcStorageConfigManager.copyConfigurationToJob(Properties props,
Map<String,String> jobProps) |
static void |
JdbcStorageConfigManager.copySecretsToJob(Properties props,
Map<String,String> jobSecrets) |
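The `JdbcStorageConfigManager` helpers above move table `Properties` into the string maps a job configuration expects. The sketch below shows only the common core of such a copy, flattening `java.util.Properties` into a `Map<String,String>`; the real methods additionally validate and filter JDBC-specific keys, and the property name used here is a made-up example.

```java
// Dependency-free sketch of a Properties -> job-property-map copy.
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PropsToJobSketch {
    // Copy every string property into a plain String->String map, as a
    // job configuration consumer would receive it.
    static Map<String, String> copyToJob(Properties props) {
        Map<String, String> jobProps = new HashMap<>();
        for (String name : props.stringPropertyNames()) {
            jobProps.put(name, props.getProperty(name));
        }
        return jobProps;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("jdbc.example.key", "value"); // hypothetical key
        System.out.println(copyToJob(p).get("jdbc.example.key")); // prints value
    }
}
```

Note that `copySecretsToJob` is listed separately above precisely because credentials should land in a secrets map rather than the plain job properties.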
Copyright © 2022 The Apache Software Foundation. All rights reserved.