Packages that use FileSinkDesc

Package | Description
---|---
org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers.
org.apache.hadoop.hive.ql.io |
org.apache.hadoop.hive.ql.optimizer |
org.apache.hadoop.hive.ql.parse |
org.apache.hadoop.hive.ql.parse.spark |
org.apache.hadoop.hive.ql.plan |
Methods in org.apache.hadoop.hive.ql.exec with parameters of type FileSinkDesc

Modifier and Type | Method and Description
---|---
static int | Utilities.getDPColOffset(FileSinkDesc conf)
boolean | FetchTask.isFetchFrom(FileSinkDesc fs)
static void | Utilities.mvFileToFinalPath(org.apache.hadoop.fs.Path specPath, org.apache.hadoop.conf.Configuration hconf, boolean success, org.slf4j.Logger log, DynamicPartitionCtx dpCtx, FileSinkDesc conf, org.apache.hadoop.mapred.Reporter reporter)
static List<org.apache.hadoop.fs.Path> | Utilities.removeTempOrDuplicateFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus[] fileStats, DynamicPartitionCtx dpCtx, FileSinkDesc conf, org.apache.hadoop.conf.Configuration hconf, boolean isBaseDir)
static List<org.apache.hadoop.fs.Path> | Utilities.removeTempOrDuplicateFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus[] fileStats, DynamicPartitionCtx dpCtx, FileSinkDesc conf, org.apache.hadoop.conf.Configuration hconf, Set<org.apache.hadoop.fs.Path> filesKept, boolean isBaseDir) Remove all temporary files and duplicate (double-committed) files from a given directory.
static List<org.apache.hadoop.fs.Path> | Utilities.removeTempOrDuplicateFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, DynamicPartitionCtx dpCtx, FileSinkDesc conf, org.apache.hadoop.conf.Configuration hconf, boolean isBaseDir)
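The removeTempOrDuplicateFiles overloads above differ in whether the caller passes a pre-listed FileStatus[] or a Path to scan, and whether a filesKept set is supplied to track survivors. A minimal sketch of the path-based overload during output commit, assuming a hive-exec dependency on the classpath; the fs, specPath, dpCtx, fsDesc, and hconf variables are illustrative placeholders, not part of this page:

```java
// Illustrative only: prune temporary and duplicate (double-committed)
// task outputs under the final output directory before publishing it.
List<org.apache.hadoop.fs.Path> removed =
    Utilities.removeTempOrDuplicateFiles(
        fs,        // FileSystem holding the query output
        specPath,  // directory to scan for temp/duplicate attempt files
        dpCtx,     // dynamic-partition context, or null if none
        fsDesc,    // the FileSinkDesc describing this sink
        hconf,     // job Configuration
        false);    // isBaseDir: not an ACID base directory in this sketch
```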
Methods in org.apache.hadoop.hive.ql.io with parameters of type FileSinkDesc

Modifier and Type | Method and Description
---|---
static RecordUpdater | HiveFileFormatUtils.getAcidRecordUpdater(org.apache.hadoop.mapred.JobConf jc, TableDesc tableInfo, int bucket, FileSinkDesc conf, org.apache.hadoop.fs.Path outPath, ObjectInspector inspector, org.apache.hadoop.mapred.Reporter reporter, int rowIdColNum)
static FileSinkOperator.RecordWriter | HiveFileFormatUtils.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, TableDesc tableInfo, Class<? extends org.apache.hadoop.io.Writable> outputClass, FileSinkDesc conf, org.apache.hadoop.fs.Path outPath, org.apache.hadoop.mapred.Reporter reporter)
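Both helpers resolve the table's output format from the supplied TableDesc and return a writer bound to outPath: getAcidRecordUpdater for ACID writes, getHiveRecordWriter for plain file sinks. A hedged sketch of the non-ACID path, assuming hive-exec on the classpath; jc, tblDesc, fsDesc, outPath, and serializedRow are illustrative placeholders:

```java
// Illustrative only: resolve the output format named in tblDesc and
// open a writer for one task's output file.
FileSinkOperator.RecordWriter writer =
    HiveFileFormatUtils.getHiveRecordWriter(
        jc,              // JobConf carrying compression settings
        tblDesc,         // serialization and output-format info for the table
        org.apache.hadoop.io.Text.class,  // Writable class the serializer emits
        fsDesc,          // FileSinkDesc for this sink
        outPath,         // file to create
        org.apache.hadoop.mapred.Reporter.NULL);  // no progress reporting here
writer.write(serializedRow);  // a Writable produced by the table's SerDe
writer.close(false);          // abort=false: keep the written file
```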
Methods in org.apache.hadoop.hive.ql.optimizer that return types with arguments of type FileSinkDesc

Modifier and Type | Method and Description
---|---
Map<FileSinkDesc,Task<? extends Serializable>> | GenMRProcContext.getLinkedFileDescTasks()
Methods in org.apache.hadoop.hive.ql.optimizer with parameters of type FileSinkDesc

Modifier and Type | Method and Description
---|---
static MapWork | GenMapRedUtils.createMergeTask(FileSinkDesc fsInputDesc, org.apache.hadoop.fs.Path finalName, boolean hasDynamicPartitions, CompilationOpContext ctx) Create a block-level merge task for RCFile or a stripe-level merge task for ORC files.
static boolean | GenMapRedUtils.isSkewedStoredAsDirs(FileSinkDesc fsInputDesc) Check whether the table is skewed and stored as directories.
Method parameters in org.apache.hadoop.hive.ql.optimizer with type arguments of type FileSinkDesc

Modifier and Type | Method and Description
---|---
void | GenMRProcContext.setLinkedFileDescTasks(Map<FileSinkDesc,Task<? extends Serializable>> linkedFileDescTasks)
Constructor parameters in org.apache.hadoop.hive.ql.optimizer with type arguments of type FileSinkDesc

Constructor and Description
---
QueryPlanPostProcessor(List<Task<?>> rootTasks, Set<FileSinkDesc> acidSinks, String executionId)
Fields in org.apache.hadoop.hive.ql.parse with type parameters of type FileSinkDesc

Modifier and Type | Field and Description
---|---
protected Set<FileSinkDesc> | BaseSemanticAnalyzer.acidFileSinks A set of FileSinkOperators being written to in an ACID compliant way.
Map<org.apache.hadoop.fs.Path,List<FileSinkDesc>> | GenTezProcContext.linkedFileSinks
Methods in org.apache.hadoop.hive.ql.parse that return types with arguments of type FileSinkDesc

Modifier and Type | Method and Description
---|---
Set<FileSinkDesc> | BaseSemanticAnalyzer.getAcidFileSinks()
Set<FileSinkDesc> | ParseContext.getAcidSinks()
Constructor parameters in org.apache.hadoop.hive.ql.parse with type arguments of type FileSinkDesc

Constructor and Description
---
ParseContext(QueryState queryState, HashMap<TableScanOperator,ExprNodeDesc> opToPartPruner, HashMap<TableScanOperator,PrunedPartitionList> opToPartList, HashMap<String,TableScanOperator> topOps, Set<JoinOperator> joinOps, Set<SMBMapJoinOperator> smbMapJoinOps, List<LoadTableDesc> loadTableWork, List<LoadFileDesc> loadFileWork, List<ColumnStatsAutoGatherContext> columnStatsAutoGatherContexts, Context ctx, HashMap<String,String> idToTableNameMap, int destTableId, UnionProcContext uCtx, List<AbstractMapJoinOperator<? extends MapJoinDesc>> listMapJoinOpsNoReducer, Map<String,PrunedPartitionList> prunedPartitions, Map<String,Table> tabNameToTabObject, HashMap<TableScanOperator,FilterDesc.SampleDesc> opToSamplePruner, GlobalLimitCtx globalLimitCtx, HashMap<String,SplitSample> nameToSplitSample, HashSet<ReadEntity> semanticInputs, List<Task<? extends Serializable>> rootTasks, Map<TableScanOperator,Map<String,ExprNodeDesc>> opToPartToSkewedPruner, Map<String,ReadEntity> viewAliasToInput, List<ReduceSinkOperator> reduceSinkOperatorsAddedByEnforceBucketingSorting, BaseSemanticAnalyzer.AnalyzeRewriteContext analyzeRewrite, CreateTableDesc createTableDesc, CreateViewDesc createViewDesc, MaterializedViewDesc materializedViewUpdateDesc, QueryProperties queryProperties, Map<SelectOperator,Table> viewProjectToTableSchema, Set<FileSinkDesc> acidFileSinks)
Fields in org.apache.hadoop.hive.ql.parse.spark with type parameters of type FileSinkDesc

Modifier and Type | Field and Description
---|---
Map<org.apache.hadoop.fs.Path,List<FileSinkDesc>> | GenSparkProcContext.linkedFileSinks
Methods in org.apache.hadoop.hive.ql.plan that return FileSinkDesc

Modifier and Type | Method and Description
---|---
FileSinkDesc | CreateTableDesc.getAndUnsetWriter()
Methods in org.apache.hadoop.hive.ql.plan that return types with arguments of type FileSinkDesc

Modifier and Type | Method and Description
---|---
List<FileSinkDesc> | FileSinkDesc.getLinkedFileSinkDesc()
Methods in org.apache.hadoop.hive.ql.plan with parameters of type FileSinkDesc

Modifier and Type | Method and Description
---|---
void | CreateTableDesc.setWriter(FileSinkDesc writer)
Method parameters in org.apache.hadoop.hive.ql.plan with type arguments of type FileSinkDesc

Modifier and Type | Method and Description
---|---
void | FileSinkDesc.setLinkedFileSinkDesc(List<FileSinkDesc> linkedFileSinkDesc)
Copyright © 2022 The Apache Software Foundation. All rights reserved.