**Packages that use `Partition`**

Package | Description
---|---
`org.apache.hadoop.hive.ql.exec` | Hive QL execution tasks, operators, functions and other handlers.
`org.apache.hadoop.hive.ql.hooks` |
`org.apache.hadoop.hive.ql.index` |
`org.apache.hadoop.hive.ql.lockmgr` | Hive Lock Manager interfaces and some custom implementations.
`org.apache.hadoop.hive.ql.metadata` |
`org.apache.hadoop.hive.ql.metadata.formatting` |
`org.apache.hadoop.hive.ql.optimizer` |
`org.apache.hadoop.hive.ql.optimizer.listbucketingpruner` |
`org.apache.hadoop.hive.ql.optimizer.pcr` |
`org.apache.hadoop.hive.ql.optimizer.ppr` |
`org.apache.hadoop.hive.ql.parse` |
`org.apache.hadoop.hive.ql.plan` |
`org.apache.hadoop.hive.ql.security.authorization` |
`org.apache.hadoop.hive.ql.stats` |
`org.apache.hive.hcatalog.cli.SemanticAnalysis` |
`org.apache.hive.hcatalog.common` |
**Methods in `org.apache.hadoop.hive.ql.exec` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`static int` | `ArchiveUtils.getArchivingLevel(Partition p)` Returns the archiving level, i.e. how many fields were set in the partial specification that ARCHIVE was run for.
`static String` | `ArchiveUtils.getPartialName(Partition p, int level)` Get a prefix of the given partition's string representation.
`static PartitionDesc` | `Utilities.getPartitionDesc(Partition part)`
`static PartitionDesc` | `Utilities.getPartitionDescFromTableDesc(TableDesc tblDesc, Partition part)`
`static boolean` | `ArchiveUtils.isArchived(Partition p)` Determines whether a partition has been archived.
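The archiving-level and partial-name semantics above can be sketched as a small, self-contained analog. `ArchiveSketch`, `archivingLevel`, and `partialName` are hypothetical names for illustration only, not Hive's implementation (which, among other things, escapes special characters in path components):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative analog (not Hive's code) of two ArchiveUtils ideas:
// - the "archiving level" is how many fields were set in the partial
//   partition specification that ARCHIVE was run for;
// - a "partial name" is the prefix of a partition's path built from the
//   first `level` fields of its spec.
public class ArchiveSketch {
    static int archivingLevel(Map<String, String> partialSpec) {
        return partialSpec.size();
    }

    // Builds e.g. "ds=2017-01-01" for level 1 of {ds=2017-01-01, hr=12}.
    static String partialName(Map<String, String> fullSpec, int level) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        for (Map.Entry<String, String> e : fullSpec.entrySet()) {
            if (i++ == level) break;
            if (sb.length() > 0) sb.append('/');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> spec = new LinkedHashMap<>();
        spec.put("ds", "2017-01-01");
        spec.put("hr", "12");
        System.out.println(partialName(spec, 1)); // prints ds=2017-01-01
        System.out.println(partialName(spec, 2)); // prints ds=2017-01-01/hr=12
    }
}
```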
**Methods in `org.apache.hadoop.hive.ql.hooks` that return `Partition`**

Modifier and Type | Method and Description
---|---
`Partition` | `Entity.getP()`
`Partition` | `Entity.getPartition()` Get the partition associated with the entity.

**Methods in `org.apache.hadoop.hive.ql.hooks` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`void` | `Entity.setP(Partition p)`

**Constructors in `org.apache.hadoop.hive.ql.hooks` with parameters of type `Partition`**

Constructor and Description
---
`Entity(Partition p, boolean complete)` Constructor for a partition.
`ReadEntity(Partition p)` Constructor given a partition.
`ReadEntity(Partition p, ReadEntity parent)`
`ReadEntity(Partition p, ReadEntity parent, boolean isDirect)`
`WriteEntity(Partition p, WriteEntity.WriteType type)` Constructor for a partition.
**Methods in `org.apache.hadoop.hive.ql.index` that return types with arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`Set<Partition>` | `HiveIndexQueryContext.getQueryPartitions()`

**Method parameters in `org.apache.hadoop.hive.ql.index` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`List<Task<?>>` | `TableBasedIndexHandler.generateIndexBuildTaskList(Table baseTbl, Index index, List<Partition> indexTblPartitions, List<Partition> baseTblPartitions, Table indexTbl, Set<ReadEntity> inputs, Set<WriteEntity> outputs)`
`List<Task<?>>` | `HiveIndexHandler.generateIndexBuildTaskList(Table baseTbl, Index index, List<Partition> indexTblPartitions, List<Partition> baseTblPartitions, Table indexTbl, Set<ReadEntity> inputs, Set<WriteEntity> outputs)` Requests that the handler generate a plan for building the index; the plan should read the base table and write out the index representation.
`void` | `HiveIndexQueryContext.setQueryPartitions(Set<Partition> queryPartitions)`

**Constructors in `org.apache.hadoop.hive.ql.lockmgr` with parameters of type `Partition`**

Constructor and Description
---
`HiveLockObject(Partition par, HiveLockObject.HiveLockObjectData lockData)`
**Subclasses of `Partition` in `org.apache.hadoop.hive.ql.metadata`**

Modifier and Type | Class and Description
---|---
`class` | `DummyPartition` A Hive Table Partition is a fundamental storage unit within a Table.

**Methods in `org.apache.hadoop.hive.ql.metadata` that return `Partition`**

Modifier and Type | Method and Description
---|---
`Partition` | `Hive.createPartition(Table tbl, Map<String,String> partSpec)` Creates a partition.
`Partition` | `Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate)`
`Partition` | `Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate, String partPath, boolean inheritTableSpecs)` Returns partition metadata.
`Partition` | `Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate, String partPath, boolean inheritTableSpecs, List<org.apache.hadoop.fs.Path> newFiles)` Returns partition metadata.
`Partition` | `Hive.loadPartition(org.apache.hadoop.fs.Path loadPath, Table tbl, Map<String,String> partSpec, boolean replace, boolean holdDDLTime, boolean inheritTableSpecs, boolean isSkewedStoreAsSubdir, boolean isSrcLocal, boolean isAcid)` Loads a directory into a Hive Table Partition; alters the existing content of the partition with the contents of loadPath.
**Methods in `org.apache.hadoop.hive.ql.metadata` that return types with arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`List<Partition>` | `Hive.createPartitions(AddPartitionDesc addPartitionDesc)`
`List<Partition>` | `Hive.dropPartitions(String tblName, List<DropTableDesc.PartSpec> partSpecs, boolean deleteData, boolean ignoreProtection, boolean ifExists)`
`List<Partition>` | `Hive.dropPartitions(String tblName, List<DropTableDesc.PartSpec> partSpecs, PartitionDropOptions dropOptions)`
`List<Partition>` | `Hive.dropPartitions(String dbName, String tblName, List<DropTableDesc.PartSpec> partSpecs, boolean deleteData, boolean ignoreProtection, boolean ifExists)`
`List<Partition>` | `Hive.dropPartitions(String dbName, String tblName, List<DropTableDesc.PartSpec> partSpecs, PartitionDropOptions dropOptions)`
`Set<Partition>` | `Hive.getAllPartitionsOf(Table tbl)` Gets all the partitions; unlike `Hive.getPartitions(Table)`, does not include auth.
`List<Partition>` | `Hive.getPartitions(Table tbl)` Gets all the partitions that the table has.
`List<Partition>` | `Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec)` Gets all the partitions of the table that match the given partial specification.
`List<Partition>` | `Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec, short limit)` Gets all the partitions of the table that match the given partial specification, up to the given limit.
`List<Partition>` | `Hive.getPartitionsByFilter(Table tbl, String filter)` Gets a list of partitions by filter.
`List<Partition>` | `Hive.getPartitionsByNames(Table tbl, List<String> partNames)` Gets all partitions of the table that match the list of given partition names.
`List<Partition>` | `Hive.getPartitionsByNames(Table tbl, Map<String,String> partialPartSpec)` Gets all the partitions of the table that match the given partial specification.
`Iterator<Partition>` | `PartitionIterable.iterator()`
`Map<Map<String,String>,Partition>` | `Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path loadPath, String tableName, Map<String,String> partSpec, boolean replace, int numDP, boolean holdDDLTime, boolean listBucketingEnabled, boolean isAcid, long txnId)` Given a source directory name of the load path, loads all dynamically generated partitions into the specified table and returns a list of strings that represent the dynamic partition paths.
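The partial-specification matching performed by the `Hive.getPartitions(Table, Map<String,String>)` overloads can be sketched as a self-contained analog. `PartialSpecSketch` and `matchPartial` are hypothetical names; the real methods consult the metastore rather than in-memory maps:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative analog (not Hive's implementation) of partial-spec matching:
// a partition qualifies when every (column, value) pair in the partial spec
// also appears in the partition's own full spec.
public class PartialSpecSketch {
    static List<Map<String, String>> matchPartial(
            List<Map<String, String>> partitions, Map<String, String> partial) {
        List<Map<String, String>> out = new ArrayList<>();
        for (Map<String, String> p : partitions) {
            if (p.entrySet().containsAll(partial.entrySet())) {
                out.add(p);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, String>> parts = List.of(
            Map.of("ds", "2017-01-01", "hr", "11"),
            Map.of("ds", "2017-01-01", "hr", "12"),
            Map.of("ds", "2017-01-02", "hr", "11"));
        // Partial spec {ds=2017-01-01} selects the first two partitions.
        System.out.println(matchPartial(parts, Map.of("ds", "2017-01-01")).size()); // prints 2
    }
}
```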
**Methods in `org.apache.hadoop.hive.ql.metadata` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`void` | `Hive.alterPartition(String tblName, Partition newPart)` Updates the existing partition metadata with the new metadata.
`void` | `Hive.alterPartition(String dbName, String tblName, Partition newPart)` Updates the existing partition metadata with the new metadata.
`void` | `Hive.renamePartition(Table tbl, Map<String,String> oldPartSpec, Partition newPart)` Renames an old partition to a new partition.

**Method parameters in `org.apache.hadoop.hive.ql.metadata` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`void` | `Hive.alterPartitions(String tblName, List<Partition> newParts)` Updates the existing table metadata with the new metadata.
`boolean` | `Hive.getPartitionsByExpr(Table tbl, ExprNodeGenericFuncDesc expr, HiveConf conf, List<Partition> result)` Gets a list of partitions by expression.

**Constructor parameters in `org.apache.hadoop.hive.ql.metadata` with type arguments of type `Partition`**

Constructor and Description
---
`PartitionIterable(List<Partition> ptnsProvided)` Dummy constructor, which simply acts as an iterator on an already-present list of partitions; allows for easy drop-in replacement for other methods that already have a List.
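The drop-in idea behind `PartitionIterable`'s list-backed constructor, wrapping an already-materialized list so that callers written against `Iterable` need not care how the elements were produced, can be sketched generically. `ListBackedIterable` is a hypothetical name, not Hive's class:

```java
import java.util.Iterator;
import java.util.List;

// Illustrative analog (not Hive's PartitionIterable): wrap an existing list
// so code that consumes Iterable<T> works whether the elements were already
// materialized or would otherwise be fetched lazily.
public class ListBackedIterable<T> implements Iterable<T> {
    private final List<T> items;

    public ListBackedIterable(List<T> items) {
        this.items = items;
    }

    @Override
    public Iterator<T> iterator() {
        return items.iterator();
    }

    public static void main(String[] args) {
        Iterable<String> parts =
            new ListBackedIterable<>(List.of("ds=2017-01-01", "ds=2017-01-02"));
        for (String p : parts) {
            System.out.println(p); // prints each partition name in order
        }
    }
}
```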
**Methods in `org.apache.hadoop.hive.ql.metadata.formatting` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`void` | `MetaDataFormatter.describeTable(DataOutputStream out, String colPath, String tableName, Table tbl, Partition part, List<FieldSchema> cols, boolean isFormatted, boolean isExt, boolean isPretty, boolean isOutputPadded, List<ColumnStatisticsObj> colStats)` Describe table.
`void` | `JsonMetaDataFormatter.describeTable(DataOutputStream out, String colPath, String tableName, Table tbl, Partition part, List<FieldSchema> cols, boolean isFormatted, boolean isExt, boolean isPretty, boolean isOutputPadded, List<ColumnStatisticsObj> colStats)` Describe table.
`static String` | `MetaDataFormatUtils.getPartitionInformation(Partition part)`
`void` | `MetaDataFormatter.showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tbls, Map<String,String> part, Partition par)` Show the table status.
`void` | `JsonMetaDataFormatter.showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tbls, Map<String,String> part, Partition par)`
**Methods in `org.apache.hadoop.hive.ql.optimizer` that return types with arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`static Set<Partition>` | `IndexUtils.checkPartitionsCoveredByIndex(TableScanOperator tableScan, ParseContext pctx, List<Index> indexes)` Checks the partitions used by the table scan to make sure they also exist in the index table.
`Map<Partition,List<String>>` | `BucketJoinProcCtx.getBigTblPartsToBucketFileNames()`
`Map<Partition,Integer>` | `BucketJoinProcCtx.getBigTblPartsToBucketNumber()`
`static Set<Partition>` | `GenMapRedUtils.getConfirmedPartitionsForScan(TableScanOperator tableScanOp)`

**Methods in `org.apache.hadoop.hive.ql.optimizer` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`protected void` | `PrunerOperatorFactory.FilterPruner.addPruningPred(Map<TableScanOperator,Map<String,ExprNodeDesc>> opToPrunner, TableScanOperator top, ExprNodeDesc new_pruner_pred, Partition part)` Adds a pruning predicate.
`protected long` | `SizeBasedBigTableSelectorForAutoSMJ.getSize(HiveConf conf, Partition partition)`
`static SamplePruner.LimitPruneRetStatus` | `SamplePruner.limitPrune(Partition part, long sizeLimit, int fileLimit, Collection<org.apache.hadoop.fs.Path> retPathList)` Tries to generate a subset of the partition's files that reaches a size limit using fewer than fileLimit files.
`static org.apache.hadoop.fs.Path[]` | `SamplePruner.prune(Partition part, FilterDesc.SampleDesc sampleDescr)` Prunes to get all the files in the partition that satisfy the TABLESAMPLE clause.

**Method parameters in `org.apache.hadoop.hive.ql.optimizer` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`void` | `BucketJoinProcCtx.setBigTblPartsToBucketFileNames(Map<Partition,List<String>> bigTblPartsToBucketFileNames)`
`void` | `BucketJoinProcCtx.setBigTblPartsToBucketNumber(Map<Partition,Integer> bigTblPartsToBucketNumber)`
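The size-limited file selection that `SamplePruner.limitPrune` describes can be sketched as a greedy analog over file sizes. `LimitPruneSketch` and `limitPrune` here are hypothetical names operating on plain numbers, not Hive's code, which works on filesystem paths and returns a status enum:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative analog (not Hive's code) of the limitPrune idea: greedily take
// files until their combined size reaches sizeLimit, giving up if more than
// fileLimit files would be needed.
public class LimitPruneSketch {
    // Returns the chosen file sizes, or null if the size limit cannot be met
    // within fileLimit files. File names are omitted for brevity.
    static List<Long> limitPrune(List<Long> fileSizes, long sizeLimit, int fileLimit) {
        List<Long> chosen = new ArrayList<>();
        long total = 0;
        for (long size : fileSizes) {
            if (chosen.size() >= fileLimit) return null; // needs too many files
            chosen.add(size);
            total += size;
            if (total >= sizeLimit) return chosen;       // limit reached
        }
        return null; // not enough data to reach the size limit
    }

    public static void main(String[] args) {
        System.out.println(limitPrune(List.of(100L, 200L, 300L), 250, 3)); // prints [100, 200]
        System.out.println(limitPrune(List.of(100L, 100L), 250, 2));       // prints null
    }
}
```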
**Methods in `org.apache.hadoop.hive.ql.optimizer.listbucketingpruner` that return `Partition`**

Modifier and Type | Method and Description
---|---
`Partition` | `LBOpWalkerCtx.getPart()`
`Partition` | `LBExprProcCtx.getPart()`

**Methods in `org.apache.hadoop.hive.ql.optimizer.listbucketingpruner` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`static ExprNodeDesc` | `LBExprProcFactory.genPruner(String tabAlias, ExprNodeDesc pred, Partition part)` Generates the list bucketing pruner for the expression tree.
`static boolean` | `ListBucketingPrunerUtils.isListBucketingPart(Partition part)` Checks whether the partition is list-bucketed.
`static org.apache.hadoop.fs.Path[]` | `ListBucketingPruner.prune(ParseContext ctx, Partition part, ExprNodeDesc pruner)` Prunes to the directories which match the skewed keys in the WHERE clause.

**Constructors in `org.apache.hadoop.hive.ql.optimizer.listbucketingpruner` with parameters of type `Partition`**

Constructor and Description
---
`LBExprProcCtx(String tabAlias, Partition part)`
`LBOpWalkerCtx(Map<TableScanOperator,Map<String,ExprNodeDesc>> opToPartToLBPruner, Partition part)` Constructor.
**Methods in `org.apache.hadoop.hive.ql.optimizer.pcr` that return types with arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`List<Partition>` | `PcrExprProcCtx.getPartList()`

**Method parameters in `org.apache.hadoop.hive.ql.optimizer.pcr` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`static PcrExprProcFactory.NodeInfoWrapper` | `PcrExprProcFactory.walkExprTree(String tabAlias, ArrayList<Partition> parts, List<VirtualColumn> vcs, ExprNodeDesc pred)` Removes partition conditions from the expression tree when necessary.

**Constructor parameters in `org.apache.hadoop.hive.ql.optimizer.pcr` with type arguments of type `Partition`**

Constructor and Description
---
`PcrExprProcCtx(String tabAlias, List<Partition> partList)`
`PcrExprProcCtx(String tabAlias, List<Partition> partList, List<VirtualColumn> vcs)`

**Methods in `org.apache.hadoop.hive.ql.optimizer.ppr` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`static Object` | `PartExprEvalUtils.evalExprWithPart(ExprNodeDesc expr, Partition p, List<VirtualColumn> vcs, StructObjectInspector rowObjectInspector)` Evaluates an expression with partition columns.
**Fields in `org.apache.hadoop.hive.ql.parse` declared as `Partition`**

Modifier and Type | Field and Description
---|---
`Partition` | `BaseSemanticAnalyzer.TableSpec.partHandle`

**Fields in `org.apache.hadoop.hive.ql.parse` with type parameters of type `Partition`**

Modifier and Type | Field and Description
---|---
`List<Partition>` | `BaseSemanticAnalyzer.TableSpec.partitions`

**Methods in `org.apache.hadoop.hive.ql.parse` that return `Partition`**

Modifier and Type | Method and Description
---|---
`Partition` | `QBMetaData.getDestPartitionForAlias(String alias)`
`protected Partition` | `BaseSemanticAnalyzer.getPartition(Table table, Map<String,String> partSpec, boolean throwException)`

**Methods in `org.apache.hadoop.hive.ql.parse` that return types with arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`com.google.common.base.Predicate<Partition>` | `ReplicationSpec.allowEventReplacementInto()` Returns a predicate filter to filter an Iterable.
`Map<String,Partition>` | `QBMetaData.getNameToDestPartition()`
`List<Partition>` | `PrunedPartitionList.getNotDeniedPartns()`
`Set<Partition>` | `PrunedPartitionList.getPartitions()`
`protected List<Partition>` | `BaseSemanticAnalyzer.getPartitions(Table table, Map<String,String> partSpec, boolean throwException)`

**Methods in `org.apache.hadoop.hive.ql.parse` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`boolean` | `ReplicationSpec.allowEventReplacementInto(Partition ptn)` Determines whether the current replication event specification is allowed to replicate-replace-into a given partition.
`boolean` | `ReplicationSpec.allowReplacementInto(Partition ptn)` Determines whether the current replication object (the current state of the dump) is allowed to replicate-replace-into a given partition.
`void` | `QBMetaData.setDestForAlias(String alias, Partition part)`

**Method parameters in `org.apache.hadoop.hive.ql.parse` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`static void` | `EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path metadataPath, Table tableHandle, Iterable<Partition> partitions, ReplicationSpec replicationSpec)`

**Constructor parameters in `org.apache.hadoop.hive.ql.parse` with type arguments of type `Partition`**

Constructor and Description
---
`PrunedPartitionList(Table source, Set<Partition> partitions, List<String> referred, boolean hasUnknowns)`

**Constructors in `org.apache.hadoop.hive.ql.plan` with parameters of type `Partition`**

Constructor and Description
---
`PartitionDesc(Partition part)`
`PartitionDesc(Partition part, TableDesc tblDesc)`
**Subclasses of `Partition` in `org.apache.hadoop.hive.ql.security.authorization`**

Modifier and Type | Class and Description
---|---
`static class` | `AuthorizationPreEventListener.PartitionWrapper`

**Methods in `org.apache.hadoop.hive.ql.security.authorization` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`void` | `StorageBasedAuthorizationProvider.authorize(Partition part, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)`
`void` | `MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Partition part, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)`
`void` | `HiveAuthorizationProvider.authorize(Partition part, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)` Authorization privileges against a Hive partition object.
`void` | `BitSetCheckedAuthorizationProvider.authorize(Partition part, Privilege[] inputRequiredPriv, Privilege[] outputRequiredPriv)`
`void` | `StorageBasedAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)`
`void` | `MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)`
`void` | `HiveAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)` Authorization privileges against a list of columns.
`void` | `BitSetCheckedAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] inputRequiredPriv, Privilege[] outputRequiredPriv)`

**Method parameters in `org.apache.hadoop.hive.ql.security.authorization` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`abstract void` | `HiveMultiPartitionAuthorizationProviderBase.authorize(Table table, Iterable<Partition> partitions, Privilege[] requiredReadPrivileges, Privilege[] requiredWritePrivileges)` Authorization method for partition sets.
**Method parameters in `org.apache.hadoop.hive.ql.stats` with type arguments of type `Partition`**

Modifier and Type | Method and Description
---|---
`static List<Long>` | `StatsUtils.getBasicStatForPartitions(Table table, List<Partition> parts, String statType)` Gets basic stats for a list of partitions.
`static List<Long>` | `StatsUtils.getFileSizeForPartitions(HiveConf conf, List<Partition> parts)` Finds the bytes on disk occupied by a list of partitions.
`static int` | `StatsUtils.getNDVPartitionColumn(Set<Partition> partitions, String partColName)`

**Methods in `org.apache.hive.hcatalog.cli.SemanticAnalysis` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`protected void` | `HCatSemanticAnalyzerBase.authorize(Partition part, Privilege priv)`

**Methods in `org.apache.hive.hcatalog.common` with parameters of type `Partition`**

Modifier and Type | Method and Description
---|---
`static HCatSchema` | `HCatUtil.extractSchema(Partition partition)`
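The statistic behind `StatsUtils.getNDVPartitionColumn`, the number of distinct values (NDV) a partition column takes across a set of partitions, can be sketched from plain partition-spec maps. `NdvSketch` and `ndvPartitionColumn` are hypothetical names, not Hive's implementation:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative analog (not Hive's code): count how many distinct values the
// given partition column takes across a collection of partition specs.
public class NdvSketch {
    static int ndvPartitionColumn(Collection<Map<String, String>> partitions, String col) {
        Set<String> distinct = new HashSet<>();
        for (Map<String, String> spec : partitions) {
            distinct.add(spec.get(col));
        }
        return distinct.size();
    }

    public static void main(String[] args) {
        List<Map<String, String>> parts = List.of(
            Map.of("ds", "2017-01-01", "hr", "11"),
            Map.of("ds", "2017-01-01", "hr", "12"),
            Map.of("ds", "2017-01-02", "hr", "11"));
        System.out.println(ndvPartitionColumn(parts, "ds")); // prints 2
        System.out.println(ndvPartitionColumn(parts, "hr")); // prints 2
    }
}
```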
Copyright © 2017 The Apache Software Foundation. All rights reserved.