Modifier and Type | Method and Description |
---|---|
`Table` | `Context.getMaterializedTable(String cteName)` |
`Table` | `Context.getTempTableForLoad()` |
Modifier and Type | Method and Description |
---|---|
`void` | `Context.addMaterializedTable(String cteName, Table table)` |
`void` | `Context.setTempTableForLoad(Table tempTableForLoad)` |
Modifier and Type | Method and Description |
---|---|
`static void` | `Utilities.addSchemaEvolutionToTableScanOperator(Table table, TableScanOperator tableScanOp)` |
`static String` | `ArchiveUtils.conflictingArchiveNameOrNull(Hive db, Table tbl, LinkedHashMap<String,String> partSpec)`<br>Determines whether one can insert into the partition(s), or there is a conflict with an archive. |
`static ArchiveUtils.PartSpecInfo` | `ArchiveUtils.PartSpecInfo.create(Table tbl, Map<String,String> partSpec)`<br>Extract the partial prefix specification from a table and a key-value map. |
`org.apache.hadoop.fs.Path` | `ArchiveUtils.PartSpecInfo.createPath(Table tbl)`<br>Creates the path where partitions matching the prefix should lie in the filesystem. |
`static boolean` | `DDLTask.doesTableNeedLocation(Table tbl)` |
`static TableDesc` | `Utilities.getTableDesc(Table tbl)` |
`static void` | `DDLTask.makeLocationQualified(String databaseName, Table table, HiveConf conf)`<br>Make the location in the specified storage descriptor fully qualified. |
Modifier and Type | Method and Description |
---|---|
`static Map<Integer,List<ExprNodeGenericFuncDesc>>` | `ReplUtils.genPartSpecs(Table table, List<Map<String,String>> partitions)` |
Modifier and Type | Method and Description |
---|---|
`Table` | `Entity.getT()` |
`Table` | `Entity.getTable()`<br>Get the table associated with the entity. |
Modifier and Type | Method and Description |
---|---|
`void` | `Entity.setT(Table t)` |
Constructor and Description |
---|
`Entity(Table t, boolean complete)`<br>Constructor for a table. |
`ReadEntity(Table t)`<br>Constructor. |
`ReadEntity(Table t, ReadEntity parent)` |
`ReadEntity(Table t, ReadEntity parent, boolean isDirect)` |
`WriteEntity(Table t, WriteEntity.WriteType type)`<br>Constructor for a table. |
`WriteEntity(Table t, WriteEntity.WriteType type, boolean complete)` |
Modifier and Type | Method and Description |
---|---|
`static List<org.apache.hadoop.fs.FileStatus>` | `AcidUtils.getAcidFilesForStats(Table table, org.apache.hadoop.fs.Path dir, org.apache.hadoop.conf.Configuration jc, org.apache.hadoop.fs.FileSystem fs)` |
`static AcidUtils.AcidOperationalProperties` | `AcidUtils.getAcidOperationalProperties(Table table)`<br>Returns the acidOperationalProperties for a given table. |
`static boolean` | `AcidUtils.isFullAcidTable(Table table)`<br>Should produce the same result as `TxnUtils.isAcidTable(org.apache.hadoop.hive.metastore.api.Table)`. |
`static boolean` | `AcidUtils.isInsertOnlyTable(Table table)` |
`static Boolean` | `AcidUtils.isToInsertOnlyTable(Table tbl, Map<String,String> props)`<br>The method for altering table properties; may set the table to MM, to non-MM, or leave MM unaffected. |
`static boolean` | `AcidUtils.isTransactionalTable(Table table)` |
Modifier and Type | Method and Description |
---|---|
`static boolean` | `ParquetHiveSerDe.isParquetTable(Table table)` |
Constructor and Description |
---|
`HiveLockObject(Table tbl, HiveLockObject.HiveLockObjectData lockData)` |
Modifier and Type | Method and Description |
---|---|
`Table` | `Table.copy()` |
`Table` | `Partition.getTable()` |
`Table` | `Hive.getTable(String tableName)`<br>Returns metadata for the table named tableName. |
`Table` | `Hive.getTable(String tableName, boolean throwException)`<br>Returns metadata for the table named tableName. |
`Table` | `Hive.getTable(String dbName, String tableName)`<br>Returns metadata of the table. |
`Table` | `Hive.getTable(String dbName, String tableName, boolean throwException)`<br>Returns metadata of the table. |
`Table` | `Hive.newTable(String tableName)` |
Modifier and Type | Method and Description |
---|---|
`List<Table>` | `Hive.getAllMaterializedViewObjects(String dbName)`<br>Get all materialized views for the specified database. |
`List<Table>` | `Hive.getAllTableObjects(String dbName)`<br>Get all tables for the specified database. |
`static Map<String,Table>` | `SessionHiveMetaStoreClient.getTempTablesForDatabase(String dbName, String tblName)` |
Modifier and Type | Method and Description |
---|---|
`void` | `Hive.alterTable(String dbName, String tblName, Table newTbl, boolean cascade, EnvironmentContext environmentContext)` |
`void` | `Hive.alterTable(String fullyQlfdTblName, Table newTbl, boolean cascade, EnvironmentContext environmentContext)` |
`void` | `Hive.alterTable(String fullyQlfdTblName, Table newTbl, EnvironmentContext environmentContext)`<br>Updates the existing table metadata with the new metadata. |
`void` | `Hive.alterTable(Table newTbl, EnvironmentContext environmentContext)` |
`static Partition` | `Hive.convertAddSpecToMetaPartition(Table tbl, AddPartitionDesc.OnePartitionDesc addSpec, HiveConf conf)` |
`org.apache.calcite.plan.RelOptMaterialization` | `HiveMaterializedViewsRegistry.createMaterializedView(HiveConf conf, Table materializedViewTable)`<br>Adds a newly created materialized view to the cache. |
`static Partition` | `Partition.createMetaPartitionObject(Table tbl, Map<String,String> partSpec, org.apache.hadoop.fs.Path location)` |
`Partition` | `Hive.createPartition(Table tbl, Map<String,String> partSpec)`<br>Creates a partition. |
`void` | `Hive.createTable(Table tbl)`<br>Creates the table with the given object. |
`void` | `Hive.createTable(Table tbl, boolean ifNotExists)` |
`void` | `Hive.createTable(Table tbl, boolean ifNotExists, List<SQLPrimaryKey> primaryKeys, List<SQLForeignKey> foreignKeys, List<SQLUniqueConstraint> uniqueConstraints, List<SQLNotNullConstraint> notNullConstraints, List<SQLDefaultConstraint> defaultConstraints, List<SQLCheckConstraint> checkConstraints)`<br>Creates the table with the given objects. |
`void` | `HiveMaterializedViewsRegistry.dropMaterializedView(Table materializedViewTable)`<br>Removes the materialized view from the cache. |
`List<Partition>` | `Hive.dropPartitions(Table table, List<String> partDirNames, boolean deleteData, boolean ifExists)`<br>Drop the partitions specified as directory names associated with the table. |
`Set<Partition>` | `Hive.getAllPartitionsOf(Table tbl)`<br>Get all the partitions; unlike `Hive.getPartitions(Table)`, does not include auth. |
`static List<FieldSchema>` | `Hive.getFieldsFromDeserializerForMsStorage(Table tbl, Deserializer deserializer)` |
`int` | `Hive.getNumPartitionsByFilter(Table tbl, String filter)`<br>Get the number of partitions matching a filter. |
`Partition` | `Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate)` |
`Partition` | `Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate, String partPath, boolean inheritTableSpecs)`<br>Returns partition metadata. |
`List<Partition>` | `Hive.getPartitions(Table tbl)`<br>Get all the partitions that the table has. |
`List<Partition>` | `Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec)`<br>Get all the partitions of the table that match the given partial specification. |
`List<Partition>` | `Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec, short limit)`<br>Get all the partitions of the table that match the given partial specification. |
`boolean` | `Hive.getPartitionsByExpr(Table tbl, ExprNodeGenericFuncDesc expr, HiveConf conf, List<Partition> result)`<br>Get a list of partitions by expression. |
`List<Partition>` | `Hive.getPartitionsByFilter(Table tbl, String filter)`<br>Get a list of partitions by filter. |
`List<Partition>` | `Hive.getPartitionsByNames(Table tbl, List<String> partNames)`<br>Get all partitions of the table that match the list of given partition names. |
`List<Partition>` | `Hive.getPartitionsByNames(Table tbl, Map<String,String> partialPartSpec)`<br>Get all the partitions of the table that match the given partial specification. |
`StorageHandlerInfo` | `Hive.getStorageHandlerInfo(Table table)` |
`protected void` | `Partition.initialize(Table table, Partition tPartition)`<br>Initializes this object with the given variables. |
`Partition` | `Hive.loadPartition(org.apache.hadoop.fs.Path loadPath, Table tbl, Map<String,String> partSpec, LoadTableDesc.LoadFileType loadFileType, boolean inheritTableSpecs, boolean isSkewedStoreAsSubdir, boolean isSrcLocal, boolean isAcidIUDoperation, boolean hasFollowingStatsTask, Long writeId, int stmtId, boolean isInsertOverwrite)`<br>Load a directory into a Hive table partition; alters the existing content of the partition with the contents of loadPath. |
`void` | `Hive.renamePartition(Table tbl, Map<String,String> oldPartSpec, Partition newPart)`<br>Rename an old partition to a new partition. |
`void` | `Partition.setTable(Table table)`<br>Should only be used by serialization. |
Constructor and Description |
---|
`DummyPartition(Table tbl, String name)` |
`DummyPartition(Table tbl, String name, Map<String,String> partSpec)` |
`Partition(Table tbl)`<br>Create an empty partition. |
`Partition(Table tbl, Map<String,String> partSpec, org.apache.hadoop.fs.Path location)`<br>Create a partition object with the given info. |
`Partition(Table tbl, Partition tp)` |
`PartitionIterable(Hive db, Table table, Map<String,String> partialPartitionSpec, int batch_size)`<br>Primary constructor that fetches all partitions in a given table, given a Hive object, a table object, and a partial partition spec. |
Modifier and Type | Method and Description |
---|---|
`void` | `MetaDataFormatter.describeTable(DataOutputStream out, String colPath, String tableName, Table tbl, Partition part, List<FieldSchema> cols, boolean isFormatted, boolean isExt, boolean isOutputPadded, List<ColumnStatisticsObj> colStats, PrimaryKeyInfo pkInfo, ForeignKeyInfo fkInfo, UniqueConstraint ukInfo, NotNullConstraint nnInfo, DefaultConstraint dInfo, CheckConstraint cInfo, StorageHandlerInfo storageHandlerInfo)`<br>Describe table. |
`void` | `JsonMetaDataFormatter.describeTable(DataOutputStream out, String colPath, String tableName, Table tbl, Partition part, List<FieldSchema> cols, boolean isFormatted, boolean isExt, boolean isOutputPadded, List<ColumnStatisticsObj> colStats, PrimaryKeyInfo pkInfo, ForeignKeyInfo fkInfo, UniqueConstraint ukInfo, NotNullConstraint nnInfo, DefaultConstraint dInfo, CheckConstraint cInfo, StorageHandlerInfo storageHandlerInfo)`<br>Describe table. |
`static String` | `MetaDataFormatUtils.getTableInformation(Table table, boolean isOutputPadded)` |
Modifier and Type | Method and Description |
---|---|
`void` | `MetaDataFormatter.showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tbls, Map<String,String> part, Partition par)`<br>Show the table status. |
`void` | `JsonMetaDataFormatter.showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tbls, Map<String,String> part, Partition par)` |
Modifier and Type | Method and Description |
---|---|
`protected long` | `SizeBasedBigTableSelectorForAutoSMJ.getSize(HiveConf conf, Table table)` |
Modifier and Type | Method and Description |
---|---|
`Table` | `RelOptHiveTable.getHiveTableMD()` |
Constructor and Description |
---|
`RelOptHiveTable(org.apache.calcite.plan.RelOptSchema calciteSchema, String qualifiedTblName, org.apache.calcite.rel.type.RelDataType rowType, Table hiveTblMetadata, List<ColumnInfo> hiveNonPartitionCols, List<ColumnInfo> hivePartitionCols, List<VirtualColumn> hiveVirtualCols, HiveConf hconf, Map<String,PrunedPartitionList> partitionCache, Map<String,ColumnStatsList> colStatsCache, AtomicInteger noColsMissingStats)` |
Constructor and Description |
---|
`HiveRelFieldTrimmer(org.apache.calcite.sql.validate.SqlValidator validator, org.apache.calcite.tools.RelBuilder relBuilder, ColumnAccessInfo columnAccessInfo, Map<HiveProject,Table> viewToTableSchema)` |
Modifier and Type | Method and Description |
---|---|
`LinkedHashMap<String,ObjectPair<SelectOperator,Table>>` | `LineageCtx.Index.getFinalSelectOps()` |
Modifier and Type | Method and Description |
---|---|
`boolean` | `OpTraitsRulesProcFactory.TableScanRule.checkBucketedTable(Table tbl, ParseContext pGraphContext, PrunedPartitionList prunedParts)` |
Modifier and Type | Method and Description |
---|---|
`static boolean` | `PartitionPruner.onlyContainsPartnCols(Table tab, ExprNodeDesc expr)`<br>Find out whether the condition only contains partitioned columns. |
`static PrunedPartitionList` | `PartitionPruner.prune(Table tab, ExprNodeDesc prunerExpr, HiveConf conf, String alias, Map<String,PrunedPartitionList> prunedPartitionsMap)`<br>Get the partition list for the table that satisfies the partition pruner condition. |
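The `onlyContainsPartnCols` check above decides whether a predicate can be evaluated purely from partition metadata. A standalone sketch of the idea, using a toy expression node rather than Hive's real `ExprNodeDesc` (all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class PartnColsOnlyCheck {
    // Toy expression node: a leaf references a column; an inner node
    // applies an operator to its children. Hypothetical stand-in for
    // Hive's ExprNodeDesc tree.
    static class Expr {
        String column;                      // non-null only for a column leaf
        List<Expr> children = new ArrayList<>();
        Expr(String column) { this.column = column; }
    }

    // True if every column referenced anywhere in the condition is a
    // partition column, so the predicate can be pruned on metadata alone.
    static boolean onlyContainsPartnCols(Set<String> partCols, Expr e) {
        if (e.column != null && !partCols.contains(e.column)) {
            return false;
        }
        for (Expr child : e.children) {
            if (!onlyContainsPartnCols(partCols, child)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Expr and = new Expr(null);          // an AND over two column leaves
        and.children.add(new Expr("ds"));
        and.children.add(new Expr("hr"));
        System.out.println(onlyContainsPartnCols(Set.of("ds", "hr"), and)); // true
        and.children.add(new Expr("price")); // a non-partition column appears
        System.out.println(onlyContainsPartnCols(Set.of("ds", "hr"), and)); // false
    }
}
```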
Modifier and Type | Field and Description |
---|---|
`Table` | `BaseSemanticAnalyzer.TableSpec.tableHandle` |
Modifier and Type | Method and Description |
---|---|
`Table` | `QBMetaData.getDestTableForAlias(String alias)` |
`protected Table` | `SemanticAnalyzer.getDummyTable()` |
`Table` | `PrunedPartitionList.getSourceTable()` |
`Table` | `QBMetaData.getSrcForAlias(String alias)` |
`Table` | `PreInsertTableDesc.getTable()` |
`static Table` | `AnalyzeCommandUtils.getTable(ASTNode tree, BaseSemanticAnalyzer sa)` |
`protected Table` | `BaseSemanticAnalyzer.getTable(String tblName)` |
`protected Table` | `BaseSemanticAnalyzer.getTable(String[] qualified)` |
`protected Table` | `BaseSemanticAnalyzer.getTable(String[] qualified, boolean throwException)` |
`protected Table` | `BaseSemanticAnalyzer.getTable(String tblName, boolean throwException)` |
`protected Table` | `BaseSemanticAnalyzer.getTable(String database, String tblName, boolean throwException)` |
`Table` | `QBMetaData.getTableForAlias(String alias)` |
`protected Table` | `SemanticAnalyzer.getTableObjectByName(String tableName, boolean throwException)` |
`static Table` | `ImportSemanticAnalyzer.tableIfExists(ImportTableDesc tblDesc, Hive db)`<br>Utility method that returns a table if one corresponding to the destination tblDesc is found. |
Modifier and Type | Method and Description |
---|---|
`HashMap<String,Table>` | `QBMetaData.getAliasToTable()` |
`Map<String,Table>` | `QBMetaData.getNameToDestTable()` |
`Map<String,Table>` | `ParseContext.getTabNameToTabObject()` |
`Map<SelectOperator,Table>` | `ParseContext.getViewProjectToTableSchema()` |
`HashMap<String,Table>` | `QB.getViewToTabSchema()` |
Modifier and Type | Method and Description |
---|---|
`protected void` | `SemanticAnalyzer.checkAcidTxnManager(Table table)` |
`static void` | `EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path metadataPath, Table tableHandle, Iterable<Partition> partitions, ReplicationSpec replicationSpec, HiveConf hiveConf)` |
`protected Partition` | `BaseSemanticAnalyzer.getPartition(Table table, Map<String,String> partSpec, boolean throwException)` |
`protected List<Partition>` | `BaseSemanticAnalyzer.getPartitions(Table table, Map<String,String> partSpec, boolean throwException)` |
`static Map<String,String>` | `AnalyzeCommandUtils.getPartKeyValuePairsFromAST(Table tbl, ASTNode tree, HiveConf hiveConf)` |
`static HashMap<String,String>` | `DDLSemanticAnalyzer.getValidatedPartSpec(Table table, ASTNode astNode, HiveConf conf, boolean shouldBeFull)` |
`static boolean` | `DDLSemanticAnalyzer.isFullSpec(Table table, Map<String,String> partSpec)` |
`boolean` | `BaseSemanticAnalyzer.isValidPrefixSpec(Table tTable, Map<String,String> spec)`<br>Checks whether the given specification is a proper specification for a prefix of the partition columns. For a table partitioned by (ds, hr, min), valid specs are (ds='2008-04-08'), (ds='2008-04-08', hr='12'), and (ds='2008-04-08', hr='12', min='30'); (ds='2008-04-08', min='30') is invalid. |
`static String` | `SemanticAnalyzer.replaceDefaultKeywordForMerge(String valueClause, Table targetTable)` |
`void` | `QB.rewriteViewToSubq(String alias, String viewName, QBExpr qbexpr, Table tab)` |
`void` | `QBMetaData.setDestForAlias(String alias, Table tab)` |
`void` | `QBMetaData.setSrcForAlias(String alias, Table tab)` |
`static void` | `BaseSemanticAnalyzer.validatePartColumnType(Table tbl, Map<String,String> partSpec, ASTNode astNode, HiveConf conf)` |
`static void` | `BaseSemanticAnalyzer.validatePartSpec(Table tbl, Map<String,String> partSpec, ASTNode astNode, HiveConf conf, boolean shouldBeFull)` |
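The prefix rule described for `isValidPrefixSpec` above can be sketched as a small standalone check (hypothetical helper, not Hive's actual implementation), treating a partition spec as a name-to-value map:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PrefixSpecCheck {
    // True when the keys of `spec` cover exactly the first spec.size()
    // partition columns (order within the spec itself does not matter).
    static boolean isValidPrefixSpec(List<String> partCols, Map<String, String> spec) {
        if (spec.size() > partCols.size()) {
            return false;
        }
        Set<String> keys = spec.keySet();
        for (int i = 0; i < keys.size(); i++) {
            if (!keys.contains(partCols.get(i))) {
                return false;   // a gap: an earlier partition column is missing
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("ds", "hr", "min");

        Map<String, String> good = new LinkedHashMap<>();
        good.put("ds", "2008-04-08");
        good.put("hr", "12");

        Map<String, String> bad = new LinkedHashMap<>();
        bad.put("ds", "2008-04-08");
        bad.put("min", "30");      // skips hr, so not a prefix

        System.out.println(isValidPrefixSpec(cols, good)); // true
        System.out.println(isValidPrefixSpec(cols, bad));  // false
    }
}
```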
Modifier and Type | Method and Description |
---|---|
`protected static ASTNode` | `SemanticAnalyzer.rewriteASTWithMaskAndFilter(TableMask tableMask, ASTNode ast, org.antlr.runtime.TokenRewriteStream tokenRewriteStream, Context ctx, Hive db, Map<String,Table> tabNameToTabObject, Set<Integer> ignoredTokens)` |
Constructor and Description |
---|
`ColumnStatsAutoGatherContext(SemanticAnalyzer sa, HiveConf conf, Operator<? extends OperatorDesc> op, Table tbl, Map<String,String> partSpec, boolean isInsertInto, Context ctx)` |
`PreInsertTableDesc(Table table, boolean overwrite)` |
`PrunedPartitionList(Table source, Set<Partition> partitions, List<String> referred, boolean hasUnknowns)` |
`PrunedPartitionList(Table source, String key, Set<Partition> partitions, List<String> referred, boolean hasUnknowns)` |
`TableSpec(Table table)` |
`TableSpec(Table tableHandle, List<Partition> partitions)` |
Modifier and Type | Method and Description |
---|---|
`HiveWrapper.Tuple<Table>` | `HiveWrapper.table(String tableName)` |
Modifier and Type | Method and Description |
---|---|
`static Boolean` | `Utils.shouldReplicate(ReplicationSpec replicationSpec, Table tableHandle, HiveConf hiveConf)`<br>Validates whether a table can be exported; similar to EximUtil.shouldExport, with a few replication-specific checks. |
Constructor and Description |
---|
`TableSerializer(Table tableHandle, Iterable<Partition> partitions, HiveConf hiveConf)` |
Modifier and Type | Method and Description |
---|---|
`Table` | `AlterTableExchangePartition.getDestinationTable()` |
`Table` | `AlterTableExchangePartition.getSourceTable()` |
`Table` | `FileSinkDesc.getTable()` |
`Table` | `InsertCommitHookDesc.getTable()` |
`Table` | `StatsWork.getTable()` |
`Table` | `TableScanDesc.getTableMetadata()` |
`Table` | `ImportTableDesc.toTable(HiveConf conf)` |
`Table` | `CreateTableDesc.toTable(HiveConf conf)` |
`Table` | `CreateViewDesc.toTable(HiveConf conf)` |
Modifier and Type | Method and Description |
---|---|
`static ExportWork.MmContext` | `ExportWork.MmContext.createIfNeeded(Table t)` |
`static TableDesc` | `PartitionDesc.getTableDesc(Table table)` |
`void` | `AlterTableExchangePartition.setDestinationTable(Table destinationTable)` |
`void` | `AlterTableExchangePartition.setSourceTable(Table sourceTable)` |
`void` | `FileSinkDesc.setTable(Table table)` |
`void` | `AlterTableDesc.setTable(Table table)` |
`void` | `TableScanDesc.setTableMetadata(Table tableMetadata)` |
`void` | `ImportTableDesc.setViewAsReferenceText(String dbName, Table table)` |
Constructor and Description |
---|
`AlterTableExchangePartition(Table sourceTable, Table destinationTable, Map<String,String> partitionSpecs)` |
`DynamicPartitionCtx(Table tbl, Map<String,String> partSpec, String defaultPartName, int maxParts)` |
`ImportTableDesc(String dbName, Table table)` |
`InsertCommitHookDesc(Table table, boolean overwrite)` |
`StatsWork(Table table, BasicStatsWork basicStatsWork, HiveConf hconf)` |
`StatsWork(Table table, HiveConf hconf)` |
`TableScanDesc(String alias, List<VirtualColumn> vcs, Table tblMetadata)` |
`TableScanDesc(String alias, Table tblMetadata)` |
`TableScanDesc(Table tblMetadata)` |
Modifier and Type | Class and Description |
---|---|
`static class` | `AuthorizationPreEventListener.TableWrapper` |
Modifier and Type | Method and Description |
---|---|
`abstract void` | `HiveMultiPartitionAuthorizationProviderBase.authorize(Table table, Iterable<Partition> partitions, Privilege[] requiredReadPrivileges, Privilege[] requiredWritePrivileges)`<br>Authorization method for partition sets. |
`void` | `BitSetCheckedAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] inputRequiredPriv, Privilege[] outputRequiredPriv)` |
`void` | `MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)` |
`void` | `StorageBasedAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)` |
`void` | `HiveAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)`<br>Authorization privileges against a list of columns. |
`void` | `BitSetCheckedAuthorizationProvider.authorize(Table table, Privilege[] inputRequiredPriv, Privilege[] outputRequiredPriv)` |
`void` | `MetaStoreAuthzAPIAuthorizerEmbedOnly.authorize(Table table, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)` |
`void` | `StorageBasedAuthorizationProvider.authorize(Table table, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)` |
`void` | `HiveAuthorizationProvider.authorize(Table table, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv)`<br>Authorization privileges against a Hive table object. |
Constructor and Description |
---|
`PartitionWrapper(Table table, Partition mapiPart)` |
Modifier and Type | Method and Description |
---|---|
`Map<String,Map<String,Table>>` | `SessionState.getTempTables()` |
Modifier and Type | Method and Description |
---|---|
`abstract Table` | `Partish.getTable()` |
Modifier and Type | Method and Description |
---|---|
`static boolean` | `StatsUtils.areBasicStatsUptoDateForQueryAnswering(Table table, Map<String,String> params)`<br>Checks whether the basic stats for the table are up to date for query planning. |
`static boolean` | `StatsUtils.areColumnStatsUptoDateForQueryAnswering(Table table, Map<String,String> params, String colName)`<br>Checks whether the column stats for the table are up to date for query planning. |
`static Partish` | `Partish.buildFor(Table table)` |
`static Partish` | `Partish.buildFor(Table table, Partition part)` |
`static Statistics` | `StatsUtils.collectStatistics(HiveConf conf, PrunedPartitionList partList, ColumnStatsList colStatsCache, Table table, TableScanOperator tableScanOperator)`<br>Collect table, partition, and column level statistics. |
`static Statistics` | `StatsUtils.collectStatistics(HiveConf conf, PrunedPartitionList partList, Table table, List<ColumnInfo> schema, List<String> neededColumns, ColumnStatsList colStatsCache, List<String> referencedColumns, boolean fetchColStats)` |
`static List<Long>` | `StatsUtils.getBasicStatForPartitions(Table table, List<Partition> parts, String statType)`<br>Get basic stats of partitions. |
`static long` | `StatsUtils.getBasicStatForTable(Table table, String statType)`<br>Get basic stats of a table. |
`static long` | `StatsUtils.getFileSizeForTable(HiveConf conf, Table table)`<br>Find the bytes on disk occupied by a table. |
`static long` | `StatsUtils.getNumRows(HiveConf conf, List<ColumnInfo> schema, Table table, PrunedPartitionList partitionList, AtomicInteger noColsMissingStats)`<br>Returns the number of rows if it exists. |
`static long` | `StatsUtils.getNumRows(Table table)`<br>Get the number of rows of a given table. |
`static long` | `StatsUtils.getRawDataSize(Table table)`<br>Get the raw data size of a given table. |
`static List<ColStatistics>` | `StatsUtils.getTableColumnStats(Table table, List<ColumnInfo> schema, List<String> neededColumns, ColumnStatsList colStatsCache)`<br>Get table-level column statistics from the metastore for the needed columns. |
`static long` | `StatsUtils.getTotalSize(Table table)`<br>Get the total size of a given table. |
`int` | `ColStatsProcessor.persistColumnStats(Hive db, Table tbl)` |
`int` | `BasicStatsTask.process(Hive db, Table tbl)` |
`int` | `BasicStatsNoJobTask.process(Hive db, Table tbl)` |
`int` | `IStatsProcessor.process(Hive db, Table tbl)` |
`int` | `ColStatsProcessor.process(Hive db, Table tbl)` |
Modifier and Type | Method and Description |
---|---|
`protected void` | `HCatSemanticAnalyzerBase.authorize(Table table, Privilege priv)` |
Modifier and Type | Method and Description |
---|---|
`static Table` | `HCatUtil.getTable(IMetaStoreClient client, String dbName, String tableName)` |
Modifier and Type | Method and Description |
---|---|
`static HCatSchema` | `HCatUtil.extractSchema(Table table)` |
`static HCatSchema` | `HCatUtil.getPartitionColumns(Table table)`<br>Return the partition columns from a table instance. |
`static HCatSchema` | `HCatUtil.getTableSchemaWithPtnCols(Table table)` |
`static List<FieldSchema>` | `HCatUtil.validatePartitionSchema(Table table, HCatSchema partitionSchema)`<br>Validate the partition schema; checks whether the column types match between the partition and the existing table schema. |
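The validation rule described for `validatePartitionSchema` can be illustrated with a self-contained sketch (a toy version under assumed name-to-type maps, not HCatalog's actual implementation, which works on `FieldSchema` objects):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionSchemaCheck {
    // Throws if a partition column redefines a table column with a different
    // type; columns the table does not know about are allowed through.
    static void validatePartitionSchema(Map<String, String> tableSchema,
                                        Map<String, String> partitionSchema) {
        for (Map.Entry<String, String> p : partitionSchema.entrySet()) {
            String tableType = tableSchema.get(p.getKey());
            if (tableType != null && !tableType.equalsIgnoreCase(p.getValue())) {
                throw new IllegalArgumentException("Type mismatch for column '"
                    + p.getKey() + "': table has " + tableType
                    + ", partition has " + p.getValue());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> table = new LinkedHashMap<>();
        table.put("id", "int");
        table.put("ds", "string");

        Map<String, String> part = new LinkedHashMap<>();
        part.put("ds", "string");
        validatePartitionSchema(table, part);   // matching types: passes silently

        part.put("id", "bigint");               // conflicting type for "id"
        try {
            validatePartitionSchema(table, part);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```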
Modifier and Type | Field and Description |
---|---|
`protected Table` | `AbstractRecordWriter.table` |
Modifier and Type | Method and Description |
---|---|
`Table` | `HiveStreamingConnection.getTable()` |
`Table` | `ConnectionInfo.getTable()`<br>Get the table used by the streaming connection. |
Copyright © 2022 The Apache Software Foundation. All rights reserved.