PREHOOK: query: DROP TABLE Employee_Part
PREHOOK: type: DROPTABLE
POSTHOOK: query: DROP TABLE Employee_Part
POSTHOOK: type: DROPTABLE
PREHOOK: query: CREATE TABLE Employee_Part(employeeID int, employeeName String) partitioned by (employeeSalary double) row format delimited fields terminated by '|' stored as textfile
PREHOOK: type: CREATETABLE
POSTHOOK: query: CREATE TABLE Employee_Part(employeeID int, employeeName String) partitioned by (employeeSalary double) row format delimited fields terminated by '|' stored as textfile
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: default@Employee_Part
PREHOOK: query: LOAD DATA LOCAL INPATH "../data/files/employee.dat" INTO TABLE Employee_Part partition(employeeSalary=2000.0)
PREHOOK: type: LOAD
PREHOOK: Output: default@employee_part
POSTHOOK: query: LOAD DATA LOCAL INPATH "../data/files/employee.dat" INTO TABLE Employee_Part partition(employeeSalary=2000.0)
POSTHOOK: type: LOAD
POSTHOOK: Output: default@employee_part
POSTHOOK: Output: default@employee_part@employeesalary=2000.0
PREHOOK: query: LOAD DATA LOCAL INPATH "../data/files/employee.dat" INTO TABLE Employee_Part partition(employeeSalary=4000.0)
PREHOOK: type: LOAD
PREHOOK: Output: default@employee_part
POSTHOOK: query: LOAD DATA LOCAL INPATH "../data/files/employee.dat" INTO TABLE Employee_Part partition(employeeSalary=4000.0)
POSTHOOK: type: LOAD
POSTHOOK: Output: default@employee_part
POSTHOOK: Output: default@employee_part@employeesalary=4000.0
PREHOOK: query: explain analyze table Employee_Part partition (employeeSalary=2000.0) compute statistics for columns employeeID
PREHOOK: type: QUERY
POSTHOOK: query: explain analyze table Employee_Part partition (employeeSalary=2000.0) compute statistics for columns employeeID
POSTHOOK: type: QUERY
ABSTRACT SYNTAX TREE:
  (TOK_ANALYZE (TOK_TAB (TOK_TABNAME Employee_Part) (TOK_PARTSPEC (TOK_PARTVAL employeeSalary 2000.0))) (TOK_TABCOLNAME employeeID))

STAGE DEPENDENCIES:
  Stage-0 is a root stage
  Stage-1 is a root stage

STAGE PLANS:
  Stage: Stage-0
    Map Reduce
      Alias -> Map Operator Tree:
        employee_part
          TableScan
            alias: employee_part
            Select Operator
              expressions:
                    expr: employeeid
                    type: int
              outputColumnNames: employeeid
              Group By Operator
                aggregations:
                      expr: compute_stats(employeeid, 16)
                bucketGroup: false
                mode: hash
                outputColumnNames: _col0
                Reduce Output Operator
                  sort order:
                  tag: -1
                  value expressions:
                        expr: _col0
                        type: struct
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: compute_stats(VALUE._col0)
          bucketGroup: false
          mode: mergepartial
          outputColumnNames: _col0
          Select Operator
            expressions:
                  expr: _col0
                  type: struct
            outputColumnNames: _col0
            File Output Operator
              compressed: false
              GlobalTableId: 0
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

  Stage: Stage-1
    Column Stats Work
      Column Stats Desc:
          Columns: employeeID
          Column Types: int
          Partition: employeesalary=2000.0
          Table: Employee_Part
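Stage-0 above is a single MapReduce job: map tasks scan only the employeeSalary=2000.0 partition and evaluate the compute_stats aggregate in hash mode, and the reduce side merges the partials (mode: mergepartial) into one statistics struct that Stage-1 persists to the metastore. As a rough illustration only, this is the same aggregate as the hand-written query below; a minimal sketch, assuming compute_stats can be invoked as an ordinary aggregate function (its second argument, 16, is the number of bit vectors used for the number-of-distinct-values estimate):

SELECT compute_stats(employeeID, 16)  -- same UDAF the plan's Group By Operator invokes
FROM Employee_Part
WHERE employeeSalary = 2000.0;        -- restrict to the analyzed partition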
PREHOOK: query: explain extended analyze table Employee_Part partition (employeeSalary=2000.0) compute statistics for columns employeeID
PREHOOK: type: QUERY
POSTHOOK: query: explain extended analyze table Employee_Part partition (employeeSalary=2000.0) compute statistics for columns employeeID
POSTHOOK: type: QUERY
ABSTRACT SYNTAX TREE:
  (TOK_ANALYZE (TOK_TAB (TOK_TABNAME Employee_Part) (TOK_PARTSPEC (TOK_PARTVAL employeeSalary 2000.0))) (TOK_TABCOLNAME employeeID))

STAGE DEPENDENCIES:
  Stage-0 is a root stage
  Stage-1 is a root stage

STAGE PLANS:
  Stage: Stage-0
    Map Reduce
      Alias -> Map Operator Tree:
        employee_part
          TableScan
            alias: employee_part
            GatherStats: false
            Select Operator
              expressions:
                    expr: employeeid
                    type: int
              outputColumnNames: employeeid
              Group By Operator
                aggregations:
                      expr: compute_stats(employeeid, 16)
                bucketGroup: false
                mode: hash
                outputColumnNames: _col0
                Reduce Output Operator
                  sort order:
                  tag: -1
                  value expressions:
                        expr: _col0
                        type: struct
      Needs Tagging: false
      Path -> Alias:
#### A masked pattern was here ####
      Path -> Partition:
#### A masked pattern was here ####
          Partition
            base file name: employeesalary=2000.0
            input format: org.apache.hadoop.mapred.TextInputFormat
            output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
            partition values:
              employeesalary 2000.0
            properties:
              bucket_count -1
              columns employeeid,employeename
              columns.types int:string
              field.delim |
#### A masked pattern was here ####
              name default.employee_part
              numFiles 1
              numPartitions 2
              numRows 0
              partition_columns employeesalary
              rawDataSize 0
              serialization.ddl struct employee_part { i32 employeeid, string employeename}
              serialization.format |
              serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              totalSize 105
#### A masked pattern was here ####
            serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

              input format: org.apache.hadoop.mapred.TextInputFormat
              output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
              properties:
                bucket_count -1
                columns employeeid,employeename
                columns.types int:string
                field.delim |
#### A masked pattern was here ####
                name default.employee_part
                numFiles 2
                numPartitions 2
                numRows 0
                partition_columns employeesalary
                rawDataSize 0
                serialization.ddl struct employee_part { i32 employeeid, string employeename}
                serialization.format |
                serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                totalSize 210
#### A masked pattern was here ####
              serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              name: default.employee_part
            name: default.employee_part
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: compute_stats(VALUE._col0)
          bucketGroup: false
          mode: mergepartial
          outputColumnNames: _col0
          Select Operator
            expressions:
                  expr: _col0
                  type: struct
            outputColumnNames: _col0
            File Output Operator
              compressed: false
              GlobalTableId: 0
#### A masked pattern was here ####
              NumFilesPerFileSink: 1
#### A masked pattern was here ####
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  properties:
                    columns _col0
                    columns.types struct
                    escape.delim \
                    serialization.format 1
              TotalFiles: 1
              GatherStats: false
              MultiFileSpray: false
      Truncated Path -> Alias:
        /employee_part/employeesalary=2000.0 [employee_part]

  Stage: Stage-1
    Column Stats Work
      Column Stats Desc:
          Columns: employeeID
          Column Types: int
          Partition: employeesalary=2000.0
          Table: Employee_Part
          Is Table Level Stats: false

PREHOOK: query: analyze table Employee_Part partition (employeeSalary=2000.0) compute statistics for columns employeeID
PREHOOK: type: QUERY
PREHOOK: Input: default@employee_part@employeesalary=2000.0
#### A masked pattern was here ####
POSTHOOK: query: analyze table Employee_Part partition (employeeSalary=2000.0) compute statistics for columns employeeID
POSTHOOK: type: QUERY
POSTHOOK: Input: default@employee_part@employeesalary=2000.0
#### A masked pattern was here ####
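With the ANALYZE above executed, the column statistics for employeeID are stored against the employeesalary=2000.0 partition. A hedged verification sketch: in Hive releases that support a column-level DESCRIBE with a PARTITION clause (an assumption about later syntax; older releases expose the stats only through the metastore), the stored values can be inspected with:

DESCRIBE FORMATTED Employee_Part PARTITION (employeeSalary=2000.0) employeeID;  -- assumes a release with partition-level column DESCRIBE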
PREHOOK: query: explain analyze table Employee_Part partition (employeeSalary=4000.0) compute statistics for columns employeeID
PREHOOK: type: QUERY
POSTHOOK: query: explain analyze table Employee_Part partition (employeeSalary=4000.0) compute statistics for columns employeeID
POSTHOOK: type: QUERY
ABSTRACT SYNTAX TREE:
  (TOK_ANALYZE (TOK_TAB (TOK_TABNAME Employee_Part) (TOK_PARTSPEC (TOK_PARTVAL employeeSalary 4000.0))) (TOK_TABCOLNAME employeeID))

STAGE DEPENDENCIES:
  Stage-0 is a root stage
  Stage-1 is a root stage

STAGE PLANS:
  Stage: Stage-0
    Map Reduce
      Alias -> Map Operator Tree:
        employee_part
          TableScan
            alias: employee_part
            Select Operator
              expressions:
                    expr: employeeid
                    type: int
              outputColumnNames: employeeid
              Group By Operator
                aggregations:
                      expr: compute_stats(employeeid, 16)
                bucketGroup: false
                mode: hash
                outputColumnNames: _col0
                Reduce Output Operator
                  sort order:
                  tag: -1
                  value expressions:
                        expr: _col0
                        type: struct
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: compute_stats(VALUE._col0)
          bucketGroup: false
          mode: mergepartial
          outputColumnNames: _col0
          Select Operator
            expressions:
                  expr: _col0
                  type: struct
            outputColumnNames: _col0
            File Output Operator
              compressed: false
              GlobalTableId: 0
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

  Stage: Stage-1
    Column Stats Work
      Column Stats Desc:
          Columns: employeeID
          Column Types: int
          Partition: employeesalary=4000.0
          Table: Employee_Part

PREHOOK: query: explain extended analyze table Employee_Part partition (employeeSalary=4000.0) compute statistics for columns employeeID
PREHOOK: type: QUERY
POSTHOOK: query: explain extended analyze table Employee_Part partition (employeeSalary=4000.0) compute statistics for columns employeeID
POSTHOOK: type: QUERY
ABSTRACT SYNTAX TREE:
  (TOK_ANALYZE (TOK_TAB (TOK_TABNAME Employee_Part) (TOK_PARTSPEC (TOK_PARTVAL employeeSalary 4000.0))) (TOK_TABCOLNAME employeeID))

STAGE DEPENDENCIES:
  Stage-0 is a root stage
  Stage-1 is a root stage

STAGE PLANS:
  Stage: Stage-0
    Map Reduce
      Alias -> Map Operator Tree:
        employee_part
          TableScan
            alias: employee_part
            GatherStats: false
            Select Operator
              expressions:
                    expr: employeeid
                    type: int
              outputColumnNames: employeeid
              Group By Operator
                aggregations:
                      expr: compute_stats(employeeid, 16)
                bucketGroup: false
                mode: hash
                outputColumnNames: _col0
                Reduce Output Operator
                  sort order:
                  tag: -1
                  value expressions:
                        expr: _col0
                        type: struct
      Needs Tagging: false
      Path -> Alias:
#### A masked pattern was here ####
      Path -> Partition:
#### A masked pattern was here ####
          Partition
            base file name: employeesalary=4000.0
            input format: org.apache.hadoop.mapred.TextInputFormat
            output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
            partition values:
              employeesalary 4000.0
            properties:
              bucket_count -1
              columns employeeid,employeename
              columns.types int:string
              field.delim |
#### A masked pattern was here ####
              name default.employee_part
              numFiles 1
              numPartitions 2
              numRows 0
              partition_columns employeesalary
              rawDataSize 0
              serialization.ddl struct employee_part { i32 employeeid, string employeename}
              serialization.format |
              serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              totalSize 105
#### A masked pattern was here ####
            serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

              input format: org.apache.hadoop.mapred.TextInputFormat
              output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
              properties:
                bucket_count -1
                columns employeeid,employeename
                columns.types int:string
                field.delim |
#### A masked pattern was here ####
                name default.employee_part
                numFiles 2
                numPartitions 2
                numRows 0
                partition_columns employeesalary
                rawDataSize 0
                serialization.ddl struct employee_part { i32 employeeid, string employeename}
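"Is Table Level Stats: false" in Stage-1 records that the statistics are written at partition rather than table granularity. A sketch of where they land, assuming a standard metastore backing schema in which partition-level column statistics live in the PART_COL_STATS table (the table and column names here are assumptions about the metastore RDBMS schema, and this query runs against that database, not Hive):

-- run against the metastore database, not HiveQL
SELECT PARTITION_NAME, COLUMN_NAME, NUM_NULLS, NUM_DISTINCTS
FROM PART_COL_STATS
WHERE TABLE_NAME = 'employee_part';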
                serialization.format |
                serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
                totalSize 210
#### A masked pattern was here ####
              serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
              name: default.employee_part
            name: default.employee_part
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: compute_stats(VALUE._col0)
          bucketGroup: false
          mode: mergepartial
          outputColumnNames: _col0
          Select Operator
            expressions:
                  expr: _col0
                  type: struct
            outputColumnNames: _col0
            File Output Operator
              compressed: false
              GlobalTableId: 0
#### A masked pattern was here ####
              NumFilesPerFileSink: 1
#### A masked pattern was here ####
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  properties:
                    columns _col0
                    columns.types struct
                    escape.delim \
                    serialization.format 1
              TotalFiles: 1
              GatherStats: false
              MultiFileSpray: false
      Truncated Path -> Alias:
        /employee_part/employeesalary=4000.0 [employee_part]

  Stage: Stage-1
    Column Stats Work
      Column Stats Desc:
          Columns: employeeID
          Column Types: int
          Partition: employeesalary=4000.0
          Table: Employee_Part
          Is Table Level Stats: false

PREHOOK: query: analyze table Employee_Part partition (employeeSalary=4000.0) compute statistics for columns employeeID
PREHOOK: type: QUERY
PREHOOK: Input: default@employee_part@employeesalary=4000.0
#### A masked pattern was here ####
POSTHOOK: query: analyze table Employee_Part partition (employeeSalary=4000.0) compute statistics for columns employeeID
POSTHOOK: type: QUERY
POSTHOOK: Input: default@employee_part@employeesalary=4000.0
#### A masked pattern was here ####
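Each partition above required its own fully specified ANALYZE statement. As a hedged alternative sketch, newer Hive releases accept a partial partition specification and compute column statistics for every matching partition in one pass (an assumption about later syntax; the release exercised by this test requires one statement per partition, as shown above):

ANALYZE TABLE Employee_Part PARTITION (employeeSalary)  -- partial spec: all partitions (assumed later-release syntax)
COMPUTE STATISTICS FOR COLUMNS employeeID;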