PREHOOK: query: -- This is to test the union->selectstar->filesink optimization
-- Union of 2 subqueries is performed (one of which is a mapred query, and the
-- other one is a map-join query), followed by select star and a file sink.
-- The union selectstar optimization should be performed, and the union should be removed.

-- INCLUDE_HADOOP_MAJOR_VERSIONS(0.23)
-- Since this test creates sub-directories for the output table outputTbl1, it might be easier
-- to run the test only on hadoop 23

-- The final file format is different from the input and intermediate file format.
-- It does not matter whether the output is merged or not. In this case, merging is turned
-- on

create table inputTbl1(key string, val string) stored as textfile
PREHOOK: type: CREATETABLE
POSTHOOK: query: -- This is to test the union->selectstar->filesink optimization
-- Union of 2 subqueries is performed (one of which is a mapred query, and the
-- other one is a map-join query), followed by select star and a file sink.
-- The union selectstar optimization should be performed, and the union should be removed.

-- INCLUDE_HADOOP_MAJOR_VERSIONS(0.23)
-- Since this test creates sub-directories for the output table outputTbl1, it might be easier
-- to run the test only on hadoop 23

-- The final file format is different from the input and intermediate file format.
-- It does not matter whether the output is merged or not. In this case, merging is turned
-- on

create table inputTbl1(key string, val string) stored as textfile
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: default@inputTbl1
PREHOOK: query: create table outputTbl1(key string, values bigint) stored as rcfile
PREHOOK: type: CREATETABLE
POSTHOOK: query: create table outputTbl1(key string, values bigint) stored as rcfile
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: default@outputTbl1
PREHOOK: query: load data local inpath '../data/files/T1.txt' into table inputTbl1
PREHOOK: type: LOAD
PREHOOK: Output: default@inputtbl1
POSTHOOK: query: load data local inpath '../data/files/T1.txt' into table inputTbl1
POSTHOOK: type: LOAD
POSTHOOK: Output: default@inputtbl1
PREHOOK: query: explain
insert overwrite table outputTbl1
SELECT * FROM (
select key, count(1) as values from inputTbl1 group by key
union all
select /*+ mapjoin(a) */ a.key as key, b.val as values FROM inputTbl1 a join inputTbl1 b on a.key=b.key
)c
PREHOOK: type: QUERY
POSTHOOK: query: explain
insert overwrite table outputTbl1
SELECT * FROM (
select key, count(1) as values from inputTbl1 group by key
union all
select /*+ mapjoin(a) */ a.key as key, b.val as values FROM inputTbl1 a join inputTbl1 b on a.key=b.key
)c
POSTHOOK: type: QUERY
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_UNION (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME inputTbl1))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (TOK_TABLE_OR_COL key)) (TOK_SELEXPR (TOK_FUNCTION count 1) values)) (TOK_GROUPBY (TOK_TABLE_OR_COL key)))) (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF (TOK_TABNAME inputTbl1) a) (TOK_TABREF (TOK_TABNAME inputTbl1) b) (= (. (TOK_TABLE_OR_COL a) key) (. (TOK_TABLE_OR_COL b) key)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_HINTLIST (TOK_HINT TOK_MAPJOIN (TOK_HINTARGLIST a))) (TOK_SELEXPR (. (TOK_TABLE_OR_COL a) key) key) (TOK_SELEXPR (. (TOK_TABLE_OR_COL b) val) values))))) c)) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME outputTbl1))) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF))))

STAGE DEPENDENCIES:
  Stage-9 is a root stage
  Stage-7 depends on stages: Stage-2, Stage-9 , consists of Stage-4, Stage-3, Stage-5
  Stage-4
  Stage-0 depends on stages: Stage-4, Stage-3, Stage-6
  Stage-3
  Stage-5
  Stage-6 depends on stages: Stage-5
  Stage-10 is a root stage
  Stage-1 depends on stages: Stage-10
  Stage-2 depends on stages: Stage-1

STAGE PLANS:
  Stage: Stage-9
    Map Reduce
      Alias -> Map Operator Tree:
        null-subquery1:c-subquery1:inputtbl1 
          TableScan
            alias: inputtbl1
            Select Operator
              expressions:
                    expr: key
                    type: string
              outputColumnNames: key
              Group By Operator
                aggregations:
                      expr: count(1)
                bucketGroup: false
                keys:
                      expr: key
                      type: string
                mode: hash
                outputColumnNames: _col0, _col1
                Reduce Output Operator
                  key expressions:
                        expr: _col0
                        type: string
                  sort order: +
                  Map-reduce partition columns:
                        expr: _col0
                        type: string
                  tag: -1
                  value expressions:
                        expr: _col1
                        type: bigint
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: count(VALUE._col0)
          bucketGroup: false
          keys:
                expr: KEY._col0
                type: string
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Select Operator
            expressions:
                  expr: _col0
                  type: string
                  expr: _col1
                  type: bigint
            outputColumnNames: _col0, _col1
            Select Operator
              expressions:
                    expr: _col0
                    type: string
                    expr: UDFToString(_col1)
                    type: string
              outputColumnNames: _col0, _col1
              Select Operator
                expressions:
                      expr: _col0
                      type: string
                      expr: _col1
                      type: string
                outputColumnNames: _col0, _col1
                Select Operator
                  expressions:
                        expr: _col0
                        type: string
                        expr: UDFToLong(_col1)
                        type: bigint
                  outputColumnNames: _col0, _col1
                  File Output Operator
                    compressed: false
                    GlobalTableId: 1
                    table:
                        input format: org.apache.hadoop.hive.ql.io.RCFileInputFormat
                        output format: org.apache.hadoop.hive.ql.io.RCFileOutputFormat
                        serde: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
                        name: default.outputtbl1

  Stage: Stage-7
    Conditional Operator

  Stage: Stage-4
    Move Operator
      files:
          hdfs directory: true
#### A masked pattern was here ####

  Stage: Stage-0
    Move Operator
      tables:
          replace: true
          table:
              input format: org.apache.hadoop.hive.ql.io.RCFileInputFormat
              output format: org.apache.hadoop.hive.ql.io.RCFileOutputFormat
              serde: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
              name: default.outputtbl1

  Stage: Stage-3
    Block level merge

  Stage: Stage-5
    Block level merge

  Stage: Stage-6
    Move Operator
      files:
          hdfs directory: true
#### A masked pattern was here ####

  Stage: Stage-10
    Map Reduce Local Work
      Alias -> Map Local Tables:
        null-subquery2:c-subquery2:a 
          Fetch Operator
            limit: -1
      Alias -> Map Local Operator Tree:
        null-subquery2:c-subquery2:a 
          TableScan
            alias: a
            HashTable Sink Operator
              condition expressions:
                0 {key}
                1 {val}
              handleSkewJoin: false
              keys:
                0 [Column[key]]
                1 [Column[key]]
              Position of Big Table: 1

  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        null-subquery2:c-subquery2:b 
          TableScan
            alias: b
            Map Join Operator
              condition map:
                   Inner Join 0 to 1
              condition expressions:
                0 {key}
                1 {val}
              handleSkewJoin: false
              keys:
                0 [Column[key]]
                1 [Column[key]]
              outputColumnNames: _col0, _col5
              Position of Big Table: 1
              File Output Operator
                compressed: false
                GlobalTableId: 0
                table:
                    input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                    output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
      Local Work:
        Map Reduce Local Work

  Stage: Stage-2
    Map Reduce
      Alias -> Map Operator Tree:
#### A masked pattern was here ####
          Select Operator
            expressions:
                  expr: _col0
                  type: string
                  expr: _col5
                  type: string
            outputColumnNames: _col0, _col5
            Select Operator
              expressions:
                    expr: _col0
                    type: string
                    expr: _col5
                    type: string
              outputColumnNames: _col0, _col1
              Select Operator
                expressions:
                      expr: _col0
                      type: string
                      expr: _col1
                      type: string
                outputColumnNames: _col0, _col1
                Select Operator
                  expressions:
                        expr: _col0
                        type: string
                        expr: UDFToLong(_col1)
                        type: bigint
                  outputColumnNames: _col0, _col1
                  File Output Operator
                    compressed: false
                    GlobalTableId: 1
                    table:
                        input format: org.apache.hadoop.hive.ql.io.RCFileInputFormat
                        output format: org.apache.hadoop.hive.ql.io.RCFileOutputFormat
                        serde: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
                        name: default.outputtbl1

PREHOOK: query: insert overwrite table outputTbl1
SELECT * FROM (
select key, count(1) as values from inputTbl1 group by key
union all
select /*+ mapjoin(a) */ a.key as key, b.val as values FROM inputTbl1 a join inputTbl1 b on a.key=b.key
)c
PREHOOK: type: QUERY
PREHOOK: Input: default@inputtbl1
PREHOOK: Output: default@outputtbl1
POSTHOOK: query: insert overwrite table outputTbl1
SELECT * FROM (
select key, count(1) as values from inputTbl1 group by key
union all
select /*+ mapjoin(a) */ a.key as key, b.val as values FROM inputTbl1 a join inputTbl1 b on a.key=b.key
)c
POSTHOOK: type: QUERY
POSTHOOK: Input: default@inputtbl1
POSTHOOK: Output: default@outputtbl1
POSTHOOK: Lineage: outputtbl1.key EXPRESSION [(inputtbl1)a.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1.values EXPRESSION [(inputtbl1)b.FieldSchema(name:val, type:string, comment:null), (inputtbl1)inputtbl1.null, ]
PREHOOK: query: desc formatted outputTbl1
PREHOOK: type: DESCTABLE
POSTHOOK: query: desc formatted outputTbl1
POSTHOOK: type: DESCTABLE
POSTHOOK: Lineage: outputtbl1.key EXPRESSION [(inputtbl1)a.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1.values EXPRESSION [(inputtbl1)b.FieldSchema(name:val, type:string, comment:null), (inputtbl1)inputtbl1.null, ]
# col_name            	data_type           	comment             

key                 	string              	None                
values              	bigint              	None                

# Detailed Table Information
Database:           	default             
#### A masked pattern was here ####
Protect Mode:       	None                
Retention:          	0                   
#### A masked pattern was here ####
Table Type:         	MANAGED_TABLE       
Table Parameters:
#### A masked pattern was here ####

# Storage Information
SerDe Library:      	org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
InputFormat:        	org.apache.hadoop.hive.ql.io.RCFileInputFormat
OutputFormat:       	org.apache.hadoop.hive.ql.io.RCFileOutputFormat
Compressed:         	No
Num Buckets:        	-1
Bucket Columns:     	[]
Sort Columns:       	[]
Storage Desc Params:
	serialization.format	1
PREHOOK: query: select * from outputTbl1 order by key, values
PREHOOK: type: QUERY
PREHOOK: Input: default@outputtbl1
#### A masked pattern was here ####
POSTHOOK: query: select * from outputTbl1 order by key, values
POSTHOOK: type: QUERY
POSTHOOK: Input: default@outputtbl1
#### A masked pattern was here ####
POSTHOOK: Lineage: outputtbl1.key EXPRESSION [(inputtbl1)a.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1.values EXPRESSION [(inputtbl1)b.FieldSchema(name:val, type:string, comment:null), (inputtbl1)inputtbl1.null, ]
1	1
1	11
2	1
2	12
3	1
3	13
7	1
7	17
8	2
8	18
8	18
8	28
8	28
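
-- Editor's note: the .q file that drives this golden output is not shown above. For
-- orientation, the following is a minimal, hypothetical sketch of the session settings
-- such a union-remove test would typically enable before the queries run; the exact flag
-- list is an assumption inferred from the header comments (union removal performed,
-- merging turned on, sub-directories created for outputTbl1), not a capture from this test.
set hive.stats.autogather=false;               -- keep stats tasks from adding extra stages
set hive.optimize.union.remove=true;           -- enable the union->selectstar->filesink optimization
set hive.mapred.supports.subdirectories=true;  -- each union branch writes its own sub-directory
set mapred.input.dir.recursive=true;           -- let later reads descend into those sub-directories
set hive.merge.mapfiles=true;                  -- merging is on, per the test header
set hive.merge.mapredfiles=true;
-- Read against the plan above: Stage-9 (the group-by branch) and Stage-2 (the map-join
-- branch) each end in their own File Output Operator writing RCFile directly into
-- default.outputtbl1, which is the union removal the header describes, and the
-- "Block level merge" stages (Stage-3/Stage-5) appear because merging is enabled.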