PREHOOK: query: -- This is to test the union->selectstar->filesink optimization
-- Union of 2 map-reduce subqueries is performed followed by select star and a file sink
-- and the results are written to a table using dynamic partitions.
-- There is no need to write the temporary results of the sub-queries, and then read them
-- again to process the union. The union can be removed completely.
-- It does not matter, whether the output is merged or not. In this case, merging is turned
-- on
-- This test demonstrates that this optimization works in the presence of dynamic partitions.
-- INCLUDE_HADOOP_MAJOR_VERSIONS(0.23)
-- Since this test creates sub-directories for the output table outputTbl1, it might be easier
-- to run the test only on hadoop 23
create table inputTbl1(key string, val string) stored as textfile
PREHOOK: type: CREATETABLE
POSTHOOK: query: -- This is to test the union->selectstar->filesink optimization
-- Union of 2 map-reduce subqueries is performed followed by select star and a file sink
-- and the results are written to a table using dynamic partitions.
-- There is no need to write the temporary results of the sub-queries, and then read them
-- again to process the union. The union can be removed completely.
-- It does not matter, whether the output is merged or not. In this case, merging is turned
-- on
-- This test demonstrates that this optimization works in the presence of dynamic partitions.
-- INCLUDE_HADOOP_MAJOR_VERSIONS(0.23)
-- Since this test creates sub-directories for the output table outputTbl1, it might be easier
-- to run the test only on hadoop 23
create table inputTbl1(key string, val string) stored as textfile
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: default@inputTbl1
PREHOOK: query: create table outputTbl1(key string, values bigint) partitioned by (ds string) stored as rcfile
PREHOOK: type: CREATETABLE
POSTHOOK: query: create table outputTbl1(key string, values bigint) partitioned by (ds string) stored as rcfile
POSTHOOK: type: CREATETABLE
POSTHOOK: Output: default@outputTbl1
PREHOOK: query: load data local inpath '../data/files/T1.txt' into table inputTbl1
PREHOOK: type: LOAD
PREHOOK: Output: default@inputtbl1
POSTHOOK: query: load data local inpath '../data/files/T1.txt' into table inputTbl1
POSTHOOK: type: LOAD
POSTHOOK: Output: default@inputtbl1
PREHOOK: query: explain
insert overwrite table outputTbl1 partition (ds)
SELECT *
FROM (
  SELECT key, count(1) as values, '1' as ds from inputTbl1 group by key
  UNION ALL
  SELECT key, count(1) as values, '2' as ds from inputTbl1 group by key
) a
PREHOOK: type: QUERY
POSTHOOK: query: explain
insert overwrite table outputTbl1 partition (ds)
SELECT *
FROM (
  SELECT key, count(1) as values, '1' as ds from inputTbl1 group by key
  UNION ALL
  SELECT key, count(1) as values, '2' as ds from inputTbl1 group by key
) a
POSTHOOK: type: QUERY
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_SUBQUERY (TOK_UNION (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME inputTbl1))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (TOK_TABLE_OR_COL key)) (TOK_SELEXPR (TOK_FUNCTION count 1) values) (TOK_SELEXPR '1' ds)) (TOK_GROUPBY (TOK_TABLE_OR_COL key)))) (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME inputTbl1))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (TOK_TABLE_OR_COL key)) (TOK_SELEXPR (TOK_FUNCTION count 1) values) (TOK_SELEXPR '2' ds)) (TOK_GROUPBY (TOK_TABLE_OR_COL key))))) a)) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME outputTbl1) (TOK_PARTSPEC (TOK_PARTVAL ds)))) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF))))

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-6 depends on stages: Stage-1, Stage-7 , consists of Stage-3, Stage-2, Stage-4
  Stage-3
  Stage-0 depends on stages: Stage-3, Stage-2, Stage-5
  Stage-2
  Stage-4
  Stage-5 depends on stages: Stage-4
  Stage-7 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        null-subquery2:a-subquery2:inputtbl1
          TableScan
            alias: inputtbl1
            Select Operator
              expressions:
                    expr: key
                    type: string
              outputColumnNames: key
              Group By Operator
                aggregations:
                      expr: count(1)
                bucketGroup: false
                keys:
                      expr: key
                      type: string
                mode: hash
                outputColumnNames: _col0, _col1
                Reduce Output Operator
                  key expressions:
                        expr: _col0
                        type: string
                  sort order: +
                  Map-reduce partition columns:
                        expr: _col0
                        type: string
                  tag: -1
                  value expressions:
                        expr: _col1
                        type: bigint
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: count(VALUE._col0)
          bucketGroup: false
          keys:
                expr: KEY._col0
                type: string
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Select Operator
            expressions:
                  expr: _col0
                  type: string
                  expr: _col1
                  type: bigint
                  expr: '2'
                  type: string
            outputColumnNames: _col0, _col1, _col2
            File Output Operator
              compressed: false
              GlobalTableId: 1
              table:
                  input format: org.apache.hadoop.hive.ql.io.RCFileInputFormat
                  output format: org.apache.hadoop.hive.ql.io.RCFileOutputFormat
                  serde: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
                  name: default.outputtbl1

  Stage: Stage-6
    Conditional Operator

  Stage: Stage-3
    Move Operator
      files:
          hdfs directory: true
#### A masked pattern was here ####

  Stage: Stage-0
    Move Operator
      tables:
          partition:
            ds
          replace: true
          table:
              input format: org.apache.hadoop.hive.ql.io.RCFileInputFormat
              output format: org.apache.hadoop.hive.ql.io.RCFileOutputFormat
              serde: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
              name: default.outputtbl1

  Stage: Stage-2
    Block level merge

  Stage: Stage-4
    Block level merge

  Stage: Stage-5
    Move Operator
      files:
          hdfs directory: true
#### A masked pattern was here ####

  Stage: Stage-7
    Map Reduce
      Alias -> Map Operator Tree:
        null-subquery1:a-subquery1:inputtbl1
          TableScan
            alias: inputtbl1
            Select Operator
              expressions:
                    expr: key
                    type: string
              outputColumnNames: key
              Group By Operator
                aggregations:
                      expr: count(1)
                bucketGroup: false
                keys:
                      expr: key
                      type: string
                mode: hash
                outputColumnNames: _col0, _col1
                Reduce Output Operator
                  key expressions:
                        expr: _col0
                        type: string
                  sort order: +
                  Map-reduce partition columns:
                        expr: _col0
                        type: string
                  tag: -1
                  value expressions:
                        expr: _col1
                        type: bigint
      Reduce Operator Tree:
        Group By Operator
          aggregations:
                expr: count(VALUE._col0)
          bucketGroup: false
          keys:
                expr: KEY._col0
                type: string
          mode: mergepartial
          outputColumnNames: _col0, _col1
          Select Operator
            expressions:
                  expr: _col0
                  type: string
                  expr: _col1
                  type: bigint
                  expr: '1'
                  type: string
            outputColumnNames: _col0, _col1, _col2
            File Output Operator
              compressed: false
              GlobalTableId: 1
              table:
                  input format: org.apache.hadoop.hive.ql.io.RCFileInputFormat
                  output format: org.apache.hadoop.hive.ql.io.RCFileOutputFormat
                  serde: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
                  name: default.outputtbl1

PREHOOK: query: insert overwrite table outputTbl1 partition (ds)
SELECT *
FROM (
  SELECT key, count(1) as values, '1' as ds from inputTbl1 group by key
  UNION ALL
  SELECT key, count(1) as values, '2' as ds from inputTbl1 group by key
) a
PREHOOK: type: QUERY
PREHOOK: Input: default@inputtbl1
PREHOOK: Output: default@outputtbl1
POSTHOOK: query: insert overwrite table outputTbl1 partition (ds)
SELECT *
FROM (
  SELECT key, count(1) as values, '1' as ds from inputTbl1 group by key
  UNION ALL
  SELECT key, count(1) as values, '2' as ds from inputTbl1 group by key
) a
POSTHOOK: type: QUERY
POSTHOOK: Input: default@inputtbl1
POSTHOOK: Output: default@outputtbl1@ds=1
POSTHOOK: Output: default@outputtbl1@ds=2
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
PREHOOK: query: desc formatted outputTbl1
PREHOOK: type: DESCTABLE
POSTHOOK: query: desc formatted outputTbl1
POSTHOOK: type: DESCTABLE
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
# col_name              data_type               comment

key                     string                  None
values                  bigint                  None

# Partition Information
# col_name              data_type               comment

ds                      string                  None

# Detailed Table Information
Database:               default
#### A masked pattern was here ####
Protect Mode:           None
Retention:              0
#### A masked pattern was here ####
Table Type:             MANAGED_TABLE
Table Parameters:
#### A masked pattern was here ####

# Storage Information
SerDe Library:          org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
InputFormat:            org.apache.hadoop.hive.ql.io.RCFileInputFormat
OutputFormat:           org.apache.hadoop.hive.ql.io.RCFileOutputFormat
Compressed:             No
Num Buckets:            -1
Bucket Columns:         []
Sort Columns:           []
Storage Desc Params:
        serialization.format    1
PREHOOK: query: show partitions outputTbl1
PREHOOK: type: SHOWPARTITIONS
POSTHOOK: query: show partitions outputTbl1
POSTHOOK: type: SHOWPARTITIONS
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
ds=1
ds=2
PREHOOK: query: select * from outputTbl1 where ds = '1' order by key, values
PREHOOK: type: QUERY
PREHOOK: Input: default@outputtbl1
PREHOOK: Input: default@outputtbl1@ds=1
#### A masked pattern was here ####
POSTHOOK: query: select * from outputTbl1 where ds = '1' order by key, values
POSTHOOK: type: QUERY
POSTHOOK: Input: default@outputtbl1
POSTHOOK: Input: default@outputtbl1@ds=1
#### A masked pattern was here ####
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
1       1       1
2       1       1
3       1       1
7       1       1
8       2       1
PREHOOK: query: select * from outputTbl1 where ds = '2' order by key, values
PREHOOK: type: QUERY
PREHOOK: Input: default@outputtbl1
PREHOOK: Input: default@outputtbl1@ds=2
#### A masked pattern was here ####
POSTHOOK: query: select * from outputTbl1 where ds = '2' order by key, values
POSTHOOK: type: QUERY
POSTHOOK: Input: default@outputtbl1
POSTHOOK: Input: default@outputtbl1@ds=2
#### A masked pattern was here ####
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=1).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).key EXPRESSION [(inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), (inputtbl1)inputtbl1.FieldSchema(name:key, type:string, comment:null), ]
POSTHOOK: Lineage: outputtbl1 PARTITION(ds=2).values EXPRESSION [(inputtbl1)inputtbl1.null, (inputtbl1)inputtbl1.null, ]
1       1       2
2       1       2
3       1       2
7       1       2
8       2       2
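Note: the session-level configuration that drives this plan (union removal, file merging, dynamic partitioning) is set in the corresponding .q test file and is not echoed in this output. The sketch below shows the kind of settings such a test typically enables; the property names are standard Hive configuration keys, but the exact values used by this particular test are an assumption, not part of this output.

-- Assumed session settings (illustrative only; consult the .q test file for the actual ones)
set hive.optimize.union.remove=true;
set hive.mapred.supports.subdirectories=true;
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set mapred.input.dir.recursive=true;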