PREHOOK: query: -- sampling with join and alias -- SORT_QUERY_RESULTS EXPLAIN EXTENDED SELECT s.* FROM srcpart TABLESAMPLE (BUCKET 1 OUT OF 1 ON key) s JOIN srcpart TABLESAMPLE (BUCKET 1 OUT OF 10 ON key) t WHERE t.key = s.key and t.value = s.value and s.ds='2008-04-08' and s.hr='11' PREHOOK: type: QUERY POSTHOOK: query: -- sampling with join and alias -- SORT_QUERY_RESULTS EXPLAIN EXTENDED SELECT s.* FROM srcpart TABLESAMPLE (BUCKET 1 OUT OF 1 ON key) s JOIN srcpart TABLESAMPLE (BUCKET 1 OUT OF 10 ON key) t WHERE t.key = s.key and t.value = s.value and s.ds='2008-04-08' and s.hr='11' POSTHOOK: type: QUERY ABSTRACT SYNTAX TREE: TOK_QUERY TOK_FROM TOK_JOIN TOK_TABREF TOK_TABNAME srcpart TOK_TABLEBUCKETSAMPLE 1 1 TOK_TABLE_OR_COL key s TOK_TABREF TOK_TABNAME srcpart TOK_TABLEBUCKETSAMPLE 1 10 TOK_TABLE_OR_COL key t TOK_INSERT TOK_DESTINATION TOK_DIR TOK_TMP_FILE TOK_SELECT TOK_SELEXPR TOK_ALLCOLREF TOK_TABNAME s TOK_WHERE and and and = . TOK_TABLE_OR_COL t key . TOK_TABLE_OR_COL s key = . TOK_TABLE_OR_COL t value . TOK_TABLE_OR_COL s value = . TOK_TABLE_OR_COL s ds '2008-04-08' = . TOK_TABLE_OR_COL s hr '11' STAGE DEPENDENCIES: Stage-1 is a root stage Stage-0 depends on stages: Stage-1 STAGE PLANS: Stage: Stage-1 Map Reduce Map Operator Tree: TableScan alias: s Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE GatherStats: false Filter Operator isSamplingPred: true predicate: (((((hash(key) & 2147483647) % 10) = 0) and value is not null) and (((hash(key) & 2147483647) % 1) = 0)) (type: boolean) Statistics: Num rows: 62 Data size: 658 Basic stats: COMPLETE Column stats: NONE Reduce Output Operator key expressions: key (type: string), value (type: string) sort order: ++ Map-reduce partition columns: key (type: string), value (type: string) Statistics: Num rows: 62 Data size: 658 Basic stats: COMPLETE Column stats: NONE tag: 0 auto parallelism: false TableScan alias: t Statistics: Num rows: 2000 Data size: 21248 Basic stats: COMPLETE Column stats: NONE GatherStats: false Filter Operator isSamplingPred: true predicate: (((((hash(key) & 2147483647) % 1) = 0) and value is not null) and (((hash(key) & 2147483647) % 10) = 0)) (type: boolean) Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE Reduce Output Operator key expressions: key (type: string), value (type: string) sort order: ++ Map-reduce partition columns: key (type: string), value (type: string) Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE tag: 1 auto parallelism: false Path -> Alias: #### A masked pattern was here #### Path -> Partition: #### A masked pattern was here #### Partition base file name: hr=11 input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat partition values: ds 2008-04-08 hr 11 properties: COLUMN_STATS_ACCURATE true bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart numFiles 1 numRows 500 partition_columns ds/hr partition_columns.types string:string rawDataSize 5312 serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe totalSize 5812 #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe input format: org.apache.hadoop.mapred.TextInputFormat output format: 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat properties: bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart partition_columns ds/hr partition_columns.types string:string serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe name: default.srcpart name: default.srcpart #### A masked pattern was here #### Partition base file name: hr=12 input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat partition values: ds 2008-04-08 hr 12 properties: COLUMN_STATS_ACCURATE true bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart numFiles 1 numRows 500 partition_columns ds/hr partition_columns.types string:string rawDataSize 5312 serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe totalSize 5812 #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat properties: bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart partition_columns ds/hr partition_columns.types string:string serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe name: default.srcpart name: default.srcpart #### A masked pattern was here #### Partition base file name: hr=11 input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat partition values: ds 2008-04-09 hr 11 properties: COLUMN_STATS_ACCURATE true bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart numFiles 1 numRows 500 partition_columns ds/hr partition_columns.types string:string rawDataSize 5312 serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe totalSize 5812 #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat properties: bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart partition_columns ds/hr partition_columns.types string:string serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe name: default.srcpart name: default.srcpart #### A masked pattern was here #### Partition base file name: hr=12 input format: 
org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat partition values: ds 2008-04-09 hr 12 properties: COLUMN_STATS_ACCURATE true bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart numFiles 1 numRows 500 partition_columns ds/hr partition_columns.types string:string rawDataSize 5312 serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe totalSize 5812 #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat properties: bucket_count -1 columns key,value columns.comments 'default','default' columns.types string:string #### A masked pattern was here #### name default.srcpart partition_columns ds/hr partition_columns.types string:string serialization.ddl struct srcpart { string key, string value} serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe #### A masked pattern was here #### serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe name: default.srcpart name: default.srcpart Truncated Path -> Alias: /srcpart/ds=2008-04-08/hr=11 [s, t] /srcpart/ds=2008-04-08/hr=12 [t] /srcpart/ds=2008-04-09/hr=11 [t] /srcpart/ds=2008-04-09/hr=12 [t] Needs Tagging: true Reduce Operator Tree: Join Operator condition map: Inner Join 0 to 1 keys: 0 key (type: string), value (type: string) 1 key (type: string), value (type: string) outputColumnNames: _col0, _col1, _col7, _col8 Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE Filter Operator isSamplingPred: false predicate: ((_col7 = _col0) and (_col8 = _col1)) (type: boolean) Statistics: Num rows: 68 Data size: 722 Basic stats: COMPLETE Column stats: NONE Select Operator expressions: _col0 (type: string), _col1 (type: string), '2008-04-08' (type: string), '11' (type: string) outputColumnNames: _col0, _col1, _col2, _col3 Statistics: Num rows: 68 Data size: 722 Basic stats: COMPLETE Column stats: NONE File Output Operator compressed: false GlobalTableId: 0 #### A masked pattern was here #### NumFilesPerFileSink: 1 Statistics: Num rows: 68 Data size: 722 Basic stats: COMPLETE Column stats: NONE #### A masked pattern was here #### table: input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat properties: columns _col0,_col1,_col2,_col3 columns.types string:string:string:string escape.delim \ hive.serialization.extend.additional.nesting.levels true serialization.format 1 serialization.lib org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe TotalFiles: 1 GatherStats: false MultiFileSpray: false Stage: Stage-0 Fetch Operator limit: -1 Processor Tree: ListSink PREHOOK: query: SELECT s.key, s.value FROM srcpart TABLESAMPLE (BUCKET 1 OUT OF 1 ON key) s JOIN srcpart TABLESAMPLE (BUCKET 1 OUT OF 10 ON key) t WHERE t.key = s.key and t.value = s.value and s.ds='2008-04-08' and s.hr='11' PREHOOK: type: QUERY PREHOOK: Input: default@srcpart PREHOOK: Input: default@srcpart@ds=2008-04-08/hr=11 PREHOOK: Input: default@srcpart@ds=2008-04-08/hr=12 PREHOOK: Input: default@srcpart@ds=2008-04-09/hr=11 PREHOOK: Input: default@srcpart@ds=2008-04-09/hr=12 #### A 
masked pattern was here #### POSTHOOK: query: SELECT s.key, s.value FROM srcpart TABLESAMPLE (BUCKET 1 OUT OF 1 ON key) s JOIN srcpart TABLESAMPLE (BUCKET 1 OUT OF 10 ON key) t WHERE t.key = s.key and t.value = s.value and s.ds='2008-04-08' and s.hr='11' POSTHOOK: type: QUERY POSTHOOK: Input: default@srcpart POSTHOOK: Input: default@srcpart@ds=2008-04-08/hr=11 POSTHOOK: Input: default@srcpart@ds=2008-04-08/hr=12 POSTHOOK: Input: default@srcpart@ds=2008-04-09/hr=11 POSTHOOK: Input: default@srcpart@ds=2008-04-09/hr=12 #### A masked pattern was here #### 105 val_105 105 val_105 105 val_105 105 val_105 114 val_114 114 val_114 114 val_114 114 val_114 150 val_150 150 val_150 150 val_150 150 val_150 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 169 val_169 178 val_178 178 val_178 178 val_178 178 val_178 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 187 val_187 196 val_196 196 val_196 196 val_196 196 val_196 2 val_2 2 val_2 2 val_2 2 val_2 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 213 val_213 222 val_222 222 val_222 222 val_222 222 val_222 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 277 val_277 286 val_286 286 val_286 286 val_286 286 val_286 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 321 val_321 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 367 val_367 394 val_394 394 val_394 394 val_394 394 val_394 402 val_402 402 val_402 402 val_402 
402 val_402 411 val_411 411 val_411 411 val_411 411 val_411 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 439 val_439 448 val_448 448 val_448 448 val_448 448 val_448 457 val_457 457 val_457 457 val_457 457 val_457 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 466 val_466 475 val_475 475 val_475 475 val_475 475 val_475 484 val_484 484 val_484 484 val_484 484 val_484 493 val_493 493 val_493 493 val_493 493 val_493 77 val_77 77 val_77 77 val_77 77 val_77 86 val_86 86 val_86 86 val_86 86 val_86 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 95 val_95 PREHOOK: query: EXPLAIN SELECT * FROM src TABLESAMPLE(100 ROWS) a JOIN src1 TABLESAMPLE(10 ROWS) b ON a.key=b.key PREHOOK: type: QUERY POSTHOOK: query: EXPLAIN SELECT * FROM src TABLESAMPLE(100 ROWS) a JOIN src1 TABLESAMPLE(10 ROWS) b ON a.key=b.key POSTHOOK: type: QUERY STAGE DEPENDENCIES: Stage-1 is a root stage Stage-0 depends on stages: Stage-1 STAGE PLANS: Stage: Stage-1 Map Reduce Map Operator Tree: TableScan alias: a Row Limit Per Split: 100 Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: key is not null (type: boolean) Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE Reduce Output Operator key expressions: key (type: string) sort order: + Map-reduce partition columns: key (type: string) Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE value expressions: value (type: string) TableScan alias: b Row Limit Per Split: 10 Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE Filter Operator predicate: key is not null (type: boolean) Statistics: Num rows: 13 Data size: 99 Basic stats: COMPLETE Column stats: NONE Reduce Output Operator key expressions: key (type: string) sort order: + Map-reduce partition columns: key (type: string) Statistics: Num rows: 13 Data size: 99 Basic stats: COMPLETE Column stats: NONE value expressions: value (type: string) Reduce Operator Tree: Join Operator condition map: Inner Join 0 to 1 keys: 0 key (type: string) 1 key (type: string) outputColumnNames: _col0, _col1, _col5, _col6 Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE Select Operator expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string) outputColumnNames: _col0, _col1, _col2, _col3 Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE File Output Operator compressed: false Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE table: input format: org.apache.hadoop.mapred.TextInputFormat output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe Stage: Stage-0 Fetch Operator limit: -1 Processor Tree: ListSink PREHOOK: query: SELECT * FROM src TABLESAMPLE(100 ROWS) a JOIN src1 TABLESAMPLE(10 ROWS) b 
ON a.key=b.key
PREHOOK: type: QUERY
PREHOOK: Input: default@src
PREHOOK: Input: default@src1
#### A masked pattern was here ####
POSTHOOK: query: SELECT * FROM src TABLESAMPLE(100 ROWS) a JOIN src1 TABLESAMPLE(10 ROWS) b
ON a.key=b.key
POSTHOOK: type: QUERY
POSTHOOK: Input: default@src
POSTHOOK: Input: default@src1
#### A masked pattern was here ####
238	val_238	238	val_238
255	val_255	255	val_255
278	val_278	278	val_278
311	val_311	311	val_311
311	val_311	311	val_311
98	val_98	98	val_98
PREHOOK: query: EXPLAIN SELECT * FROM src TABLESAMPLE(100 ROWS) a, src1 TABLESAMPLE(10 ROWS) b WHERE a.key=b.key
PREHOOK: type: QUERY
POSTHOOK: query: EXPLAIN SELECT * FROM src TABLESAMPLE(100 ROWS) a, src1 TABLESAMPLE(10 ROWS) b WHERE a.key=b.key
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 depends on stages: Stage-1

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: a
            Row Limit Per Split: 100
            Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: key is not null (type: boolean)
              Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: key (type: string)
                sort order: +
                Map-reduce partition columns: key (type: string)
                Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
                value expressions: value (type: string)
          TableScan
            alias: b
            Row Limit Per Split: 10
            Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: key is not null (type: boolean)
              Statistics: Num rows: 13 Data size: 99 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: key (type: string)
                sort order: +
                Map-reduce partition columns: key (type: string)
                Statistics: Num rows: 13 Data size: 99 Basic stats: COMPLETE Column stats: NONE
                value expressions: value (type: string)
      Reduce Operator Tree:
        Join Operator
          condition map:
               Inner Join 0 to 1
          keys:
            0 key (type: string)
            1 key (type: string)
          outputColumnNames: _col0, _col1, _col5, _col6
          Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE
          Filter Operator
            predicate: (_col0 = _col5) (type: boolean)
            Statistics: Num rows: 137 Data size: 1455 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string)
              outputColumnNames: _col0, _col1, _col2, _col3
              Statistics: Num rows: 137 Data size: 1455 Basic stats: COMPLETE Column stats: NONE
              File Output Operator
                compressed: false
                Statistics: Num rows: 137 Data size: 1455 Basic stats: COMPLETE Column stats: NONE
                table:
                    input format: org.apache.hadoop.mapred.TextInputFormat
                    output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                    serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink

PREHOOK: query: SELECT * FROM src TABLESAMPLE(100 ROWS) a, src1 TABLESAMPLE(10 ROWS) b WHERE a.key=b.key
PREHOOK: type: QUERY
PREHOOK: Input: default@src
PREHOOK: Input: default@src1
#### A masked pattern was here ####
POSTHOOK: query: SELECT * FROM src TABLESAMPLE(100 ROWS) a, src1 TABLESAMPLE(10 ROWS) b WHERE a.key=b.key
POSTHOOK: type: QUERY
POSTHOOK: Input: default@src
POSTHOOK: Input: default@src1
#### A masked pattern was here ####
238	val_238	238	val_238
255	val_255	255	val_255
278	val_278	278	val_278
311	val_311	311	val_311
311	val_311	311	val_311
98	val_98	98	val_98
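-- Illustrative note (not part of the golden output above): the plans show how the two
-- TABLESAMPLE forms are compiled. The bucket form, TABLESAMPLE (BUCKET x OUT OF y ON col),
-- appears in the EXPLAIN EXTENDED plan as the sampling predicate
-- ((hash(col) & 2147483647) % y) = x - 1, while the row form, TABLESAMPLE(N ROWS),
-- appears as "Row Limit Per Split: N" on the TableScan instead of a predicate.
-- The sketch below is a minimal hand-written approximation of the bucket-sampled side t
-- of the first query, assuming the same srcpart test fixture; it mirrors the rewrite
-- visible in the plan and is not an exact replacement for TABLESAMPLE, which on a
-- bucketed table can also prune input files directly.
SELECT t.key, t.value
FROM srcpart t
WHERE ((hash(t.key) & 2147483647) % 10) = 0;  -- same predicate the plan shows for BUCKET 1 OUT OF 10 ON key
-- The ROWS form has no equivalent predicate: TABLESAMPLE(100 ROWS) simply caps the number
-- of rows read per input split, as the "Row Limit Per Split: 100" annotation shows.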