public class DataWritableReadSupport
extends parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>
Modifier and Type | Field and Description |
---|---|
static String | HIVE_TABLE_AS_PARQUET_SCHEMA |
static String | PARQUET_COLUMN_INDEX_ACCESS |
Constructor and Description |
---|
DataWritableReadSupport() |
Modifier and Type | Method and Description |
---|---|
parquet.hadoop.api.ReadSupport.ReadContext | init(parquet.hadoop.api.InitContext context) Creates the read context for the Parquet side with the requested schema during the init phase. |
parquet.io.api.RecordMaterializer<org.apache.hadoop.io.ArrayWritable> | prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, parquet.schema.MessageType fileSchema, parquet.hadoop.api.ReadSupport.ReadContext readContext) Creates the Hive read support that interprets Parquet data as Hive data. |
public static final String HIVE_TABLE_AS_PARQUET_SCHEMA
public static final String PARQUET_COLUMN_INDEX_ACCESS
public parquet.hadoop.api.ReadSupport.ReadContext init(parquet.hadoop.api.InitContext context)

Creates the read context for the Parquet side with the requested schema during the init phase.

Overrides:
init in class parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>

Parameters:
context - the Parquet InitContext for this read
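A minimal sketch of the init lifecycle, assuming the class comes from hive-exec (package org.apache.hadoop.hive.ql.io.parquet.read) and that the bundled parquet.* classes are on the classpath. The hand-built schema and the empty key/value metadata map are illustrative only; in a real read the Parquet read path builds the InitContext from the file footer and the job configuration.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport;

import parquet.hadoop.api.InitContext;
import parquet.hadoop.api.ReadSupport;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

public class InitSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Illustrative file schema; in a real read it comes from the Parquet footer.
    MessageType fileSchema = MessageTypeParser.parseMessageType(
        "message hive_schema { optional int32 id; optional binary name; }");

    Map<String, Set<String>> keyValueMetadata = Collections.emptyMap();
    InitContext context = new InitContext(conf, keyValueMetadata, fileSchema);

    // The returned ReadContext carries the requested (possibly column-pruned) schema
    // that Parquet will use to assemble records for Hive.
    ReadSupport.ReadContext readContext = new DataWritableReadSupport().init(context);
    System.out.println(readContext.getRequestedSchema());
  }
}
```

With no Hive column-projection properties set in the configuration, init is expected to fall back to the full file schema, so the printed requested schema matches the file schema.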
public parquet.io.api.RecordMaterializer<org.apache.hadoop.io.ArrayWritable> prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, parquet.schema.MessageType fileSchema, parquet.hadoop.api.ReadSupport.ReadContext readContext)

Creates the Hive read support that interprets Parquet data as Hive data.

Specified by:
prepareForRead in class parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>

Parameters:
configuration - unused
keyValueMetaData
fileSchema - unused
readContext - containing the requested schema and the schema of the Hive table
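Hive normally exercises this class through its Parquet input format rather than directly, but a standalone ParquetReader makes the call sequence visible: Parquet calls init() with the file schema, then prepareForRead(), and materializes each row as an ArrayWritable. The sketch below is an assumption-laden illustration, not Hive's own read path; the file location is hypothetical, and hive-exec plus the bundled parquet.* classes are assumed on the classpath.

```java
import java.util.Arrays;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport;
import org.apache.hadoop.io.ArrayWritable;

import parquet.hadoop.ParquetReader;

public class ReadSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical location of a Parquet file written by Hive.
    Path file = new Path(args[0]);

    // ParquetReader drives the ReadSupport lifecycle: it calls init() with the file
    // schema, then prepareForRead(), and returns each row as an ArrayWritable.
    ParquetReader<ArrayWritable> reader =
        ParquetReader.builder(new DataWritableReadSupport(), file).build();
    try {
      ArrayWritable row;
      while ((row = reader.read()) != null) {
        System.out.println(Arrays.toString(row.get()));
      }
    } finally {
      reader.close();
    }
  }
}
```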