Modifier and Type | Field and Description
---|---
static org.apache.commons.logging.Log | LOG
Constructor and Description
---
ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat, org.apache.hadoop.mapred.InputSplit oldSplit, org.apache.hadoop.mapred.JobConf oldJobConf, org.apache.hadoop.mapred.Reporter reporter)
ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat, org.apache.hadoop.mapred.InputSplit oldSplit, org.apache.hadoop.mapred.JobConf oldJobConf, org.apache.hadoop.mapred.Reporter reporter, ProjectionPusher pusher)
Modifier and Type | Method and Description
---|---
void | close()
Void | createKey()
org.apache.hadoop.io.ArrayWritable | createValue()
List<parquet.hadoop.metadata.BlockMetaData> | getFiltedBlocks()
long | getPos()
float | getProgress()
protected parquet.hadoop.ParquetInputSplit | getSplit(org.apache.hadoop.mapred.InputSplit oldSplit, org.apache.hadoop.mapred.JobConf conf). Gets a ParquetInputSplit corresponding to a split given by Hive.
boolean | next(Void key, org.apache.hadoop.io.ArrayWritable value)
parquet.filter2.compat.FilterCompat.Filter | setFilter(org.apache.hadoop.mapred.JobConf conf)
public ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat, org.apache.hadoop.mapred.InputSplit oldSplit, org.apache.hadoop.mapred.JobConf oldJobConf, org.apache.hadoop.mapred.Reporter reporter) throws IOException, InterruptedException

Throws:
IOException
InterruptedException

public ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat, org.apache.hadoop.mapred.InputSplit oldSplit, org.apache.hadoop.mapred.JobConf oldJobConf, org.apache.hadoop.mapred.Reporter reporter, ProjectionPusher pusher) throws IOException, InterruptedException

Throws:
IOException
InterruptedException
public parquet.filter2.compat.FilterCompat.Filter setFilter(org.apache.hadoop.mapred.JobConf conf)
public void close() throws IOException

Specified by:
close in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
Throws:
IOException

public Void createKey()

Specified by:
createKey in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>

public org.apache.hadoop.io.ArrayWritable createValue()

Specified by:
createValue in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>

public long getPos() throws IOException

Specified by:
getPos in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
Throws:
IOException

public float getProgress() throws IOException

Specified by:
getProgress in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
Throws:
IOException

public boolean next(Void key, org.apache.hadoop.io.ArrayWritable value) throws IOException

Specified by:
next in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
Throws:
IOException
protected parquet.hadoop.ParquetInputSplit getSplit(org.apache.hadoop.mapred.InputSplit oldSplit, org.apache.hadoop.mapred.JobConf conf) throws IOException

Gets a ParquetInputSplit corresponding to a split given by Hive.

Parameters:
oldSplit - The split given by Hive
conf - The JobConf of the Hive job
Throws:
IOException - if the config cannot be enhanced or if the footer cannot be read from the file

public List<parquet.hadoop.metadata.BlockMetaData> getFiltedBlocks()
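As a usage illustration (a minimal sketch, not part of this reference: the `inputFormat`, `oldSplit`, and `jobConf` values are assumed to come from the surrounding Hive job setup, and error handling is elided), the wrapper is driven through the old mapred RecordReader contract:

```java
// Hypothetical sketch: reading rows via the old mapred RecordReader API.
// `inputFormat`, `oldSplit`, and `jobConf` are assumed to be provided by
// the enclosing Hive job; Reporter.NULL is a no-op progress reporter.
ParquetRecordReaderWrapper reader = new ParquetRecordReaderWrapper(
    inputFormat, oldSplit, jobConf, org.apache.hadoop.mapred.Reporter.NULL);
try {
  Void key = reader.createKey();  // keys are always null for this reader
  org.apache.hadoop.io.ArrayWritable value = reader.createValue();
  while (reader.next(key, value)) {
    // each successful call to next() fills `value` with one row's columns
  }
} finally {
  reader.close();
}
```

Because the key type is Void, only the ArrayWritable value carries data; callers typically inspect `value.get()` per row.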
Copyright © 2017 The Apache Software Foundation. All rights reserved.