public class ReaderImpl extends org.apache.orc.impl.ReaderImpl implements Reader
| Constructor and Description |
|---|
| `ReaderImpl(org.apache.hadoop.fs.Path path, OrcFile.ReaderOptions options)` - Constructor that lets the user specify additional options. |
| Modifier and Type | Method and Description |
|---|---|
| `CompressionKind` | `getCompression()` - Get the compression kind in the compatibility mode. |
| `ObjectInspector` | `getObjectInspector()` - Get the object inspector for looking at the objects. |
| `ByteBuffer` | `getSerializedFileFooter()` |
| `RecordReader` | `rows()` - Create a RecordReader that reads everything with the default options. |
| `RecordReader` | `rows(boolean[] include)` - Create a RecordReader that will scan the entire file. |
| `RecordReader` | `rows(long offset, long length, boolean[] include)` - Create a RecordReader that will start reading at the first stripe after offset up to the stripe that starts at offset + length. |
| `RecordReader` | `rows(long offset, long length, boolean[] include, org.apache.hadoop.hive.ql.io.sarg.SearchArgument sarg, String[] columnNames)` - Create a RecordReader that will read a section of a file. |
| `RecordReader` | `rowsOptions(org.apache.orc.Reader.Options options)` - Create a RecordReader that reads everything with the given options. |
| `String` | `toString()` |
Methods inherited from class org.apache.orc.impl.ReaderImpl:
checkOrcVersion, ensureOrcFooter, ensureOrcFooter, extractFileTail, extractFileTail, extractFileTail, extractMetadata, getCompressionKind, getCompressionSize, getContentLength, getFileTail, getFileVersion, getFileVersion, getMetadataKeys, getMetadataSize, getMetadataValue, getNumberOfRows, getOrcProtoFileStatistics, getOrcProtoStripeStatistics, getOrcProtoUserMetadata, getRawDataSize, getRawDataSizeFromColIndices, getRawDataSizeFromColIndices, getRawDataSizeOfColumns, getRowIndexStride, getSchema, getStatistics, getStripes, getStripeStatistics, getTypes, getVersionList, getWriterVersion, getWriterVersion, hasMetadataValue, options, rows
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.orc.Reader:
getCompressionKind, getCompressionSize, getContentLength, getFileTail, getFileVersion, getMetadataKeys, getMetadataSize, getMetadataValue, getNumberOfRows, getOrcProtoFileStatistics, getOrcProtoStripeStatistics, getRawDataSize, getRawDataSizeFromColIndices, getRawDataSizeOfColumns, getRowIndexStride, getSchema, getStatistics, getStripes, getStripeStatistics, getTypes, getVersionList, getWriterVersion, hasMetadataValue, options, rows
public ReaderImpl(org.apache.hadoop.fs.Path path, OrcFile.ReaderOptions options) throws IOException

Constructor that lets the user specify additional options.

Parameters:
path - pathname for file
options - options for reading

Throws:
IOException
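As a hedged sketch of how a reader is typically obtained (the file path and `Configuration` setup below are assumptions, and most callers go through the `OrcFile.createReader` factory rather than invoking this constructor directly):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;

public class OpenOrcReader {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path path = new Path("/tmp/example.orc");   // hypothetical ORC file
    // ReaderOptions carries the Configuration plus any additional options.
    Reader reader = OrcFile.createReader(path, OrcFile.readerOptions(conf));
    System.out.println("rows: " + reader.getNumberOfRows());
  }
}
```

Running this requires the Hive and Hadoop jars on the classpath and an existing ORC file at the given path.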
public ObjectInspector getObjectInspector()

Get the object inspector for looking at the objects.

Specified by: getObjectInspector in interface Reader
public CompressionKind getCompression()

Get the Compression kind in the compatibility mode.

Specified by: getCompression in interface Reader
public ByteBuffer getSerializedFileFooter()

Specified by: getSerializedFileFooter in interface org.apache.orc.Reader
Overrides: getSerializedFileFooter in class org.apache.orc.impl.ReaderImpl
public RecordReader rows() throws IOException

Create a RecordReader that reads everything with the default options.

Specified by: rows in interface Reader
Specified by: rows in interface org.apache.orc.Reader
Overrides: rows in class org.apache.orc.impl.ReaderImpl

Throws:
IOException
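A minimal sketch of a full-file scan with `rows()`, assuming a hypothetical ORC file path; values are decoded through the reader's `ObjectInspector`:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.ql.io.orc.RecordReader;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

public class ScanOrcFile {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Reader reader = OrcFile.createReader(new Path("/tmp/example.orc"),
        OrcFile.readerOptions(conf));
    StructObjectInspector oi =
        (StructObjectInspector) reader.getObjectInspector();
    RecordReader rows = reader.rows();   // everything, default options
    Object row = null;
    while (rows.hasNext()) {
      row = rows.next(row);              // the reader may reuse this object
      System.out.println(oi.getStructFieldsDataAsList(row));
    }
    rows.close();
  }
}
```

Passing the previous row back into `next(...)` lets the reader reuse the same object instead of allocating a new one per row.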
public RecordReader rowsOptions(org.apache.orc.Reader.Options options) throws IOException

Create a RecordReader that reads everything with the given options.

Specified by: rowsOptions in interface Reader

Parameters:
options - the options to use

Throws:
IOException
public RecordReader rows(boolean[] include) throws IOException

Create a RecordReader that will scan the entire file.

Specified by: rows in interface Reader

Parameters:
include - true for each column that should be included

Throws:
IOException
public RecordReader rows(long offset, long length, boolean[] include) throws IOException

Create a RecordReader that will start reading at the first stripe after offset up to the stripe that starts at offset + length.

Specified by: rows in interface Reader

Parameters:
offset - a byte offset in the file
length - a number of bytes in the file
include - true for each column that should be included

Throws:
IOException
public RecordReader rows(long offset, long length, boolean[] include, org.apache.hadoop.hive.ql.io.sarg.SearchArgument sarg, String[] columnNames) throws IOException

Create a RecordReader that will read a section of a file.

Specified by: rows in interface Reader

Parameters:
offset - the minimum offset of the first stripe to read
length - the distance from offset of the first address to stop reading at
include - true for each column that should be included
sarg - a search argument that limits the rows that should be read
columnNames - the names of the included columns

Throws:
IOException
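A hedged sketch of predicate pushdown with this overload. The column name `x`, the file path, and the byte range are assumptions, and the `SearchArgument` builder signatures differ between Hive versions (the typed `PredicateLeaf.Type` style below is the newer form):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.ql.io.orc.RecordReader;
import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument;
import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory;

public class FilteredScan {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Reader reader = OrcFile.createReader(new Path("/tmp/example.orc"),
        OrcFile.readerOptions(conf));
    // Row groups whose statistics rule out x < 100 can be skipped.
    SearchArgument sarg = SearchArgumentFactory.newBuilder()
        .startAnd()
        .lessThan("x", PredicateLeaf.Type.LONG, 100L)
        .end()
        .build();
    // include[0] is the root struct; include[1] is the first column.
    boolean[] include = {true, true};
    RecordReader rows = reader.rows(0, reader.getContentLength(),
        include, sarg, new String[]{null, "x"});
    rows.close();
  }
}
```

The search argument is a hint: stripes and row groups whose statistics cannot satisfy it are skipped, but rows that survive still need to be re-checked by the caller.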
public String toString()

Overrides: toString in class org.apache.orc.impl.ReaderImpl
Copyright © 2021 The Apache Software Foundation. All rights reserved.