Package | Description
---|---
org.apache.hadoop.hive.contrib.fileformat.base64 |
org.apache.hadoop.hive.druid.io |
org.apache.hadoop.hive.hbase | Implements an HBase storage handler for Hive.
org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers.
org.apache.hadoop.hive.ql.exec.persistence |
org.apache.hadoop.hive.ql.io |
org.apache.hadoop.hive.ql.io.avro |
org.apache.hadoop.hive.ql.io.orc | The Optimized Row Columnar (ORC) file format.
org.apache.hadoop.hive.ql.io.parquet |
org.apache.hadoop.hive.ql.io.parquet.write |
org.apache.hive.storage.jdbc |
Modifier and Type | Class and Description
---|---
static class | Base64TextOutputFormat.Base64RecordWriter
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | Base64TextOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress)
Constructor and Description
---
Base64RecordWriter(FileSinkOperator.RecordWriter writer)
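The Base64RecordWriter constructor above takes an existing FileSinkOperator.RecordWriter and wraps it so that rows are Base64-encoded before being forwarded. The wrapping pattern can be sketched without any Hive dependencies; the LineRecordWriter interface below is a simplified stand-in for the real FileSinkOperator.RecordWriter, whose write method takes an org.apache.hadoop.io.Writable and may throw IOException.

```java
import java.io.StringWriter;
import java.util.Base64;

// Simplified stand-in for FileSinkOperator.RecordWriter; the real Hive
// interface exposes write(Writable) and close(boolean abort), both of
// which may throw IOException.
interface LineRecordWriter {
    void write(String row);
    void close(boolean abort);
}

public class Base64WriterSketch {
    // Wraps an inner writer, Base64-encoding each row before forwarding it,
    // in the spirit of Base64TextOutputFormat.Base64RecordWriter.
    static LineRecordWriter wrap(LineRecordWriter inner) {
        return new LineRecordWriter() {
            public void write(String row) {
                inner.write(Base64.getEncoder()
                        .encodeToString(row.getBytes()));
            }
            public void close(boolean abort) {
                inner.close(abort);
            }
        };
    }

    public static void main(String[] args) {
        StringWriter out = new StringWriter();
        LineRecordWriter sink = new LineRecordWriter() {
            public void write(String row) { out.write(row + "\n"); }
            public void close(boolean abort) { }
        };
        LineRecordWriter w = wrap(sink);
        w.write("hello");
        w.close(false);
        System.out.print(out); // prints "aGVsbG8=" followed by a newline
    }
}
```

Because the wrapper delegates close to the inner writer, the inner format keeps full control over flushing and abort handling.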
Modifier and Type | Class and Description
---|---
class | DruidRecordWriter
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | DruidOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress)
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | HiveHFileOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progressable)
Modifier and Type | Field and Description
---|---
protected FileSinkOperator.RecordWriter[] | FileSinkOperator.rowOutWriters
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | PTFRowContainer.PTFHiveSequenceFileOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress)
protected FileSinkOperator.RecordWriter | RowContainer.getRecordWriter()
Modifier and Type | Interface and Description
---|---
interface | StatsProvidingRecordWriter. If a file format (like ORC) internally gathers statistics while writing, it can expose them through this record writer interface.
Modifier and Type | Class and Description
---|---
class | HivePassThroughRecordWriter<K extends org.apache.hadoop.io.WritableComparable<?>, V extends org.apache.hadoop.io.Writable>
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | RCFileOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the final out file.
FileSinkOperator.RecordWriter | HivePassThroughOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress)
FileSinkOperator.RecordWriter | HiveNullValueSequenceFileOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress)
FileSinkOperator.RecordWriter | HiveSequenceFileOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the final out file, and output an empty key as the key.
FileSinkOperator.RecordWriter | HiveBinaryOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path outPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the final out file, and output row by row.
FileSinkOperator.RecordWriter | HiveOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the final out file and get some specific settings.
FileSinkOperator.RecordWriter | HiveIgnoreKeyTextOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path outPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the final out file, and output row by row.
static FileSinkOperator.RecordWriter | HiveFileFormatUtils.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, TableDesc tableInfo, Class<? extends org.apache.hadoop.io.Writable> outputClass, FileSinkDesc conf, org.apache.hadoop.fs.Path outPath, org.apache.hadoop.mapred.Reporter reporter)
FileSinkOperator.RecordWriter | AcidOutputFormat.getRawRecordWriter(org.apache.hadoop.fs.Path path, AcidOutputFormat.Options options). Create a raw writer for ACID events.
static FileSinkOperator.RecordWriter | HiveFileFormatUtils.getRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.mapred.OutputFormat<?,?> outputFormat, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProp, org.apache.hadoop.fs.Path outPath, org.apache.hadoop.mapred.Reporter reporter)
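All of the getHiveRecordWriter variants above share the same factory shape: given a job configuration, the final output path, the value class, a compression flag, and the table properties, the output format opens the destination and hands back a FileSinkOperator.RecordWriter. A dependency-free sketch of that shape follows; an in-memory buffer stands in for the Hadoop Path, simplified types replace JobConf, Writable, and Progressable, and the "line.delim" property name is only illustrative.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Properties;
import java.util.zip.GZIPOutputStream;

// Simplified stand-in for FileSinkOperator.RecordWriter.
interface SinkRecordWriter {
    void write(byte[] row) throws IOException;
    void close(boolean abort) throws IOException;
}

public class OutputFormatSketch {
    // Mirrors the getHiveRecordWriter(jc, finalOutPath, valueClass,
    // isCompressed, tableProperties, progress) factory shape: consult
    // the compression flag and table properties, open the destination,
    // and return a writer. Here the destination is an in-memory buffer
    // rather than a Hadoop Path.
    static SinkRecordWriter getRecordWriter(ByteArrayOutputStream sink,
                                            boolean isCompressed,
                                            Properties tableProperties)
            throws IOException {
        byte[] sep = tableProperties.getProperty("line.delim", "\n")
                .getBytes();
        OutputStream out = isCompressed ? new GZIPOutputStream(sink) : sink;
        return new SinkRecordWriter() {
            public void write(byte[] row) throws IOException {
                out.write(row);
                out.write(sep);
            }
            public void close(boolean abort) throws IOException {
                // A real implementation would also remove the partial
                // output when abort is true.
                out.close();
            }
        };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        SinkRecordWriter w = getRecordWriter(sink, false, new Properties());
        w.write("row1".getBytes());
        w.close(false);
        System.out.print(sink); // prints "row1" followed by a newline
    }
}
```

The static HiveFileFormatUtils helpers in the table play the dispatcher role in this pattern: they pick the right output format for a table and then call its factory method.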
Modifier and Type | Class and Description
---|---
class | AvroGenericRecordWriter. Write an Avro GenericRecord to an Avro data file.
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | AvroContainerOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jobConf, org.apache.hadoop.fs.Path path, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties properties, org.apache.hadoop.util.Progressable progressable)
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | OrcOutputFormat.getRawRecordWriter(org.apache.hadoop.fs.Path path, AcidOutputFormat.Options options)
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | MapredParquetOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jobConf, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the Parquet schema from the Hive schema, and return the RecordWriterWrapper which contains the real output format.
Modifier and Type | Class and Description
---|---
class | ParquetRecordWriterWrapper
Modifier and Type | Method and Description
---|---
FileSinkOperator.RecordWriter | JdbcOutputFormat.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, org.apache.hadoop.fs.Path finalOutPath, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProperties, org.apache.hadoop.util.Progressable progress). Create the final out file and get some specific settings.
Copyright © 2021 The Apache Software Foundation. All rights reserved.