ConvertAvroToORC

Description:

Converts an Avro record into ORC file format. This processor provides a direct mapping of an Avro record to an ORC record, so the resulting ORC file has the same hierarchical structure as the Avro document. If an incoming FlowFile contains a stream of multiple Avro records, the resulting FlowFile will contain an ORC file containing all of the Avro records. If an incoming FlowFile does not contain any records, an empty ORC file is output. NOTE: Many Avro datatypes (e.g. collections, primitives, and unions of primitives) can be converted to ORC, but unions of collections and other complex datatypes may not be convertible to ORC.
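
As a rough illustration of the direct type mapping described above, the following sketch shows how a flat or nested Avro record schema could be translated into an equivalent ORC struct type. This is not the processor's actual implementation: it assumes the standalone Apache ORC and Avro Java APIs, and the class and method names are illustrative only.

    import org.apache.avro.Schema;
    import org.apache.orc.TypeDescription;

    // A minimal sketch of the Avro-to-ORC type mapping; not the processor's code.
    public class AvroToOrcTypeSketch {

        // Recursively translate an Avro schema into an equivalent ORC type.
        public static TypeDescription toOrcType(Schema avro) {
            switch (avro.getType()) {
                case BOOLEAN: return TypeDescription.createBoolean();
                case INT:     return TypeDescription.createInt();
                case LONG:    return TypeDescription.createLong();
                case FLOAT:   return TypeDescription.createFloat();
                case DOUBLE:  return TypeDescription.createDouble();
                case STRING:  return TypeDescription.createString();
                case BYTES:   return TypeDescription.createBinary();
                case ARRAY:   return TypeDescription.createList(toOrcType(avro.getElementType()));
                case MAP:     return TypeDescription.createMap(TypeDescription.createString(),
                                                               toOrcType(avro.getValueType()));
                case RECORD: {
                    // nested records keep their hierarchical structure
                    TypeDescription struct = TypeDescription.createStruct();
                    for (Schema.Field field : avro.getFields()) {
                        struct.addField(field.name(), toOrcType(field.schema()));
                    }
                    return struct;
                }
                default:
                    // e.g. unions of collections and other complex types may have no ORC equivalent
                    throw new IllegalArgumentException("Unsupported Avro type: " + avro.getType());
            }
        }
    }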

Tags:

avro, orc, hive, convert

Properties:

In the list below, required properties are marked as such; any other properties are optional. The list also indicates default values, allowable values, and whether a property supports the NiFi Expression Language. A short sketch after the list illustrates the ORC writer settings behind Stripe Size, Buffer Size, and Compression Type.

ORC Configuration Resources (optional)
  Default Value: none
  Description: A file, or a comma-separated list of files, containing the ORC configuration (e.g. hive-site.xml). Without this, Hadoop will search the classpath for a 'hive-site.xml' file or fall back to a default configuration. Please see the ORC documentation for more details. This property expects a comma-separated list of file resources.

Stripe Size (required)
  Default Value: 64 MB
  Description: The size of the memory buffer (in bytes) for writing stripes to an ORC file.

Buffer Size (required)
  Default Value: 10 KB
  Description: The maximum size of the memory buffers (in bytes) used for compressing and storing a stripe in memory. This is a hint to the ORC writer, which may choose a smaller buffer size based on stripe size and number of columns for efficient stripe writing and memory utilization.

Compression Type (required)
  Default Value: NONE
  Allowable Values: NONE, ZLIB, SNAPPY, LZO
  Description: The type of compression to use when writing the ORC file.

Hive Table Name (optional)
  Default Value: none
  Description: An optional table name to insert into the hive.ddl attribute. The generated DDL can be used by a PutHiveQL processor (presumably after a PutHDFS processor) to create a table backed by the converted ORC file. If this property is not provided, the full name (including namespace) of the incoming Avro record will be normalized and used as the table name.
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
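
A minimal, hypothetical sketch (not the processor's internal code) using the standalone Apache ORC writer API, showing which writer settings the Stripe Size, Buffer Size, and Compression Type properties correspond to:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.orc.CompressionKind;
    import org.apache.orc.OrcFile;
    import org.apache.orc.TypeDescription;
    import org.apache.orc.Writer;

    public class OrcWriterSettingsSketch {

        // Open an ORC writer configured with the default values of the three properties above.
        public static Writer openWriter(Configuration conf, TypeDescription schema) throws java.io.IOException {
            return OrcFile.createWriter(
                    new Path("/tmp/example.orc"),            // hypothetical output path
                    OrcFile.writerOptions(conf)
                            .setSchema(schema)
                            .stripeSize(64L * 1024 * 1024)   // Stripe Size: 64 MB, in bytes
                            .bufferSize(10 * 1024)           // Buffer Size: 10 KB, in bytes
                            .compress(CompressionKind.NONE)  // Compression Type: NONE, ZLIB, SNAPPY or LZO
            );
        }
    }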

Relationships:

success: A FlowFile is routed to this relationship after it has been converted to ORC format.
failure: A FlowFile is routed to this relationship if it cannot be parsed as Avro or cannot be converted to ORC for any reason.

Reads Attributes:

None specified.

Writes Attributes:

mime.type: Sets the MIME type to application/octet-stream.
filename: Sets the filename to the existing filename with the extension replaced by, or appended with, .orc.
record.count: Sets the number of records in the ORC file.
hive.ddl: Creates a partial Hive DDL statement for creating a table in Hive from this ORC file. This can be used in a ReplaceText processor to set the FlowFile content to the DDL. To make it valid DDL, append "LOCATION '<path_to_orc_file_in_hdfs>'", where the path is the directory that contains this ORC file on HDFS. For example, ConvertAvroToORC can send flow files to a PutHDFS processor to store the file in HDFS, then to a ReplaceText processor to set the content to this DDL (plus the LOCATION clause as described), then to a PutHiveQL processor to create the table if it doesn't exist. An example of composing the full statement appears after this list.
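
As a hypothetical illustration of assembling the complete statement from the hive.ddl attribute, the snippet below appends a LOCATION clause to an example DDL value. Both the DDL text and the HDFS directory are made up for illustration and will differ per flow file:

    // Hypothetical example of turning the hive.ddl attribute into a complete
    // statement before sending it to a PutHiveQL processor.
    public class HiveDdlExample {

        public static void main(String[] args) {
            String hiveDdl = "CREATE EXTERNAL TABLE IF NOT EXISTS users (id INT, name STRING) STORED AS ORC";
            String orcDirectory = "/example/path/in/hdfs";  // directory containing the converted ORC file
            String statement = hiveDdl + " LOCATION '" + orcDirectory + "'";
            // 'statement' can now be used as the FlowFile content for PutHiveQL
            System.out.println(statement);
        }
    }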

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

System Resource Considerations:

None specified.