Package | Description |
---|---|
`org.apache.hive.streaming` | Package grouping streaming classes. |
Modifier and Type | Class and Description |
---|---|
class | `ConnectionError` |
class | `InvalidTable` |
class | `InvalidTransactionState`: Invalid transaction. |
class | `PartitionCreationFailed` |
class | `SerializationError` |
class | `StreamingIOFailure` |
class | `TransactionError`: Transaction error. |
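The classes above are the error types a streaming client should expect. As a hedged sketch (assuming, as in Hive 3's `org.apache.hive.streaming`, that these are checked exceptions sharing the `StreamingException` base class), a write loop might abort the open transaction on a transaction or serialization failure so partial writes never become visible:

```java
import org.apache.hive.streaming.SerializationError;
import org.apache.hive.streaming.StreamingConnection;
import org.apache.hive.streaming.StreamingException;
import org.apache.hive.streaming.TransactionError;

// Illustrative helper (the class and method names here are assumptions,
// not part of the API): commit one record, aborting on failure.
public class WriteWithAbortSketch {
    static void writeOne(StreamingConnection conn, byte[] record) throws StreamingException {
        try {
            conn.write(record);
            conn.commitTransaction();
        } catch (TransactionError | SerializationError e) {
            // Abort so readers never see the writes from the failed transaction.
            conn.abortTransaction();
            throw e;
        }
    }
}
```

The caller is assumed to have already opened the connection and called `beginTransaction()`.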
Modifier and Type | Method and Description |
---|---|
void | `StreamingTransaction.abort()`: Abort a transaction. |
void | `TransactionBatch.abort()` |
void | `HiveStreamingConnection.abortTransaction()` |
void | `StreamingConnection.abortTransaction()`: Manually abort the opened transaction. |
void | `HiveStreamingConnection.addWriteNotificationEvents()`: Add write notification events if the feature is enabled. |
default void | `StreamingConnection.addWriteNotificationEvents()`: Add write notification events if the feature is enabled. |
void | `UnManagedSingleTransaction.beginNextTransaction()` |
void | `StreamingTransaction.beginNextTransaction()`: Get ready for the next transaction. |
void | `TransactionBatch.beginNextTransaction()` |
void | `HiveStreamingConnection.beginTransaction()` |
void | `StreamingConnection.beginTransaction()`: Begin a transaction for writing. |
void | `RecordWriter.close()`: Close the RecordUpdater. |
void | `UnManagedSingleTransaction.close()` |
void | `StreamingTransaction.close()`: Free/close resources used by the streaming transaction. |
void | `TransactionBatch.close()`: Close the TransactionBatch. |
void | `StreamingTransaction.commit()`: Commit the transaction. |
void | `StreamingTransaction.commit(Set<String> partitions)`: Commit the transaction and send the partitions created in the process to the metastore. |
void | `UnManagedSingleTransaction.commit(Set<String> partitions, String key, String value)` |
void | `StreamingTransaction.commit(Set<String> partitions, String key, String value)`: Commits atomically together with a key and a value. |
void | `TransactionBatch.commit(Set<String> partitions, String key, String value)` |
void | `HiveStreamingConnection.commitTransaction()` |
void | `StreamingConnection.commitTransaction()`: Commit a transaction to make the writes visible to readers. |
void | `HiveStreamingConnection.commitTransaction(Set<String> partitions)` |
default void | `StreamingConnection.commitTransaction(Set<String> partitions)`: Commit a transaction to make the writes visible to readers. |
void | `HiveStreamingConnection.commitTransaction(Set<String> partitions, String key, String value)` |
default void | `StreamingConnection.commitTransaction(Set<String> partitions, String key, String value)`: Commits the transaction together with a key and value atomically. |
HiveStreamingConnection | `HiveStreamingConnection.Builder.connect()`: Returns a streaming connection to Hive. |
PartitionInfo | `HiveStreamingConnection.createPartitionIfNotExists(List<String> partitionValues)` |
PartitionInfo | `PartitionHandler.createPartitionIfNotExists(List<String> partitionValues)`: Creates a partition if it does not exist. |
void | `RecordWriter.flush()`: Flush records from the buffer. |
org.apache.hadoop.fs.Path | `HiveStreamingConnection.getDeltaFileLocation(List<String> partitionValues, Integer bucketId, Long minWriteId, Long maxWriteId, Integer statementId)`: Returns the file that would be used to store rows under this connection. |
default org.apache.hadoop.fs.Path | `StreamingConnection.getDeltaFileLocation(List<String> partitionValues, Integer bucketId, Long minWriteId, Long maxWriteId, Integer statementId)`: Returns the file that would be used by the writer to write the rows. |
org.apache.hadoop.fs.Path | `AbstractRecordWriter.getDeltaFileLocation(List<String> partitionValues, Integer bucketId, Long minWriteId, Long maxWriteId, Integer statementId, Table table)`: Returns the file that would be used to store rows under this writer. |
default org.apache.hadoop.fs.Path | `RecordWriter.getDeltaFileLocation(List<String> partitionValues, Integer bucketId, Long minWriteId, Long maxWriteId, Integer statementId, Table table)`: Returns the location of the delta directory. |
void | `AbstractRecordWriter.init(StreamingConnection conn, long minWriteId, long maxWriteId)` |
void | `RecordWriter.init(StreamingConnection connection, long minWriteId, long maxWriteID)`: Initialize the record writer. |
void | `AbstractRecordWriter.init(StreamingConnection conn, long minWriteId, long maxWriteId, int statementId)` |
default void | `RecordWriter.init(StreamingConnection connection, long minWriteId, long maxWriteID, int statementId)`: Initialize the record writer. |
void | `StreamingTransaction.write(byte[] record)`: Write data within a transaction. |
void | `HiveStreamingConnection.write(byte[] record)` |
void | `StreamingConnection.write(byte[] record)`: Write a record using the RecordWriter. |
void | `StreamingTransaction.write(InputStream stream)`: Write data within a transaction. |
void | `HiveStreamingConnection.write(InputStream inputStream)` |
void | `StreamingConnection.write(InputStream inputStream)`: Write records using the RecordWriter. |
void | `AbstractRecordWriter.write(long writeId, byte[] record)` |
void | `RecordWriter.write(long writeId, byte[] record)`: Writes using a Hive RecordUpdater. |
void | `AbstractRecordWriter.write(long writeId, InputStream inputStream)` |
void | `RecordWriter.write(long writeId, InputStream inputStream)`: Writes using a Hive RecordUpdater. |
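Taken together, the methods above describe a connect / begin / write / commit (or abort) / close lifecycle. A minimal sketch of that lifecycle, assuming a transactional target table (`default.alerts` here is an illustrative name, and `StrictDelimitedInputWriter` is the delimited-text `RecordWriter` implementation shipped with hive-streaming):

```java
import java.nio.charset.StandardCharsets;
import org.apache.hive.streaming.HiveStreamingConnection;
import org.apache.hive.streaming.StreamingConnection;
import org.apache.hive.streaming.StreamingException;
import org.apache.hive.streaming.StrictDelimitedInputWriter;

public class LifecycleSketch {
    public static void main(String[] args) throws StreamingException {
        // RecordWriter for comma-delimited text records.
        StrictDelimitedInputWriter writer = StrictDelimitedInputWriter.newBuilder()
                .withFieldDelimiter(',')
                .build();
        // HiveStreamingConnection.Builder.connect() returns the open connection.
        StreamingConnection conn = HiveStreamingConnection.newBuilder()
                .withDatabase("default")        // assumed database
                .withTable("alerts")            // assumed transactional table
                .withAgentInfo("example-agent") // free-form identifier for this writer
                .withRecordWriter(writer)
                .connect();
        try {
            conn.beginTransaction();            // begin a transaction for writing
            conn.write("1,hello".getBytes(StandardCharsets.UTF_8));
            conn.write("2,world".getBytes(StandardCharsets.UTF_8));
            conn.commitTransaction();           // make the writes visible to readers
        } catch (StreamingException e) {
            conn.abortTransaction();            // manually abort the opened transaction
            throw e;
        } finally {
            conn.close();                       // free/close connection resources
        }
    }
}
```

Running this requires a live metastore and the hive-streaming dependencies on the classpath; it is a usage sketch, not a self-contained test.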
Constructor and Description |
---|
`TransactionBatch(HiveStreamingConnection conn)`: Represents a batch of transactions acquired from the MetaStore. |
`UnManagedSingleTransaction(HiveStreamingConnection conn)` |
Copyright © 2023 The Apache Software Foundation. All rights reserved.