- GenericOptionsParser - Class in org.apache.hadoop.util
- GenericOptionsParser is a utility to parse command line arguments generic to the Hadoop framework.
- GenericOptionsParser(Configuration, String[]) -
Constructor for class org.apache.hadoop.util.GenericOptionsParser
- Create a GenericOptionsParser to parse only the generic Hadoop arguments.
- GenericOptionsParser(Configuration, Options, String[]) -
Constructor for class org.apache.hadoop.util.GenericOptionsParser
- Create a GenericOptionsParser to parse given options as well as generic Hadoop options.
- GenericsUtil - Class in org.apache.hadoop.util
- Contains utility methods for dealing with Java Generics.
- GenericsUtil() -
Constructor for class org.apache.hadoop.util.GenericsUtil
-
- GenericWritable - Class in org.apache.hadoop.io
- A wrapper for Writable instances.
- GenericWritable() -
Constructor for class org.apache.hadoop.io.GenericWritable
-
- get(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property, null if no such property exists.
- get(String, String) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property.
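A minimal sketch of how these Configuration getters behave; the key names and values below are illustrative, not taken from the index.
```java
import org.apache.hadoop.conf.Configuration;

public class ConfigurationGetExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("example.buffer.size", "4096");   // hypothetical key, for illustration only

    String raw     = conf.get("example.buffer.size");             // value set above; null if the key were absent
    String withDef = conf.get("example.missing.key", "fallback"); // returns "fallback" when the key is absent
    System.out.println(raw + " / " + withDef);
  }
}
```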
- get(String) -
Method in class org.apache.hadoop.contrib.failmon.EventRecord
- Get the value of a property of the EventRecord.
- get(String) -
Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
- Get the value of a property of the EventRecord.
- get(URI, Configuration, String) -
Static method in class org.apache.hadoop.fs.FileSystem
-
- get(Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
- Returns the configured filesystem implementation.
- get(URI, Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
- Returns the FileSystem for this URI's scheme and authority.
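A hedged sketch of obtaining a FileSystem through these factory methods; the HDFS URI is a placeholder.
```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileSystemGetExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Default filesystem taken from the configuration (core-site.xml).
    FileSystem defaultFs = FileSystem.get(conf);

    // Filesystem for an explicit scheme and authority; the URI below is a placeholder.
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);

    System.out.println(defaultFs.exists(new Path("/")) + " " + hdfs.getUri());
  }
}
```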
- get(long, Writable) -
Method in class org.apache.hadoop.io.ArrayFile.Reader
- Return the n-th value in the file.
- get() -
Method in class org.apache.hadoop.io.ArrayWritable
-
- get(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.BloomMapFile.Reader
- Fast version of the MapFile.Reader.get(WritableComparable, Writable) method.
- get() -
Method in class org.apache.hadoop.io.BooleanWritable
- Returns the value of the BooleanWritable
- get() -
Method in class org.apache.hadoop.io.BytesWritable
- Deprecated. Use BytesWritable.getBytes() instead.
- get() -
Method in class org.apache.hadoop.io.ByteWritable
- Return the value of this ByteWritable.
- get() -
Method in class org.apache.hadoop.io.DoubleWritable
-
- get(BytesWritable, BytesWritable) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy the key and value in one shot into BytesWritables.
- get() -
Method in class org.apache.hadoop.io.FloatWritable
- Return the value of this FloatWritable.
- get() -
Method in class org.apache.hadoop.io.GenericWritable
- Return the wrapped instance.
- get() -
Method in class org.apache.hadoop.io.IntWritable
- Return the value of this IntWritable.
- get() -
Method in class org.apache.hadoop.io.LongWritable
- Return the value of this LongWritable.
- get(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.MapFile.Reader
- Return the value for the named key, or null if none exists.
- get(Object) -
Method in class org.apache.hadoop.io.MapWritable
-
- get() -
Static method in class org.apache.hadoop.io.NullWritable
- Returns the single instance of this class.
- get() -
Method in class org.apache.hadoop.io.ObjectWritable
- Return the instance, or null if none.
- get(Text) -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- get(WritableComparable) -
Method in class org.apache.hadoop.io.SetFile.Reader
- Read the matching key from a set into key.
- get(Object) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- get() -
Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- get() -
Method in class org.apache.hadoop.io.VIntWritable
- Return the value of this VIntWritable.
- get() -
Method in class org.apache.hadoop.io.VLongWritable
- Return the value of this VLongWritable.
- get(Class<? extends WritableComparable>) -
Static method in class org.apache.hadoop.io.WritableComparator
- Get a comparator for a WritableComparable implementation.
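As a rough illustration of the comparator lookup described above, using Text purely as an example key type.
```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparator;

public class WritableComparatorExample {
  public static void main(String[] args) {
    // Look up the registered comparator for a WritableComparable implementation.
    WritableComparator comparator = WritableComparator.get(Text.class);

    int result = comparator.compare(new Text("apple"), new Text("banana"));
    System.out.println(result < 0); // true: "apple" sorts before "banana"
  }
}
```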
- get() -
Static method in class org.apache.hadoop.ipc.Server
- Returns the server instance called under or null.
- get(int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- Get ith child InputSplit.
- get(int) -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- Get ith Writable from Tuple.
- get() -
Method in class org.apache.hadoop.metrics.util.MetricsIntValue
- Deprecated. Get value
- get() -
Method in class org.apache.hadoop.metrics.util.MetricsLongValue
- Deprecated. Get value
- get(String) -
Method in class org.apache.hadoop.metrics.util.MetricsRegistry
- Deprecated.
- get(String) -
Method in class org.apache.hadoop.metrics2.lib.MetricsRegistry
- Get a metric by name
- get(String, Collection<MetricsTag>) -
Method in class org.apache.hadoop.metrics2.util.MetricsCache
- Get the cached record
- get(DataInput) -
Static method in class org.apache.hadoop.record.BinaryRecordInput
- Get a thread-local record input for the supplied DataInput.
- get(DataOutput) -
Static method in class org.apache.hadoop.record.BinaryRecordOutput
- Get a thread-local record output for the supplied DataOutput.
- get() -
Method in class org.apache.hadoop.record.Buffer
- Get the data from the Buffer.
- get(DataInput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesInput
- Get a thread-local typed bytes input for the supplied DataInput.
- get(DataOutput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesOutput
- Get a thread-local typed bytes output for the supplied DataOutput.
- get(TypedBytesInput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
- Get a thread-local typed bytes record input for the supplied TypedBytesInput.
- get(DataInput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordInput
- Get a thread-local typed bytes record input for the supplied DataInput.
- get(TypedBytesOutput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
- Get a thread-local typed bytes record output for the supplied TypedBytesOutput.
- get(DataOutput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesRecordOutput
- Get a thread-local typed bytes record output for the supplied DataOutput.
- get(TypedBytesInput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
- Get a thread-local typed bytes writable input for the supplied TypedBytesInput.
- get(DataInput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
- Get a thread-local typed bytes writable input for the supplied DataInput.
- get(TypedBytesOutput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableOutput
- Get a thread-local typed bytes writable output for the supplied TypedBytesOutput.
- get(DataOutput) -
Static method in class org.apache.hadoop.typedbytes.TypedBytesWritableOutput
- Get a thread-local typed bytes writable output for the supplied DataOutput.
- get() -
Method in class org.apache.hadoop.util.Progress
- Returns the overall progress of the root.
- getAbsolutePath(String) -
Method in class org.apache.hadoop.streaming.PathFinder
- Returns the full path name of this file if it is listed in the path.
- getAcceptAnonymous() -
Method in class org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler
- Returns if the handler is configured to support anonymous users.
- getAccessKey() -
Method in class org.apache.hadoop.fs.s3.S3Credentials
-
- getAccessTime() -
Method in class org.apache.hadoop.fs.FileStatus
- Get the access time of the file.
- getAclName() -
Method in enum org.apache.hadoop.mapreduce.JobACL
- Get the name of the ACL.
- getACLString() -
Method in class org.apache.hadoop.security.authorize.AccessControlList
- Returns the String representation of this ACL.
- getActiveTrackerNames() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the names of active task trackers in the cluster.
- getAddress(Configuration) -
Static method in class org.apache.hadoop.mapred.JobTracker
-
- getAdminAcls(Configuration, String) -
Static method in class org.apache.hadoop.security.SecurityUtil
- Get the ACL object representing the cluster administrators.
The user who starts the daemon is automatically added as an admin.
- getAlgorithmName() -
Method in class org.apache.hadoop.fs.FileChecksum
- The checksum algorithm name
- getAlgorithmName() -
Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
- The checksum algorithm name
- getAliveNodesInfoJson() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getAliveNodesInfoJson() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getAllAttempts() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.JVMInfo
-
- getAllContexts() -
Method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated. Returns all MetricsContexts built by this factory.
- getAllJobs() -
Method in class org.apache.hadoop.mapred.JobClient
- Get the jobs that are submitted.
- getAllJobs() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getAllKeys() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager
-
- getAllLocalPathsToRead(String, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
- Get all of the paths that currently exist in the working directories.
- getAllRecords() -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Deprecated. Retrieves all the records managed by this MetricsContext.
- getAllRecords() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Deprecated. Retrieves all the records managed by this MetricsContext.
- getAllStaticResolutions() -
Static method in class org.apache.hadoop.net.NetUtils
- This is used to get all the resolutions that were added using NetUtils.addStaticResolution(String, String).
- getAllStatistics() -
Static method in class org.apache.hadoop.fs.FileSystem
- Return the FileSystem classes that have Statistics
- getAllTasks() -
Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Returns all map and reduce tasks.
- getAllTokens() -
Method in class org.apache.hadoop.security.Credentials
- Return all the tokens in the in-memory map
- getApproxChkSumLength(long) -
Static method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- getArchiveClassPaths(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Get the archive entries in classpath as an array of Path.
- getArchiveTimestamps(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Get the timestamps of the archives.
- getArchiveVisibilities(Configuration) -
Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
- Get the booleans on whether the archives are public or not.
- getAssignedJobID() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getAssignedTracker(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.JobTracker
- Get tracker name for a given task id.
- getAttemptsToStartSkipping(Configuration) -
Static method in class org.apache.hadoop.mapred.SkipBadRecords
- Get the number of Task attempts AFTER which skip mode
will be kicked off.
- getAttribute(String) -
Method in class org.apache.hadoop.http.HttpServer
- Get the value in the webapp context.
- getAttribute(String) -
Method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated. Returns the value of the named attribute, or null if there is no
attribute of that name.
- getAttribute(String) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Deprecated. Convenience method for subclasses to access factory attributes.
- getAttribute(String) -
Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
- Deprecated.
- getAttributeNames() -
Method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated. Returns the names of all the factory's attributes.
- getAttributes(String[]) -
Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
- Deprecated.
- getAttributeTable(String) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Deprecated. Returns an attribute-value map derived from the factory attributes
by finding all factory attributes that begin with
contextName.tableName.
- getAuthenticationHandler() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the authentication handler being used.
- getAuthenticationMethod() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the authentication method from the subject
- getAutoIncrMapperProcCount(Configuration) -
Static method in class org.apache.hadoop.mapred.SkipBadRecords
- Get the flag which if set to true,
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS
is incremented
by MapRunner after invoking the map function.
- getAutoIncrReducerProcCount(Configuration) -
Static method in class org.apache.hadoop.mapred.SkipBadRecords
- Get the flag which if set to true,
SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS
is incremented
by framework after invoking the reduce function.
- getAvailable() -
Method in class org.apache.hadoop.fs.DF
-
- getAvailableMapSlots() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get available map slots.
- getAvailablePhysicalMemorySize() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the total size of the available physical memory present
in the system.
- getAvailablePhysicalMemorySize() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the total size of the available physical memory present
in the system.
- getAvailableReduceSlots() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get available reduce slots.
- getAvailableSlots(TaskType) -
Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
- Get the number of currently available slots on this tasktracker for the
given type of the task.
- getAvailableVirtualMemorySize() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the total size of the available virtual memory present
in the system.
- getAvailableVirtualMemorySize() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the total size of the available virtual memory present
in the system.
- getBaseLogDir() -
Static method in class org.apache.hadoop.mapred.TaskLog
-
- getBasePathInJarOut(String) -
Method in class org.apache.hadoop.streaming.JarBuilder
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
-
- getBeginColumn() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getBeginLine() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getBlacklistedNodesInfoJson() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getBlacklistedNodesInfoJson() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getBlackListedTaskTrackerCount() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of blacklisted trackers in the cluster.
- getBlacklistedTrackerNames() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the names of blacklisted task trackers in the cluster.
- getBlacklistedTrackers() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the number of blacklisted task trackers in the cluster.
- getBlockIndex(BlockLocation[], long) -
Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getBlockIndex(BlockLocation[], long) -
Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getBlocks() -
Method in class org.apache.hadoop.fs.s3.INode
-
- getBlockSize() -
Method in class org.apache.hadoop.fs.FileStatus
- Get the block size of the file.
- getBlockSize(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Deprecated. Use getFileStatus() instead
- getBlockSize() -
Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
- Returns the blocksize parameter specified at construction time.
- getBloomFilter() -
Method in class org.apache.hadoop.io.BloomMapFile.Reader
- Retrieve the Bloom filter used by this instance of the Reader.
- getBoolean(String, boolean) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as a boolean.
- getBoundAntProperty(String, String) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getBoundingValsQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getBuildVersion() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getBuildVersion() -
Static method in class org.apache.hadoop.util.VersionInfo
- Returns the buildVersion which includes version,
revision, user and date.
- getByName(String) -
Static method in class org.apache.hadoop.security.SecurityUtil
- Resolves a host subject to the security requirements determined by
hadoop.security.token.service.use_ip.
- getByName(String) -
Method in class org.apache.hadoop.security.SecurityUtil.QualifiedHostResolver
- Create an InetAddress with a fully qualified hostname of the given
hostname.
- getBytes() -
Method in class org.apache.hadoop.fs.FileChecksum
- The value of the checksum in bytes
- getBytes() -
Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
- The value of the checksum in bytes
- getBytes() -
Method in class org.apache.hadoop.io.BinaryComparable
- Return representative byte array for this instance.
- getBytes() -
Method in class org.apache.hadoop.io.BytesWritable
- Get the data from the BytesWritable.
- getBytes() -
Method in class org.apache.hadoop.io.Text
- Returns the raw bytes; however, only data up to Text.getLength() is valid.
- getBytes() -
Method in class org.apache.hadoop.io.UTF8
- Deprecated. The raw bytes.
- getBytes(String) -
Static method in class org.apache.hadoop.io.UTF8
- Deprecated. Convert a string to a UTF-8 encoded byte array.
- getBytes() -
Method in class org.apache.hadoop.security.token.TokenIdentifier
- Get the bytes for the token identifier
- getBytes() -
Method in class org.apache.hadoop.util.bloom.Key
-
- getBytesPerChecksum() -
Method in class org.apache.hadoop.util.DataChecksum
-
- getBytesPerSum() -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Return the bytes per checksum.
- getBytesRead() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Get the total number of bytes read
- getBytesRead() -
Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
-
- getBytesRead() -
Method in interface org.apache.hadoop.io.compress.Compressor
- Return number of uncompressed bytes input so far.
- getBytesRead() -
Method in class org.apache.hadoop.io.compress.snappy.SnappyCompressor
- Return number of bytes given to this compressor since last reset.
- getBytesRead() -
Method in class org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor
- Returns the total number of compressed bytes input so far, including
gzip header/trailer bytes.
- getBytesRead() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
- Returns the total number of uncompressed bytes input so far.
- getBytesRead() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
- Returns the total number of compressed bytes input so far.
- getBytesWritten() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Get the total number of bytes written
- getBytesWritten() -
Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
-
- getBytesWritten() -
Method in interface org.apache.hadoop.io.compress.Compressor
- Return number of compressed bytes output so far.
- getBytesWritten() -
Method in class org.apache.hadoop.io.compress.snappy.SnappyCompressor
- Return number of bytes consumed by callers of compress since last reset.
- getBytesWritten() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
- Returns the total number of compressed bytes output so far.
- getBytesWritten() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
- Returns the total number of uncompressed bytes output so far.
- getCacheArchives(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Get cache archives set in the Configuration.
- getCacheFiles(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Get cache files set in the Configuration.
- getCallQueueLen() -
Method in class org.apache.hadoop.ipc.Server
- The number of rpc calls in the queue.
- getCanonicalServiceName() -
Method in class org.apache.hadoop.fs.FileSystem
- Get a canonical service name for this file system.
- getCanonicalServiceName() -
Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getCanonicalServiceName() -
Method in class org.apache.hadoop.fs.HarFileSystem
-
- getCanonicalUri() -
Method in class org.apache.hadoop.fs.FileSystem
- Resolve the uri's hostname and add the default port if not in the uri
- getCanonicalUri(URI, int) -
Static method in class org.apache.hadoop.net.NetUtils
- Resolve the uri's hostname and add the default port if not in the uri
- getCapacity() -
Method in class org.apache.hadoop.fs.DF
-
- getCapacity() -
Method in class org.apache.hadoop.io.BytesWritable
- Get the capacity, which is the maximum size that could be handled without resizing the backing storage.
- getCapacity() -
Method in class org.apache.hadoop.record.Buffer
- Get the capacity, which is the maximum count that could be handled without resizing the backing storage.
- getCategory(List<List<Pentomino.ColumnName>>) -
Method in class org.apache.hadoop.examples.dancing.Pentomino
- Find whether the solution has the x in the upper left quadrant, the
x-midline, the y-midline or in the center.
- getChannel() -
Method in class org.apache.hadoop.net.SocketInputStream
- Returns underlying channel used by inputstream.
- getChannel() -
Method in class org.apache.hadoop.net.SocketOutputStream
- Returns underlying channel used by this stream.
- getChecksumFile(Path) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Return the name of the checksum file associated with a file.
- getChecksumFileLength(Path, long) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Return the length of the checksum file given the size of the
actual file.
- getChecksumHeaderSize() -
Static method in class org.apache.hadoop.util.DataChecksum
-
- getChecksumLength(long, int) -
Static method in class org.apache.hadoop.fs.ChecksumFileSystem
- Calculate the length of the checksum file in bytes.
- getChecksumSize() -
Method in class org.apache.hadoop.util.DataChecksum
-
- getChecksumType() -
Method in class org.apache.hadoop.util.DataChecksum
-
- getChunkPosition(long) -
Method in class org.apache.hadoop.fs.FSInputChecker
- Return position of beginning of chunk containing pos.
- getClass(String, Class<?>) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as a Class.
- getClass(String, Class<? extends U>, Class<U>) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as a Class implementing the interface specified by xface.
- getClass(byte) -
Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getClass(String, Configuration) -
Static method in class org.apache.hadoop.io.WritableName
- Return the class for a name.
- getClass(T) -
Static method in class org.apache.hadoop.util.GenericsUtil
- Returns the Class object (of type Class<T>) of the argument of type T.
- getClass(T) -
Static method in class org.apache.hadoop.util.ReflectionUtils
- Return the correctly-typed Class of the given object.
- getClassByName(String) -
Method in class org.apache.hadoop.conf.Configuration
- Load a class by name.
- getClassByName(String) -
Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
-
- getClasses(String, Class<?>...) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as an array of Class.
- getClassLoader() -
Method in class org.apache.hadoop.conf.Configuration
- Get the ClassLoader for this job.
- getClassName() -
Method in exception org.apache.hadoop.ipc.RemoteException
-
- getClassPaths() -
Method in class org.apache.hadoop.filecache.TaskDistributedCacheManager
- Retrieves class paths (as local references) to add.
- getCleanupTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
- Get the information of the current state of the cleanup tasks of a job.
- getCleanupTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getClientInput() -
Method in class org.apache.hadoop.streaming.PipeMapRed
- Returns the DataInput from which the client output is read.
- getClientOutput() -
Method in class org.apache.hadoop.streaming.PipeMapRed
- Returns the DataOutput to which the client input is written.
- getClientVersion() -
Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
- Get the client's preferred version
- getClock() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getClosest(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.MapFile.Reader
- Finds the record that is the closest match to the specified key.
- getClosest(WritableComparable, Writable, boolean) -
Method in class org.apache.hadoop.io.MapFile.Reader
- Finds the record that is the closest match to the specified key.
- getClusterMetrics() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getClusterNick() -
Method in class org.apache.hadoop.streaming.StreamJob
- Deprecated.
- getClusterStatus() -
Method in class org.apache.hadoop.mapred.JobClient
- Get status information about the Map-Reduce cluster.
- getClusterStatus(boolean) -
Method in class org.apache.hadoop.mapred.JobClient
- Get status information about the Map-Reduce cluster.
- getClusterStatus() -
Method in class org.apache.hadoop.mapred.JobTracker
- Deprecated. Use JobTracker.getClusterStatus(boolean).
- getClusterStatus(boolean) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getCodec(Path) -
Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
- Find the relevant compression codec for the given file based on its
filename suffix.
- getCodecClasses(Configuration) -
Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
- Get the list of codecs listed in the configuration
- getCollector(String, Reporter) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Gets the output collector for a named output.
- getCollector(String, String, Reporter) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Gets the output collector for a multi named output.
- getColumnName(int) -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
- Get the name of a given column as a string
- getCombinerClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- getCombinerClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the combiner class for the job.
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
- getCombinerOutput() -
Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
-
- getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getCommandLine() -
Method in class org.apache.hadoop.util.GenericOptionsParser
- Returns the commons-cli CommandLine object to process the parsed arguments.
- getCommandName() -
Method in class org.apache.hadoop.fs.shell.Command
- Return the command's name excluding the leading character -
- getCommandName() -
Method in class org.apache.hadoop.fs.shell.Count
-
- getComparator() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get an instance of the RawComparator that is constructed based on the
string comparator representation.
- getComparator() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Return comparator defining the ordering for RecordReaders in this
composite.
- getComparatorName() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get the string representation of the comparator.
- getCompressedData() -
Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
-
- getCompressedData() -
Method in class org.apache.hadoop.io.compress.DecompressorStream
-
- getCompressionCodec() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the compression codec of data in this file.
- getCompressionCodec() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
- Returns the compression codec of data in this file.
- getCompressionLevel(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
-
- getCompressionStrategy(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
-
- getCompressionType(Configuration) -
Static method in class org.apache.hadoop.io.SequenceFile
- Deprecated. Use SequenceFileOutputFormat.getOutputCompressionType(org.apache.hadoop.mapred.JobConf)
to get SequenceFile.CompressionType for job-outputs.
- getCompressMapOutput() -
Method in class org.apache.hadoop.mapred.JobConf
- Are the outputs of the maps compressed?
- getCompressor(CompressionCodec, Configuration) -
Static method in class org.apache.hadoop.io.compress.CodecPool
- Get a Compressor for the given CompressionCodec from the pool or a new one.
- getCompressor(CompressionCodec) -
Static method in class org.apache.hadoop.io.compress.CodecPool
-
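A minimal sketch of borrowing a Compressor from the CodecPool and returning it afterwards; the output path is illustrative.
```java
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecPoolExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

    // Borrow a Compressor from the pool (or have a new one created).
    Compressor compressor = CodecPool.getCompressor(codec, conf);
    try {
      FileSystem fs = FileSystem.get(conf);
      OutputStream out = codec.createOutputStream(
          fs.create(new Path("/tmp/example.gz")), compressor); // illustrative path
      out.write("hello".getBytes("UTF-8"));
      out.close();
    } finally {
      CodecPool.returnCompressor(compressor); // always hand the compressor back to the pool
    }
  }
}
```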
- getCompressorType() -
Method in class org.apache.hadoop.io.compress.BZip2Codec
- This functionality is currently not supported.
- getCompressorType() -
Method in interface org.apache.hadoop.io.compress.CompressionCodec
- Get the type of Compressor needed by this CompressionCodec.
- getCompressorType() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getCompressorType() -
Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getCompressorType() -
Method in class org.apache.hadoop.io.compress.SnappyCodec
- Get the type of Compressor needed by this CompressionCodec.
- getCompressOutput(JobConf) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Is the job output compressed?
- getCompressOutput(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Is the job output compressed?
- getConditions() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getConf() -
Method in interface org.apache.hadoop.conf.Configurable
- Return the configuration used by this object.
- getConf() -
Method in class org.apache.hadoop.conf.Configured
-
- getConf() -
Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getConf() -
Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getConf() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getConf() -
Method in class org.apache.hadoop.io.compress.SnappyCodec
- Return the configuration used by this object.
- getConf() -
Method in class org.apache.hadoop.io.GenericWritable
-
- getConf() -
Method in class org.apache.hadoop.io.ObjectWritable
-
- getConf() -
Method in class org.apache.hadoop.mapred.JobTracker
- Returns a handle to the JobTracker's Configuration
- getConf() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Return the configuration used by this object.
- getConf() -
Method in class org.apache.hadoop.mapred.lib.InputSampler
-
- getConf() -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
-
- getConf() -
Method in class org.apache.hadoop.mapred.Task
-
- getConf() -
Method in class org.apache.hadoop.mapred.TaskController
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.FilterBase
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getConf() -
Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getConf() -
Method in class org.apache.hadoop.net.ScriptBasedMapping
-
- getConf() -
Method in class org.apache.hadoop.net.SocksSocketFactory
-
- getConf() -
Method in class org.apache.hadoop.streaming.DumpTypedBytes
-
- getConf() -
Method in class org.apache.hadoop.streaming.LoadTypedBytes
-
- getConf() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- getConf() -
Method in class org.apache.hadoop.typedbytes.TypedBytesWritableInput
-
- getConfiguration() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the underlying configuration object.
- getConfiguration() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Return the configuration for the job.
- getConfiguration(String, FilterConfig) -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the filtered configuration (only properties starting with the specified prefix).
- getConfiguration() -
Method in class org.apache.hadoop.streaming.PipeMapRed
- Returns the Configuration.
- getConfiguration() -
Method in class org.apache.hadoop.util.GenericOptionsParser
- Get the modified configuration
- getConfigVersion() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getConfigVersion() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getConfigVersion() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getConfigVersion() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getConfResourceAsInputStream(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get an input stream attached to the configuration resource with the given name.
- getConfResourceAsReader(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get a Reader attached to the configuration resource with the given name.
- getConnectAddress(Server) -
Static method in class org.apache.hadoop.net.NetUtils
- Returns InetSocketAddress that a client can use to
connect to the server.
- getConnection() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
- Returns a connection object to the DB.
- getConnection() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConnection() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
- getConnection() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getContentSummary(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Return the ContentSummary of a given Path.
- getContext(String, String) -
Method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated. Returns the named MetricsContext instance, constructing it if necessary
using the factory's current configuration attributes.
- getContext(String) -
Method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated.
- getContext(String) -
Static method in class org.apache.hadoop.metrics.MetricsUtil
- Deprecated.
- getContext(String, String) -
Static method in class org.apache.hadoop.metrics.MetricsUtil
- Deprecated. Utility method to return the named context.
- getContext() -
Method in class org.apache.hadoop.streaming.PipeMapRed
-
- getContextFactory() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Deprecated. Returns the factory by which this context was created.
- getContextName() -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Deprecated. Returns the context name.
- getContextName() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Deprecated. Returns the context name.
- getCookieDomain() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the cookie domain to use for the HTTP cookie.
- getCookiePath() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the cookie path to use for the HTTP cookie.
- getCount() -
Method in class org.apache.hadoop.record.Buffer
- Get the current count of the buffer.
- getCounter() -
Method in class org.apache.hadoop.mapred.Counters.Counter
- What is the current value of this counter?
- getCounter(Enum) -
Method in class org.apache.hadoop.mapred.Counters
- Returns current value of the specified counter, or 0 if the counter
does not exist.
- getCounter(String) -
Method in class org.apache.hadoop.mapred.Counters.Group
- Returns the value of the specified counter, or 0 if the counter does
not exist.
- getCounter(int, String) -
Method in class org.apache.hadoop.mapred.Counters.Group
- Deprecated. Use Counters.Group.getCounter(String) instead.
- getCounter(Enum<?>) -
Method in interface org.apache.hadoop.mapred.Reporter
- Get the Counters.Counter of the given group with the given name.
- getCounter(String, String) -
Method in interface org.apache.hadoop.mapred.Reporter
- Get the Counters.Counter of the given group with the given name.
- getCounter(String, String) -
Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getCounter(Enum<?>) -
Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getCounter(Enum<?>) -
Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- getCounter(String, String) -
Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- getCounter(Enum<?>) -
Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
-
- getCounter(String, String) -
Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
-
- getCounterForName(String) -
Method in class org.apache.hadoop.mapred.Counters.Group
- Get the counter for the given name and create it if it doesn't exist.
- getCounters(Counters) -
Method in class org.apache.hadoop.mapred.JobInProgress
- Returns the total job counters, by adding together the job,
the map and the reduce counters.
- getCounters() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Gets the counters for this job.
- getCounters() -
Method in class org.apache.hadoop.mapred.TaskReport
- A table of counters.
- getCounters() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get task's counters.
- getCounters() -
Method in class org.apache.hadoop.mapreduce.Job
- Gets the counters for this job.
- getCountersEnabled(JobConf) -
Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Returns if the counters for the named outputs are enabled or not.
- getCountersEnabled(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
- Returns if the counters for the named outputs are enabled or not.
- getCountQuery() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
- Returns the query for getting the total number of rows; subclasses can override this for custom behaviour.
- getCountQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
- Returns the query for getting the total number of rows; subclasses can override this for custom behaviour.
- getCpuFrequency() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the CPU frequency of the system.
- getCpuFrequency() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the CPU frequency of the system.
- getCpuUsage() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the CPU usage % of the machine.
- getCpuUsage() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the CPU usage % of the machine.
- getCredentials() -
Method in class org.apache.hadoop.mapred.JobConf
- Get credentials for the job.
- getCredentials() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get credentials for the job.
- getCumulativeCpuTime() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the cumulative CPU time since the system is on.
- getCumulativeCpuTime() -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get the CPU time in milliseconds used by all the processes in the process-tree since the process-tree was created.
- getCumulativeCpuTime() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the cumulative CPU time since the system is on.
- getCumulativeCpuTime() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin.ProcResourceValues
- Obtain the cumulative CPU time used by a current process tree.
- getCumulativeRssmem() -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get the cumulative resident set size (rss) memory used by all the processes
in the process-tree.
- getCumulativeRssmem(int) -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get the cumulative resident set size (rss) memory used by all the processes
in the process-tree that are older than the passed in age.
- getCumulativeVmem() -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get the cumulative virtual memory used by all the processes in the
process-tree.
- getCumulativeVmem(int) -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get the cumulative virtual memory used by all the processes in the
process-tree that are older than the passed in age.
- getCurrentIntervalValue() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
- Deprecated. The Value at the current interval
- getCurrentIntervalValue() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
- Deprecated. The Value at the current interval
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
- Get the current key
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.MapContext
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.RecordReader
- Get the current key
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.ReduceContext
-
- getCurrentKey() -
Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
- Get the current key.
- getCurrentKey() -
Method in class org.apache.hadoop.streaming.io.OutputReader
- Returns the current key.
- getCurrentKey() -
Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- getCurrentKey() -
Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- getCurrentSegmentGeneration(Directory) -
Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
- Get the generation (N) of the current segments_N file in the directory.
- getCurrentSegmentGeneration(String[]) -
Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
- Get the generation (N) of the current segments_N file from a list of
files.
- getCurrentSplit(JobConf) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getCurrentStatus() -
Method in class org.apache.hadoop.mapred.TaskReport
- The current status
- getCurrentTrashDir() -
Method in class org.apache.hadoop.fs.FsShell
- Returns the Trash object associated with this shell.
- getCurrentUser() -
Static method in class org.apache.hadoop.security.UserGroupInformation
- Return the current user, including any doAs in the current stack.
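A short sketch of reading the current user, including an enclosing doAs context; the proxy user name is purely illustrative.
```java
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class CurrentUserExample {
  public static void main(String[] args) throws Exception {
    // The current user reflects whoever is running, including any enclosing doAs.
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    System.out.println("outside doAs: " + ugi.getUserName());

    UserGroupInformation proxy = UserGroupInformation.createRemoteUser("alice"); // illustrative name
    proxy.doAs(new PrivilegedExceptionAction<Void>() {
      public Void run() throws Exception {
        // Inside the doAs block the current user is the proxy user ("alice").
        System.out.println("inside doAs: "
            + UserGroupInformation.getCurrentUser().getUserName());
        return null;
      }
    });
  }
}
```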
- getCurrentValue(Writable) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Get the 'value' corresponding to the last read 'key'.
- getCurrentValue(Object) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Get the 'value' corresponding to the last read 'key'.
- getCurrentValue(V) -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
- Get the current value.
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.MapContext
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.RecordReader
- Get the current value.
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.ReduceContext
-
- getCurrentValue() -
Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
- Get the current value.
- getCurrentValue() -
Method in class org.apache.hadoop.streaming.io.OutputReader
- Returns the current value.
- getCurrentValue() -
Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- getCurrentValue() -
Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- getData() -
Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- getData() -
Method in class org.apache.hadoop.io.DataInputBuffer
-
- getData() -
Method in class org.apache.hadoop.io.DataOutputBuffer
- Returns the current contents of the buffer.
- getData() -
Method in class org.apache.hadoop.io.OutputBuffer
- Returns the current contents of the buffer.
- getDate() -
Static method in class org.apache.hadoop.util.VersionInfo
- The date that Hadoop was compiled.
- getDBConf() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDBConf() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getDBProductName() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDeclaredClass() -
Method in class org.apache.hadoop.io.ObjectWritable
- Return the class this is meant to be.
- getDecommissionedTaskTrackerCount() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of decommissioned trackers in the cluster.
- getDecompressor(CompressionCodec) -
Static method in class org.apache.hadoop.io.compress.CodecPool
- Get a Decompressor for the given CompressionCodec from the pool or a new one.
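A brief usage sketch for the codec pool (the class name, the input path argument, and the surrounding setup are illustrative; the CodecPool, CompressionCodecFactory, and Decompressor calls are the real APIs). A borrowed decompressor should always be handed back with returnDecompressor():

    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;
    import org.apache.hadoop.io.compress.Decompressor;

    public class PooledDecompress {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path(args[0]);                                 // e.g. a .gz input file
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
        Decompressor decompressor = CodecPool.getDecompressor(codec);  // borrow from the pool
        try {
          InputStream in = codec.createInputStream(fs.open(file), decompressor);
          IOUtils.copyBytes(in, System.out, conf, false);
          in.close();
        } finally {
          CodecPool.returnDecompressor(decompressor);                  // always hand it back
        }
      }
    }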
- getDecompressorType() -
Method in class org.apache.hadoop.io.compress.BZip2Codec
- This functionality is currently not supported.
- getDecompressorType() -
Method in interface org.apache.hadoop.io.compress.CompressionCodec
- Get the type of Decompressor needed by this CompressionCodec.
- getDecompressorType() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getDecompressorType() -
Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getDecompressorType() -
Method in class org.apache.hadoop.io.compress.SnappyCodec
- Get the type of Decompressor needed by this CompressionCodec.
- getDefault() -
Static method in class org.apache.hadoop.fs.permission.FsPermission
- Get the default permission.
- getDefaultAuthenticator() -
Static method in class org.apache.hadoop.security.authentication.client.AuthenticatedURL
- Returns the default Authenticator class to use when an AuthenticatedURL instance is created without specifying an authenticator.
- getDefaultBlockSize() -
Method in class org.apache.hadoop.fs.FileSystem
- Return the number of bytes that large input files should optimally be split into to minimize I/O time.
- getDefaultBlockSize() -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Return the number of bytes that large input files should optimally be split into to minimize I/O time.
- getDefaultBlockSize() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.BZip2Codec
- .bz2 is recognized as the default extension for compressed BZip2 files
- getDefaultExtension() -
Method in interface org.apache.hadoop.io.compress.CompressionCodec
- Get the default filename extension for this kind of compression.
- getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.GzipCodec
-
- getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.SnappyCodec
- Get the default filename extension for this kind of compression.
- getDefaultHost(String, String) -
Static method in class org.apache.hadoop.net.DNS
- Returns the default (first) host name associated by the provided
nameserver with the address bound to the specified network interface
- getDefaultHost(String) -
Static method in class org.apache.hadoop.net.DNS
- Returns the default (first) host name associated by the default
nameserver with the address bound to the specified network interface
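A minimal sketch of the DNS helpers listed above (the class name is illustrative; passing "default" asks the class to use the default network interface and nameserver, which is its documented convention):

    import java.net.UnknownHostException;
    import org.apache.hadoop.net.DNS;

    public class DnsProbe {
      public static void main(String[] args) throws UnknownHostException {
        System.out.println(DNS.getDefaultHost("default"));             // default nameserver
        System.out.println(DNS.getDefaultHost("default", "default"));  // explicit nameserver
        for (String ip : DNS.getIPs("default")) {                      // all IPs on the interface
          System.out.println(ip);
        }
      }
    }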
- getDefaultIP(String) -
Static method in class org.apache.hadoop.net.DNS
- Returns the first available IP address associated with the provided
network interface
- getDefaultMaps() -
Method in class org.apache.hadoop.mapred.JobClient
- Get status information about the max available Maps in the cluster.
- getDefaultPort() -
Method in class org.apache.hadoop.fs.FileSystem
- Get the default port for this file system.
- getDefaultRealm() -
Static method in class org.apache.hadoop.security.authentication.util.KerberosUtil
-
- getDefaultRealm() -
Method in class org.apache.hadoop.security.KerberosName
- Get the configured default realm.
- getDefaultReduces() -
Method in class org.apache.hadoop.mapred.JobClient
- Get status information about the max available Reduces in the cluster.
- getDefaultReplication() -
Method in class org.apache.hadoop.fs.FileSystem
- Get the default replication.
- getDefaultReplication() -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Get the default replication.
- getDefaultReplication() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- getDefaultSocketFactory(Configuration) -
Static method in class org.apache.hadoop.net.NetUtils
- Get the default socket factory as specified by the configuration
parameter hadoop.rpc.socket.factory.default
- getDefaultUri(Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
- Get the default filesystem URI from a configuration.
- getDefaultWorkFile(TaskAttemptContext, String) -
Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Get the default path and filename for the output format.
- getDelegate() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
- getDelegate() -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader
- Return an iterator wrapping the JoinCollector.
- getDelegate() -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
- Return an iterator returning a single value from the tuple.
- getDelegationToken(String) -
Method in class org.apache.hadoop.fs.FileSystem
- Get a new delegation token for this file system.
- getDelegationToken(Text) -
Method in class org.apache.hadoop.mapred.JobClient
-
- getDelegationToken(Text) -
Method in class org.apache.hadoop.mapred.JobTracker
- Get a new delegation token.
- getDelegationToken(Credentials, String) -
Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
- getDelegationTokens(Configuration, Credentials) -
Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
- For each archive or cache file - get the corresponding delegation token
- getDelegationTokenSecretManager() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getDependentJobs() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getDependingJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getDescription() -
Method in class org.apache.hadoop.metrics.util.MetricsBase
- Deprecated.
- getDeserializer(Class<Serializable>) -
Method in class org.apache.hadoop.io.serializer.JavaSerialization
-
- getDeserializer(Class<T>) -
Method in interface org.apache.hadoop.io.serializer.Serialization
-
- getDeserializer(Class<T>) -
Method in class org.apache.hadoop.io.serializer.SerializationFactory
-
- getDeserializer(Class<Writable>) -
Method in class org.apache.hadoop.io.serializer.WritableSerialization
-
- getDiagnosticInfo() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getDiagnostics() -
Method in class org.apache.hadoop.mapred.TaskReport
- A list of error messages.
- getDigest() -
Method in class org.apache.hadoop.io.MD5Hash
- Returns the digest bytes.
- getDirectory() -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
- Get the ram directory of the intermediate form.
- getDirectory() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
- Get the directory where this shard resides.
- getDirectoryCount() -
Method in class org.apache.hadoop.fs.ContentSummary
-
- getDirPath() -
Method in class org.apache.hadoop.fs.DF
-
- getDirPath() -
Method in class org.apache.hadoop.fs.DU
-
- getDisplayName() -
Method in class org.apache.hadoop.mapred.Counters.Group
- Returns localized name of the group.
- getDisplayName() -
Method in class org.apache.hadoop.mapreduce.Counter
- Get the name of the counter.
- getDisplayName() -
Method in class org.apache.hadoop.mapreduce.CounterGroup
- Get the display name of the group.
- getDistance(Node, Node) -
Method in class org.apache.hadoop.net.NetworkTopology
- Return the distance between two nodes. It is assumed that the distance from one node to its parent is 1. The distance between two nodes is calculated by summing up their distances to their closest common ancestor.
- getDistributionPolicyClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the distribution policy class.
- getDmax(String) -
Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
- Deprecated.
- getDocument() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
- Get the document.
- getDocumentAnalyzerClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the analyzer class.
- getDoubleValue(Object) -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
- getDU(File) -
Static method in class org.apache.hadoop.fs.FileUtil
- Takes an input dir and returns the du on that local directory.
- getElementTypeID() -
Method in class org.apache.hadoop.record.meta.VectorTypeID
-
- getEmptier() -
Method in class org.apache.hadoop.fs.Trash
- Return a Runnable that periodically empties the trash of all users, intended to be run by the superuser.
- getEnd() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
-
- getEnd() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getEndColumn() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getEndLine() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) -
Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
- Get an entry from output generated by this class.
- getEntryComparator() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get a Comparator object to compare Entries.
- getEntryCount() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get the number of key-value pair entries in TFile.
- getEnum(String, T) -
Method in class org.apache.hadoop.conf.Configuration
- Return value matching this enumerated type.
- getErrno() -
Method in exception org.apache.hadoop.io.nativeio.NativeIOException
-
- getError() -
Static method in class org.apache.hadoop.log.metrics.EventCounter
-
- getEventId() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Returns event Id.
- getEventType() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogEvent
- Return the UserLogEvent.EventType.
- getExceptions() -
Method in exception org.apache.hadoop.io.MultipleIOException
-
- getExcludedHosts() -
Method in class org.apache.hadoop.util.HostsFileReader
-
- getExecString() -
Method in class org.apache.hadoop.fs.DF
-
- getExecString() -
Method in class org.apache.hadoop.fs.DU
-
- getExecString() -
Method in class org.apache.hadoop.util.Shell
- Return an array containing the command name and its parameters.
- getExecString() -
Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
-
- getExecutable(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Get the URI of the application's executable.
- getExitCode() -
Method in exception org.apache.hadoop.util.Shell.ExitCodeException
-
- getExitCode() -
Method in class org.apache.hadoop.util.Shell
- Get the exit code.
- getExpires() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationToken
- Returns the expiration time of the token.
- getExpiryDate() -
Method in class org.apache.hadoop.security.token.delegation.DelegationKey
-
- getFactor() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Get the number of streams to merge at once.
- getFactory(Class) -
Static method in class org.apache.hadoop.io.WritableFactories
- Define a factory for a class.
- getFactory() -
Static method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated. Returns the singleton ContextFactory instance, constructing it if
necessary.
- getFailedJobList() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getFailedJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getFailureInfo() -
Method in class org.apache.hadoop.mapred.JobStatus
- Gets any available information on the reason for the job's failure.
- getFailureInfo() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get failure info for the job.
- getFailures() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get the number of tasks that have failed on this tracker.
- getFallBackAuthenticator() -
Method in class org.apache.hadoop.security.authentication.client.KerberosAuthenticator
- If the specified URL does not support SPNEGO authentication, a fallback
Authenticator
will be used.
- getFatal() -
Static method in class org.apache.hadoop.log.metrics.EventCounter
-
- getFetchFailedMaps() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get the list of maps from which output-fetches failed.
- getFieldID() -
Method in class org.apache.hadoop.record.meta.FieldTypeInfo
- get the field's id (name)
- getFieldNames() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getFieldSeparator() -
Method in class org.apache.hadoop.streaming.PipeMapper
-
- getFieldSeparator() -
Method in class org.apache.hadoop.streaming.PipeMapRed
- Returns the field separator to be used.
- getFieldSeparator() -
Method in class org.apache.hadoop.streaming.PipeReducer
-
- getFieldTypeInfos() -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
- Return a collection of field type infos
- getFieldTypeInfos() -
Method in class org.apache.hadoop.record.meta.StructTypeID
-
- getFile(String, String) -
Method in class org.apache.hadoop.conf.Configuration
- Get a local file name under a directory named in dirsProp with
the given path.
- getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.FileSystem
- Return an array containing hostnames, offset and size of
portions of the given file.
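A short usage sketch (the class name and the command-line path are illustrative; getFileStatus(), getFileBlockLocations(), and the BlockLocation accessors are the real FileSystem APIs):

    import java.io.IOException;
    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlocks {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileStatus stat = fs.getFileStatus(new Path(args[0]));
        // Ask for the blocks covering the whole file: offset 0, length = file length.
        BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
        for (BlockLocation b : blocks) {
          System.out.println(b.getOffset() + "+" + b.getLength()
              + " hosts=" + Arrays.toString(b.getHosts()));
        }
      }
    }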
- getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.HarFileSystem
- Get block locations from the underlying fs and fix their
offsets and lengths.
- getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Return null if the file doesn't exist; otherwise, get the locations of the various chunks of the file from KFS.
- getFileBlockLocations(FileSystem, FileStatus) -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getFileChecksum(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Get the checksum of a file.
- getFileChecksum(Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Get the checksum of a file.
- getFileChecksum(Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
-
- getFileClassPaths(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Get the file entries in classpath as an array of Path.
- getFileCount() -
Method in class org.apache.hadoop.fs.ContentSummary
-
- getFileName() -
Method in class org.apache.hadoop.metrics.file.FileContext
- Deprecated. Returns the configured file name, or null.
- getFiles(PathFilter) -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated.
- getFileStatus(Configuration, URI) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Returns FileStatus of a given cache file on hdfs.
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Return a file status object that represents the path.
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Get file status.
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
- Return the FileStatus of files in the har archive.
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
- FileStatus for S3 file systems.
- getFileStatus(Path) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getFilesystem() -
Method in class org.apache.hadoop.fs.DF
-
- getFileSystem(Configuration) -
Method in class org.apache.hadoop.fs.Path
- Return the FileSystem that owns this Path.
- getFileSystemCounterNames(String) -
Static method in class org.apache.hadoop.mapred.Task
- Counters to measure the usage of the different file systems.
- getFilesystemName() -
Method in class org.apache.hadoop.mapred.JobTracker
- Grab the local fs name
- getFileTimestamps(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Get the timestamps of the files.
- getFileType() -
Method in class org.apache.hadoop.fs.s3.INode
-
- getFileVisibilities(Configuration) -
Static method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
- Get the booleans on whether the files are public or not.
- getFinalSync(JobConf) -
Static method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
- Does the user want a final sync at close?
- getFinishTime() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getFinishTime() -
Method in class org.apache.hadoop.mapred.TaskReport
- Get finish time of task.
- getFinishTime() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get task finish time.
- getFirst() -
Method in class org.apache.hadoop.examples.SecondarySort.IntPair
-
- getFirstKey() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get the first key in the TFile.
- getFlippable() -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getFloat(String, float) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as a float.
- getFormatMinSplitSize() -
Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
- Get the lower bound on split size imposed by the format.
- getFormatMinSplitSize() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- getFormattedTimeWithDiff(DateFormat, long, long) -
Static method in class org.apache.hadoop.util.StringUtils
- Formats time in ms and appends difference (finishTime - startTime)
as returned by formatTimeDiff().
- getFs() -
Method in class org.apache.hadoop.mapred.JobClient
- Get a filesystem handle.
- getFSSize() -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated.
- getFsStatistics(Path, Configuration) -
Static method in class org.apache.hadoop.mapred.Task
- Gets a handle to the Statistics instance based on the scheme associated
with path.
- getGangliaConfForMetric(String) -
Method in class org.apache.hadoop.metrics2.sink.ganglia.AbstractGangliaSink
- Lookup GangliaConf from cache.
- getGeneration() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
- Get the generation of the Lucene instance.
- getGET_PERMISSION_COMMAND() -
Static method in class org.apache.hadoop.util.Shell
- Return a Unix command to get permission information.
- getGraylistedNodesInfoJson() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getGraylistedNodesInfoJson() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getGrayListedTaskTrackerCount() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of graylisted trackers in the cluster.
- getGraylistedTrackerNames() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the names of graylisted task trackers in the cluster.
- getGraylistedTrackers() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the number of graylisted task trackers in the cluster.
- getGroup() -
Method in class org.apache.hadoop.fs.FileStatus
- Get the group associated with the file.
- getGroup(String) -
Method in class org.apache.hadoop.mapred.Counters
- Returns the named counter group, or an empty group if there is none
with the specified name.
- getGroup(String) -
Method in class org.apache.hadoop.mapreduce.Counters
- Returns the named counter group, or an empty group if there is none
with the specified name.
- getGroupAction() -
Method in class org.apache.hadoop.fs.permission.FsPermission
- Return group FsAction.
- getGroupingComparator() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
- getGroupName() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
- Return group name
- getGroupNames() -
Method in class org.apache.hadoop.mapred.Counters
- Returns the names of all counter classes.
- getGroupNames() -
Method in class org.apache.hadoop.mapreduce.Counters
- Returns the names of all counter classes.
- getGroupNames() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the group names for this user.
- getGroups(String) -
Method in class org.apache.hadoop.security.Groups
- Get the group memberships of a given user.
- getGroups(String) -
Method in class org.apache.hadoop.security.JniBasedUnixGroupsMapping
-
- getGroups(String) -
Method in class org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping
- Gets unix groups and netgroups for the user.
- getGroups(String) -
Method in class org.apache.hadoop.security.ShellBasedUnixGroupsMapping
-
- getGroups(String) -
Method in class org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping
-
- getGroupsCommand() -
Static method in class org.apache.hadoop.util.Shell
- a Unix command to get the current user's groups list
- getGroupsForUserCommand(String) -
Static method in class org.apache.hadoop.util.Shell
- a Unix command to get a given user's groups list
- getHadoopClientHome() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- getHarHash(Path) -
Static method in class org.apache.hadoop.fs.HarFileSystem
- Return the hash of the path p inside the filesystem.
- getHarVersion() -
Method in class org.apache.hadoop.fs.HarFileSystem
-
- getHashType(Configuration) -
Static method in class org.apache.hadoop.util.hash.Hash
- This utility method converts the name of the configured
hash type to a symbolic constant.
- getHeader(boolean) -
Static method in class org.apache.hadoop.fs.ContentSummary
- Return the header of the output.
- getHeader() -
Method in class org.apache.hadoop.util.DataChecksum
-
- getHealthStatus() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Returns health status of the task tracker.
- getHistoryFilePath(JobID) -
Static method in class org.apache.hadoop.mapred.JobHistory
- Given the job id, return the history file path from the cache
- getHomeDirectory() -
Method in class org.apache.hadoop.fs.FileSystem
- Return the current user's home directory in this filesystem.
- getHomeDirectory() -
Method in class org.apache.hadoop.fs.FilterFileSystem
-
- getHomeDirectory() -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getHomeDirectory() -
Method in class org.apache.hadoop.fs.HarFileSystem
- return the top level archive path.
- getHomeDirectory() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getHost() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getHost() -
Method in class org.apache.hadoop.streaming.Environment
-
- getHostAddress() -
Method in class org.apache.hadoop.ipc.Server.Connection
-
- getHostFromPrincipal(String) -
Static method in class org.apache.hadoop.security.SecurityUtil
- Get the host name from the principal name of format
/host@realm.
- getHostInetAddress() -
Method in class org.apache.hadoop.ipc.Server.Connection
-
- getHostname() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getHostname() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getHostname() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getHostname() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getHostName() -
Method in class org.apache.hadoop.metrics2.sink.ganglia.AbstractGangliaSink
-
- getHostName() -
Method in class org.apache.hadoop.security.KerberosName
- Get the second component of the name.
- getHostname() -
Static method in class org.apache.hadoop.util.StringUtils
- Return hostname without throwing exception.
- getHosts() -
Method in class org.apache.hadoop.fs.BlockLocation
- Get the list of hosts (hostname) hosting this block
- getHosts(String, String) -
Static method in class org.apache.hadoop.net.DNS
- Returns all the host names associated by the provided nameserver with the
address bound to the specified network interface
- getHosts(String) -
Static method in class org.apache.hadoop.net.DNS
- Returns all the host names associated by the default nameserver with the
address bound to the specified network interface
- getHosts() -
Method in class org.apache.hadoop.util.HostsFileReader
-
- getHttpPort() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getHttpPort() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getHttpPort() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get the port that this task tracker is serving http requests on.
- getId() -
Method in class org.apache.hadoop.fs.s3.Block
-
- getId(Class) -
Method in class org.apache.hadoop.io.AbstractMapWritable
-
- getID() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get the job identifier.
- getId() -
Method in class org.apache.hadoop.mapreduce.ID
- returns the int which represents the identifier
- getIdentifier(String, SecretManager<T>) -
Static method in class org.apache.hadoop.security.SaslRpcServer
-
- getIdentifier() -
Method in class org.apache.hadoop.security.token.Token
- Get the token identifier
- GetImage() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getIncludeCounters() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getIndexInputFormatClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the index input format class.
- getIndexInterval() -
Method in class org.apache.hadoop.io.MapFile.Writer
- The number of entries that are added before an index entry is added.
- getIndexMaxFieldLength() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the max field length for a Lucene instance.
- getIndexMaxNumSegments() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the max number of segments for a Lucene instance.
- getIndexShards() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the string representation of a number of shards.
- getIndexShards(IndexUpdateConfiguration) -
Static method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- getIndexUpdaterClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the index updater class.
- getIndexUseCompoundFile() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Check whether to use the compound file format for a Lucene instance.
- getInfo() -
Method in class org.apache.hadoop.contrib.failmon.CPUParser
- Return a String with information about this class
- getInfo() -
Method in class org.apache.hadoop.contrib.failmon.HadoopLogParser
- Return a String with information about this class
- getInfo() -
Method in interface org.apache.hadoop.contrib.failmon.Monitored
- Return a String with information about the implementing
class
- getInfo() -
Method in class org.apache.hadoop.contrib.failmon.NICParser
- Return a String with information about this class
- getInfo() -
Method in class org.apache.hadoop.contrib.failmon.SensorsParser
- Return a String with information about this class
- getInfo() -
Method in class org.apache.hadoop.contrib.failmon.SMARTParser
- Return a String with information about this class
- getInfo() -
Method in class org.apache.hadoop.contrib.failmon.SystemLogParser
- Return a String with information about this class
- getInfo() -
Static method in class org.apache.hadoop.log.metrics.EventCounter
-
- getInfoPort() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getInputBoundingQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputClass() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputConditions() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputCountQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputDataLength() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getInputDataLength() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getInputFieldNames() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputFileBasedOutputFileName(JobConf, String) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
- Generate the outfile name based on a given name and the input file name.
- getInputFormat() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the InputFormat implementation for the map-reduce job, defaults to TextInputFormat if not specified explicitly.
- getInputFormatClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the InputFormat class for the job.
- getInputOrderBy() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputPathFilter(JobConf) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
- Get a PathFilter instance of the filter set for the input paths.
- getInputPathFilter(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
- Get a PathFilter instance of the filter set for the input paths.
- getInputPaths(JobConf) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
- Get the list of input Paths for the map-reduce job.
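A minimal sketch of setting and reading back the input paths with the old-style (org.apache.hadoop.mapred) API; the class name and the two directory paths are illustrative only:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class InputPathsDemo {
      public static void main(String[] args) {
        JobConf job = new JobConf();
        // Register two input directories, then read the list back.
        FileInputFormat.setInputPaths(job, new Path("/data/in1"), new Path("/data/in2"));
        for (Path p : FileInputFormat.getInputPaths(job)) {
          System.out.println(p);
        }
      }
    }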
- getInputPaths(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
- Get the list of input Paths for the map-reduce job.
- getInputQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputSeparator() -
Method in class org.apache.hadoop.streaming.PipeMapper
-
- getInputSeparator() -
Method in class org.apache.hadoop.streaming.PipeMapRed
- Returns the input separator to be used.
- getInputSeparator() -
Method in class org.apache.hadoop.streaming.PipeReducer
-
- getInputSplit() -
Method in interface org.apache.hadoop.mapred.Reporter
- Get the InputSplit object for a map.
- getInputSplit() -
Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getInputSplit() -
Method in class org.apache.hadoop.mapreduce.MapContext
- Get the input split for this map.
- getInputStream(Socket) -
Static method in class org.apache.hadoop.net.NetUtils
- Same as getInputStream(socket, socket.getSoTimeout()). From documentation for NetUtils.getInputStream(Socket, long): Returns InputStream for the socket.
- getInputStream(Socket, long) -
Static method in class org.apache.hadoop.net.NetUtils
- Returns InputStream for the socket.
- getInputStream(InputStream) -
Method in class org.apache.hadoop.security.SaslRpcClient
- Get a SASL wrapped InputStream.
- getInputTableName() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputWriterClass() -
Method in class org.apache.hadoop.streaming.io.IdentifierResolver
- Returns the resolved InputWriter class.
- getInstance(int) -
Static method in class org.apache.hadoop.util.hash.Hash
- Get a singleton instance of hash function of a given type.
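A brief sketch of the hash-function factory (the class name and sample bytes are illustrative; Hash.MURMUR_HASH, Hash.JENKINS_HASH, getInstance(), and the hash() overloads are the real org.apache.hadoop.util.hash.Hash API):

    import org.apache.hadoop.util.hash.Hash;

    public class HashDemo {
      public static void main(String[] args) {
        byte[] data = "hello hadoop".getBytes();
        Hash murmur = Hash.getInstance(Hash.MURMUR_HASH);
        Hash jenkins = Hash.getInstance(Hash.JENKINS_HASH);
        System.out.println(murmur.hash(data));        // default seed
        System.out.println(jenkins.hash(data, -1));   // explicit initial value
      }
    }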
- getInstance(Configuration) -
Static method in class org.apache.hadoop.util.hash.Hash
- Get a singleton instance of hash function of a type
defined in the configuration.
- getInstance() -
Static method in class org.apache.hadoop.util.hash.JenkinsHash
-
- getInstance() -
Static method in class org.apache.hadoop.util.hash.MurmurHash
-
- getInstances(String, Class<U>) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as a List of objects implementing the interface specified by xface.
- getInt(String, int) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the name property as an int.
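A minimal sketch of the typed Configuration accessors (the class name and property keys are hypothetical; getInt(), getLong(), and getFloat() all fall back to the supplied default when the key is unset):

    import org.apache.hadoop.conf.Configuration;

    public class ConfDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("example.retries", "5");                           // hypothetical key
        int retries = conf.getInt("example.retries", 3);            // -> 5
        long timeout = conf.getLong("example.timeout.ms", 30000L);  // unset -> 30000
        float ratio = conf.getFloat("example.ratio", 0.66f);        // unset -> 0.66
        System.out.println(retries + " " + timeout + " " + ratio);
      }
    }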
- getInterfaceName() -
Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
- Get the interface name
- getInterval(ArrayList<MonitorJob>) -
Static method in class org.apache.hadoop.contrib.failmon.Environment
- Determines the minimum interval at which the executor thread needs to wake up to execute jobs.
- getIOSortMB() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the IO sort space in MB.
- getIPs(String) -
Static method in class org.apache.hadoop.net.DNS
- Returns all the IPs associated with the provided interface, if any, in
textual form.
- getIsCleanup() -
Method in class org.apache.hadoop.mapred.TaskLogAppender
- Get whether task is cleanup attempt or not.
- getIsJavaMapper(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Check whether the job is using a Java Mapper.
- getIsJavaRecordReader(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Check whether the job is using a Java RecordReader
- getIsJavaRecordWriter(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Will the reduce use a Java RecordWriter?
- getIsJavaReducer(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Check whether the job is using a Java Reducer.
- getIsMap() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getIssueDate() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
-
- getJar() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the user jar for the map-reduce job.
- getJar() -
Method in class org.apache.hadoop.mapreduce.Job
- Get the pathname of the job's jar.
- getJar() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the pathname of the job's jar.
- getJob(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
- Get a RunningJob object to track an ongoing job.
- getJob(String) -
Method in class org.apache.hadoop.mapred.JobClient
- Deprecated. Applications should rather use JobClient.getJob(JobID).
- getJob(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getJob() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- getJob() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobACLs() -
Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Get the job acls.
- getJobACLs() -
Method in class org.apache.hadoop.mapred.JobStatus
- Get the acls for Job.
- getJobCacheSubdir(String) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getJobClient() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobClient() -
Method in class org.apache.hadoop.mapred.TaskTracker
- The connection to the JobTracker, used by the TaskRunner
for locating remote files.
- getJobCompletionTime() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobCompletedEvent
- Get the job completion time-stamp in milli-seconds.
- getJobConf() -
Method in class org.apache.hadoop.mapred.JobContext
- Get the job Configuration
- getJobConf() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobConf() -
Method in class org.apache.hadoop.mapred.TaskAttemptContext
-
- getJobConfPath(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
- Get the job conf path.
- getJobCounters() -
Method in class org.apache.hadoop.mapred.JobInProgress
- Returns the job-level counters.
- getJobCounters(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobDir(String) -
Static method in class org.apache.hadoop.mapred.TaskLog
- Get the user log directory for the job jobid.
- getJobDir(JobID) -
Static method in class org.apache.hadoop.mapred.TaskLog
- Get the user log directory for the job jobid.
- getJobDistCacheArchives(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
- Get the job distributed cache archives path.
- getJobDistCacheFiles(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
- Get the job distributed cache files path.
- getJobDistCacheLibjars(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
- Get the job distributed cache libjars path.
- getJobEndNotificationURI() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the URI to be invoked in order to send a notification after the job has completed (success/failure).
- getJobFile() -
Method in class org.apache.hadoop.mapred.JobProfile
- Get the configuration file for the job.
- getJobFile() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get the path of the submitted job configuration.
- getJobFile() -
Method in class org.apache.hadoop.mapred.Task
-
- getJobForFallowSlot(TaskType) -
Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
- Get the JobInProgress for which the fallow slot(s) are held.
- getJobHistoryFileName(JobConf, JobID) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Recover the job history filename from the history folder.
- getJobHistoryLogLocation(String) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Get the job history file path given the history filename
- getJobHistoryLogLocationForUser(String, JobConf) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Get the user job history file path
- getJobID() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobID() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getJobID() -
Method in class org.apache.hadoop.mapred.JobProfile
- Get the job id.
- getJobId() -
Method in class org.apache.hadoop.mapred.JobProfile
- Deprecated. Use getJobID() instead.
- getJobId() -
Method in class org.apache.hadoop.mapred.JobStatus
- Deprecated. Use getJobID instead.
- getJobID() -
Method in class org.apache.hadoop.mapred.JobStatus
-
- getJobID() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Deprecated. This method is deprecated and will be removed. Applications should rather use RunningJob.getID().
- getJobID() -
Method in class org.apache.hadoop.mapred.Task
- Get the job name for this task.
- getJobID() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getJobID() -
Method in class org.apache.hadoop.mapred.TaskID
-
- getJobID() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the unique ID for the job.
- getJobID() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobId() -
Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
- Get the jobid
- getJobID() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.DeleteJobEvent
- Get the jobid.
- getJobID() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobCompletedEvent
- Get the job id.
- getJobID() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobStartedEvent
- Get the job id.
- getJobID() -
Method in class org.apache.hadoop.mapreduce.TaskAttemptID
- Returns the JobID object that this task attempt belongs to.
- getJobID() -
Method in class org.apache.hadoop.mapreduce.TaskID
- Returns the JobID object that this tip belongs to.
- getJobIDsPattern(String, Integer) -
Static method in class org.apache.hadoop.mapred.JobID
- Deprecated.
- getJobJar(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
- Get the job jar path.
- getJobJarFile(String, String) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getJobLocalDir() -
Method in class org.apache.hadoop.mapred.JobConf
- Get job-specific shared directory for use as scratch space
- getJobName() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the user-specified job name.
- getJobName() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobName() -
Method in class org.apache.hadoop.mapred.JobProfile
- Get the user-specified job name.
- getJobName() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get the name of the job.
- getJobName() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the user-specified job name.
- getJobName() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobPriority() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the JobPriority for this job.
- getJobPriority() -
Method in class org.apache.hadoop.mapred.JobStatus
- Return the priority of the job
- getJobProfile(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobRunState(int) -
Static method in class org.apache.hadoop.mapred.JobStatus
- Helper method to get human-readable state of the job.
- getJobs() -
Static method in class org.apache.hadoop.contrib.failmon.Environment
- Scans the configuration file to determine which monitoring
utilities are available in the system.
- getJobsFromQueue(String) -
Method in class org.apache.hadoop.mapred.JobClient
- Gets all the jobs which were added to particular Job Queue
- getJobsFromQueue(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobSplitFile(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
- getJobSplitMetaFile(Path) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
- getJobState() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Returns the current state of the Job.
- getJobState() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobStatus(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobSubmitHostAddress() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getJobSubmitHostName() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getJobToken(Credentials) -
Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
- getJobTokenSecret() -
Method in class org.apache.hadoop.mapred.Task
- Get the job token secret
- getJobTrackerHostPort() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- getJobTrackerMachine() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getJobTrackerState() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the current state of the JobTracker, as JobTracker.State.
- getJobTrackerUrl() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getJobTrackerUrl() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getJtIdentifier() -
Method in class org.apache.hadoop.mapreduce.JobID
-
- getJvmContext() -
Method in class org.apache.hadoop.mapred.Task
- Gets the task JvmContext
- getJvmInfo() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JvmFinishedEvent
- Get the jvm info.
- getJvmManagerInstance() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getKeepCommandFile(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Does the user want to keep the command file for debugging? If this is
true, pipes will write a copy of the command data to a file in the
task directory named "downlink.data", which may be used to run the C++
program under the debugger.
- getKeepFailedTaskFiles() -
Method in class org.apache.hadoop.mapred.JobConf
- Should the temporary files for failed tasks be kept?
- getKeepTaskFilesPattern() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the regular expression that is matched against the task names
to see if we need to keep the files.
- getKey(BytesWritable) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy the key into BytesWritable.
- getKey(byte[]) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy the key into user supplied buffer.
- getKey(byte[], int) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy the key into user supplied buffer.
- getKey() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
- Gets the current raw key
- getKey() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
- Returns the stored rawKey
- getKey() -
Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
- Gets the current raw key.
- getKey() -
Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getKey() -
Method in class org.apache.hadoop.security.token.delegation.DelegationKey
-
- getKeyClass() -
Method in class org.apache.hadoop.io.MapFile.Reader
- Returns the class of keys in this file.
- getKeyClass() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the class of keys in this file.
- getKeyClass() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
- Returns the class of keys in this file.
- getKeyClass() -
Method in class org.apache.hadoop.io.WritableComparator
- Returns the WritableComparable implementation class.
- getKeyClass() -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getKeyClass() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
- The class of key that must be passed to SequenceFileRecordReader.next(Object, Object).
- getKeyClass() -
Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getKeyClassName() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the name of the key class.
- getKeyClassName() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Retrieve the name of the key class for this SequenceFile.
- getKeyClassName() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Retrieve the name of the key class for this SequenceFile.
- getKeyFieldComparatorOption() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the KeyFieldBasedComparator options.
- getKeyFieldComparatorOption(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
- Get the KeyFieldBasedComparator options.
- getKeyFieldPartitionerOption() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the KeyFieldBasedPartitioner options.
- getKeyFieldPartitionerOption(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
- Get the KeyFieldBasedPartitioner options.
- getKeyId() -
Method in class org.apache.hadoop.security.token.delegation.DelegationKey
-
- getKeyLength() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Get the length of the key.
- getKeyList() -
Method in class org.apache.hadoop.metrics.util.MetricsRegistry
- Deprecated.
- getKeyNear(long) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get a sample key that is within a block whose starting offset is greater
than or equal to the specified offset.
- getKeyStream() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Streaming access to the key.
- getKeytab() -
Method in class org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler
- Returns the keytab used by the authentication handler.
- getKeyTypeID() -
Method in class org.apache.hadoop.record.meta.MapTypeID
- get the TypeID of the map's key element
- getKind() -
Method in class org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier
-
- getKind() -
Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
- Get the token kind
- getKind() -
Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier.Renewer
-
- getKind() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
-
- getKind() -
Method in class org.apache.hadoop.security.token.Token
- Get the token kind
- getKind() -
Method in class org.apache.hadoop.security.token.Token.TrivialRenewer
-
- getKind() -
Method in class org.apache.hadoop.security.token.TokenIdentifier
- Get the token kind
- getKrb5LoginModuleName() -
Static method in class org.apache.hadoop.security.authentication.util.KerberosUtil
-
- getLargeReadOps() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Get the number of large file system read operations such as list files
under a large directory
- getLastContact() -
Method in class org.apache.hadoop.ipc.Server.Connection
-
- getLastKey() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get the last key in the TFile.
- getLastOutput() -
Method in class org.apache.hadoop.streaming.io.OutputReader
- Returns the last output from the client as a String.
- getLastOutput() -
Method in class org.apache.hadoop.streaming.io.RawBytesOutputReader
-
- getLastOutput() -
Method in class org.apache.hadoop.streaming.io.TextOutputReader
-
- getLastOutput() -
Method in class org.apache.hadoop.streaming.io.TypedBytesOutputReader
-
- getLastSeen() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getLaunchTime() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getLen() -
Method in class org.apache.hadoop.fs.FileStatus
-
- getLength() -
Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
-
- getLength() -
Method in class org.apache.hadoop.fs.BlockLocation
- Get the length of the block
- getLength() -
Method in class org.apache.hadoop.fs.ContentSummary
-
- getLength() -
Method in class org.apache.hadoop.fs.FileChecksum
- The length of the checksum in bytes
- getLength(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Deprecated. Use getFileStatus() instead
- getLength(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- getLength() -
Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
- The length of the checksum in bytes
- getLength() -
Method in class org.apache.hadoop.fs.s3.Block
-
- getLength() -
Method in class org.apache.hadoop.io.BinaryComparable
- Return n such that bytes 0..n-1 from getBytes() are valid.
- getLength() -
Method in class org.apache.hadoop.io.BytesWritable
- Get the current size of the buffer.
- getLength() -
Method in class org.apache.hadoop.io.DataInputBuffer
- Returns the length of the input.
- getLength() -
Method in class org.apache.hadoop.io.DataOutputBuffer
- Returns the length of the valid data currently in the buffer.
- getLength() -
Method in class org.apache.hadoop.io.InputBuffer
- Returns the length of the input.
- getLength() -
Method in class org.apache.hadoop.io.OutputBuffer
- Returns the length of the valid data currently in the buffer.
- getLength() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
- Returns the current length of the output file.
- getLength() -
Method in class org.apache.hadoop.io.Text
- Returns the number of bytes in the byte array
- getLength() -
Method in class org.apache.hadoop.io.UTF8
- Deprecated. The number of bytes in the encoded string.
- getLength() -
Method in class org.apache.hadoop.mapred.FileSplit
- The number of bytes in the file to process.
- getLength() -
Method in interface org.apache.hadoop.mapred.InputSplit
- Get the total number of bytes in the data of the InputSplit.
- getLength() -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- Return the aggregate length of all child InputSplits currently added.
- getLength(int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- Get the length of ith child InputSplit.
- getLength() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- getLength(int) -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns the length of the ith Path
- getLength() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
-
- getLength() -
Method in class org.apache.hadoop.mapreduce.InputSplit
- Get the size of the split, so that the input splits can be sorted by size.
- getLength() -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getLength() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getLength() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- getLength(int) -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns the length of the ith Path
- getLength() -
Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
- The number of bytes in the file to process.
- getLengths() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns an array containing the lengths of the files in the split
- getLengths() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns an array containing the lengths of the files in the split
- getLevel() -
Method in interface org.apache.hadoop.net.Node
- Return this node's level in the tree.
- getLevel() -
Method in class org.apache.hadoop.net.NodeBase
- Return this node's level in the tree.
- getLibJars(Configuration) -
Static method in class org.apache.hadoop.util.GenericOptionsParser
- If libjars are set in the conf, parse the libjars.
- getLinkCount(File) -
Static method in class org.apache.hadoop.fs.HardLink
- Retrieves the number of links to the specified file.
- getLinkMultArgLength(File, String[], File) -
Static method in class org.apache.hadoop.fs.HardLink
- Calculate the nominal length of all contributors to the total
commandstring length, including fixed overhead of the OS-dependent
command.
- getListenerAddress() -
Method in class org.apache.hadoop.ipc.Server
- Return the socket (IP+port) on which the RPC server is listening.
- getLoadNativeLibraries(Configuration) -
Method in class org.apache.hadoop.util.NativeCodeLoader
- Return whether native Hadoop libraries, if present, can be used for this job.
- getLocal(Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
- Get the local file system.
- getLocalAnalysisClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the local analysis class.
- getLocalCacheArchives(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Return the path array of the localized caches.
- getLocalCacheFiles(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Return the path array of the localized files.
- getLocalDirs() -
Method in class org.apache.hadoop.mapred.JobConf
-
- getLocalDirs() -
Method in class org.apache.hadoop.mapred.TaskController
-
- getLocalInetAddress(String) -
Static method in class org.apache.hadoop.net.NetUtils
- Checks if host is a local host name and returns the InetAddress corresponding to that address.
- getLocalJobDir(String, String) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getLocalJobFilePath(JobID) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Get the path of the locally stored job file
- getLocalJobFilePath(JobID) -
Static method in class org.apache.hadoop.mapred.JobTracker
- Get the localized job file path on the job tracker's local file system.
- getLocalPath(String, String) -
Method in class org.apache.hadoop.conf.Configuration
- Get a local file under a directory named by dirsProp with
the given path.
- getLocalPath(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Constructs a local file name.
- getLocalPathForWrite(String, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
- Get a path from the local FS.
- getLocalPathForWrite(String, long, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
- Get a path from the local FS.
- getLocalPathForWrite(String, long, Configuration, boolean) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
- Get a path from the local FS.
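A short sketch of how the allocator is typically used (the class name, the config key contents, and the relative file name are illustrative; the constructor takes the name of the config property that lists the local directories, and getLocalPathForWrite() picks a directory with enough space):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.LocalDirAllocator;
    import org.apache.hadoop.fs.Path;

    public class ScratchFile {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("mapred.local.dir", "/tmp/local1,/tmp/local2");   // illustrative dirs
        LocalDirAllocator alloc = new LocalDirAllocator("mapred.local.dir");
        Path scratch = alloc.getLocalPathForWrite("work/part-0000.tmp", conf);
        System.out.println(scratch);
      }
    }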
- getLocalPathToRead(String, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
- Get a path from the local FS for reading.
- getLocalTaskDir(String, String, String) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getLocalTaskDir(String, String, String, boolean) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getLocation(int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- getLocations from ith InputSplit.
- getLocations() -
Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
-
- getLocations() -
Method in class org.apache.hadoop.mapred.FileSplit
-
- getLocations() -
Method in interface org.apache.hadoop.mapred.InputSplit
- Get the list of hostnames where the input split is located.
- getLocations() -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- Collect a set of hosts from all child InputSplits.
- getLocations() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns all the hosts where this input-split resides
- getLocations() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
- Get the list of hostnames where the input split is located.
- getLocations() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
- Deprecated.
- getLocations() -
Method in class org.apache.hadoop.mapreduce.InputSplit
- Get the list of nodes by name where the data for the split would be local.
- getLocations() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
- Get the list of nodes by name where the data for the split would be local.
- getLocations() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns all the hosts where this input-split resides
- getLocations() -
Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- getLocations() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getLocations() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
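To make the getLocations() contract concrete, a small sketch (using the old mapred API and an illustrative input path) that prints the preferred hosts of each split produced by TextInputFormat:

import java.util.Arrays;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

public class SplitLocationsExample {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf();
    FileInputFormat.setInputPaths(job, new Path("/user/example/input")); // illustrative path
    TextInputFormat format = new TextInputFormat();
    format.configure(job);
    for (InputSplit split : format.getSplits(job, 4)) {
      // getLocations() lists the hosts on which the split's data is (mostly) local.
      System.out.println(split + " -> " + Arrays.toString(split.getLocations()));
    }
  }
}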
- getLoginUser() -
Static method in class org.apache.hadoop.security.UserGroupInformation
- Get the currently logged in user.
- getLogLocation() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.JVMInfo
-
- getLong(String, long) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the
name
property as a long
.
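A short sketch of Configuration.getLong; the property names are made up for illustration, and the second lookup falls back to the supplied default:

import org.apache.hadoop.conf.Configuration;

public class GetLongExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("example.block.size", "134217728");             // illustrative property
    long blockSize = conf.getLong("example.block.size", 64L * 1024 * 1024); // -> 134217728
    long fallback  = conf.getLong("example.not.set", 42L);   // property absent -> 42
    System.out.println(blockSize + " " + fallback);
  }
}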
- getLongValue(Object) -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
-
- getLowerClause() -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getMajor() -
Method in class org.apache.hadoop.io.file.tfile.Utils.Version
- Get the major version.
- getMap() -
Method in class org.apache.hadoop.contrib.failmon.EventRecord
- Return the HashMap of properties of the EventRecord.
- getMapCompletionEvents(JobID, int, int, TaskAttemptID, JvmContext) -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getMapCompletionEvents(JobID, int, int, TaskAttemptID, JvmContext) -
Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
- Called by a reduce task to get the map output locations for finished maps.
- getMapCounters(Counters) -
Method in class org.apache.hadoop.mapred.JobInProgress
- Returns map phase counters by summing over all map tasks in progress.
- getMapDebugScript() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the map task's debug script.
- getMapOutputCompressorClass(Class<? extends CompressionCodec>) -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
CompressionCodec
for compressing the map outputs.
- getMapOutputKeyClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
- Get the map output key class.
- getMapOutputKeyClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the key class for the map output data.
- getMapOutputKeyClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the key class for the map output data.
- getMapOutputValueClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
- Get the map output value class.
- getMapOutputValueClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the value class for the map output data.
- getMapOutputValueClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the value class for the map output data.
- getMapper() -
Method in class org.apache.hadoop.mapred.MapRunner
-
- getMapperClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
Mapper
class for the job.
- getMapperClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the
Mapper
class for the job.
- getMapperClass(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
- Get the application's mapper class.
- getMapperMaxSkipRecords(Configuration) -
Static method in class org.apache.hadoop.mapred.SkipBadRecords
- Get the number of acceptable skip records surrounding the bad record PER
bad record in mapper.
- getMapredJobID() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Deprecated. use
Job.getAssignedJobID()
instead
- getMapredJobID() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMapredTempDir() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Get the Map/Reduce temp directory.
- getMapRunnerClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
MapRunnable
class for the job.
- getMapSlotCapacity() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the total number of map slots in the cluster.
- getMapSpeculativeExecution() -
Method in class org.apache.hadoop.mapred.JobConf
- Should speculative execution be used for this job for map tasks?
Defaults to
true
.
- getMapTaskCompletionEvents() -
Method in class org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate
-
- getMapTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
- Get the information of the current state of the map tasks of a job.
- getMapTaskReports(String) -
Method in class org.apache.hadoop.mapred.JobClient
- Deprecated. Applications should rather use
JobClient.getMapTaskReports(JobID)
- getMapTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getMapTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the number of currently running map tasks in the cluster.
- getMasterKeyId() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
-
- getMaxAllowedCmdArgLength() -
Static method in class org.apache.hadoop.fs.HardLink
- Return this private value for use by unit tests.
- getMaxDate() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
-
- getMaxDepth(int) -
Static method in class org.apache.hadoop.util.QuickSort
- Deepest recursion before giving up and doing a heapsort.
- getMaxMapAttempts() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the configured number of maximum attempts that will be made to run a
map task, as specified by the
mapred.map.max.attempts
property.
- getMaxMapSlots() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get the maximum map slots for this node.
- getMaxMapTaskFailuresPercent() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the maximum percentage of map tasks that can fail without
the job being aborted.
- getMaxMapTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the maximum capacity for running map tasks in the cluster.
- getMaxMemory() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the maximum configured heap memory that can be used by the
JobTracker
- getMaxPhysicalMemoryForTask() -
Method in class org.apache.hadoop.mapred.JobConf
- Deprecated. This variable is deprecated and no longer in use.
- getMaxReduceAttempts() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the configured number of maximum attempts that will be made to run a
reduce task, as specified by the
mapred.reduce.max.attempts
property.
- getMaxReduceSlots() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get the maximum reduce slots for this node.
- getMaxReduceTaskFailuresPercent() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the maximum percentage of reduce tasks that can fail without
the job being aborted.
- getMaxReduceTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the maximum capacity for running reduce tasks in the cluster.
- getMaxSplitSize(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
- Get the maximum split size.
- getMaxTaskFailuresPerTracker() -
Method in class org.apache.hadoop.mapred.JobConf
- Expert: Get the maximum number of failures of a given job per tasktracker.
- getMaxTime() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Deprecated. The max time for a single operation since the last reset
MetricsTimeVaryingRate.resetMinMax()
- getMaxVirtualMemoryForTask() -
Method in class org.apache.hadoop.mapred.JobConf
- Deprecated. Use
JobConf.getMemoryForMapTask()
and
JobConf.getMemoryForReduceTask()
- getMBeanInfo() -
Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
- Deprecated.
- getMD5Hash(String) -
Static method in class org.apache.hadoop.contrib.failmon.Anonymizer
- Create the MD5 digest of an input text.
- getMechanismName() -
Method in enum org.apache.hadoop.security.SaslRpcServer.AuthMethod
- Return the SASL mechanism name
- getMemory() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Get the total amount of buffer memory, in bytes.
- getMemoryCalculatorPlugin(Class<? extends MemoryCalculatorPlugin>, Configuration) -
Static method in class org.apache.hadoop.util.MemoryCalculatorPlugin
- Deprecated. Get the MemoryCalculatorPlugin from the class name and configure it.
- getMemoryForMapTask() -
Method in class org.apache.hadoop.mapred.JobConf
- Get memory required to run a map task of the job, in MB.
- getMemoryForReduceTask() -
Method in class org.apache.hadoop.mapred.JobConf
- Get memory required to run a reduce task of the job, in MB.
- getMessage() -
Method in exception org.apache.hadoop.mapred.InvalidInputException
- Get a summary message of the problems found.
- getMessage() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getMessage() -
Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
- Get a summary message of the problems found.
- getMessage() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMessage() -
Method in exception org.apache.hadoop.record.compiler.generated.ParseException
- This method has the standard behavior when this object has been
created using the standard constructors.
- getMessage() -
Method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
- You can also modify the body of this method to customize your error messages.
- getMetaBlock(String) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Stream access to a meta block.
- getMetadata() -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- getMetadata() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the metadata object of the file
- getMetric(String) -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
- Deprecated. Returns the metric object which can be a Float, Integer, Short or Byte.
- getMetric(String) -
Method in class org.apache.hadoop.metrics2.util.MetricsCache.Record
- Get the metric value
- getMetricInstance(String) -
Method in class org.apache.hadoop.metrics2.util.MetricsCache.Record
- Get the metric instance
- getMetricNames() -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
- Deprecated. Returns the set of metric names.
- getMetrics(MetricsBuilder, boolean) -
Method in class org.apache.hadoop.ipc.metrics.RpcInstrumentation
-
- getMetrics(MetricsBuilder, boolean) -
Method in class org.apache.hadoop.mapred.TaskTrackerMetricsSource
-
- getMetrics(MetricsBuilder, boolean) -
Method in class org.apache.hadoop.metrics2.lib.AbstractMetricsSource
-
- getMetrics(MetricsBuilder, boolean) -
Method in interface org.apache.hadoop.metrics2.MetricsSource
- Get metrics from the source
- getMetrics(MetricsBuilder, boolean) -
Method in class org.apache.hadoop.metrics2.source.JvmMetricsSource
-
- getMetricsCopy() -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
- Deprecated. Returns a copy of this record's metrics.
- getMetricsList() -
Method in class org.apache.hadoop.metrics.util.MetricsRegistry
- Deprecated.
- getMinor() -
Method in class org.apache.hadoop.io.file.tfile.Utils.Version
- Get the minor version.
- getMinSplitSize(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
- Get the minimum split size
- getMinTime() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Deprecated. The min time for a single operation since the last reset
MetricsTimeVaryingRate.resetMinMax()
- getMode() -
Method in class org.apache.hadoop.io.nativeio.NativeIO.Stat
-
- getModificationTime() -
Method in class org.apache.hadoop.fs.FileStatus
- Get the modification time of the file.
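A brief sketch tying together several FileStatus getters indexed here (modification time, owner, permission, replication); the path is an illustrative placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileStatusExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FileStatus status = fs.getFileStatus(new Path("/user/example/data.txt")); // illustrative path
    System.out.println("modified:    " + status.getModificationTime()); // milliseconds since epoch
    System.out.println("owner:       " + status.getOwner());
    System.out.println("permission:  " + status.getPermission());
    System.out.println("replication: " + status.getReplication());
  }
}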
- getMount() -
Method in class org.apache.hadoop.fs.DF
-
- getName() -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getName() -
Method in class org.apache.hadoop.fs.FileSystem
- Deprecated. call #getUri() instead.
- getName() -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Deprecated. call #getUri() instead.
- getName() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- getName() -
Method in class org.apache.hadoop.fs.Path
- Returns the final component of this path.
- getName() -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getName(Class) -
Static method in class org.apache.hadoop.io.WritableName
- Return the name for a class.
- getName() -
Method in class org.apache.hadoop.mapred.Counters.Group
- Returns raw name of the group.
- getName() -
Method in class org.apache.hadoop.mapreduce.Counter
-
- getName() -
Method in class org.apache.hadoop.mapreduce.CounterGroup
- Get the internal name of the group
- getName() -
Method in class org.apache.hadoop.metrics.util.MetricsBase
- Deprecated.
- getName() -
Method in interface org.apache.hadoop.net.Node
- Return this node's name
- getName() -
Method in class org.apache.hadoop.net.NodeBase
- Return this node's name
- getName() -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
- return the name of the record
- getName() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationToken
- Returns the principal name (this method name comes from the JDK
Principal
interface).
- getNamed(String, Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
- Deprecated. call #get(URI,Configuration) instead.
- getNamedOutputFormatClass(JobConf, String) -
Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Returns the named output OutputFormat.
- getNamedOutputKeyClass(JobConf, String) -
Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Returns the key class for a named output.
- getNamedOutputs() -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Returns iterator with the defined name outputs.
- getNamedOutputsList(JobConf) -
Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Returns list of channel names.
- getNamedOutputValueClass(JobConf, String) -
Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
- Returns the value class for a named output.
- getNames() -
Method in class org.apache.hadoop.fs.BlockLocation
- Get the list of names (hostname:port) hosting this block
- getNestedStructTypeInfo(String) -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
- Return the type info of a nested record.
- getNetgroupNames() -
Method in class org.apache.hadoop.security.NetgroupCache
-
- getNetgroups(String, List<String>) -
Method in class org.apache.hadoop.security.NetgroupCache
-
- getNetgroups(String, List<String>) -
Method in class org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping
-
- getNetworkLocation() -
Method in interface org.apache.hadoop.net.Node
- Return the string representation of this node's network location
- getNetworkLocation() -
Method in class org.apache.hadoop.net.NodeBase
- Return this node's network location
- getNewJobId() -
Method in class org.apache.hadoop.mapred.JobTracker
- Allocates a new JobId string.
- getNext() -
Method in class org.apache.hadoop.contrib.failmon.LogParser
- Continue parsing the log file until a valid log entry is identified.
- getNextHeartbeatInterval() -
Method in class org.apache.hadoop.mapred.JobTracker
- Calculates next heartbeat interval using cluster size.
- getNextRecordRange() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get the next record range which is going to be processed by Task.
- getNextToken() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- getNextToken() -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- getNode(String) -
Method in class org.apache.hadoop.mapred.JobTracker
- Return the Node in the network topology that corresponds to the hostname
- getNode() -
Method in class org.apache.hadoop.mapred.join.Parser.NodeToken
-
- getNode() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNode(String) -
Method in class org.apache.hadoop.net.NetworkTopology
- Given a string representation of a node, return its reference
- getNodesAtMaxLevel() -
Method in class org.apache.hadoop.mapred.JobTracker
- Returns a collection of nodes at the max level
- getNullContext(String) -
Static method in class org.apache.hadoop.metrics.ContextFactory
- Deprecated. Returns a "null" context - one which does nothing.
- getNum() -
Method in class org.apache.hadoop.mapred.join.Parser.NumToken
-
- getNum() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNumber() -
Method in class org.apache.hadoop.metrics.spi.MetricValue
- Deprecated.
- getNumberColumns() -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
- Get the number of columns.
- getNumberOfThreads(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
- The number of threads in the thread pool that will run the map function.
- getNumberOfUniqueHosts() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getNumBytesInSum() -
Method in class org.apache.hadoop.util.DataChecksum
-
- getNumExcludedNodes() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the number of excluded hosts in the cluster.
- getNumFiles(PathFilter) -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated.
- getNumLinesPerSplit(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
- Get the number of lines per split
- getNumMapTasks() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the configured number of map tasks for this job.
- getNumOfKeyFields() -
Method in class org.apache.hadoop.streaming.PipeMapper
-
- getNumOfKeyFields() -
Method in class org.apache.hadoop.streaming.PipeMapRed
- Returns the number of key fields.
- getNumOfKeyFields() -
Method in class org.apache.hadoop.streaming.PipeReducer
-
- getNumOfLeaves() -
Method in class org.apache.hadoop.net.NetworkTopology
- Return the total number of nodes
- getNumOfRacks() -
Method in class org.apache.hadoop.net.NetworkTopology
- Return the total number of racks
- getNumOpenConnections() -
Method in class org.apache.hadoop.ipc.Server
- The number of open RPC connections
- getNumPaths() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns the number of Paths in the split
- getNumPaths() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns the number of Paths in the split
- getNumProcessors() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the total number of processors present on the system.
- getNumProcessors() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the total number of processors present on the system.
- getNumReduceTasks() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the configured number of reduce tasks for this job.
- getNumReduceTasks() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the configured number of reduce tasks for this job.
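A small sketch of reading the configured task counts from a JobConf; note that the map-task count is only a hint to the framework, while the reduce-task count is honored as set:

import org.apache.hadoop.mapred.JobConf;

public class TaskCountExample {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    job.setNumReduceTasks(8);               // request 8 reducers
    int reduces = job.getNumReduceTasks();  // -> 8
    int maps = job.getNumMapTasks();        // a hint; the actual number depends on the input splits
    System.out.println(maps + " map(s) hinted, " + reduces + " reduce(s) configured");
  }
}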
- getNumRequests() -
Method in class org.apache.hadoop.mapred.ShuffleExceptionTracker
- Gets the number of requests we are tracking
- getNumReservedTaskTrackersForMaps() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumReservedTaskTrackersForReduces() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumResolvedTaskTrackers() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getNumSchedulingOpportunities() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumSlots() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getNumSlotsPerTask(TaskType) -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getNumSlotsRequired() -
Method in class org.apache.hadoop.mapred.Task
-
- getNumTaskCacheLevels() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getNumTasksToExecutePerJvm() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the number of tasks that a spawned JVM should execute
- getOccupiedMapSlots() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get number of occupied map slots in the cluster.
- getOccupiedReduceSlots() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of occupied reduce slots in the cluster.
- getOffset() -
Method in class org.apache.hadoop.fs.BlockLocation
- Get the start offset of file associated with this block
- getOffset(int) -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns the start offset of the ith Path
- getOffset(int) -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns the start offset of the ith Path
- getOidInstance(String) -
Static method in class org.apache.hadoop.security.authentication.util.KerberosUtil
-
- getOp() -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
- Get the type of the operation.
- getOp() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
- Get the type of operation.
- getOperations() -
Method in class org.apache.hadoop.mapred.QueueAclsInfo
-
- getOpt(String) -
Method in class org.apache.hadoop.fs.shell.CommandFormat
- Return whether the option is set.
- getOtherAction() -
Method in class org.apache.hadoop.fs.permission.FsPermission
- Return other
FsAction
.
- getOutput() -
Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
- Get the output of the shell command.
- getOutputCommitter() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
OutputCommitter
implementation for the map-reduce job,
defaults to FileOutputCommitter
if not specified explicitly.
- getOutputCommitter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getOutputCommitter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getOutputCommitter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getOutputCommitter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getOutputCommitter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getOutputCommitter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.OutputFormat
- Get the output committer for this output format.
- getOutputCommitter() -
Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
-
- getOutputCompressionType(JobConf) -
Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
- Get the
SequenceFile.CompressionType
for the output SequenceFile
.
- getOutputCompressionType(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
- Get the
SequenceFile.CompressionType
for the output SequenceFile
.
- getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Get the
CompressionCodec
for compressing the job outputs.
- getOutputCompressorClass(JobContext, Class<? extends CompressionCodec>) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Get the
CompressionCodec
for compressing the job outputs.
- getOutputFieldCount() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFieldNames() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFormat() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
OutputFormat
implementation for the map-reduce job,
defaults to TextOutputFormat
if not specified explicitly.
- getOutputFormatClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the
OutputFormat
class for the job.
- getOutputKeyClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
- Get the reduce output key class.
- getOutputKeyClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the key class for the job output data.
- getOutputKeyClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the key class for the job output data.
- getOutputKeyClass() -
Method in class org.apache.hadoop.streaming.io.IdentifierResolver
- Returns the resolved output key class.
- getOutputKeyComparator() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
RawComparator
comparator used to compare keys.
- getOutputName(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Get the base output name for the output file.
- getOutputPath(JobConf) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Get the
Path
to the output directory for the map-reduce job.
- getOutputPath(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Get the
Path
to the output directory for the map-reduce job.
- getOutputReaderClass() -
Method in class org.apache.hadoop.streaming.io.IdentifierResolver
- Returns the resolved
OutputReader
class.
- getOutputSize() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Returns the number of bytes of output from this map.
- getOutputStream(Socket) -
Static method in class org.apache.hadoop.net.NetUtils
- Same as getOutputStream(socket, 0).
- getOutputStream(Socket, long) -
Static method in class org.apache.hadoop.net.NetUtils
- Returns OutputStream for the socket.
- getOutputStream(OutputStream) -
Method in class org.apache.hadoop.security.SaslRpcClient
- Get a SASL wrapped OutputStream.
- getOutputTableName() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputValueClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
- Get the reduce output value class.
- getOutputValueClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the value class for job outputs.
- getOutputValueClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the value class for job outputs.
- getOutputValueClass() -
Method in class org.apache.hadoop.streaming.io.IdentifierResolver
- Returns the resolved output value class.
- getOutputValueGroupingComparator() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the user defined
WritableComparable
comparator for
grouping keys of inputs to the reduce.
- getOwner() -
Method in class org.apache.hadoop.fs.FileStatus
- Get the owner of the file.
- getOwner(FileDescriptor) -
Static method in class org.apache.hadoop.io.nativeio.NativeIO
-
- getOwner() -
Method in class org.apache.hadoop.io.nativeio.NativeIO.Stat
-
- getParameter(String) -
Method in class org.apache.hadoop.http.HttpServer.QuotingInputFilter.RequestQuoter
- Unquote the name and quote the value.
- getParameter(ServletRequest, String) -
Static method in class org.apache.hadoop.util.ServletUtil
- Get a parameter from a ServletRequest.
- getParameterMap() -
Method in class org.apache.hadoop.http.HttpServer.QuotingInputFilter.RequestQuoter
-
- getParameterNames() -
Method in class org.apache.hadoop.http.HttpServer.QuotingInputFilter.RequestQuoter
- Return the set of parameter names, quoting each name.
- getParameterValues(String) -
Method in class org.apache.hadoop.http.HttpServer.QuotingInputFilter.RequestQuoter
-
- getParent() -
Method in class org.apache.hadoop.fs.Path
- Returns the parent of a path or null if at root.
- getParent() -
Method in interface org.apache.hadoop.net.Node
- Return this node's parent
- getParent() -
Method in class org.apache.hadoop.net.NodeBase
- Return this node's parent
- getParentNode(Node, int) -
Static method in class org.apache.hadoop.mapred.JobTracker
-
- getPartition(Shard, IntermediateForm, int) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
-
- getPartition(SecondarySort.IntPair, IntWritable, int) -
Method in class org.apache.hadoop.examples.SecondarySort.FirstPartitioner
-
- getPartition(IntWritable, NullWritable, int) -
Method in class org.apache.hadoop.examples.SleepJob
-
- getPartition(K2, V2, int) -
Method in class org.apache.hadoop.mapred.lib.HashPartitioner
- Use
Object.hashCode()
to partition.
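As a concrete illustration of the hash partitioning described here, the sketch below compares HashPartitioner.getPartition with the equivalent computation, a non-negative hashCode() taken modulo the number of reduce tasks (the key text and reducer count are arbitrary):

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.HashPartitioner;

public class HashPartitionExample {
  public static void main(String[] args) {
    HashPartitioner<Text, NullWritable> partitioner = new HashPartitioner<Text, NullWritable>();
    partitioner.configure(new JobConf());
    int numReduceTasks = 4;                                  // arbitrary reducer count
    Text key = new Text("example-key");
    int partition = partitioner.getPartition(key, NullWritable.get(), numReduceTasks);
    // Equivalent computation: non-negative hash code modulo the reducer count.
    int expected = (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    System.out.println(partition + " == " + expected);
  }
}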
- getPartition(K2, V2, int) -
Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- getPartition(int, int) -
Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- getPartition(K, V, int) -
Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- getPartition(K2, V2, int) -
Method in interface org.apache.hadoop.mapred.Partitioner
- Get the partition number for a given key (hence record) given the total
number of partitions, i.e. the number of reduce tasks for the job.
- getPartition() -
Method in class org.apache.hadoop.mapred.Task
- Get the index of this task within the job.
- getPartition(BinaryComparable, V, int) -
Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
- Use (the specified slice of the array returned by)
BinaryComparable.getBytes()
to partition.
- getPartition(K, V, int) -
Method in class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
- Use
Object.hashCode()
to partition.
- getPartition(K2, V2, int) -
Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(int, int) -
Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(K, V, int) -
Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getPartition(KEY, VALUE, int) -
Method in class org.apache.hadoop.mapreduce.Partitioner
- Get the partition number for a given key (hence record) given the total
number of partitions, i.e. the number of reduce tasks for the job.
- getPartitionerClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
Partitioner
used to partition Mapper
-outputs
to be sent to the Reducer
s.
- getPartitionerClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the
Partitioner
class for the job.
- getPartitionFile(JobConf) -
Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
- Get the path to the SequenceFile storing the sorted partition keyset.
- getPartitionFile(Configuration) -
Static method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
- Get the path to the SequenceFile storing the sorted partition keyset.
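A hedged sketch of wiring up the partition file for TotalOrderPartitioner (old mapred API); in practice the SequenceFile of split points is usually produced by a sampler, and the path below is purely illustrative:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

public class PartitionFileExample {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Illustrative location of the SequenceFile holding the sorted split points.
    TotalOrderPartitioner.setPartitionFile(job, new Path("/user/example/_partitions"));
    String partitionFile = TotalOrderPartitioner.getPartitionFile(job);
    System.out.println("partition keyset stored at: " + partitionFile);
  }
}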
- getPassword() -
Method in class org.apache.hadoop.security.token.Token
- Get the token password/secret
- getPath() -
Method in class org.apache.hadoop.fs.FileStatus
-
- getPath() -
Method in class org.apache.hadoop.mapred.FileSplit
- The file containing this split's data.
- getPath(int) -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns the ith Path
- getPath(int) -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns the ith Path
- getPath() -
Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
- The file containing this split's data.
- getPath(Node) -
Static method in class org.apache.hadoop.net.NodeBase
- Return this node's path
- getPathForCustomFile(JobConf, String) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Helper function to generate a
Path
for a file that is unique for
the task within the job output directory.
- getPathForWorkFile(TaskInputOutputContext<?, ?, ?, ?>, String, String) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Helper function to generate a
Path
for a file that is unique for
the task within the job output directory.
- getPaths() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns all the Paths in the split
- getPaths() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns all the Paths in the split
- getPercentExceptions() -
Method in class org.apache.hadoop.mapred.ShuffleExceptionTracker
- Gets the percentage of requests that resulted in exceptions.
- getPercentUsed() -
Method in class org.apache.hadoop.fs.DF
-
- getPercentUsed() -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated.
- getPeriod() -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Deprecated. Returns the timer period.
- getPeriod() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Deprecated. Returns the timer period.
- getPermission() -
Method in class org.apache.hadoop.fs.FileStatus
- Get FsPermission associated with the file.
- getPermission() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
- Return permission
- getPhase() -
Method in class org.apache.hadoop.mapred.Task
- Return current phase of the task.
- getPhase() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get current phase of this task.
- getPhysicalMemorySize() -
Method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
- Deprecated. Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() -
Method in class org.apache.hadoop.util.MemoryCalculatorPlugin
- Deprecated. Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the total size of the physical memory present in the system.
- getPhysicalMemorySize() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin.ProcResourceValues
- Obtain the physical memory size used by current process tree.
- getPlatformName() -
Static method in class org.apache.hadoop.util.PlatformName
- Get the complete platform string as reported by the Java VM.
- getPort() -
Method in class org.apache.hadoop.http.HttpServer
- Get the port that the server is on
- getPos() -
Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- getPos() -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
-
- getPos() -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- getPos() -
Method in exception org.apache.hadoop.fs.ChecksumException
-
- getPos() -
Method in class org.apache.hadoop.fs.FSDataInputStream
-
- getPos() -
Method in class org.apache.hadoop.fs.FSDataOutputStream
-
- getPos() -
Method in class org.apache.hadoop.fs.FSInputChecker
-
- getPos() -
Method in class org.apache.hadoop.fs.FSInputStream
- Return the current offset from the start of the file
- getPos() -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- getPos() -
Method in interface org.apache.hadoop.fs.Seekable
- Return the current offset from the start of the file
- getPos() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Unsupported (returns zero in all cases).
- getPos() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Request position from proxied RR.
- getPos() -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getPos() -
Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
- return the amount of data processed
- getPos() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
- Returns the current position in the input.
- getPos() -
Method in class org.apache.hadoop.mapred.LineRecordReader
-
- getPos() -
Method in interface org.apache.hadoop.mapred.RecordReader
- Returns the current position in the input.
- getPos() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getPos() -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getPos() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getPos() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
- Deprecated.
- getPos() -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
- Returns the current position in the input.
- getPosition() -
Method in class org.apache.hadoop.io.DataInputBuffer
- Returns the current position in the input.
- getPosition() -
Method in class org.apache.hadoop.io.InputBuffer
- Returns the current position in the input.
- getPosition() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Return the current byte position in the input file.
- getPreviousIntervalAverageTime() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Deprecated. The average rate of an operation in the previous interval
- getPreviousIntervalNumOps() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Deprecated. The number of operations in the previous interval
- getPreviousIntervalValue() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
- Deprecated. The Value at the Previous interval
- getPreviousIntervalValue() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
- Deprecated. The Value at the Previous interval
- getPrincipal() -
Method in class org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler
- Returns the Kerberos principal used by the authentication handler.
- getPriority() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getPrivateDistributedCacheDir(String) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getProblems() -
Method in exception org.apache.hadoop.mapred.InvalidInputException
- Get the complete list of the problems reported.
- getProblems() -
Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
- Get the complete list of the problems reported.
- getProcess() -
Method in class org.apache.hadoop.util.Shell
- get the current sub-process executing the given command
- getProcessTree() -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get the process-tree with latest state.
- getProcessTreeDump() -
Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
- Get a dump of the process-tree.
- getProcResourceValues() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
-
- getProcResourceValues() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain resource status used by current process tree.
- getProfile() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getProfileEnabled() -
Method in class org.apache.hadoop.mapred.JobConf
- Get whether the task profiling is enabled.
- getProfileParams() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the profiler configuration arguments.
- getProfileTaskRange(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Get the range of maps or reduces to profile.
- getProgress() -
Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- getProgress() -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
-
- getProgress() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
- Gets the Progress object; this has a float (0.0 - 1.0)
indicating the bytes processed by the iterator so far
- getProgress() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Report progress as the minimum of all child RR progress.
- getProgress() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Request progress from proxied RR.
- getProgress() -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getProgress() -
Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
- return progress based on the amount of data processed so far.
- getProgress() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
- How much of the input has the
RecordReader
consumed i.e.
- getProgress() -
Method in class org.apache.hadoop.mapred.LineRecordReader
- Get the progress within the split
- getProgress() -
Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
- Gets the Progress object; this has a float (0.0 - 1.0)
indicating the bytes processed by the iterator so far
- getProgress() -
Method in interface org.apache.hadoop.mapred.RecordReader
- How much of the input has the
RecordReader
consumed i.e.
- getProgress() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Return the progress within the input split
- getProgress() -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getProgress() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
- Return the progress within the input split
- getProgress() -
Method in class org.apache.hadoop.mapred.Task
-
- getProgress() -
Method in class org.apache.hadoop.mapred.TaskReport
- The amount completed, between zero and one.
- getProgress() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
- The current progress of the record reader through its data.
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
- return progress based on the amount of data processed so far.
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
- Get the progress within the split
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Return the progress within the input split
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getProgress() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
- Return the progress within the input split
- getProgress() -
Method in class org.apache.hadoop.mapreduce.RecordReader
- The current progress of the record reader through its data.
- getProgress() -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- getProgressible() -
Method in class org.apache.hadoop.mapred.JobContext
- Get the progress mechanism for reporting progress.
- getProgressible() -
Method in class org.apache.hadoop.mapred.TaskAttemptContext
-
- getProperty(String) -
Static method in class org.apache.hadoop.contrib.failmon.Environment
- Fetches the value of a property from the configuration file.
- getProtocol() -
Method in class org.apache.hadoop.security.authorize.Service
- Get the protocol for the service
- getProtocolVersion(String, long) -
Method in interface org.apache.hadoop.ipc.VersionedProtocol
- Return protocol version corresponding to protocol interface.
- getProtocolVersion(String, long) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getProtocolVersion(String, long) -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getProxy(Class<? extends VersionedProtocol>, long, InetSocketAddress, Configuration, SocketFactory) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a client-side proxy object that implements the named protocol,
talking to a server at the named address.
- getProxy(Class<? extends VersionedProtocol>, long, InetSocketAddress, Configuration, SocketFactory, int) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a client-side proxy object that implements the named protocol,
talking to a server at the named address.
- getProxy(Class<? extends VersionedProtocol>, long, InetSocketAddress, UserGroupInformation, Configuration, SocketFactory) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a client-side proxy object that implements the named protocol,
talking to a server at the named address.
- getProxy(Class<? extends VersionedProtocol>, long, InetSocketAddress, UserGroupInformation, Configuration, SocketFactory, int) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a client-side proxy object that implements the named protocol,
talking to a server at the named address.
- getProxy(Class<? extends VersionedProtocol>, long, InetSocketAddress, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a client-side proxy object with the default SocketFactory
- getProxy(Class<? extends VersionedProtocol>, long, InetSocketAddress, Configuration, int) -
Static method in class org.apache.hadoop.ipc.RPC
-
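For orientation, a minimal sketch of building a client-side proxy with RPC.getProxy. PingProtocol, its versionID, and the localhost:9000 address are hypothetical stand-ins, not APIs from this index; a real deployment defines its own VersionedProtocol interface and server:

import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.VersionedProtocol;

public class RpcProxyExample {
  // Hypothetical protocol for illustration; a real deployment defines its own interface.
  public interface PingProtocol extends VersionedProtocol {
    long versionID = 1L;
    String ping(String message);
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    InetSocketAddress addr = new InetSocketAddress("localhost", 9000);  // illustrative address
    // Build the client-side proxy; the cast is needed because getProxy is declared
    // to return VersionedProtocol.
    PingProtocol proxy =
        (PingProtocol) RPC.getProxy(PingProtocol.class, PingProtocol.versionID, addr, conf);
    try {
      System.out.println(proxy.ping("hello"));  // fails unless a matching server is running
    } finally {
      RPC.stopProxy(proxy);
    }
  }
}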
- getProxySuperuserGroupConfKey(String) -
Static method in class org.apache.hadoop.security.authorize.ProxyUsers
- Returns configuration key for effective user groups allowed for a superuser
- getProxySuperuserIpConfKey(String) -
Static method in class org.apache.hadoop.security.authorize.ProxyUsers
- Return configuration key for superuser ip addresses
- getPublicDistributedCacheDir() -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getQueueAclsForCurrentUser() -
Method in class org.apache.hadoop.mapred.JobClient
- Gets the Queue ACLs for current user
- getQueueAclsForCurrentUser() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueAdmins(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueInfo(String) -
Method in class org.apache.hadoop.mapred.JobClient
- Gets the queue information associated with a particular job queue
- getQueueInfo(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueInfoJson() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueInfoJson() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getQueueManager() -
Method in class org.apache.hadoop.mapred.JobTracker
- Return the
QueueManager
associated with the JobTracker.
- getQueueMetrics() -
Method in class org.apache.hadoop.mapred.JobInProgress
- Get the QueueMetrics object associated with this job
- getQueueName() -
Method in class org.apache.hadoop.mapred.JobConf
- Return the name of the queue to which this job is submitted.
- getQueueName() -
Method in class org.apache.hadoop.mapred.JobProfile
- Get the name of the queue to which the job is submitted.
- getQueueName() -
Method in class org.apache.hadoop.mapred.JobQueueInfo
- Get the queue name from JobQueueInfo
- getQueueName() -
Method in class org.apache.hadoop.mapred.QueueAclsInfo
-
- getQueues() -
Method in class org.apache.hadoop.mapred.JobClient
- Return an array of queue information objects about all the Job Queues
configured.
- getQueues() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getQueueState() -
Method in class org.apache.hadoop.mapred.JobQueueInfo
- Return the queue state
- getQuota() -
Method in class org.apache.hadoop.fs.ContentSummary
- Return the directory quota
- getRange(String, String) -
Method in class org.apache.hadoop.conf.Configuration
- Parse the given attribute as a set of integer ranges
- getRaw(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get the value of the
name
property, without doing
variable expansion.
- getRaw() -
Method in class org.apache.hadoop.fs.LocalFileSystem
-
- getRawFileSystem() -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- get the raw file system
- getReader() -
Method in class org.apache.hadoop.contrib.failmon.LogParser
- Return the BufferedReader that reads the log file
- getReaders(FileSystem, Path, Configuration) -
Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
- Open the output generated by this format.
- getReaders(Configuration, Path) -
Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
- Open the output generated by this format.
- getReadOps() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Get the number of file system read operations such as list files
- getReadyJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getReadyJobsList() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getRealm() -
Method in class org.apache.hadoop.security.KerberosName
- Get the realm of the name.
- getRealUser() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the real user (vs. the effective user).
- getReasonsForBlacklisting(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getReasonsForGraylisting(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getRecordName() -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Deprecated. Returns the record name.
- getRecordName() -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Deprecated. Returns the record name.
- getRecordNum() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
- Get the RecordNum corresponding to the entry pointed by the cursor.
- getRecordNumNear(long) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
- Get the RecordNum for the first key-value pair in a compressed block
whose byte offset in the TFile is greater than or equal to the specified
offset.
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.contrib.index.example.LineDocInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MyInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.examples.SleepJob.SleepInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.examples.terasort.TeraInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in interface org.apache.hadoop.mapred.InputFormat
- Get the
RecordReader
for the given InputSplit
.
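A short sketch of driving an old-API RecordReader obtained from InputFormat.getRecordReader, here via TextInputFormat over a manually constructed FileSplit; the input path is an illustrative assumption:

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class RecordReaderExample {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf();
    Path file = new Path("/user/example/input/part-00000");  // illustrative input file
    long length = FileSystem.get(job).getFileStatus(file).getLen();
    FileSplit split = new FileSplit(file, 0, length, (String[]) null);
    TextInputFormat format = new TextInputFormat();
    format.configure(job);
    RecordReader<LongWritable, Text> reader = format.getRecordReader(split, job, Reporter.NULL);
    LongWritable key = reader.createKey();
    Text value = reader.createValue();
    while (reader.next(key, value)) {           // iterate the key/value pairs in the split
      System.out.println(key + "\t" + value);
    }
    reader.close();
  }
}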
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in interface org.apache.hadoop.mapred.join.ComposableInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
- Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
- This is not implemented yet.
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
- Get the
RecordReader
for the given InputSplit
.
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.MultiFileInputFormat
- Deprecated.
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
- Create a record reader for the given split
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.TextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.streaming.AutoInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.streaming.StreamInputFormat
-
- getRecordReaderQueue() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Return sorted list of RecordReaders for this composite.
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
- Get the
RecordWriter
for the given job.
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
- Create a composite record writer that can write key/value data to different
output files
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in interface org.apache.hadoop.mapred.OutputFormat
- Get the
RecordWriter
for the given job.
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
- Get the
RecordWriter
for the given task.
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) -
Method in class org.apache.hadoop.mapreduce.OutputFormat
- Get the
RecordWriter
for the given task.
- getRecoveryDuration() -
Method in class org.apache.hadoop.mapred.JobTracker
- How long the jobtracker took to recover from restart.
- getReduceCounters(Counters) -
Method in class org.apache.hadoop.mapred.JobInProgress
- Returns reduce phase counters by summing over all reduce tasks in progress.
- getReduceDebugScript() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the reduce task's debug script.
- getReducerClass() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the
Reducer
class for the job.
- getReducerClass() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the
Reducer
class for the job.
- getReducerMaxSkipGroups(Configuration) -
Static method in class org.apache.hadoop.mapred.SkipBadRecords
- Get the number of acceptable skip groups surrounding the bad group PER
bad group in reducer.
- getReduceSlotCapacity() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the total number of reduce slots in the cluster.
- getReduceSpeculativeExecution() -
Method in class org.apache.hadoop.mapred.JobConf
- Should speculative execution be used for this job for reduce tasks?
Defaults to
true
.
- getReduceTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
- Get the information of the current state of the reduce tasks of a job.
- getReduceTaskReports(String) -
Method in class org.apache.hadoop.mapred.JobClient
- Deprecated. Applications should rather use
JobClient.getReduceTaskReports(JobID)
- getReduceTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getReduceTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the number of currently running reduce tasks in the cluster.
- getRemaining() -
Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
-
- getRemaining() -
Method in interface org.apache.hadoop.io.compress.Decompressor
- Returns the number of bytes remaining in the compressed-data buffer;
typically called after the decompressor has finished decompressing
the current gzip stream (a.k.a. "member").
- getRemaining() -
Method in class org.apache.hadoop.io.compress.snappy.SnappyDecompressor
- Returns
0
.
- getRemaining() -
Method in class org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor
- Returns the number of bytes remaining in the input buffer; normally
called when finished() is true to determine amount of post-gzip-stream
data.
- getRemaining() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
- Returns the number of bytes remaining in the input buffers; normally
called when finished() is true to determine amount of post-gzip-stream
data.
- getRemainingArgs() -
Method in class org.apache.hadoop.util.GenericOptionsParser
- Returns an array of Strings containing only application-specific arguments.
- getRemoteAddress() -
Static method in class org.apache.hadoop.ipc.Server
- Returns remote address as a string when invoked inside an RPC.
- getRemoteIp() -
Static method in class org.apache.hadoop.ipc.Server
- Returns the remote side IP address when invoked inside an RPC;
returns null in case of an error.
- getRenewDate() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.DelegationTokenInformation
- Returns the renew date.
- getRenewer() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
-
- getReplication() -
Method in class org.apache.hadoop.fs.FileStatus
- Get the replication factor of a file.
- getReplication(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Deprecated. Use getFileStatus() instead
- getReplication(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- getReport() -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
- log the counters
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
- getReport() -
Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
-
- getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getReportDetails() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getReportItems() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- getRequestURL() -
Method in class org.apache.hadoop.http.HttpServer.QuotingInputFilter.RequestQuoter
- Quote the url so that users specifying the HOST HTTP header
can't inject attacks.
- getRequestURL(HttpServletRequest) -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the full URL of the request including the query string.
- getReservedMapSlots() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get number of reserved map slots in the cluster.
- getReservedReduceSlots() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of reserved reduce slots in the cluster.
- getResource(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get the
URL
for the named resource.
- getResourceCalculatorPlugin(Class<? extends ResourceCalculatorPlugin>, Configuration) -
Static method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Get the ResourceCalculatorPlugin from the class name and configure it.
- getResult() -
Method in class org.apache.hadoop.examples.Sort
- Get the last job that was run using this instance.
- getRetainHours() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.JobCompletedEvent
- Get the number of hours for which job logs should be retained.
- getRevision() -
Static method in class org.apache.hadoop.util.VersionInfo
- Get the subversion revision number for the root directory
- getRotations() -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getRpcMetrics() -
Method in class org.apache.hadoop.ipc.Server
- Returns a handle to the rpcMetrics (required in tests)
- getRpcPort() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getRpcPort() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getRunAsUser(JobConf) -
Method in class org.apache.hadoop.mapred.TaskController
- Returns the local unix user that a given job will run as.
- getRunnable() -
Method in class org.apache.hadoop.util.Daemon
-
- getRunningJobList() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getRunningJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getRunningJobs() -
Method in class org.apache.hadoop.mapred.JobTracker
- Version that is called from a timer thread, and therefore needs to be
careful to synchronize.
- getRunningMaps() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of running map tasks in the cluster.
- getRunningReduces() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of running reduce tasks in the cluster.
- getRunningTaskAttempts() -
Method in class org.apache.hadoop.mapred.TaskReport
- Get the running task attempt IDs for this task
- getRunState() -
Method in class org.apache.hadoop.mapred.JobStatus
-
- getRunState() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getSample(InputFormat<K, V>, JobConf) -
Method in class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
- For each split sampled, emit when the ratio of the number of records
retained to the total record count is less than the specified
frequency.
- getSample(InputFormat<K, V>, JobConf) -
Method in class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
- Randomize the split order, then take the specified number of keys from
each split sampled, where each key is selected with the specified
probability and possibly replaced by a subsequently selected key when
the quota of keys from that split is satisfied.
- getSample(InputFormat<K, V>, JobConf) -
Method in interface org.apache.hadoop.mapred.lib.InputSampler.Sampler
- For a given job, collect and return a subset of the keys from the
input data.
- getSample(InputFormat<K, V>, JobConf) -
Method in class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
- From each split sampled, take the first numSamples / numSplits records.
- getSample(InputFormat<K, V>, Job) -
Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.IntervalSampler
- For each split sampled, emit when the ratio of the number of records
retained to the total record count is less than the specified
frequency.
- getSample(InputFormat<K, V>, Job) -
Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.RandomSampler
- Randomize the split order, then take the specified number of keys from
each split sampled, where each key is selected with the specified
probability and possibly replaced by a subsequently selected key when
the quota of keys from that split is satisfied.
- getSample(InputFormat<K, V>, Job) -
Method in interface org.apache.hadoop.mapreduce.lib.partition.InputSampler.Sampler
- For a given job, collect and return a subset of the keys from the
input data.
- getSample(InputFormat<K, V>, Job) -
Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.SplitSampler
- From each split sampled, take the first numSamples / numSplits records.
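These new-API samplers are typically used with TotalOrderPartitioner to build a partition file before the job runs. A minimal sketch, assuming Text keys, an already configured Job, and illustrative sampling parameters and partition-file path:

    // sample ~10% of keys, up to 10000 samples from at most 10 splits (illustrative numbers)
    InputSampler.Sampler<Text, Text> sampler =
        new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), new Path("/tmp/partitions"));
    job.setPartitionerClass(TotalOrderPartitioner.class);
    InputSampler.writePartitionFile(job, sampler);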
- getSaslQop() -
Method in enum org.apache.hadoop.security.SaslRpcServer.QualityOfProtection
-
- getSchedulingInfo() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getSchedulingInfo() -
Method in class org.apache.hadoop.mapred.JobQueueInfo
- Gets the scheduling information associated to particular job queue.
- getSchedulingInfo() -
Method in class org.apache.hadoop.mapred.JobStatus
- Gets the Scheduling information associated to a particular Job.
- getScheme() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Get the uri scheme associated with this statistics object.
- getSecond() -
Method in class org.apache.hadoop.examples.SecondarySort.IntPair
-
- getSecretAccessKey() -
Method in class org.apache.hadoop.fs.s3.S3Credentials
-
- getSecretKey(Credentials, Text) -
Static method in class org.apache.hadoop.mapreduce.security.TokenCache
- Auxiliary method to get the user's secret keys.
- getSecretKey(Text) -
Method in class org.apache.hadoop.security.Credentials
- Returns the key bytes for the alias
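A minimal sketch of storing and retrieving a secret key by alias (the alias "my.alias" and the key bytes are purely illustrative):

    Credentials creds = new Credentials();
    creds.addSecretKey(new Text("my.alias"), "s3cr3t".getBytes());   // illustrative alias and bytes
    byte[] key = creds.getSecretKey(new Text("my.alias"));           // null if the alias is absent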
- getSelectQuery() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
- Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBRecordReader
- Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
- Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() -
Method in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
- Returns the query for selecting the records from an Oracle DB.
- getSequenceFileOutputKeyClass(JobConf) -
Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
- Get the key class for the
SequenceFile
- getSequenceFileOutputKeyClass(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
- Get the key class for the
SequenceFile
- getSequenceFileOutputValueClass(JobConf) -
Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
- Get the value class for the
SequenceFile
- getSequenceFileOutputValueClass(JobContext) -
Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
- Get the value class for the
SequenceFile
- getSequenceNumber() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
-
- getSequenceWriter(TaskAttemptContext, Class<?>, Class<?>) -
Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getSerialization(Class<T>) -
Method in class org.apache.hadoop.io.serializer.SerializationFactory
-
- getSerializedLength() -
Method in class org.apache.hadoop.fs.s3.INode
-
- getSerializer(Class<Serializable>) -
Method in class org.apache.hadoop.io.serializer.JavaSerialization
-
- getSerializer(Class<T>) -
Method in interface org.apache.hadoop.io.serializer.Serialization
-
- getSerializer(Class<T>) -
Method in class org.apache.hadoop.io.serializer.SerializationFactory
-
- getSerializer(Class<Writable>) -
Method in class org.apache.hadoop.io.serializer.WritableSerialization
-
- getServer(Object, String, int, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a server for a protocol implementation instance listening on a
port and address.
- getServer(Object, String, int, int, boolean, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a server for a protocol implementation instance listening on a
port and address.
- getServer(Object, String, int, int, boolean, Configuration, SecretManager<? extends TokenIdentifier>) -
Static method in class org.apache.hadoop.ipc.RPC
- Construct a server for a protocol implementation instance listening on a
port and address, with a secret manager.
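A rough sketch of standing up an RPC server with the six-argument overload; MyProtocolImpl is a hypothetical class implementing a versioned protocol interface, and the address, port, and handler count are illustrative:

    Configuration conf = new Configuration();
    // MyProtocolImpl is hypothetical; bind address, port and handler count are illustrative
    org.apache.hadoop.ipc.Server server =
        RPC.getServer(new MyProtocolImpl(), "0.0.0.0", 9000,
                      5 /* handler threads */, true /* verbose */, conf);
    server.start();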
- getServerAddress(Configuration, String, String, String) -
Static method in class org.apache.hadoop.net.NetUtils
- Deprecated.
- getServerName() -
Method in class org.apache.hadoop.http.HttpServer.QuotingInputFilter.RequestQuoter
- Quote the server name so that users specifying the HOST HTTP header
can't inject attacks.
- getServerPrincipal(String, String) -
Static method in class org.apache.hadoop.security.SecurityUtil
- Convert Kerberos principal name pattern to valid Kerberos principal
names.
- getServerPrincipal(String, InetAddress) -
Static method in class org.apache.hadoop.security.SecurityUtil
- Convert Kerberos principal name pattern to valid Kerberos principal names.
- getServerVersion() -
Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
- Get the server's agreed to version.
- getService() -
Method in class org.apache.hadoop.security.token.Token
- Get the service on which the token is supposed to be used
- getServiceKey() -
Method in class org.apache.hadoop.security.authorize.Service
- Get the configuration key for the service.
- getServiceName() -
Method in class org.apache.hadoop.security.KerberosName
- Get the first component of the name.
- getServices() -
Method in class org.apache.hadoop.mapred.MapReducePolicyProvider
-
- getServices() -
Method in class org.apache.hadoop.security.authorize.PolicyProvider
- Get the
Service
definitions from the PolicyProvider
.
- getSessionId() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the user-specified session identifier.
- getSetupTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
- Get the information of the current state of the setup tasks of a job.
- getSetupTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getShape(boolean, int) -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- getShortName() -
Method in class org.apache.hadoop.security.KerberosName
- Get the translation of the principal name into an operating system
user name.
- getShortUserName() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the user's login name.
- getShuffleFinishTime() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get shuffle finish time for the task.
- getSize() -
Method in class org.apache.hadoop.io.BytesWritable
- Deprecated. Use
BytesWritable.getLength()
instead.
- getSize() -
Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
- Size of stored data.
- getSize() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- getSize() -
Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- getSkipOutputPath(Configuration) -
Static method in class org.apache.hadoop.mapred.SkipBadRecords
- Get the directory to which skipped records are written.
- getSkipRanges() -
Method in class org.apache.hadoop.mapred.Task
- Get skipRanges.
- getSlope(String) -
Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
- Deprecated.
- getSocketFactory(Configuration, Class<?>) -
Static method in class org.apache.hadoop.net.NetUtils
- Get the socket factory for the given class according to its
configuration parameter
hadoop.rpc.socket.factory.class.<ClassName>.
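For illustration, assuming a protocol interface named MyProtocol (hypothetical), RPC traffic for that class could be routed through a SOCKS proxy by setting the per-class factory key:

    Configuration conf = new Configuration();
    // MyProtocol is a hypothetical protocol interface
    conf.set("hadoop.rpc.socket.factory.class.MyProtocol",
             "org.apache.hadoop.net.SocksSocketFactory");
    javax.net.SocketFactory factory = NetUtils.getSocketFactory(conf, MyProtocol.class);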
- getSocketFactoryFromProperty(Configuration, String) -
Static method in class org.apache.hadoop.net.NetUtils
- Get the socket factory corresponding to the given proxy URI.
- getSortComparator() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the
RawComparator
comparator used to compare keys.
- getSortFinishTime() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get sort finish time for the task.
- getSpace(int) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getSpaceConsumed() -
Method in class org.apache.hadoop.fs.ContentSummary
- Returns (disk) space consumed
- getSpaceQuota() -
Method in class org.apache.hadoop.fs.ContentSummary
- Returns (disk) space quota
- getSpeculativeExecution() -
Method in class org.apache.hadoop.mapred.JobConf
- Should speculative execution be used for this job?
Defaults to
true
.
- getSplit() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getSplitHosts(BlockLocation[], long, long, NetworkTopology) -
Method in class org.apache.hadoop.mapred.FileInputFormat
- This function identifies and returns the hosts that contribute
most for a given split.
- getSplitIndex() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getSplitLocation() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitIndex
-
- getSplitLocation() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getSplits(int) -
Method in class org.apache.hadoop.examples.dancing.Pentomino
- Generate a list of prefixes to a given depth
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.examples.SleepJob.SleepInputFormat
-
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.examples.terasort.TeraInputFormat
-
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.FileInputFormat
- Splits files returned by
FileInputFormat.listStatus(JobConf)
when
they're too big.
- getSplits(JobConf, int) -
Method in interface org.apache.hadoop.mapred.InputFormat
- Logically split the set of input files for the job.
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
- Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
- Logically split the set of input files for the job.
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
-
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
- Logically splits the set of input files for the job, splits N lines
of the input as one split.
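A minimal old-API sketch of configuring NLineInputFormat so that each mapper processes a fixed number of input lines; MyJob, the input path, and the value 100 are illustrative:

    // MyJob is a hypothetical driver class; path and line count are illustrative
    JobConf conf = new JobConf(MyJob.class);
    conf.setInputFormat(NLineInputFormat.class);
    conf.setInt("mapred.line.input.format.linespermap", 100);
    FileInputFormat.addInputPath(conf, new Path("/input/data.txt"));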
- getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.MultiFileInputFormat
- Deprecated.
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.InputFormat
- Logically split the set of input files for the job.
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
- Logically split the set of input files for the job.
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
- Logically split the set of input files for the job.
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat
-
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
- Generate the list of files and make them into FileSplits.
- getSplits(JobContext) -
Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
- Logically splits the set of input files for the job, splits N lines
of the input as one split.
- getSplitsForFile(FileStatus, Configuration, int) -
Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- getSplitter(int) -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getSplitter(int) -
Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- getSrcChecksum() -
Static method in class org.apache.hadoop.util.VersionInfo
- Get the checksum of the source files from which Hadoop was
built.
- getStackTrace() -
Method in exception org.apache.hadoop.security.authorize.AuthorizationException
-
- getStagingAreaDir() -
Method in class org.apache.hadoop.mapred.JobClient
- Grab the jobtracker's view of the staging directory path where
job-specific files will be placed.
- getStagingAreaDir() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getStagingDir(JobClient, Configuration) -
Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
- Initializes the staging directory and returns the path.
- getStart() -
Method in class org.apache.hadoop.mapred.FileSplit
- The position of the first byte in the file to process.
- getStart() -
Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
-
- getStart() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getStart() -
Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
- The position of the first byte in the file to process.
- getStartOffset() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getStartOffset() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitIndex
-
- getStartOffset() -
Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getStartOffsets() -
Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
- Returns an array containing the start offsets of the files in the split
- getStartOffsets() -
Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
- Returns an array containing the start offsets of the files in the split
- getStartTime() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getStartTime() -
Method in class org.apache.hadoop.mapred.JobStatus
-
- getStartTime() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getStartTime() -
Method in class org.apache.hadoop.mapred.TaskReport
- Get start time of task.
- getStartTime() -
Method in class org.apache.hadoop.mapred.TaskStatus
- Get start time of the task.
- getState(String) -
Static method in class org.apache.hadoop.contrib.failmon.PersistentState
- Read and return the state of parsing for a particular log file.
- getState() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getState() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getState() -
Method in class org.apache.hadoop.mapred.TaskReport
- The most recent state, reported by a
Reporter
.
- getStatement() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
- getStatement() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getStateString() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getStaticResolution(String) -
Static method in class org.apache.hadoop.net.NetUtils
- Retrieves the resolved name for the passed host.
- getStatistics() -
Static method in class org.apache.hadoop.fs.FileSystem
- Deprecated. use
FileSystem.getAllStatistics()
instead
- getStatistics(String, Class<? extends FileSystem>) -
Static method in class org.apache.hadoop.fs.FileSystem
- Get the statistics for a particular file system
- getStatus() -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getStatus() -
Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
- Get the current
TaskTrackerStatus
of the TaskTracker
.
- getStatus() -
Method in class org.apache.hadoop.mapreduce.TaskAttemptContext
- Get the last set status message.
- getStr() -
Method in class org.apache.hadoop.mapred.join.Parser.StrToken
-
- getStr() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getStringCollection(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get the comma delimited values of the
name
property as
a collection of String
s.
- getStringCollection(String) -
Static method in class org.apache.hadoop.util.StringUtils
- Returns a collection of strings.
- getStrings(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get the comma delimited values of the
name
property as
an array of String
s.
- getStrings(String, String...) -
Method in class org.apache.hadoop.conf.Configuration
- Get the comma delimited values of the
name
property as
an array of String
s.
- getStrings(String) -
Static method in class org.apache.hadoop.util.StringUtils
- Returns an arraylist of strings.
- getSubject() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the underlying subject from this ugi.
- getSuccessfulJobList() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getSuccessfulJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getSuccessfulTaskAttempt() -
Method in class org.apache.hadoop.mapred.TaskReport
- Get the attempt ID that took this task to completion
- GetSuffix(int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getSum() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
-
- getSum() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
-
- getSummaryJson() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getSummaryJson() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getSupportedCompressionAlgorithms() -
Static method in class org.apache.hadoop.io.file.tfile.TFile
- Get names of supported compression algorithms.
- getSymlink(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- This method checks to see if symlinks are to be created for the
localized cache files in the current working directory.
Used by internal DistributedCache code.
- getSystemDir() -
Method in class org.apache.hadoop.mapred.JobClient
- Grab the jobtracker system directory path where job-specific files are to be placed.
- getSystemDir() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getTableName() -
Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getTabSize(int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- getTag() -
Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- getTag(String) -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
- Deprecated. Returns a tag object which can be a String, Integer, Short or Byte.
- getTag(String) -
Method in class org.apache.hadoop.metrics2.util.MetricsCache.Record
- Get the tag value
- getTagNames() -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
- Deprecated. Returns the set of tag names
- getTagsCopy() -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
- Deprecated. Returns a copy of this record's tags.
- getTask() -
Method in class org.apache.hadoop.mapred.JvmTask
-
- getTask(JvmContext) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Called upon startup by the child process, to fetch Task data.
- getTask(JvmContext) -
Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
- Called when a child task process starts, to get its task.
- getTaskAttemptID() -
Method in class org.apache.hadoop.mapred.TaskAttemptContext
- Get the taskAttemptID.
- getTaskAttemptId() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Returns task id.
- getTaskAttemptID() -
Method in class org.apache.hadoop.mapreduce.TaskAttemptContext
- Get the unique name for this task attempt.
- getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer) -
Static method in class org.apache.hadoop.mapred.TaskAttemptID
- Deprecated.
- getTaskAttemptLogDir(TaskAttemptID, String, String[]) -
Static method in class org.apache.hadoop.mapred.TaskLog
- Get attempt log directory path for the given attempt-id under randomly
selected mapred local directory.
- getTaskAttempts() -
Method in class org.apache.hadoop.mapred.JobHistory.Task
- Returns all task attempts for this task.
- getTaskCompletionEvents(int, int) -
Method in class org.apache.hadoop.mapred.JobInProgress
-
- getTaskCompletionEvents(JobID, int, int) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getTaskCompletionEvents(int) -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get events indicating completion (success/failure) of component tasks.
- getTaskCompletionEvents(int) -
Method in class org.apache.hadoop.mapreduce.Job
- Get events indicating completion (success/failure) of component tasks.
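A sketch of polling completion events from a running new-API job; the variable job is assumed to be a submitted org.apache.hadoop.mapreduce.Job, and the loop stops once no further events are currently available:

    int from = 0;
    TaskCompletionEvent[] events = job.getTaskCompletionEvents(from);
    while (events.length > 0) {
      for (TaskCompletionEvent event : events) {
        System.out.println(event.getTaskAttemptId() + " finished with " + event.getTaskStatus());
      }
      from += events.length;
      events = job.getTaskCompletionEvents(from);
    }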
- getTaskController() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTaskController() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogManager
- Get the taskController for deleting logs.
- getTaskDiagnostics(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.JobTracker
- Get the diagnostics for a given task
- getTaskDiagnostics(TaskAttemptID) -
Method in interface org.apache.hadoop.mapred.RunningJob
- Gets the diagnostic messages for a given task attempt.
- getTaskDistributedCacheManager(JobID) -
Method in class org.apache.hadoop.filecache.TrackerDistributedCacheManager
-
- getTaskID() -
Method in class org.apache.hadoop.mapred.Task
-
- getTaskID() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getTaskId() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Deprecated. use
TaskCompletionEvent.getTaskAttemptId()
instead.
- getTaskId() -
Method in class org.apache.hadoop.mapred.TaskLogAppender
- Getter/Setter methods for log4j.
- getTaskId() -
Method in class org.apache.hadoop.mapred.TaskReport
- Deprecated. use
TaskReport.getTaskID()
instead
- getTaskID() -
Method in class org.apache.hadoop.mapred.TaskReport
- The id of the task.
- getTaskID() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getTaskID() -
Method in class org.apache.hadoop.mapreduce.TaskAttemptID
- Returns the
TaskID
object that this task attempt belongs to
- getTaskIDsPattern(String, Integer, Boolean, Integer) -
Static method in class org.apache.hadoop.mapred.TaskID
- Deprecated.
- getTaskInfo(JobConf) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- getTaskInProgress(TaskID) -
Method in class org.apache.hadoop.mapred.JobInProgress
- Return the TaskInProgress that matches the tipid.
- getTaskLogFile(TaskAttemptID, boolean, TaskLog.LogName) -
Static method in class org.apache.hadoop.mapred.TaskLog
-
- getTaskLogLength(JobConf) -
Static method in class org.apache.hadoop.mapred.TaskLog
- Get the desired maximum length of task's logs.
- getTaskLogsUrl(JobHistory.TaskAttempt) -
Static method in class org.apache.hadoop.mapred.JobHistory
- Return the TaskLogsUrl of a particular TaskAttempt
- getTaskLogUrl(String, String, String) -
Static method in class org.apache.hadoop.mapred.TaskLogServlet
- Construct the taskLogUrl
- getTaskMemoryManager() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTaskOutputFilter(JobConf) -
Static method in class org.apache.hadoop.mapred.JobClient
- Get the task output filter out of the JobConf.
- getTaskOutputFilter() -
Method in class org.apache.hadoop.mapred.JobClient
- Deprecated.
- getTaskOutputPath(JobConf, String) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Helper function to create the task's temporary output directory and
return the path to the task's output file.
- getTaskReports() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
- Get the current tasks at the TaskTracker.
- getTaskRunTime() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Returns time (in millisec) the task took to complete.
- getTasksInfoJson() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTasksInfoJson() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getTaskStatus() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Returns enum Status.SUCCEEDED or Status.FAILED.
- getTaskTracker(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getTaskTracker() -
Method in class org.apache.hadoop.mapred.TaskStatus
-
- getTaskTrackerCount() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the number of active trackers in the cluster.
- getTaskTrackerHttp() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- http location of the tasktracker where this task ran.
- getTaskTrackerInstrumentation() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getTaskTrackerReportAddress() -
Method in class org.apache.hadoop.mapred.TaskTracker
- Return the port to which the tasktracker is bound
- getTaskTrackers() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the number of active task trackers in the cluster.
- getTaskTrackerStatus(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getTerm() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
- Get the term.
- getText() -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
- Get the text that represents a document.
- getText() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
- The text of the document id.
- getThreadCount() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getThreadCount() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getThreadState() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getTimestamp(Configuration, URI) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Returns mtime of a given cache file on hdfs.
- getTip(TaskID) -
Method in class org.apache.hadoop.mapred.JobTracker
- Returns specified TaskInProgress, or null.
- getTmax(String) -
Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
- Deprecated.
- getToken(int) -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- getToken(HttpServletRequest) -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the
AuthenticationToken
for the request.
- getToken(Text) -
Method in class org.apache.hadoop.security.Credentials
- Returns the Token object for the alias
- getTokenIdentifiers() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the set of TokenIdentifiers belonging to this UGI
- getTokens() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Obtain the collection of tokens associated with this user.
- getTokenServiceAddr(Token<?>) -
Static method in class org.apache.hadoop.security.SecurityUtil
- Decode the given token's service field into an InetAddress
- getTopologyPaths() -
Method in class org.apache.hadoop.fs.BlockLocation
- Get the list of network topology paths for each of the hosts.
- getTotalJobSubmissions() -
Method in class org.apache.hadoop.mapreduce.ClusterMetrics
- Get the total number of job submissions in the cluster.
- getTotalLogFileSize() -
Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- getTotalSubmissions() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getTrackerIdentifier() -
Method in class org.apache.hadoop.mapred.JobTracker
- Get the unique identifier (i.e. timestamp) of this job tracker start.
- getTrackerName() -
Method in class org.apache.hadoop.mapred.TaskTrackerStatus
-
- getTrackerName() -
Method in class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker
- Get the unique identifier for the
TaskTracker
- getTrackerPort() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getTrackingURL() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get the URL where some job progress information will be displayed.
- getTrackingURL() -
Method in class org.apache.hadoop.mapreduce.Job
- Get the URL where some job progress information will be displayed.
- getTTExpiryInterval() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the tasktracker expiry interval for the cluster
- getType() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getType() -
Method in interface org.apache.hadoop.security.authentication.server.AuthenticationHandler
- Returns the authentication type of the authentication handler.
- getType() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationToken
- Returns the authentication mechanism of the token.
- getType() -
Method in class org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler
- Returns the authentication type of the authentication handler, 'kerberos'.
- getType() -
Method in class org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler
- Returns the authentication type of the authentication handler, 'simple'.
- getType() -
Method in class org.apache.hadoop.typedbytes.TypedBytesWritable
- Get the type code embedded in the first byte.
- getTypeID() -
Method in class org.apache.hadoop.record.meta.FieldTypeInfo
- get the field's TypeID object
- getTypes() -
Method in class org.apache.hadoop.io.GenericWritable
- Return all classes that may be wrapped.
- getTypeVal() -
Method in class org.apache.hadoop.record.meta.TypeID
- Get the type value.
- getUlimitMemoryCommand(int) -
Static method in class org.apache.hadoop.util.Shell
- Get the Unix command for setting the maximum virtual memory available
to a given child process.
- getUlimitMemoryCommand(Configuration) -
Static method in class org.apache.hadoop.util.Shell
- Deprecated. Use
Shell.getUlimitMemoryCommand(int)
- getUMask(Configuration) -
Static method in class org.apache.hadoop.fs.permission.FsPermission
- Get the user file creation mask (umask)
UMASK_LABEL
config param has umask value that is either symbolic
or octal.
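A short sketch, assuming the fs.permissions.umask-mode key used by recent releases (the octal value 022 is illustrative):

    Configuration conf = new Configuration();
    conf.set("fs.permissions.umask-mode", "022");   // symbolic values such as "u=rwx,g=r,o=" also work
    FsPermission umask = FsPermission.getUMask(conf);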
- getUniqueFile(TaskAttemptContext, String, String) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Generate a unique filename, based on the task id, name, and extension
- getUniqueItems() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
-
- getUniqueName(JobConf, String) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Helper function to generate a name that is unique for the task.
- getUnits(String) -
Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
- Deprecated.
- getUpperClause() -
Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getUri() -
Method in class org.apache.hadoop.fs.FileSystem
- Returns a URI whose scheme and authority identify this FileSystem.
- getUri() -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Returns a URI whose scheme and authority identify this FileSystem.
- getUri() -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getUri() -
Method in class org.apache.hadoop.fs.HarFileSystem
- Returns the uri of this filesystem.
- getUri() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- getUri() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getUri() -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getUri() -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getURIs(String, String) -
Method in class org.apache.hadoop.streaming.StreamJob
- get the uris of all the files/caches
- getURL() -
Method in class org.apache.hadoop.mapred.JobProfile
- Get the link to the web-ui for details of the job.
- getUrl() -
Static method in class org.apache.hadoop.util.VersionInfo
- Get the subversion URL for the root Hadoop directory.
- getUsed() -
Method in class org.apache.hadoop.fs.DF
-
- getUsed() -
Method in class org.apache.hadoop.fs.DU
-
- getUsed() -
Method in class org.apache.hadoop.fs.FileSystem
- Return the total size of all files in the filesystem.
- getUsedMemory() -
Method in class org.apache.hadoop.mapred.ClusterStatus
- Get the total heap memory used by the
JobTracker
- getUseNewMapper() -
Method in class org.apache.hadoop.mapred.JobConf
- Should the framework use the new context-object code for running
the mapper?
- getUseNewReducer() -
Method in class org.apache.hadoop.mapred.JobConf
- Should the framework use the new context-object code for running
the reducer?
- getUser() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the reported username for this job.
- getUser() -
Method in class org.apache.hadoop.mapred.JobInProgress
- Get the user for the job
- getUser() -
Method in class org.apache.hadoop.mapred.JobProfile
- Get the user id.
- getUser() -
Method in class org.apache.hadoop.mapred.Task
- Get the name of the user running the job/task.
- getUser() -
Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
- Get the Ugi with the username encoded in the token identifier
- getUser() -
Method in class org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier
- Get the username encoded in the token identifier
- getUser() -
Method in class org.apache.hadoop.security.token.TokenIdentifier
- Get the Ugi with the username encoded in the token identifier
- getUser() -
Static method in class org.apache.hadoop.util.VersionInfo
- The user that compiled Hadoop.
- getUserAction() -
Method in class org.apache.hadoop.fs.permission.FsPermission
- Return user
FsAction
.
- getUserDir(String) -
Static method in class org.apache.hadoop.mapred.TaskTracker
-
- getUserLogCleaner() -
Method in class org.apache.hadoop.mapreduce.server.tasktracker.userlogs.UserLogManager
- Get
UserLogCleaner
.
- getUserLogDir() -
Static method in class org.apache.hadoop.mapred.TaskLog
-
- getUserName() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
- Return user name
- getUserName(JobConf) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Get the user name from the job conf
- getUsername() -
Method in class org.apache.hadoop.mapred.JobStatus
-
- getUserName() -
Method in class org.apache.hadoop.security.authentication.client.PseudoAuthenticator
- Returns the current user name.
- getUserName() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationToken
- Returns the user name.
- getUserName() -
Method in class org.apache.hadoop.security.UserGroupInformation
- Get the user's full principal name.
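For example (a minimal sketch; the names in the comments are illustrative):

    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    String principal = ugi.getUserName();        // e.g. "alice/host.example.com@EXAMPLE.COM"
    String shortName = ugi.getShortUserName();   // e.g. "alice"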
- getUsersForNetgroup(String) -
Method in class org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping
- Calls a JNI function to get users for a netgroup; since the C functions
are not reentrant, we need to make this synchronized (see
documentation for setnetgrent, getnetgrent and endnetgrent)
- getUsersForNetgroupCommand(String) -
Static method in class org.apache.hadoop.util.Shell
- a Unix command to get a given netgroup's user list
- getUserToGroupsMappingService() -
Static method in class org.apache.hadoop.security.Groups
- Get the groups being used to map user-to-groups.
- getUserToGroupsMappingService(Configuration) -
Static method in class org.apache.hadoop.security.Groups
-
- getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
-
- getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
-
- getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- getValByRegex(String) -
Method in class org.apache.hadoop.conf.Configuration
- Get keys matching the given regex
- getValidity() -
Method in class org.apache.hadoop.security.authentication.server.AuthenticationFilter
- Returns the validity time of the generated tokens.
- getValue(BytesWritable) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy the value into BytesWritable.
- getValue(byte[]) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy value into user-supplied buffer.
- getValue(byte[], int) -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Copy value into user-supplied buffer.
- getValue() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
- Gets the current raw value
- getValue() -
Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
- Gets the current raw value.
- getValue() -
Method in class org.apache.hadoop.mapreduce.Counter
- What is the current value of this counter?
- getValue() -
Method in enum org.apache.hadoop.mapreduce.JobStatus.State
-
- getValue() -
Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getValue() -
Method in class org.apache.hadoop.typedbytes.TypedBytesWritable
- Get the typed bytes as a Java object.
- getValue() -
Method in class org.apache.hadoop.util.DataChecksum
-
- getValue() -
Method in enum org.apache.hadoop.util.ProcessTree.Signal
-
- getValueClass() -
Method in class org.apache.hadoop.io.ArrayWritable
-
- getValueClass() -
Method in class org.apache.hadoop.io.MapFile.Reader
- Returns the class of values in this file.
- getValueClass() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the class of values in this file.
- getValueClass() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
- Returns the class of values in this file.
- getValueClass() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
- The class of value that must be passed to
SequenceFileRecordReader.next(Object, Object)
.
- getValueClassName() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the name of the value class.
- getValueClassName() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Retrieve the name of the value class for this SequenceFile.
- getValueClassName() -
Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Retrieve the name of the value class for this SequenceFile.
- getValueLength() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Get the length of the value.
- getValues() -
Method in class org.apache.hadoop.mapreduce.ReduceContext
- Iterate through the values for the current key, reusing the same value
object, which is stored in the context.
- getValueStream() -
Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
- Stream access to value.
- getValueTypeID() -
Method in class org.apache.hadoop.record.meta.MapTypeID
- get the TypeID of the map's value element
- getVectorSize() -
Method in class org.apache.hadoop.util.bloom.BloomFilter
-
- getVersion() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
- Get the version number of the entire index.
- getVersion() -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- getVersion() -
Method in class org.apache.hadoop.io.VersionedWritable
- Return the version number of the current implementation.
- getVersion() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- getVersion() -
Method in interface org.apache.hadoop.mapred.JobTrackerMXBean
-
- getVersion() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- getVersion() -
Method in interface org.apache.hadoop.mapred.TaskTrackerMXBean
-
- getVersion() -
Static method in class org.apache.hadoop.util.VersionInfo
- Get the Hadoop version.
- getVIntSize(long) -
Static method in class org.apache.hadoop.io.WritableUtils
- Get the encoded length if an integer is stored in a variable-length format
- getVIntSize(long) -
Static method in class org.apache.hadoop.record.Utils
- Get the encoded length if an integer is stored in a variable-length format
- getVirtualMemorySize() -
Method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
- Deprecated. Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() -
Method in class org.apache.hadoop.util.LinuxResourceCalculatorPlugin
- Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() -
Method in class org.apache.hadoop.util.MemoryCalculatorPlugin
- Deprecated. Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin
- Obtain the total size of the virtual memory present in the system.
- getVirtualMemorySize() -
Method in class org.apache.hadoop.util.ResourceCalculatorPlugin.ProcResourceValues
- Obtain the virtual memory size used by a current process tree.
- getWaitingJobList() -
Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getWaitingJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getWarn() -
Static method in class org.apache.hadoop.log.metrics.EventCounter
-
- getWebAppsPath() -
Method in class org.apache.hadoop.http.HttpServer
- Get the pathname to the webapps files.
- getWeight() -
Method in class org.apache.hadoop.util.bloom.Key
-
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.FileSystem
- Get the current working directory for the given file system
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Get the current working directory for the given file system
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.HarFileSystem
- return the top level archive.
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- getWorkingDirectory() -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- getWorkingDirectory() -
Method in class org.apache.hadoop.mapred.JobConf
- Get the current working directory for the default file system.
- getWorkingDirectory() -
Method in class org.apache.hadoop.mapreduce.JobContext
- Get the current working directory for the default file system.
- getWorkOutputPath(JobConf) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Get the
Path
to the task's temporary output directory
for the map-reduce job
Tasks' Side-Effect Files
- getWorkOutputPath(TaskInputOutputContext<?, ?, ?, ?>) -
Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
- Get the
Path
to the task's temporary output directory
for the map-reduce job
Tasks' Side-Effect Files
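A sketch of writing a side-effect file from within a new-API task; context is assumed to be the task's TaskInputOutputContext, and the file name is illustrative:

    Path workDir = FileOutputFormat.getWorkOutputPath(context);
    FileSystem fs = workDir.getFileSystem(context.getConfiguration());
    FSDataOutputStream side = fs.create(new Path(workDir, "side-effect.txt"));  // illustrative name
    side.writeBytes("extra output\n");
    side.close();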
- getWorkPath() -
Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
- Get the directory that the task should write results into
- getWrappedStream() -
Method in class org.apache.hadoop.fs.FSDataOutputStream
-
- getWriteOps() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Get the number of file system write operations such as create, append,
rename, etc.
- getZlibCompressor(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate implementation of the zlib compressor.
- getZlibCompressorType(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate type of the zlib compressor.
- getZlibDecompressor(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate implementation of the zlib decompressor.
- getZlibDecompressorType(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate type of the zlib decompressor.
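A minimal sketch of obtaining a zlib compressor/decompressor pair from the factory:

    Configuration conf = new Configuration();
    Compressor compressor = ZlibFactory.getZlibCompressor(conf);
    Decompressor decompressor = ZlibFactory.getZlibDecompressor(conf);
    // the concrete types depend on whether the native zlib library is loadable
    System.out.println(ZlibFactory.getZlibCompressorType(conf) + " / "
                       + ZlibFactory.getZlibDecompressorType(conf));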
- GlobFilter - Class in org.apache.hadoop.metrics2.filter
- A glob pattern filter for metrics
- GlobFilter() -
Constructor for class org.apache.hadoop.metrics2.filter.GlobFilter
-
- GlobPattern - Class in org.apache.hadoop.fs
- A class for POSIX glob pattern with brace expansions.
- GlobPattern(String) -
Constructor for class org.apache.hadoop.fs.GlobPattern
- Construct the glob pattern object with a glob pattern string
- globStatus(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Return all the files that match filePattern and are not checksum
files.
- globStatus(Path, PathFilter) -
Method in class org.apache.hadoop.fs.FileSystem
- Return an array of FileStatus objects whose path names match pathPattern
and is accepted by the user-supplied path filter.
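For example (a sketch; conf is an existing Configuration and the glob pattern is illustrative):

    FileSystem fs = FileSystem.get(conf);
    FileStatus[] matches = fs.globStatus(new Path("/logs/2011-*/part-*"));  // illustrative pattern
    for (FileStatus status : matches) {
      System.out.println(status.getPath());
    }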
- go() -
Method in class org.apache.hadoop.streaming.StreamJob
- Deprecated. use
StreamJob.run(String[])
instead.
- goodClassOrNull(Configuration, String, String) -
Static method in class org.apache.hadoop.streaming.StreamUtil
- It may seem strange to silently switch behaviour when a String
is not a classname; the reason is simplified Usage:
- graylistedTaskTrackers() -
Method in class org.apache.hadoop.mapred.JobTracker
- Get the statuses of the graylisted task trackers in the cluster.
- GREATER_ICOST -
Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
- This constant is accessible by subclasses for historical purposes.
- Grep - Class in org.apache.hadoop.examples
-
- Groups - Class in org.apache.hadoop.security
- A user-to-groups mapping service.
- Groups(Configuration) -
Constructor for class org.apache.hadoop.security.Groups
-
- GT_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- GzipCodec - Class in org.apache.hadoop.io.compress
- This class creates gzip compressors/decompressors.
- GzipCodec() -
Constructor for class org.apache.hadoop.io.compress.GzipCodec
-
- GzipCodec.GzipOutputStream - Class in org.apache.hadoop.io.compress
- A bridge that wraps around a DeflaterOutputStream to make it
a CompressionOutputStream.
- GzipCodec.GzipOutputStream(OutputStream) -
Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
-
- GzipCodec.GzipOutputStream(CompressorStream) -
Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
- Allow children types to put a different type in here.
length
, and
the provided seed value
Object.hashCode()
.Object.hashCode()
.TaskTracker
and
the JobTracker
.
Enum
type, by the specified amount.
IndexedSorter
algorithms.IndexedSortable
items.JobTracker
.
InputStream
implementation that reads from an in-memory
buffer.InputFormat
describes the input-specification for a
Map-Reduce job.InputFormat
describes the input-specification for a
Map-Reduce job.TotalOrderPartitioner
.TotalOrderPartitioner
.InputFormat
.InputFormat
.InputSplit
represents the data to be processed by an
individual Mapper
.InputSplit
represents the data to be processed by an
individual Mapper
.Mapper
that swaps keys and values.Mapper
that swaps keys and values.Iterator
to go through the list of String
key-value pairs in the configuration.
Serialization
for Java Serializable
classes.RawComparator
that uses a JavaSerialization
Deserializer
to deserialize objects that are then compared via
their Comparable
interfaces.JenkinsHash
.
GroupMappingServiceProvider
that invokes libC calls to get the group
memberships of a given user.GroupMappingServiceProvider
that invokes libC calls to get the group
memberships of a given user.JobClient
is the primary interface for the user-job to interact
with the JobTracker
.JobConf
, and connect to the
default JobTracker
.
UserLogEvent
sent when the job completesJobHistoryServer
is responsible for servicing all job history
related requests from client.JobProfile
.
JobProfile
the userid, jobid,
job config-file, job-details url and job name.
JobProfile
the userid, jobid,
job config-file, job-details url and job name.
UserLogEvent
sent when the job starts.UserLogEvent
sent when the jvm finishes.org.apache.hadoop.metrics2
usage.KerberosAuthenticationHandler
implements the Kerberos SPNEGO authentication mechanism for HTTP.KerberosAuthenticator
implements the Kerberos SPNEGO authentication sequence.ArrayFile.Reader.seek(long)
, ArrayFile.Reader.next(Writable)
, or ArrayFile.Reader.get(long,Writable)
.
KeyFieldBasedComparator
.KeyFieldBasedComparator
.InputFormat
for plain text files.InputFormat
for plain text files.RunningJob.killTask(TaskAttemptID, boolean)
SslSocketConnector
to optionally also provide
Kerberos5ized SSL sockets.Krb5AndCertsSslSocketConnector
and provides it the to the servlet
at runtime, setting the principal and short name.StringUtils.limitDecimalTo2(double)
instead.
io.file.buffer.size
specified in the given
Configuration
.
LineReader
instead.LinuxResourceCalculatorPlugin
insteadFile.list()
.
File.listFiles()
.
f
is a file, this method will make a single call to S3.
JobHistory.MapAttempt.logFailed(TaskAttemptID, long, String, String, String)
JobHistory.ReduceAttempt.logFailed(TaskAttemptID, long, String, String, String)
JobHistory.MapAttempt.logFinished(TaskAttemptID, long, String, String, String, Counters)
JobHistory.ReduceAttempt.logFinished(TaskAttemptID, long, long, long, String, String, String, Counters)
JobHistory.JobInfo.logJobInfo(JobID, long, long) instead.
JobHistory.MapAttempt.logKilled(TaskAttemptID, long, String, String, String)
JobHistory.ReduceAttempt.logKilled(TaskAttemptID, long, String, String, String)
JobHistory.JobInfo.logInited(JobID, long, int, int) and JobHistory.JobInfo.logStarted(JobID)
JobHistory.MapAttempt.logStarted(TaskAttemptID, long, String, int, String)
JobHistory.ReduceAttempt.logStarted(TaskAttemptID, long, String, int, String)
JobHistory.JobInfo.logSubmitted(JobID, JobConf, String, long, boolean) instead.
Reducer that sums long values.
LinuxResourceCalculatorPlugin
map(...) methods of the Mappers in the chain.
Mapper.
OutputFormat that writes MapFiles.
JobConf.MAPRED_MAP_TASK_ENV or JobConf.MAPRED_REDUCE_TASK_ENV
JobConf.MAPRED_MAP_TASK_JAVA_OPTS or JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS
JobConf.MAPRED_JOB_MAP_MEMORY_MB_PROPERTY and JobConf.MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY
JobConf.MAPRED_MAP_TASK_ULIMIT or JobConf.MAPRED_REDUCE_TASK_ULIMIT
Mapper and Reducer implementations.
PolicyProvider for Map-Reduce protocols.
Mappers.
MapRunnable implementation.
mark and reset methods, which it does not.
MBeans.register(String, String, Object)
MBeans.
ResourceCalculatorPlugin instead.
SegmentDescriptor
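Several entries above refer to Mapper and Reducer implementations and to a Reducer that sums long values. A minimal sketch of such a pair in the old org.apache.hadoop.mapred API; WordCountMapper and WordCountReducer are hypothetical names:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Token-counting Mapper: emits <word, 1> for every token in a line.
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {
  private static final LongWritable ONE = new LongWritable(1);
  private final Text word = new Text();

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);
    }
  }
}

// Summing Reducer, similar in spirit to the long-summing Reducer described above.
class WordCountReducer extends MapReduceBase
    implements Reducer<Text, LongWritable, Text, LongWritable> {
  public void reduce(Text key, Iterator<LongWritable> values,
                     OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    long sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new LongWritable(sum));
  }
}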
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
MetricsException.
MetricMutableGaugeInt.
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
MetricsRegistry.
org.apache.hadoop.metrics2 usage.
MetricMutableCounterInt.
MetricMutableCounterLong.
MetricMutableGauge.
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
FileSystem.mkdirs(Path, FsPermission) with default permission.
CombineFileInputFormat instead.
CombineFileSplit instead.
MultiFileWordCount.MapClass.
MultiFileInputFormat, one should extend it, to return a (custom) RecordReader.
InputFormat and Mapper for each path
InputFormat and Mapper for each path
IOException into an IOException
OutputCollector passed to the map() and reduce() methods of the Mapper and Reducer implementations.
MurmurHash.
FileSystem for reading and writing files stored on Amazon S3.
true if a preset dictionary is needed for decompression.
false.
true if a preset dictionary is needed for decompression.
Decompressor.setInput(byte[], int, int) should be called to provide more input.
SnappyDecompressor.setInput(byte[], int, int) should be called to provide more input.
Decompressor.setInput(byte[], int, int) should be called to provide more input.
WritableComparable instance.
key and val.
key, skipping its value.
key and val.
SequenceFile.Reader.nextRaw(DataOutputBuffer,SequenceFile.ValueBytes).
key.
DBRecordReader.nextKeyValue()
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
HttpURLConnection.
FSDataInputStream returned.
FileSystem that uses Amazon S3 as a backing store.
FileSystem for reading and writing files on Amazon S3.
JMXJsonServlet class.
org.apache.hadoop.metrics2 usage.
org.apache.hadoop.metrics2 usage.
OutputStream implementation that writes to an in-memory buffer.
<key, value> pairs output by Mappers and Reducers.
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat describes the output-specification for a Map-Reduce job.
OutputFormat describes the output-specification for a Map-Reduce job.
Utils.OutputFileUtils.OutputLogFilter instead.
org.apache.hadoop.metrics2 usage.
FileSystem.
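The FileSystem and FSDataInputStream entries above describe opening a file and getting back a seekable stream. A minimal sketch, assuming the path to read is passed as the first argument:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeekableReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);      // default FileSystem from the configuration
    Path path = new Path(args[0]);             // path supplied by the caller

    FSDataInputStream in = fs.open(path);      // seekable input stream
    try {
      byte[] buf = new byte[128];
      int read = in.read(buf);                 // read from the current position
      in.seek(0);                              // FSDataInputStream supports seek()
      System.out.println("read " + read + " bytes, now back at offset " + in.getPos());
    } finally {
      in.close();
    }
  }
}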
PolicyProvider implementation.
PolicyProvider provides the Service definitions to the security Policy in effect for Hadoop.
QueueProcessingStatistics.preCheckIsLastCycle(int).
PseudoAuthenticationHandler provides a pseudo authentication mechanism that accepts the user name specified as a query string parameter.
PseudoAuthenticator implementation provides an authentication equivalent to Hadoop's Simple authentication, it trusts the value of the 'user.name' Java System property.
FSNamesystem.neededReplications
With a properly throttled queue, a worker thread cycles repeatedly, doing a chunk of work each cycle then resting a bit, until the queue is empty.
QueueProcessingStatistics.
RawComparator.
Comparator that operates directly on byte representations of objects.
RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data.
FsPermission from DataInput.
PermissionStatus from DataInput.
b.length bytes of data from this input stream into an array of bytes.
len bytes of data from this input stream into an array of bytes.
Type.BOOL code.
Type.BYTE code.
Type.BYTES code.
buf at offset and checksum into checksum.
Type.DOUBLE code.
in.
in.
in.
in.
in.
in.
in.
in.
in.
in.
in.
ResultSet.
in.
in.
ResultSet.
in.
in.
CompressedWritable.readFields(DataInput).
Type.FLOAT code.
len bytes from stm
Type.INT code.
StreamKeyValUtil.readLine(LineReader, Text)
Type.LIST code.
Type.LONG code.
Type.MAP code.
Type.MAP code.
Writable, String, primitive type, or an array of the preceding.
Writable, String, primitive type, or an array of the preceding.
Type.BOOL code.
Type.BYTE code.
Type.BYTES code.
Type.DOUBLE code.
Type.FLOAT code.
Type.INT code.
Type.LIST code.
Type.LONG code.
Type.MAP code.
Type.STRING code.
Type.VECTOR code.
Type.STRING code.
Type.
Type.VECTOR code.
Type.VECTOR code.
Record comparison implementation.
RecordReader reads <key, value> pairs from an InputSplit.
Mapper.
RecordWriter writes the output <key, value> pairs to an output file.
RecordWriter writes the output <key, value> pairs to an output file.
reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain.
Reducer.
JobTracker
JobTracker.
Mapper that extracts text matching a regular expression.
TrackerDistributedCacheManager.
job.
RetryPolicy.
Compressor to the pool.
Decompressor to the pool.
Reducer.run(org.apache.hadoop.mapreduce.Reducer.Context) method to control how the reduce task works.
DumpTypedBytes.
LoadTypedBytes.
Tool by Tool.run(String[]), after parsing with the given generic arguments.
Tool with its Configuration.
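The last two entries describe running a Tool with its Configuration after generic-option parsing. A minimal sketch; EchoTool and the example.message property are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// A trivial Tool: ToolRunner parses the generic options (-D, -fs, -jt, ...)
// into the Configuration before run(String[]) is invoked.
public class EchoTool extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    // "example.message" is a hypothetical property used only for illustration.
    System.out.println(conf.get("example.message", "no message set"));
    return 0;
  }

  public static void main(String[] args) throws Exception {
    int exitCode = ToolRunner.run(new Configuration(), new EchoTool(), args);
    System.exit(exitCode);
  }
}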
RunningJob is the user-interface to query for details on a running Map-Reduce job.
FileSystem backed by Amazon S3.
S3FileSystem.
DNSToSwitchMapping interface using a script configured via topology.script.file.name.
nth value.
SequenceFiles are flat files consisting of binary key/value pairs.
SequenceFile.
RawComparator.
OutputFormat that writes keys, values to SequenceFiles in binary(raw) format
OutputFormat that writes keys, values to SequenceFiles in binary(raw) format
InputFormat for SequenceFiles.
InputFormat for SequenceFiles.
OutputFormat that writes SequenceFiles.
OutputFormat that writes SequenceFiles.
RecordReader for SequenceFiles.
RecordReader for SequenceFiles.
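The SequenceFile entries above describe flat files of binary key/value pairs and their readers and writers. A minimal sketch, assuming the pre-2.x createWriter(FileSystem, Configuration, Path, keyClass, valueClass) form and a hypothetical path example.seq:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("example.seq");   // hypothetical path

    // Write a few binary key/value pairs.
    SequenceFile.Writer writer =
        SequenceFile.createWriter(fs, conf, path, IntWritable.class, Text.class);
    try {
      for (int i = 0; i < 3; i++) {
        writer.append(new IntWritable(i), new Text("value-" + i));
      }
    } finally {
      writer.close();
    }

    // Read them back in insertion order.
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      IntWritable key = new IntWritable();
      Text val = new Text();
      while (reader.next(key, val)) {
        System.out.println(key + "\t" + val);
      }
    } finally {
      reader.close();
    }
  }
}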
Serializer/Deserializer pair.
Serializations.
io.serializations property from conf, which is a comma-delimited list of classnames.
t to the underlying output stream.
OutputStream.
CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION instead.
value of the name property.
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework after invoking the reduce function.
name property to a boolean.
name property to the name of a theClass implementing the given interface xface.
SequenceFile.CompressionType while creating the SequenceFile or SequenceFileOutputFormat.setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType) to specify the SequenceFile.CompressionType for job-outputs.
or
Authenticator class to use when an AuthenticatedURL instance is created without specifying an authenticator.
name property to the given type.
name property to a float.
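The setter entries above (boolean, class, float, and the int and long entries that follow) correspond to Configuration's typed set/get methods. A minimal sketch; every property name below is hypothetical:

import org.apache.hadoop.conf.Configuration;

public class ConfTypedPropsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // The property names below are hypothetical, used only for illustration.
    conf.setBoolean("example.feature.enabled", true);
    conf.setInt("example.buffer.size", 4096);
    conf.setLong("example.max.bytes", 1L << 30);
    conf.setFloat("example.sample.rate", 0.25f);
    conf.set("example.name", "demo");

    // Read them back, each with a default used when the property is unset.
    boolean enabled = conf.getBoolean("example.feature.enabled", false);
    int bufferSize  = conf.getInt("example.buffer.size", 1024);
    long maxBytes   = conf.getLong("example.max.bytes", 0L);
    float rate      = conf.getFloat("example.sample.rate", 1.0f);
    String name     = conf.get("example.name", "unknown");

    System.out.println(enabled + " " + bufferSize + " " + maxBytes + " " + rate + " " + name);
  }
}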
Reducer.reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context)
InputFormat implementation for the map-reduce job.
InputFormat for the job.
Paths as the list of inputs for the map-reduce job.
Paths as the list of inputs for the map-reduce job.
InputWriter class.
name property to an int.
JobPriority for this job.
KeyFieldBasedComparator options used to compare keys.
KeyFieldBasedComparator options used to compare keys.
KeyFieldBasedPartitioner options used for Partitioner
KeyFieldBasedPartitioner options used for Partitioner
bytes[offset:] in Python syntax.
name property to a long.
CompressionCodec for the map outputs.
Mapper class for the job.
Mapper for the job.
Job.setAssignedJobID(JobID) instead.
MapRunnable class for the job.
JobConf.setMemoryForMapTask(long mem) and JobConf.setMemoryForReduceTask(long mem)
bytes[left:(right+1)] in Python syntax.
OutputCommitter implementation for the map-reduce job.
SequenceFile.CompressionType for the output SequenceFile.
SequenceFile.CompressionType for the output SequenceFile.
CompressionCodec to be used to compress job outputs.
CompressionCodec to be used to compress job outputs.
OutputFormat implementation for the map-reduce job.
OutputFormat for the job.
RawComparator comparator used to compare keys.
Path of the output directory for the map-reduce job.
Path of the output directory for the map-reduce job.
OutputReader class.
RawComparator comparator for grouping keys in the input to the reduce.
Partitioner class used to partition Mapper-outputs to be sent to the Reducers.
Partitioner for the job.
Reducer class for the job.
Reducer for the job.
bytes[:(offset+1)] in Python syntax.
SequenceFile
SequenceFile
SequenceFile
SequenceFile
Reducer.
TaskTrackerStatus of the TaskTracker.
name property as comma delimited values.
TaskCompletionEvent.setTaskID(TaskAttemptID) instead.
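Many of the entries above are JobConf setters for output compression, comparators and partitioning. A minimal sketch combining a few of them; the key-field options shown (-k2,2n, -k1,1) are illustrative, not prescribed by the index:

import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.lib.KeyFieldBasedComparator;
import org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner;

public class JobOutputConfigExample {
  public static JobConf configure() {
    JobConf conf = new JobConf();

    // Compress map outputs and block-compress the SequenceFile job outputs.
    conf.setCompressMapOutput(true);
    conf.setMapOutputCompressorClass(GzipCodec.class);
    SequenceFileOutputFormat.setOutputCompressionType(conf, SequenceFile.CompressionType.BLOCK);

    // Write SequenceFile outputs, sort numerically on field 2, partition on field 1.
    conf.setOutputFormat(SequenceFileOutputFormat.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    conf.setKeyFieldComparatorOptions("-k2,2n");   // illustrative option string
    conf.setKeyFieldPartitionerOptions("-k1,1");   // illustrative option string
    conf.setOutputKeyComparatorClass(KeyFieldBasedComparator.class);
    conf.setPartitionerClass(KeyFieldBasedPartitioner.class);

    return conf;
  }
}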
GroupMappingServiceProvider that exec's the groups shell command to fetch the group memberships of a given user.
GroupMappingServiceProvider that exec's the groups shell command to fetch the group memberships of a given user.
Signer when a string signature is invalid.
n bytes of data from the input stream.
n bytes of input from the bytes that can be read from this input stream without blocking.
Compressor based on the snappy compression algorithm.
Decompressor based on the snappy compression algorithm.
IndexedSorter.sort(IndexedSortable,int,int), but indicate progress periodically.
IndexedSorter.sort(IndexedSortable,int,int), but indicate progress periodically.
IndexedSorter.sort(IndexedSortable,int,int), but indicate progress periodically.
StreamKeyValUtil.splitKeyVal(byte[], int, int, Text, Text, int, int)
StreamKeyValUtil.splitKeyVal(byte[], int, int, Text, Text, int)
StreamKeyValUtil.splitKeyVal(byte[], Text, Text, int, int)
StreamKeyValUtil.splitKeyVal(byte[], Text, Text, int)
fileName attribute, if specified.
StreamJob.setConf(Configuration) and run with StreamJob.run(String[]).
Submitter.runJob(JobConf)
TaskID.
TaskID.
TrackerDistributedCacheManager that represents the cached files of a single job.
JobID.
JobID.
TaskTracker as seen by the JobTracker.
TaskTracker.
InputFormat for plain text files.
InputFormat for plain text files.
OutputFormat that writes plain text files.
OutputFormat that writes plain text files.
TaskInProgress as seen by the JobTracker.
List<T> to an array of T[].
List<T> to an array of T[].
Mapper that maps text values into Tools.
FileChannel.transferTo(long, long, WritableByteChannel).
void methods, or by re-throwing the exception for non-void methods.
Writables.
charToEscape in the string with the escape char escapeChar
TaskTracker which were reserved for taskType.
org.apache.hadoop.metrics2 usage.
JobTracker is locked on entry.
TaskTracker to the UserLogManager to inform about an event.
TaskTracker.
TaskTracker.
TaskTracker.
UTF8ByteArrayUtils and StreamKeyValUtil instead.
S3FileSystem.
VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
DataInput and DataOutput.
Writable which is also Comparable.
WritableComparables.
WritableComparable implementation.
Serialization for Writables that delegates to Writable.write(java.io.DataOutput) and Writable.readFields(java.io.DataInput).
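The Writable and WritableComparable entries above describe types serialized via DataInput/DataOutput and ordered via Comparable. A minimal sketch of a WritableComparable implementation; IntPair is a hypothetical example type:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// Hypothetical example type: a pair of ints usable as a MapReduce key.
public class IntPair implements WritableComparable<IntPair> {
  private int first;
  private int second;

  public IntPair() {}                       // no-arg constructor required for deserialization
  public IntPair(int first, int second) { this.first = first; this.second = second; }

  public void write(DataOutput out) throws IOException {
    out.writeInt(first);
    out.writeInt(second);
  }

  public void readFields(DataInput in) throws IOException {
    first = in.readInt();
    second = in.readInt();
  }

  public int compareTo(IntPair other) {
    int cmp = Integer.compare(first, other.first);
    return cmp != 0 ? cmp : Integer.compare(second, other.second);
  }

  @Override public int hashCode() { return 31 * first + second; }
  @Override public boolean equals(Object o) {
    return o instanceof IntPair && ((IntPair) o).first == first && ((IntPair) o).second == second;
  }
  @Override public String toString() { return first + "\t" + second; }
}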
out.
len bytes from the specified byte array starting at offset off and generate a checksum for each data chunk.
out.
out.
out.
PermissionStatus from its base components.
out.
out.
out.
out.
out.
out.
PreparedStatement.
out.
out.
PreparedStatement.
out.
b.length bytes from the specified byte array to this output stream.
len bytes from the specified byte array starting at offset off to this output stream.
out.
CompressedWritable.write(DataOutput).
Writable, String, primitive type, or an array of the preceding.
OutputStream.
Compressor based on the popular zlib compression algorithm.
Decompressor based on the popular zlib compression algorithm.