org.apache.hadoop |
|
org.apache.hadoop.ant |
|
org.apache.hadoop.ant.condition |
|
org.apache.hadoop.classification |
|
org.apache.hadoop.conf |
Configuration of system parameters.
|
org.apache.hadoop.contrib.bkjournal |
|
org.apache.hadoop.contrib.utils.join |
|
org.apache.hadoop.crypto |
|
org.apache.hadoop.crypto.key |
|
org.apache.hadoop.crypto.key.kms |
|
org.apache.hadoop.crypto.key.kms.server |
|
org.apache.hadoop.crypto.random |
|
org.apache.hadoop.examples |
Hadoop example code.
|
org.apache.hadoop.examples.dancing |
This package contains a distributed implementation of Knuth's dancing links
algorithm that can run under Hadoop.
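The pointer trick at the heart of dancing links is small enough to sketch outside Hadoop: a node unlinked from a circular doubly linked list keeps its own left/right pointers, so backtracking can restore it in constant time. A hypothetical standalone illustration (the class name is ours, not the package's):

```java
// Hypothetical illustration of Knuth's dancing-links trick: unlink()
// removes a node but leaves the node's own pointers intact, so relink()
// can undo the removal in O(1) during backtracking.
public class DancingNode {
    DancingNode left = this, right = this; // starts as a 1-node circle
    final int value;

    DancingNode(int value) { this.value = value; }

    // splice this node into the list immediately after 'other'
    void insertAfter(DancingNode other) {
        right = other.right;
        left = other;
        other.right.left = this;
        other.right = this;
    }

    void unlink() { left.right = right; right.left = left; } // "cover"
    void relink() { left.right = this; right.left = this; }  // "uncover"
}
```

Algorithm X covers and uncovers whole columns of a sparse matrix this way while searching for an exact cover.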
|
org.apache.hadoop.examples.pi |
This package consists of a map/reduce application,
distbbp,
which computes exact binary digits of the mathematical constant π.
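The series behind distbbp is a BBP-type formula. As a hypothetical single-machine sketch (the real job extracts individual bit positions and distributes the work; here we just sum the plain series, and the class name is illustrative):

```java
// Sums the Bailey-Borwein-Plouffe series:
//   pi = sum_{k>=0} 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
// Each term contributes roughly one more hex digit of precision.
public class BbpPi {
    static double bbp(int terms) {
        double pi = 0.0;
        for (int k = 0; k < terms; k++) {
            pi += Math.pow(16, -k)
                * (4.0 / (8 * k + 1) - 2.0 / (8 * k + 4)
                 - 1.0 / (8 * k + 5) - 1.0 / (8 * k + 6));
        }
        return pi;
    }

    public static void main(String[] args) {
        System.out.println(bbp(10)); // converges rapidly toward Math.PI
    }
}
```

The property that makes the formula map/reduce-friendly is that a digit at a given position can be computed without computing the digits before it, so positions can be partitioned across tasks.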
|
org.apache.hadoop.examples.pi.math |
This package provides useful mathematical library classes
for the distbbp program.
|
org.apache.hadoop.examples.terasort |
This package consists of three map/reduce applications that let Hadoop
compete in the annual terabyte sort
competition.
|
org.apache.hadoop.filecache |
|
org.apache.hadoop.fs |
An abstract file system API.
|
org.apache.hadoop.fs.crypto |
|
org.apache.hadoop.fs.ftp |
|
org.apache.hadoop.fs.http.client |
|
org.apache.hadoop.fs.http.server |
|
org.apache.hadoop.fs.permission |
|
org.apache.hadoop.fs.s3 |
A distributed, block-based implementation of FileSystem that uses Amazon S3
as a backing store.
|
org.apache.hadoop.fs.s3a |
|
org.apache.hadoop.fs.s3native |
|
org.apache.hadoop.fs.swift.auth |
|
org.apache.hadoop.fs.swift.auth.entities |
|
org.apache.hadoop.fs.swift.exceptions |
|
org.apache.hadoop.fs.swift.http |
|
org.apache.hadoop.fs.swift.snative |
|
org.apache.hadoop.fs.swift.util |
|
org.apache.hadoop.fs.viewfs |
|
org.apache.hadoop.ha |
|
org.apache.hadoop.ha.proto |
|
org.apache.hadoop.ha.protocolPB |
|
org.apache.hadoop.http.lib |
This package provides user-selectable (via configuration) classes that add
functionality to the web UI.
|
org.apache.hadoop.io |
Generic I/O code for reading and writing data to the network,
to databases, and to files.
|
org.apache.hadoop.io.compress |
|
org.apache.hadoop.io.file.tfile |
|
org.apache.hadoop.io.serializer |
This package provides a mechanism for using different serialization frameworks
in Hadoop.
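The mechanism can be sketched, Hadoop-free, as a factory that tries each registered framework in turn until one accepts the class (Hadoop's own SerializationFactory is driven by the io.serializations configuration key; the class names below are hypothetical):

```java
// Hypothetical sketch of a pluggable-serialization factory: frameworks
// are tried in registration order, and the first one whose accept()
// returns true for the class is used.
import java.util.ArrayList;
import java.util.List;

interface Serialization {
    boolean accept(Class<?> c);
    byte[] serialize(Object o);
}

class StringSerialization implements Serialization {
    public boolean accept(Class<?> c) { return c == String.class; }
    public byte[] serialize(Object o) { return ((String) o).getBytes(); }
}

public class SerializationFactorySketch {
    private final List<Serialization> frameworks = new ArrayList<>();

    public void register(Serialization s) { frameworks.add(s); }

    // first framework that accepts the class wins
    public Serialization getSerialization(Class<?> c) {
        for (Serialization s : frameworks)
            if (s.accept(c)) return s;
        throw new IllegalArgumentException("no serialization for " + c);
    }
}
```

Keeping the lookup order configurable is what lets Writable, Avro, and other frameworks coexist in one job.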
|
org.apache.hadoop.io.serializer.avro |
This package provides Avro serialization in Hadoop.
|
org.apache.hadoop.ipc.proto |
|
org.apache.hadoop.ipc.protobuf |
|
org.apache.hadoop.ipc.protocolPB |
|
org.apache.hadoop.jmx |
This package provides access to JMX primarily through the
JMXJsonServlet class.
|
org.apache.hadoop.lib.lang |
|
org.apache.hadoop.lib.server |
|
org.apache.hadoop.lib.service |
|
org.apache.hadoop.lib.service.hadoop |
|
org.apache.hadoop.lib.service.instrumentation |
|
org.apache.hadoop.lib.service.scheduler |
|
org.apache.hadoop.lib.service.security |
|
org.apache.hadoop.lib.servlet |
|
org.apache.hadoop.lib.util |
|
org.apache.hadoop.lib.wsrs |
|
org.apache.hadoop.log |
|
org.apache.hadoop.log.metrics |
|
org.apache.hadoop.maven.plugin.protoc |
|
org.apache.hadoop.maven.plugin.util |
|
org.apache.hadoop.maven.plugin.versioninfo |
|
org.apache.hadoop.metrics |
This package defines an API for reporting performance metric information.
|
org.apache.hadoop.metrics.file |
Implementation of the metrics package that writes the metrics to a file.
|
org.apache.hadoop.metrics.ganglia |
Implementation of the metrics package that sends metric data to
Ganglia.
|
org.apache.hadoop.metrics.spi |
The Service Provider Interface for the Metrics API.
|
org.apache.hadoop.metrics2 |
Metrics 2.0
|
org.apache.hadoop.metrics2.annotation |
Annotation interfaces for metrics instrumentation.
|
org.apache.hadoop.metrics2.filter |
Built-in metrics filters (to be used in metrics config files).
|
org.apache.hadoop.metrics2.lib |
A collection of library classes for implementing metrics sources.
|
org.apache.hadoop.metrics2.sink |
Built-in metrics sinks.
|
org.apache.hadoop.metrics2.sink.ganglia |
|
org.apache.hadoop.metrics2.source |
|
org.apache.hadoop.metrics2.util |
General helpers for implementing sources and sinks.
|
org.apache.hadoop.minikdc |
|
org.apache.hadoop.mount |
|
org.apache.hadoop.net |
Network-related classes.
|
org.apache.hadoop.net.unix |
|
org.apache.hadoop.nfs |
|
org.apache.hadoop.nfs.nfs3 |
|
org.apache.hadoop.nfs.nfs3.request |
|
org.apache.hadoop.nfs.nfs3.response |
|
org.apache.hadoop.oncrpc |
|
org.apache.hadoop.oncrpc.security |
|
org.apache.hadoop.portmap |
|
org.apache.hadoop.record |
(DEPRECATED) Hadoop record I/O contains classes and a record description language
translator for simplifying serialization and deserialization of records in a
language-neutral manner.
|
org.apache.hadoop.record.compiler |
(DEPRECATED) This package contains classes needed for code generation
by the Hadoop record compiler.
|
org.apache.hadoop.record.compiler.ant |
|
org.apache.hadoop.record.compiler.generated |
(DEPRECATED) This package contains code generated by JavaCC from the
Hadoop record syntax file rcc.jj.
|
org.apache.hadoop.record.meta |
|
org.apache.hadoop.registry.cli |
|
org.apache.hadoop.registry.client.api |
YARN Registry Client API.
|
org.apache.hadoop.registry.client.binding |
Registry binding utility classes.
|
org.apache.hadoop.registry.client.exceptions |
Registry service exceptions.
|
org.apache.hadoop.registry.client.impl |
Registry client services.
|
org.apache.hadoop.registry.client.impl.zk |
Core ZooKeeper support.
|
org.apache.hadoop.registry.client.types |
This package contains all the data types which can be saved to the registry
and/or marshalled to and from JSON.
|
org.apache.hadoop.registry.client.types.yarn |
|
org.apache.hadoop.registry.server |
Server-side classes for the registry.
|
org.apache.hadoop.registry.server.integration |
This package contains the classes which integrate with the YARN resource
manager.
|
org.apache.hadoop.registry.server.services |
Basic services for the YARN registry.
The RegistryAdminService
extends the shared YARN registry client with registry setup and
(potentially asynchronous) administrative actions.
|
org.apache.hadoop.security |
|
org.apache.hadoop.security.alias |
|
org.apache.hadoop.security.authentication.client |
|
org.apache.hadoop.security.authentication.examples |
|
org.apache.hadoop.security.authentication.server |
|
org.apache.hadoop.security.authentication.util |
|
org.apache.hadoop.security.proto |
|
org.apache.hadoop.security.protocolPB |
|
org.apache.hadoop.security.ssl |
|
org.apache.hadoop.security.token.delegation.web |
|
org.apache.hadoop.service |
|
org.apache.hadoop.streaming |
Hadoop Streaming is a utility which allows users to create and run
Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer.
|
org.apache.hadoop.streaming.io |
|
org.apache.hadoop.tools.mapred |
|
org.apache.hadoop.tools.mapred.lib |
|
org.apache.hadoop.tools.proto |
|
org.apache.hadoop.tools.protocolPB |
|
org.apache.hadoop.tools.rumen |
Rumen is a data extraction and analysis tool built for
Apache Hadoop.
|
org.apache.hadoop.tools.rumen.anonymization |
|
org.apache.hadoop.tools.rumen.datatypes |
|
org.apache.hadoop.tools.rumen.datatypes.util |
|
org.apache.hadoop.tools.rumen.serializers |
|
org.apache.hadoop.tools.rumen.state |
|
org.apache.hadoop.tools.util |
|
org.apache.hadoop.tracing |
|
org.apache.hadoop.typedbytes |
Typed bytes are sequences of bytes in which the first byte is a type code.
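As a hypothetical standalone sketch of that layout (the one-byte code 3 matches the typed bytes code for a 32-bit integer, followed by a big-endian payload; the class name is illustrative):

```java
// Minimal sketch of the typed bytes wire format for one type: a one-byte
// type code, then the value's raw bytes (big-endian for a 32-bit int).
public class TypedBytesSketch {
    static final byte TYPE_INT = 3; // typed bytes code for a 32-bit int

    static byte[] writeInt(int v) {
        return new byte[] {
            TYPE_INT,                               // type code first...
            (byte) (v >>> 24), (byte) (v >>> 16),   // ...then the value,
            (byte) (v >>> 8),  (byte) v             // most significant byte first
        };
    }

    static int readInt(byte[] data) {
        if (data[0] != TYPE_INT)
            throw new IllegalArgumentException("unexpected type code " + data[0]);
        return ((data[1] & 0xff) << 24) | ((data[2] & 0xff) << 16)
             | ((data[3] & 0xff) << 8)  |  (data[4] & 0xff);
    }
}
```

Because every value announces its own type, a stream of typed bytes can be decoded by non-Java clients, which is what makes the format useful for streaming jobs.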
|
org.apache.hadoop.util |
Common utilities.
|
org.apache.hadoop.util.bloom |
|
org.apache.hadoop.util.hash |
|