Chapter 2. Apache HBase (TM) Configuration

Table of Contents

2.1. Basic Prerequisites
2.1.1. Java
2.1.2. Operating System
2.1.3. Hadoop
2.2. HBase run modes: Standalone and Distributed
2.2.1. Standalone HBase
2.2.2. Distributed
2.2.3. Running and Confirming Your Installation
2.3. Configuration Files
2.3.1. hbase-site.xml and hbase-default.xml
2.3.2. hbase-env.sh
2.3.3. log4j.properties
2.3.4. Client configuration and dependencies connecting to an HBase cluster
2.4. Example Configurations
2.4.1. Basic Distributed HBase Install
2.5. The Important Configurations
2.5.1. Required Configurations
2.5.2. Recommended Configurations
2.5.3. Other Configurations

This chapter is the Not-So-Quick start guide to Apache HBase (TM) configuration. It goes over system requirements, Hadoop setup, the different Apache HBase run modes, and the various configurations in HBase. Please read this chapter carefully. At a minimum, ensure that all requirements in Section 2.1, “Basic Prerequisites” have been satisfied. Failure to do so will cause you (and us) grief debugging strange errors and/or data loss.

Apache HBase uses the same configuration system as Apache Hadoop. To configure a deployment, edit a file of environment variables in conf/hbase-env.sh -- this configuration is used mostly by the launcher shell scripts getting the cluster off the ground -- and then add configuration to an XML file, conf/hbase-site.xml, to do things like override HBase defaults, tell HBase what filesystem to use, and specify the location of the ZooKeeper ensemble [1] .
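
As a minimal sketch of what such an hbase-site.xml might look like for a distributed deploy -- the filesystem URI and ZooKeeper hostnames below are placeholders you would replace with your own:

  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://namenode.example.org:8020/hbase</value>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>zk1.example.org,zk2.example.org,zk3.example.org</value>
    </property>
  </configuration>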

When running in distributed mode, after you make an edit to an HBase configuration, make sure you copy the content of the conf directory to all nodes of the cluster. HBase will not do this for you. Use rsync.
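
For example, from the HBase install directory on the node where you made the edit, something along the following lines -- the hostname and path are placeholders -- pushes the configuration to one node; repeat (or script a loop) for each node in the cluster:

  rsync -az conf/ regionserver1.example.org:/usr/local/hbase/conf/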

2.1. Basic Prerequisites

This section lists required services and some required system configuration.

2.1.1. Java

Just like Hadoop, HBase requires at least Java 6 from Oracle.

2.1.2. Operating System

2.1.2.1. ssh

ssh must be installed and sshd must be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons. You must be able to ssh to all nodes, including your local node, using passwordless login (Google "ssh passwordless login"). If on Mac OS X, see the section, SSH: Setting up Remote Desktop and Enabling Self-Login on the Hadoop wiki.
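
As a minimal sketch, passwordless login is typically set up along the following lines (the user and hostname are placeholders; skip the key generation if you already have a key):

  # generate a key pair with an empty passphrase
  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
  # copy the public key to each node, including the local node
  ssh-copy-id hbase@regionserver1.example.org
  # confirm you can log in without being prompted for a password
  ssh hbase@regionserver1.example.org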

2.1.2.2. DNS

HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolving must work in versions of HBase previous to 0.92.0 [2].

If your machine has multiple interfaces, HBase will use the interface that the primary hostname resolves to.

If this is insufficient, you can set hbase.regionserver.dns.interface to indicate the primary interface. This only works if your cluster configuration is consistent and every host has the same network interface configuration.

Another alternative is setting hbase.regionserver.dns.nameserver to choose a different nameserver than the system-wide default.
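
A sketch of how these properties might look in hbase-site.xml -- the interface name and nameserver address are placeholders for your own values:

  <property>
    <name>hbase.regionserver.dns.interface</name>
    <value>eth0</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>192.168.1.53</value>
  </property>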

2.1.2.3. Loopback IP

HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions default to 127.0.1.1 for the local hostname in /etc/hosts, and this will cause problems for you.
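
For example, on Ubuntu an /etc/hosts along the following lines -- the hostname is a placeholder for your own -- avoids the problem:

  127.0.0.1 localhost
  127.0.0.1 ubuntu.ubuntu-domain ubuntu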

2.1.2.4. NTP

The clocks on cluster members should be in basic alignment. Some skew is tolerable but wild skew could generate odd behaviors. Run NTP on your cluster, or an equivalent.

If you are having problems querying data, or "weird" cluster operations, check system time!
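
As a quick sketch on a Debian or Ubuntu style system (package names and commands vary by distribution), you might install NTP and confirm it is synchronizing like so:

  # install and start the NTP daemon
  sudo apt-get install ntp
  # confirm the daemon is tracking its peers
  ntpq -p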

2.1.2.5.  ulimit and nproc

Apache HBase is a database. It uses a lot of files all at the same time. The default ulimit -n -- i.e. the user file limit -- of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will lead you to Section 12.9.2.2, “java.io.IOException...(Too many open files)”. You may also notice errors such as...

      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
      

Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)

You should also up the HBase user's nproc setting; under load, a low nproc setting could manifest as OutOfMemoryError [3] [4].

To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but for whatever reason, HBase will be running as someone else. HBase prints the ulimit it is seeing as the first line in its logs. Ensure it is correct. [5]
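
A quick sanity check, assuming HBase runs as a user named hbase (substitute whatever user actually runs your HBase process), is to query the limits as that user:

  # print the open-file and max-user-process limits the hbase user will see
  su - hbase -c 'ulimit -n -u'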

2.1.2.5.1. ulimit on Ubuntu

If you are on Ubuntu you will need to make the following changes:

In the file /etc/security/limits.conf add a line like:

hadoop  -       nofile  32768

Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need 2 entries, one for each user. In the same file set nproc hard and soft limits. For example:

hadoop  soft    nproc   32000
hadoop  hard    nproc   32000

In the file /etc/pam.d/common-session add as the last line in the file:

session required  pam_limits.so

Otherwise the changes in /etc/security/limits.conf won't be applied.

Don't forget to log out and back in again for the changes to take effect!

2.1.2.6. Windows

Apache HBase has had little testing on Windows. Running a production install of HBase on top of Windows is not recommended.

If you are running HBase on Windows, you must install Cygwin to have a *nix-like environment for the shell scripts. The full details are explained in the Windows Installation guide. Also search our user mailing list to pick up the latest fixes figured out by Windows users.

2.1.3. Hadoop

Selecting a Hadoop version is critical for your HBase deployment. The table below shows which versions of Hadoop are supported by which HBase versions. Based on the version of HBase, you should select the most appropriate version of Hadoop. We are not in the Hadoop distro selection business. You can use Hadoop distributions from Apache, or learn about vendor distributions of Hadoop at http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support

Table 2.1. Hadoop version support matrix

                  HBase-0.92.x   HBase-0.94.x   HBase-0.96
Hadoop-0.20.205   S              X              X
Hadoop-0.22.x     S              X              X
Hadoop-1.0.x      S              S              S
Hadoop-1.1.x      NT             S              S
Hadoop-0.23.x     X              S              NT
Hadoop-2.x        X              S              S


Where

S = supported and tested,
X = not supported,
NT = it should run, but not tested enough.

Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its lib directory. The bundled jar is ONLY for use in standalone mode. In distributed mode, it is critical that the version of Hadoop that is out on your cluster match what is under HBase. Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues. Make sure you replace the jar in HBase everywhere on your cluster. Hadoop version mismatch issues have various manifestations, but often it all just looks like the cluster is hung up.
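
As a rough sketch only -- the jar name, version, and paths below are placeholders; match whatever your cluster is actually running:

  # remove the hadoop jar bundled with HBase and copy in the one from your cluster
  rm $HBASE_HOME/lib/hadoop-core-*.jar
  cp $HADOOP_HOME/hadoop-core-1.0.4.jar $HBASE_HOME/lib/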

2.1.3.1. Apache HBase 0.92 and 0.94

HBase 0.92 and 0.94 versions can work with Hadoop versions 0.20.205, 0.22.x, 1.0.x, and 1.1.x. HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have to recompile the code using the specific maven profile (see the top level pom.xml).
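
As a sketch, and assuming the Hadoop 2.0 profile in the 0.94 pom.xml is selected via the hadoop.profile property, a rebuild might look like:

  mvn clean install -DskipTests -Dhadoop.profile=2.0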

2.1.3.2. Apache HBase 0.96

Apache HBase 0.96.0 requires Apache Hadoop 1.0.x at a minimum, and it can run equally well on hadoop-2.0. We will no longer run properly on older Hadoops such as 0.20.205 or branch-0.20-append. Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop[6].

2.1.3.3. Hadoop versions 0.20.x - 1.x

HBase will lose data unless it is running on an HDFS that has a durable sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync [7]. Sync has to be explicitly enabled by setting dfs.support.append equal to true on both the client side -- in hbase-site.xml -- and on the server side in hdfs-site.xml (the sync facility HBase needs is a subset of the append code path).

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
        

You will have to restart your cluster after making this edit. Ignore the chicken-little comment you'll find in the hdfs-default.xml in the description for the dfs.support.append configuration.

2.1.3.4. Apache HBase on Secure Hadoop

Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features as long as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version. If you want to read more about how to setup Secure HBase, see Section 8.1, “Secure Client Access to Apache HBase”.

2.1.3.5. dfs.datanode.max.xcievers

A Hadoop HDFS DataNode has an upper bound on the number of files that it will serve at any one time. The upper bound parameter is called xcievers (yes, this is misspelled). Again, before doing any loading, make sure you have configured Hadoop's conf/hdfs-site.xml, setting the xcievers value to at least the following:

      <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
      </property>
      

Be sure to restart your HDFS after making the above configuration.

Not having this configuration in place makes for strange-looking failures. Eventually you will see a complaint in the datanode logs about the xcievers limit being exceeded, but on the run up to this, one manifestation is complaints about missing blocks. For example: 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [8]

See also Section 13.3.4, “Case Study #4 (xcievers Config)”



[1] Be careful editing XML. Make sure you close all elements. Run your file through xmllint or similar to ensure well-formedness of your document after an edit session.

[2] The hadoop-dns-checker tool can be used to verify DNS is working correctly on the cluster. The project README file provides detailed instructions on usage.

[3] See Jack Levin's major hdfs issues note up on the user list.

[4] The requirement to up system limits for a database is not peculiar to Apache HBase. See for example the section Setting Shell Limits for the Oracle User in Short Guide to install Oracle 10 on Linux.

[5] A useful read on setting configuration on your Hadoop cluster is Aaron Kimball's Configuration Parameters: What can you just ignore?

[7] The Cloudera blog post An update on Apache Hadoop 1.0 by Charles Zedlewski has a nice exposition on how all the Hadoop versions relate. It's worth checking out if you are having trouble making sense of the Hadoop version morass.

[8] See Hadoop HDFS: Deceived by Xciever for an informative rant on xceivering.
