You can stop an individual RegionServer by running the following script in the HBase directory on the particular node:
$ ./bin/hbase-daemon.sh stop regionserver
The RegionServer will first close all regions and then shut itself down. On shutdown, the RegionServer's ephemeral node in ZooKeeper will expire. The master will notice the RegionServer gone and will treat it as a 'crashed' server; it will reassign the regions the RegionServer was carrying.
If the load balancer runs while a node is shutting down, then there could be contention between the Load Balancer and the Master's recovery of the just decommissioned RegionServer. Avoid any problems by disabling the balancer first. See Load Balancer below.
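For example, assuming you run both commands from the HBase install directory on the node, you would disable the balancer from the shell and then stop the RegionServer:
$ echo "balance_switch false" | ./bin/hbase shell
$ ./bin/hbase-daemon.sh stop regionserver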
A downside to the above stop of a RegionServer is that regions could be offline for
a good period of time. Regions are closed in order. If there are many regions on the server, the
first region to close may not be back online until all regions close and the master
notices the RegionServer's znode gone. Apache HBase 0.90.2 added a facility for having
a node gradually shed its load and then shut itself down: the
graceful_stop.sh
script. Here is its usage:
$ ./bin/graceful_stop.sh
Usage: graceful_stop.sh [--config <conf-dir>] [--restart] [--reload] [--thrift] [--rest] <hostname>
 thrift      If we should stop/start thrift before/after the hbase stop/start
 rest        If we should stop/start rest before/after the hbase stop/start
 restart     If we should restart after graceful stop
 reload      Move offloaded regions back on to the stopped server
 debug       Move offloaded regions back on to the stopped server
 hostname    Hostname of server we are to stop
To decommission a loaded RegionServer, run the following:
$ ./bin/graceful_stop.sh HOSTNAME
where HOSTNAME is the host carrying the RegionServer you want to decommission.
The HOSTNAME
passed to graceful_stop.sh
must match the hostname that HBase is using to identify RegionServers.
Check the list of RegionServers in the master UI to see how HBase is
referring to servers. It is usually a hostname but can also be an FQDN.
Whatever HBase is using, this is what you should pass to the
graceful_stop.sh
decommission
script. If you pass IPs, the script is not yet smart enough to make
a hostname (or FQDN) of them, so it will fail when it checks whether the server is
currently running; the graceful unloading of regions will not run.
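Besides the master UI, the shell's status command will also list the server names HBase has registered (the exact output format varies by release):
$ echo "status 'simple'" | ./bin/hbase shell
The hostname portion of each listed server name (hostname,port,startcode) is the form you should pass to graceful_stop.sh.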
The graceful_stop.sh
script will move the regions off the
decommissioned RegionServer one at a time to minimize region churn.
It will verify that a region has been deployed in its new location before it
moves the next region, and so on, until the decommissioned server
is carrying zero regions. At this point, graceful_stop.sh
tells the RegionServer to stop. The master will notice the
RegionServer gone, but all regions will have already been redeployed,
and because the RegionServer went down cleanly, there will be no
WAL logs to split.
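Conceptually, each step is similar to issuing a move for a single region from the HBase shell and then confirming the region is open on its new host before continuing; graceful_stop.sh does this for you (the server name below is only an illustrative example):
hbase(main):001:0> move 'ENCODED_REGIONNAME', 'host187.example.com,60020,1289493121758'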
It is assumed that the Region Load Balancer is disabled while the graceful_stop script runs (otherwise the balancer and the decommission script will end up fighting over region deployments). Use the shell to disable the balancer:
hbase(main):001:0> balance_switch false
true
0 row(s) in 0.3590 seconds
This turns the balancer OFF. To reenable, do:
hbase(main):001:0> balance_switch true
false
0 row(s) in 0.3590 seconds
It is good to have dfs.datanode.failed.volumes.tolerated set if you have a decent number of disks
per machine, for the case where a disk plain dies. But usually disks do the "John Wayne" -- i.e. take a while
to go down spewing errors in dmesg
-- or for some reason, run much slower than their
companions. In this case you want to decommission the disk. You have two options. You can
decommission the datanode
or, less disruptive in that only the bad disk's data will be rereplicated, you can stop the datanode,
unmount the bad volume (you can't unmount a volume while the datanode is using it), and then restart the
datanode (presuming you have set dfs.datanode.failed.volumes.tolerated > 0). The regionserver will
throw some errors in its logs as it recalibrates where to get its data from -- it will likely
roll its WAL log too -- but in general, apart from some latency spikes, it should keep on chugging.
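A sketch of the second option, assuming a Hadoop 1.x-style layout where the datanode is managed by hadoop-daemon.sh and the bad volume is mounted at /data/3 (both are illustrative assumptions; adjust paths and scripts for your install):
# requires dfs.datanode.failed.volumes.tolerated > 0 in hdfs-site.xml
$ $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
$ umount /data/3    # the failing volume; this fails while the datanode still has it open
$ $HADOOP_HOME/bin/hadoop-daemon.sh start datanode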
If you are doing short-circuit reads, you will have to move the regions off the regionserver before you stop the datanode: with short-circuit reads, even though the block files are chmod'd so the regionserver should no longer have access, the regionserver already has the files open, so it will keep reading file blocks from the bad disk even though the datanode is down. Move the regions back after you restart the datanode.
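One way to move the regions off and then back on without stopping the RegionServer is the region_mover.rb script that graceful_stop.sh itself uses internally; treat the invocation below as a sketch, since the details vary by release:
$ ./bin/hbase org.jruby.Main bin/region_mover.rb unload HOSTNAME
# ... stop the datanode, unmount the bad volume, restart the datanode ...
$ ./bin/hbase org.jruby.Main bin/region_mover.rb load HOSTNAME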
You can also ask this script to restart a RegionServer after the shutdown AND move its old regions back into place. The latter you might do to retain data locality. A primitive rolling restart might be effected by running something like the following:
$ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
Tail the output of /tmp/log.txt
to follow the script's
progress. The above does RegionServers only. Be sure to disable the
load balancer before doing the above. You'd need to do the master
update separately. Do it before you run the above script.
Here is a pseudo-script for how you might craft a rolling restart script:
1. Untar your release, make sure of its configuration, and then rsync it across the cluster. If this is 0.90.2, patch it with HBASE-3744 and HBASE-3756.
2. Run hbck to ensure the cluster is consistent:
$ ./bin/hbase hbck
Effect repairs if inconsistent.
3. Restart the Master:
$ ./bin/hbase-daemon.sh stop master; ./bin/hbase-daemon.sh start master
4. Disable the region balancer:
$ echo "balance_switch false" | ./bin/hbase shell
5. Run the graceful_stop.sh script per RegionServer. For example:
$ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --reload --debug $i; done &> /tmp/log.txt &
If you are running thrift or rest servers on the RegionServer, pass the --thrift or --rest options (see the usage for the graceful_stop.sh script above).
6. Restart the Master again. This will clear out the dead servers list and reenable the balancer.
7. Run hbck to ensure the cluster is consistent.
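As in step 2:
$ ./bin/hbase hbck
Effect repairs if inconsistent.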