~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~   http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License.

  ---
  Hadoop HDFS over HTTP ${project.version} - Server Setup
  ---
  ---
  ${maven.build.timestamp}

Hadoop HDFS over HTTP ${project.version} - Server Setup

  This page explains how to quickly set up HttpFS with Pseudo authentication
  against a Hadoop cluster that also uses Pseudo authentication.

* Requirements

    * Java 6+

    * Maven 3+

* Install HttpFS

+---+
~ $ tar xzf httpfs-${project.version}.tar.gz
+---+

* Configure HttpFS

  By default, HttpFS assumes that Hadoop configuration files
  (<<<core-site.xml & hdfs-site.xml>>>) are in the HttpFS configuration
  directory.

  If this is not the case, add to the <<<httpfs-site.xml>>> file the
  <<<httpfs.hadoop.config.dir>>> property set to the location of the Hadoop
  configuration directory.

* Configure Hadoop

  Edit Hadoop <<<core-site.xml>>> and define the Unix user that will run the
  HttpFS server as a proxyuser. For example:

+---+
  ...
  <property>
    <name>hadoop.proxyuser.#HTTPFSUSER#.hosts</name>
    <value>httpfs-host.foo.com</value>
  </property>
  <property>
    <name>hadoop.proxyuser.#HTTPFSUSER#.groups</name>
    <value>*</value>
  </property>
  ...
+---+

  IMPORTANT: Replace <<<#HTTPFSUSER#>>> with the Unix user that will start
  the HttpFS server.

* Restart Hadoop

  You need to restart Hadoop for the proxyuser configuration to become
  active.

* Start/Stop HttpFS

  To start/stop HttpFS use HttpFS's <<<bin/httpfs.sh>>> script. For example:

+---+
httpfs-${project.version} $ bin/httpfs.sh start
+---+

  NOTE: Invoking the script without any parameters lists all possible
  parameters (start, stop, run, etc.). The <<<httpfs.sh>>> script is a
  wrapper for Tomcat's <<<catalina.sh>>> script that sets the environment
  variables and Java System properties required to run the HttpFS server.

* Test HttpFS is working

+---+
~ $ curl -i "http://<HTTPFS_HOST>:14000?user.name=babu&op=homedir"
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

{"homeDir":"http:\/\/<HTTPFS_HOST>:14000\/user\/babu"}
+---+
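  HttpFS also exposes the WebHDFS REST API under the <<</webhdfs/v1/>>> path,
  so a directory listing makes a further check. A minimal sketch; the
  hostname, path, and user name below are illustrative:

+---+
~ $ curl -i "http://<HTTPFS_HOST>:14000/webhdfs/v1/user/babu?op=LISTSTATUS&user.name=babu"
+---+

  A successful call returns a JSON <<<FileStatuses>>> document describing the
  entries of the directory.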
* Embedded Tomcat Configuration

  To configure the embedded Tomcat go to the <<<tomcat/conf>>> directory.

  HttpFS preconfigures the HTTP and Admin ports in Tomcat's <<<server.xml>>>
  to 14000 and 14001.

  Tomcat logs are also preconfigured to go to HttpFS's <<<logs/>>> directory.

  The following environment variables (which can be set in HttpFS's
  <<<conf/httpfs-env.sh>>> script) can be used to alter those values:

    * HTTPFS_HTTP_PORT

    * HTTPFS_ADMIN_PORT

    * HTTPFS_LOG

* HttpFS Configuration

  HttpFS supports the following {{{./httpfs-default.html}configuration
  properties}} in HttpFS's <<<conf/httpfs-site.xml>>> configuration file.

* HttpFS over HTTPS (SSL)

  To configure HttpFS to work over SSL, edit the {{httpfs-env.sh}} script in
  the configuration directory, setting {{HTTPFS_SSL_ENABLED}} to {{true}}.

  In addition, the following two properties may be defined (shown with
  default values):

    * HTTPFS_SSL_KEYSTORE_FILE=${HOME}/.keystore

    * HTTPFS_SSL_KEYSTORE_PASS=password

  In the HttpFS <<<tomcat/conf>>> directory, replace the <<<server.xml>>>
  file with the <<<ssl-server.xml>>> file.

  You need to create an SSL certificate for the HttpFS server. As the Unix
  user that will run the HttpFS server, use the Java <<<keytool>>> command to
  create the SSL certificate:

+---+
$ keytool -genkey -alias tomcat -keyalg RSA
+---+

  You will be asked a series of questions in an interactive prompt. It will
  create the keystore file, which will be named <<.keystore>> and located in
  that user's home directory.

  The password you enter for "keystore password" must match the value of the
  <<<HTTPFS_SSL_KEYSTORE_PASS>>> environment variable set in the
  <<<httpfs-env.sh>>> script in the configuration directory.

  The answer to "What is your first and last name?" (i.e. the "CN") must be
  the hostname of the machine where the HttpFS Server will be running.

  Start HttpFS. It should work over HTTPS.

  Using the Hadoop <<<FileSystem>>> API or the Hadoop FS shell, use the
  <<<swebhdfs://>>> scheme. Make sure the JVM is picking up the truststore
  containing the public key of the SSL certificate if using a self-signed
  certificate.
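  With a self-signed certificate, client JVMs will not trust the server by
  default. The following is a minimal sketch of one way to build a client
  truststore from the keystore created above; the <<<tomcat>>> alias matches
  the <<<keytool -genkey>>> command, while the certificate and truststore
  file names are illustrative:

+---+
$ # Export the public certificate from the server keystore.
$ keytool -exportcert -alias tomcat -keystore ${HOME}/.keystore \
    -file httpfs.cert
$ # Import it into a truststore for client use.
$ keytool -importcert -alias httpfs -file httpfs.cert \
    -keystore ${HOME}/truststore.jks
+---+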
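  The client JVM then has to be pointed at that truststore, for example via
  the standard <<<javax.net.ssl.trustStore>>> system property. With the
  Hadoop FS shell this could look as follows (hostname and truststore path
  are illustrative):

+---+
$ # HADOOP_CLIENT_OPTS is passed to the JVM of Hadoop client commands.
$ export HADOOP_CLIENT_OPTS="-Djavax.net.ssl.trustStore=${HOME}/truststore.jks"
$ hadoop fs -ls swebhdfs://httpfs-host.foo.com:14000/
+---+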