The hadoop-azure-datalake module provides support for integration with the Azure Data Lake Store. This support comes via the JAR file azure-datalake-store.jar.
The module offers partial or no support for some operations, including symbolic links, proxy users, file truncate, file checksums, and snapshot operations.
Azure Data Lake Storage access path syntax is:
```
adl://<Account Name>.azuredatalakestore.net/
```
For details on using the store, see Get started with Azure Data Lake Store using the Azure Portal.
Access to Azure Data Lake Storage requires an OAuth2 bearer token to be present as part of the HTTPS header, as per the OAuth2 specification. A valid OAuth2 bearer token must be obtained from the Azure Active Directory service for valid users who have access to the Azure Data Lake Storage account.
Azure Active Directory (Azure AD) is Microsoft's multi-tenant, cloud-based directory and identity management service. See What is Active Directory.
The following sections describe the OAuth2 configuration in core-site.xml.
Credentials can be configured using either a refresh token (associated with a user), or a client credential (analogous to a service principal).
To use refresh-token authentication, add the following properties to the cluster's core-site.xml:
```xml
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>RefreshToken</value>
</property>
```
Applications must set the client ID and the OAuth2 refresh token from the Azure Active Directory service associated with that client ID. See Active Directory Library For Java.
Do not share the client ID and refresh token; they must be kept secret.
```xml
<property>
  <name>fs.adl.oauth2.client.id</name>
  <value></value>
</property>

<property>
  <name>fs.adl.oauth2.refresh.token</name>
  <value></value>
</property>
```
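The same properties can also be set programmatically on a Hadoop Configuration object instead of in core-site.xml. The following is a minimal sketch, assuming a placeholder account name ("youraccount") and placeholder credential values:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AdlRefreshTokenExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same settings as the core-site.xml snippet above.
    conf.set("fs.adl.oauth2.access.token.provider.type", "RefreshToken");
    conf.set("fs.adl.oauth2.client.id", "<your client id>");         // placeholder
    conf.set("fs.adl.oauth2.refresh.token", "<your refresh token>"); // placeholder

    // "youraccount" is a placeholder account name.
    FileSystem fs = FileSystem.get(
        URI.create("adl://youraccount.azuredatalakestore.net/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
```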
To use client-credential (service principal) authentication instead, add the following properties to your core-site.xml:
```xml
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>

<property>
  <name>fs.adl.oauth2.refresh.url</name>
  <value>TOKEN ENDPOINT FROM STEP 7 ABOVE</value>
</property>

<property>
  <name>fs.adl.oauth2.client.id</name>
  <value>CLIENT ID FROM STEP 7 ABOVE</value>
</property>

<property>
  <name>fs.adl.oauth2.credential</name>
  <value>PASSWORD FROM STEP 7 ABOVE</value>
</property>
```
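As with the refresh-token flow, these properties can be set programmatically. A minimal sketch, assuming the same imports as the previous example and placeholder values for the token endpoint, client ID, and password:

```java
// A sketch only: token endpoint, client ID, and password are placeholders.
Configuration conf = new Configuration();
conf.set("fs.adl.oauth2.access.token.provider.type", "ClientCredential");
conf.set("fs.adl.oauth2.refresh.url", "<token endpoint>"); // placeholder
conf.set("fs.adl.oauth2.client.id", "<client id>");        // placeholder
conf.set("fs.adl.oauth2.credential", "<password>");        // placeholder

// Simple connectivity check against a placeholder account.
FileSystem fs = FileSystem.get(
    URI.create("adl://youraccount.azuredatalakestore.net/"), conf);
System.out.println(fs.exists(new Path("/")));
```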
In many Hadoop clusters, the core-site.xml file is world-readable. To protect these credentials, it is recommended that you use the credential provider framework to securely store and access them.
All ADLS credential properties can be protected by credential providers. For additional reading on the credential provider API, see Credential Provider API.
For example, to provision aliases for the client ID and refresh token:

```
hadoop credential create fs.adl.oauth2.client.id -value 123 \
    -provider localjceks://file/home/foo/adls.jceks

hadoop credential create fs.adl.oauth2.refresh.token -value 123 \
    -provider localjceks://file/home/foo/adls.jceks
```
```xml
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>RefreshToken</value>
</property>

<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>localjceks://file/home/foo/adls.jceks</value>
  <description>Path to interrogate for protected credentials.</description>
</property>
```
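When the provider path is configured, Hadoop resolves protected properties through Configuration.getPassword() rather than from clear text. A minimal sketch, assuming the adls.jceks file created by the commands above exists:

```java
import org.apache.hadoop.conf.Configuration;

public class CredentialLookupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.credential.provider.path",
        "localjceks://file/home/foo/adls.jceks");

    // getPassword() consults the credential providers first and falls back
    // to the plain configuration value if the alias is not found.
    char[] clientId = conf.getPassword("fs.adl.oauth2.client.id");
    System.out.println(clientId == null
        ? "alias not found"
        : "resolved client id from provider");
  }
}
```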
For example, to copy data with DistCp using the protected credentials:

```
hadoop distcp
    [-D fs.adl.oauth2.access.token.provider.type=RefreshToken
     -D hadoop.security.credential.provider.path=localjceks://file/home/user/adls.jceks]
    hdfs://<NameNode Hostname>:9001/user/foo/srcDir
    adl://<Account Name>.azuredatalakestore.net/tgtDir/
```
NOTE: You may optionally add the provider path property to the distcp command line instead of adding job-specific configuration to a generic core-site.xml. The square brackets above illustrate this capability.
After credentials are configured in core-site.xml, any Hadoop component may reference files in that Azure Data Lake Storage account by using URLs of the following format:
```
adl://<Account Name>.azuredatalakestore.net/<path>
```
The scheme adl identifies a URL on a Hadoop-compatible file system backed by Azure Data Lake Storage. adl uses encrypted HTTPS access for all interaction with the Azure Data Lake Storage API.
For example, the following FileSystem Shell commands demonstrate access to a storage account named youraccount.
```
hadoop fs -mkdir adl://youraccount.azuredatalakestore.net/testDir

hadoop fs -put testFile adl://youraccount.azuredatalakestore.net/testDir/testFile

hadoop fs -cat adl://youraccount.azuredatalakestore.net/testDir/testFile
test file content
```
The hadoop-azure-datalake module provides support for configuring how user and group information is represented during getFileStatus(), listStatus(), and getAclStatus() calls.
Add the following properties to core-site.xml:
```xml
<property>
  <name>adl.feature.ownerandgroup.enableupn</name>
  <value>true</value>
  <description>
    When true: User and Group in the FileStatus/AclStatus response are
    represented as user-friendly names as per the Azure AD profile.

    When false (default): User and Group in the FileStatus/AclStatus
    response are represented by the unique identifier from the Azure AD
    profile (Object ID as GUID).

    The default is recommended for performance.
  </description>
</property>
```
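A minimal sketch of observing the effect, assuming the imports from the refresh-token example above and placeholder account and path names:

```java
// Account name and path are placeholders.
Configuration conf = new Configuration();
conf.set("adl.feature.ownerandgroup.enableupn", "true");

FileSystem fs = FileSystem.get(
    URI.create("adl://youraccount.azuredatalakestore.net/"), conf);
FileStatus status = fs.getFileStatus(new Path("/testDir"));

// Prints friendly names with enableupn=true, or Object ID GUIDs with the
// default of false.
System.out.println(status.getOwner() + ":" + status.getGroup());
```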
The hadoop-azure-datalake module includes a full suite of unit tests. Most of the tests will run without additional configuration by running mvn test. This includes tests against mocked storage, which is an in-memory emulation of Azure Data Lake Storage.
A selection of tests can run against Azure Data Lake Storage. To run these tests, create src/test/resources/auth-keys.xml with the ADL account information mentioned in the sections above and the following properties:
```xml
<property>
  <name>fs.adl.test.contract.enable</name>
  <value>true</value>
</property>

<property>
  <name>test.fs.adl.name</name>
  <value>adl://youraccount.azuredatalakestore.net</value>
</property>
```