Running the coordinator example with HCatalog (End-to-end test)

1. Set up the services: Hive with JMS (e.g. an ActiveMQ server), HCatalog, a database (e.g. MySQL), and of course Hadoop.
2. This example points to hive.metastore.uris=thrift://localhost:11002. Change it in job.properties if required.
3. Create two tables, 'invites' (input) and 'oozie' (output), with this structure (a HiveQL sketch for both tables follows this list):
   "create table invites (foo INT, bar INT) partitioned by (ds STRING, region STRING)"
4. Build the Oozie distro as follows:
   $> bin/mkdistro.sh -Dhcatalog.version=0.4.1 -DskipTests
5. The 'libext' dir used by oozie-setup should contain the following JARs:
   hcatalog-core.jar
   webhcat-java-client.jar
   jackson-mapper-asl-1.8.8.jar
   jackson-core-asl-1.8.8.jar
   hive-common.jar
   hive-metastore.jar
   hive-exec.jar
   hive-serde.jar
   hive-shims.jar
   libfb303.jar
   (Note: the hcatalog JARs will be injected automatically.)
6. Upload this application directory to HDFS (see the upload/submit sketch after this list).
7. Run the Oozie job using job.properties. The coordinator actions will be in WAITING state.
8. Make the input dependency available through the HCat client with "alter table invites add partition (ds='2010-01-01', region='usa')" (see the example command below). This event triggers the workflows, which run a pig action.
9. The first workflow will SUCCEED as expected, but the second will fail with a 'partition already exists' error. Disregard this; the example is working as expected.
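
For step 3, a minimal HiveQL sketch run from the shell. The README gives a single DDL for both tables, so the output table 'oozie' is assumed here to use the same columns and partition keys as 'invites':

   $> hive -e "create table invites (foo INT, bar INT) partitioned by (ds STRING, region STRING)"
   $> hive -e "create table oozie (foo INT, bar INT) partitioned by (ds STRING, region STRING)"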
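For steps 6 and 7, a sketch of the upload and submit commands. The application directory name 'hcatalog-coord' and the Oozie server URL http://localhost:11000/oozie are assumptions; substitute your own paths and server address:

   $> hadoop fs -put hcatalog-coord hcatalog-coord
   $> oozie job -oozie http://localhost:11000/oozie -config hcatalog-coord/job.properties -run

The submit command prints a coordinator job id, which can be used to confirm that the actions are in WAITING state:

   $> oozie job -oozie http://localhost:11000/oozie -info <coord-job-id>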
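For step 8, the partition from the README can be added through the Hive CLI; the metastore publishes a JMS notification for this event (hence the ActiveMQ requirement in step 1), which is how Oozie learns the dependency is available:

   $> hive -e "alter table invites add partition (ds='2010-01-01', region='usa')"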