Packages that use SparkWork

Package | Description |
---|---|
org.apache.hadoop.hive.ql.exec.spark | |
org.apache.hadoop.hive.ql.exec.spark.session | |
org.apache.hadoop.hive.ql.parse.spark | |
Methods in org.apache.hadoop.hive.ql.exec.spark with parameters of type SparkWork

Modifier and Type | Method and Description |
---|---|
static SparkTask | SparkUtilities.createSparkTask(SparkWork work, HiveConf conf) |
SparkJobRef | RemoteHiveSparkClient.execute(DriverContext driverContext, SparkWork sparkWork) |
SparkJobRef | LocalHiveSparkClient.execute(DriverContext driverContext, SparkWork sparkWork) |
SparkJobRef | HiveSparkClient.execute(DriverContext driverContext, SparkWork sparkWork) HiveSparkClient should generate the Spark RDD graph from the given sparkWork and driverContext, and submit the RDD graph to the Spark cluster. |
SparkPlan | SparkPlanGenerator.generate(SparkWork sparkWork) |
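The table above outlines the submission path: a SparkWork plan is wrapped in a SparkTask for execution, and HiveSparkClient.execute turns the plan into an RDD graph and submits it. The following is a minimal sketch of that flow, using only the signatures listed above; the class and method names of the sketch itself are illustrative, how the HiveSparkClient, DriverContext, SparkWork, and HiveConf instances are obtained is assumed to be handled by the caller, and the import paths and throws clause are assumptions.

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.DriverContext;
import org.apache.hadoop.hive.ql.exec.spark.HiveSparkClient;
import org.apache.hadoop.hive.ql.exec.spark.SparkTask;
import org.apache.hadoop.hive.ql.exec.spark.SparkUtilities;
import org.apache.hadoop.hive.ql.exec.spark.status.SparkJobRef;
import org.apache.hadoop.hive.ql.plan.SparkWork;

// Sketch only: these names are illustrative, not part of Hive.
public class SparkWorkSubmitSketch {

    // Wrap a SparkWork in an executable SparkTask via the static factory above.
    static SparkTask toTask(SparkWork work, HiveConf conf) {
        return SparkUtilities.createSparkTask(work, conf);
    }

    // Submit the SparkWork through a HiveSparkClient; per the description above,
    // execute() generates the Spark RDD graph from sparkWork and driverContext
    // and submits it to the cluster, returning a SparkJobRef job handle.
    static SparkJobRef run(HiveSparkClient client, DriverContext driverContext,
                           SparkWork work) throws Exception {
        return client.execute(driverContext, work);
    }
}
```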
Methods in org.apache.hadoop.hive.ql.exec.spark.session with parameters of type SparkWork

Modifier and Type | Method and Description |
---|---|
SparkJobRef | SparkSessionImpl.submit(DriverContext driverContext, SparkWork sparkWork) |
SparkJobRef | SparkSession.submit(DriverContext driverContext, SparkWork sparkWork) Submit the given sparkWork to the SparkClient. |
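At the session level, SparkSession.submit is the single entry point that forwards a SparkWork to the SparkClient. A minimal sketch, assuming the session and driver context are provided by the caller; import paths and the throws clause are assumptions.

```java
import org.apache.hadoop.hive.ql.DriverContext;
import org.apache.hadoop.hive.ql.exec.spark.session.SparkSession;
import org.apache.hadoop.hive.ql.exec.spark.status.SparkJobRef;
import org.apache.hadoop.hive.ql.plan.SparkWork;

// Sketch only: submits the given sparkWork through a Hive SparkSession
// (note: this is Hive's SparkSession interface, not Apache Spark's).
public class SparkSessionSubmitSketch {
    static SparkJobRef submitThroughSession(SparkSession session,
                                            DriverContext driverContext,
                                            SparkWork sparkWork) throws Exception {
        // SparkSession.submit forwards the work to the underlying SparkClient
        // and returns a SparkJobRef for monitoring the job.
        return session.submit(driverContext, sparkWork);
    }
}
```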
Methods in org.apache.hadoop.hive.ql.parse.spark with parameters of type SparkWork

Modifier and Type | Method and Description |
---|---|
MapWork | GenSparkUtils.createMapWork(GenSparkProcContext context, Operator<?> root, SparkWork sparkWork, PrunedPartitionList partitions) |
MapWork | GenSparkUtils.createMapWork(GenSparkProcContext context, Operator<?> root, SparkWork sparkWork, PrunedPartitionList partitions, boolean deferSetup) |
ReduceWork | GenSparkUtils.createReduceWork(GenSparkProcContext context, Operator<?> root, SparkWork sparkWork) |
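During plan generation, GenSparkUtils builds the map-side and reduce-side vertices of a SparkWork graph. Below is a sketch of how the three methods above might be called together; the GenSparkProcContext, operator roots, and partition list are assumed to come from the surrounding compiler, GenSparkUtils.getUtils() is assumed as the singleton accessor, and the import paths and throws clause are assumptions.

```java
import org.apache.hadoop.hive.ql.exec.Operator;
import org.apache.hadoop.hive.ql.parse.PrunedPartitionList;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.parse.spark.GenSparkProcContext;
import org.apache.hadoop.hive.ql.parse.spark.GenSparkUtils;
import org.apache.hadoop.hive.ql.plan.MapWork;
import org.apache.hadoop.hive.ql.plan.ReduceWork;
import org.apache.hadoop.hive.ql.plan.SparkWork;

// Sketch only: attaches one MapWork and one ReduceWork vertex to a SparkWork.
public class GenSparkWorkSketch {
    static void addVertices(GenSparkProcContext context,
                            Operator<?> mapRoot,
                            Operator<?> reduceRoot,
                            SparkWork sparkWork,
                            PrunedPartitionList partitions) throws SemanticException {
        GenSparkUtils utils = GenSparkUtils.getUtils();  // assumed singleton accessor
        // Map-side vertex rooted at mapRoot; the boolean overload's deferSetup
        // flag presumably postpones part of the MapWork setup.
        MapWork mapWork = utils.createMapWork(context, mapRoot, sparkWork, partitions, false);
        // Reduce-side vertex rooted at reduceRoot.
        ReduceWork reduceWork = utils.createReduceWork(context, reduceRoot, sparkWork);
    }
}
```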
Copyright © 2016 The Apache Software Foundation. All rights reserved.