public class StageInfo
extends java.lang.Object
| Constructor and Description |
| --- |
| StageInfo(int stageId, int attemptId, java.lang.String name, int numTasks, scala.collection.Seq&lt;RDDInfo&gt; rddInfos, scala.collection.Seq&lt;java.lang.Object&gt; parentIds, java.lang.String details, scala.collection.Seq&lt;scala.collection.Seq&lt;org.apache.spark.scheduler.TaskLocation&gt;&gt; taskLocalityPreferences) |
| Modifier and Type | Method and Description |
| --- | --- |
| scala.collection.mutable.HashMap&lt;java.lang.Object,AccumulableInfo&gt; | accumulables() Terminal values of accumulables updated during this stage. |
| int | attemptId() |
| scala.Option&lt;java.lang.Object&gt; | completionTime() Time when all tasks in the stage completed or when the stage was cancelled. |
| java.lang.String | details() |
| scala.Option&lt;java.lang.String&gt; | failureReason() If the stage failed, the reason why. |
| static StageInfo | fromStage(org.apache.spark.scheduler.Stage stage, int attemptId, scala.Option&lt;java.lang.Object&gt; numTasks, scala.collection.Seq&lt;scala.collection.Seq&lt;org.apache.spark.scheduler.TaskLocation&gt;&gt; taskLocalityPreferences) Construct a StageInfo from a Stage. |
| java.lang.String | name() |
| int | numTasks() |
| scala.collection.Seq&lt;java.lang.Object&gt; | parentIds() |
| scala.collection.Seq&lt;RDDInfo&gt; | rddInfos() |
| void | stageFailed(java.lang.String reason) |
| int | stageId() |
| scala.Option&lt;java.lang.Object&gt; | submissionTime() When this stage was submitted from the DAGScheduler to a TaskScheduler. |
| scala.collection.Seq&lt;scala.collection.Seq&lt;org.apache.spark.scheduler.TaskLocation&gt;&gt; | taskLocalityPreferences() |
public StageInfo(int stageId, int attemptId, java.lang.String name, int numTasks, scala.collection.Seq<RDDInfo> rddInfos, scala.collection.Seq<java.lang.Object> parentIds, java.lang.String details, scala.collection.Seq<scala.collection.Seq<org.apache.spark.scheduler.TaskLocation>> taskLocalityPreferences)
public static StageInfo fromStage(org.apache.spark.scheduler.Stage stage, int attemptId, scala.Option<java.lang.Object> numTasks, scala.collection.Seq<scala.collection.Seq<org.apache.spark.scheduler.TaskLocation>> taskLocalityPreferences)
Each Stage is associated with one or many RDDs, with the boundary of a Stage marked by shuffle dependencies. Therefore, all ancestor RDDs related to this Stage's RDD through a sequence of narrow dependencies should also be associated with this Stage.
Parameters:
stage - (undocumented)
attemptId - (undocumented)
numTasks - (undocumented)
taskLocalityPreferences - (undocumented)
public int stageId()
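The boundary rule described for fromStage above can be illustrated with a small standalone sketch. Node and Dep below are hypothetical stand-ins for Spark's RDD and Dependency classes, and stageRdds is not Spark's actual traversal; it only demonstrates the rule: walk ancestors through narrow dependencies and stop at shuffle dependencies.

```scala
// Hypothetical stand-ins for RDD and Dependency; not Spark classes.
case class Dep(parent: Node, isShuffle: Boolean)
case class Node(id: Int, deps: List[Dep])

// Collect the ids of all RDDs in the same stage as `rdd`: the RDD
// itself plus every ancestor reachable through narrow (non-shuffle)
// dependencies. Shuffle dependencies mark the stage boundary, so the
// traversal stops there.
def stageRdds(rdd: Node): Set[Int] = {
  var seen = Set.empty[Int]
  def visit(n: Node): Unit =
    if (!seen.contains(n.id)) {
      seen += n.id
      n.deps.filterNot(_.isShuffle).foreach(d => visit(d.parent))
    }
  visit(rdd)
  seen
}
```

For a chain a &lt;-narrow- b &lt;-shuffle- c, the stage ending at c contains only c, while the stage ending at b contains both a and b.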
public int attemptId()
public java.lang.String name()
public int numTasks()
public scala.collection.Seq<RDDInfo> rddInfos()
public scala.collection.Seq<java.lang.Object> parentIds()
public java.lang.String details()
public scala.collection.Seq<scala.collection.Seq<org.apache.spark.scheduler.TaskLocation>> taskLocalityPreferences()
public scala.Option<java.lang.Object> submissionTime()
public scala.Option<java.lang.Object> completionTime()
public scala.Option<java.lang.String> failureReason()
public scala.collection.mutable.HashMap<java.lang.Object,AccumulableInfo> accumulables()
public void stageFailed(java.lang.String reason)
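The accessors above (submissionTime, completionTime, failureReason, accumulables) track the lifecycle of a stage attempt. A minimal standalone sketch of that state follows; MiniStageInfo is hypothetical, not the real org.apache.spark.scheduler.StageInfo, and the assumption that stageFailed also records a completion timestamp is inferred from both fields being Options, not stated by this page.

```scala
import scala.collection.mutable

// Hypothetical miniature of StageInfo's mutable lifecycle state;
// not the real org.apache.spark.scheduler.StageInfo.
class MiniStageInfo(val stageId: Int, val attemptId: Int, val name: String) {
  // None until the stage is submitted from the DAGScheduler to a TaskScheduler.
  var submissionTime: Option[Long] = None
  // None until all tasks complete or the stage is cancelled.
  var completionTime: Option[Long] = None
  // None unless the stage failed.
  var failureReason: Option[String] = None
  // Terminal values of accumulables updated during this stage (simplified
  // here to accumulator id -> value).
  val accumulables = mutable.HashMap.empty[Long, Long]

  // Assumption: failing the stage both records the reason and marks the
  // stage as finished.
  def stageFailed(reason: String): Unit = {
    failureReason = Some(reason)
    completionTime = Some(System.currentTimeMillis())
  }
}
```

A caller can then distinguish a running stage (completionTime is empty), a successful one (completionTime defined, failureReason empty), and a failed one (failureReason defined).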