author    Josh Rosen <joshrosen@databricks.com>  2015-09-15 17:11:21 -0700
committer Josh Rosen <joshrosen@databricks.com>  2015-09-15 17:11:21 -0700
commit 38700ea40cb1dd0805cc926a9e629f93c99527ad (patch)
tree   a39eecaab229b50fed9a5c69ea7c7f75c43ff5ea /project
parent 99ecfa5945aedaa71765ecf5cce59964ae52eebe (diff)
[SPARK-10381] Fix mixup of taskAttemptNumber & attemptId in OutputCommitCoordinator
When speculative execution is enabled, consider a scenario where the authorized committer of a particular output partition fails during the OutputCommitter.commitTask() call. In this case, the OutputCommitCoordinator is supposed to release that committer's exclusive lock on committing once that task fails. However, due to a unit mismatch (we used the task attempt number in one place and the task attempt id in another), the lock was never released, causing Spark to go into an infinite retry loop.

This bug was masked by the fact that the OutputCommitCoordinator does not have enough end-to-end tests (the current tests use many mocks). Another contributing factor is that we have many similarly-named identifiers with different semantics but the same data types (e.g. attemptNumber and taskAttemptId), and the inconsistent variable naming makes them difficult to distinguish.

This patch adds a regression test and fixes this bug by always using task attempt numbers throughout this code.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #8544 from JoshRosen/SPARK-10381.
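The unit mismatch described above can be illustrated with a minimal sketch. This is not the actual Spark OutputCommitCoordinator code; the names, the map-based lock, and both handlers are simplified stand-ins showing why comparing a stored attempt *number* against a global attempt *id* never releases the commit lock:

```scala
// Minimal sketch of the SPARK-10381 unit mismatch; illustrative only,
// not the real OutputCommitCoordinator implementation.
object CommitCoordinatorSketch {
  // partition -> attempt number of the authorized committer
  private val authorizedCommitters = scala.collection.mutable.Map[Int, Int]()

  // First attempt to ask wins the exclusive right to commit this partition.
  def canCommit(partition: Int, attemptNumber: Int): Boolean =
    authorizedCommitters.getOrElseUpdate(partition, attemptNumber) == attemptNumber

  // Buggy version: compares the stored attempt *number* (small, per-task
  // counter) with a global task attempt *id*, so the equality almost never
  // holds and the lock is never released when the committer fails.
  def taskFailedBuggy(partition: Int, taskAttemptId: Long): Unit =
    if (authorizedCommitters.get(partition).exists(_.toLong == taskAttemptId)) {
      authorizedCommitters.remove(partition)
    }

  // Fixed version: uses attempt numbers consistently on both sides.
  def taskFailedFixed(partition: Int, attemptNumber: Int): Unit =
    if (authorizedCommitters.get(partition).contains(attemptNumber)) {
      authorizedCommitters.remove(partition)
    }
}
```

With the buggy handler, a failed authorized committer keeps the lock forever and every speculative retry is denied, which is the infinite retry loop the commit message describes; the fixed handler releases it so a retry can commit.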
Diffstat (limited to 'project')
-rw-r--r--  project/MimaExcludes.scala  36
1 file changed, 35 insertions(+), 1 deletion(-)
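The diff below registers MiMa (Migration Manager) binary-compatibility excludes for the changed AskPermissionToCommitOutput message. As a sketch of the pattern being repeated there, each entry pairs a problem category with the fully-qualified member whose signature changed; this is a build-configuration fragment that only compiles inside an sbt build with the MiMa plugin on the classpath:

```scala
// Sketch of the MiMa exclude pattern used in MimaExcludes.scala below.
// Requires the sbt-mima-plugin; not runnable standalone.
import com.typesafe.tools.mima.core._

val askPermissionExcludes = Seq(
  // A constructor/method whose parameter types changed (Int vs Long units):
  ProblemFilters.exclude[IncompatibleMethTypeProblem](
    "org.apache.spark.scheduler.AskPermissionToCommitOutput.this"),
  // A member that no longer exists under its old name:
  ProblemFilters.exclude[MissingMethodProblem](
    "org.apache.spark.scheduler.AskPermissionToCommitOutput.taskAttempt")
)
```

The excludes are duplicated for both the 1.6 and 1.5 branches of the version match because MiMa checks each release line against its own previous artifacts.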
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 46026c1e90..1c96b09585 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -45,7 +45,7 @@ object MimaExcludes {
excludePackage("org.apache.spark.sql.execution")
) ++
MimaBuild.excludeSparkClass("streaming.flume.FlumeTestUtils") ++
- MimaBuild.excludeSparkClass("streaming.flume.PollingFlumeTestUtils") ++
+ MimaBuild.excludeSparkClass("streaming.flume.PollingFlumeTestUtils") ++
Seq(
ProblemFilters.exclude[MissingMethodProblem](
"org.apache.spark.ml.classification.LogisticCostFun.this"),
@@ -53,6 +53,23 @@ object MimaExcludes {
"org.apache.spark.ml.classification.LogisticAggregator.add"),
ProblemFilters.exclude[MissingMethodProblem](
"org.apache.spark.ml.classification.LogisticAggregator.count")
+ ) ++ Seq(
+ // SPARK-10381 Fix types / units in private AskPermissionToCommitOutput RPC message.
+ // This class is marked as `private` but MiMa still seems to be confused by the change.
+ ProblemFilters.exclude[MissingMethodProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.task"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.copy$default$2"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.copy"),
+ ProblemFilters.exclude[MissingMethodProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.taskAttempt"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.copy$default$3"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.this"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.apply")
)
case v if v.startsWith("1.5") =>
Seq(
@@ -213,6 +230,23 @@ object MimaExcludes {
// SPARK-9704 Made ProbabilisticClassifier, Identifiable, VectorUDT public APIs
ProblemFilters.exclude[IncompatibleResultTypeProblem](
"org.apache.spark.mllib.linalg.VectorUDT.serialize")
+ ) ++ Seq(
+ // SPARK-10381 Fix types / units in private AskPermissionToCommitOutput RPC message.
+ // This class is marked as `private` but MiMa still seems to be confused by the change.
+ ProblemFilters.exclude[MissingMethodProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.task"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.copy$default$2"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.copy"),
+ ProblemFilters.exclude[MissingMethodProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.taskAttempt"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.copy$default$3"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.this"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.scheduler.AskPermissionToCommitOutput.apply")
)
case v if v.startsWith("1.4") =>