path: root/pom.xml
author    Josh Rosen <joshrosen@databricks.com>    2015-09-15 17:11:21 -0700
committer Josh Rosen <joshrosen@databricks.com>    2015-09-15 17:11:21 -0700
commit    38700ea40cb1dd0805cc926a9e629f93c99527ad (patch)
tree      a39eecaab229b50fed9a5c69ea7c7f75c43ff5ea /pom.xml
parent    99ecfa5945aedaa71765ecf5cce59964ae52eebe (diff)
download  spark-38700ea40cb1dd0805cc926a9e629f93c99527ad.tar.gz
          spark-38700ea40cb1dd0805cc926a9e629f93c99527ad.tar.bz2
          spark-38700ea40cb1dd0805cc926a9e629f93c99527ad.zip
[SPARK-10381] Fix mixup of taskAttemptNumber & attemptId in OutputCommitCoordinator
When speculative execution is enabled, consider a scenario where the authorized committer of a particular output partition fails during the OutputCommitter.commitTask() call. In this case, the OutputCommitCoordinator is supposed to release that committer's exclusive lock on committing once that task fails. However, due to a unit mismatch (we used the task attempt number in one place and the task attempt id in another), the lock is never released, causing Spark to go into an infinite retry loop.

This bug was masked by the fact that the OutputCommitCoordinator does not have enough end-to-end tests (the current tests use many mocks). Another contributing factor is that we have many similarly-named identifiers with different semantics but the same data types (e.g. attemptNumber and taskAttemptId), and the inconsistent variable naming makes them difficult to distinguish.

This patch adds a regression test and fixes this bug by always using task attempt numbers throughout this code.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #8544 from JoshRosen/SPARK-10381.
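The unit mismatch can be illustrated with a minimal, self-contained Scala sketch. The names and state below are hypothetical and do not reflect Spark's actual OutputCommitCoordinator API; the sketch only models the mechanism: the exclusive commit lock is keyed by the task's attempt number, but the buggy failure path compares it against a task attempt id, so the lock is never released and speculative retries can never commit.

// Hypothetical sketch of the coordination logic, not Spark's real implementation.
object CommitCoordinatorSketch {
  // partition -> attempt number of the committer currently holding the exclusive lock
  private val authorizedCommitters = scala.collection.mutable.Map[Int, Int]()

  def canCommit(partition: Int, attemptNumber: Int): Boolean = synchronized {
    authorizedCommitters.get(partition) match {
      case Some(existing) => existing == attemptNumber // only the lock holder may commit
      case None =>
        authorizedCommitters(partition) = attemptNumber // grant the exclusive lock
        true
    }
  }

  // Buggy variant: compares the stored attempt *number* against a task attempt *id*,
  // so the equality check never succeeds and the lock survives the failure.
  def taskFailedBuggy(partition: Int, taskAttemptId: Long): Unit = synchronized {
    if (authorizedCommitters.get(partition).exists(_.toLong == taskAttemptId)) {
      authorizedCommitters.remove(partition)
    }
  }

  // Fixed variant: uses the attempt number consistently, matching canCommit.
  def taskFailedFixed(partition: Int, attemptNumber: Int): Unit = synchronized {
    if (authorizedCommitters.get(partition).contains(attemptNumber)) {
      authorizedCommitters.remove(partition)
    }
  }

  def main(args: Array[String]): Unit = {
    assert(canCommit(partition = 0, attemptNumber = 0))    // attempt 0 takes the lock
    taskFailedBuggy(partition = 0, taskAttemptId = 12345L) // wrong unit: lock not released
    assert(!canCommit(partition = 0, attemptNumber = 1))   // speculative retry is stuck

    taskFailedFixed(partition = 0, attemptNumber = 0)      // correct unit: lock released
    assert(canCommit(partition = 0, attemptNumber = 1))    // the retry can now commit
  }
}

Running the sketch shows why consistently using the attempt number (as this patch does) lets the lock pass to a retry after the authorized committer fails, while the mismatched comparison leaves every subsequent attempt denied.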
Diffstat (limited to 'pom.xml')
0 files changed, 0 insertions, 0 deletions