author    Reynold Xin <rxin@apache.org>  2013-12-16 14:16:02 -0800
committer Reynold Xin <rxin@apache.org>  2013-12-16 14:16:02 -0800
commit  883e034aebe61a25631497b4e299a8f2e3389b00 (patch)
tree    d612829fb3ee15f3ba75700bc9cd730e5e6c01da /python
parent  a51f3404ad8711f5fe66381122c5fa1ead09b3da (diff)
parent  558af873340087cad79630ec5c498672c5ea3c4f (diff)
Merge pull request #245 from gregakespret/task-maxfailures-fix
Fix for `spark.task.maxFailures` not being enforced correctly.

The docs at http://spark.incubator.apache.org/docs/latest/configuration.html say:

```
spark.task.maxFailures
Number of individual task failures before giving up on the job.
Should be greater than or equal to 1. Number of allowed retries = this value - 1.
```

The previous implementation did not match this. For example, when `spark.task.maxFailures` was set to 1, the job was aborted only after the second task failure, not after the first one.
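The off-by-one described above can be sketched as follows. This is a hypothetical Python illustration (the function names are invented for this sketch; Spark's actual scheduler is written in Scala), showing how a `>` comparison tolerates one extra failure while `>=` enforces the documented "allowed retries = maxFailures - 1" semantics:

```python
def should_abort_buggy(num_failures: int, max_failures: int) -> bool:
    # Previous behavior: aborts only once the failure count EXCEEDS
    # the limit, so maxFailures = 1 tolerates one failure before aborting.
    return num_failures > max_failures


def should_abort_fixed(num_failures: int, max_failures: int) -> bool:
    # Documented behavior: allowed retries = max_failures - 1, so the
    # job aborts as soon as the failure count reaches the limit.
    return num_failures >= max_failures


if __name__ == "__main__":
    # With spark.task.maxFailures = 1, the first failure should abort the job.
    print(should_abort_buggy(1, 1))   # buggy: still running after one failure
    print(should_abort_fixed(1, 1))   # fixed: aborts on the first failure
```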
Diffstat (limited to 'python')
0 files changed, 0 insertions, 0 deletions