path: root/python/pyspark/ml/regression.py
author    Shixiong Zhu <shixiong@databricks.com>  2016-02-19 23:00:08 -0800
committer Davies Liu <davies.liu@gmail.com>       2016-02-19 23:00:08 -0800
commit    dfb2ae2f141960c10200a870ed21583e6af5c536 (patch)
tree      94320d96469ed85f67e7cfac253f90c4be1e6de6 /python/pyspark/ml/regression.py
parent    6624a588c1b3b6c05fb39285bc6215102dd109c6 (diff)
[SPARK-13408] [CORE] Ignore errors when it's already reported in JobWaiter
## What changes were proposed in this pull request?

`JobWaiter.taskSucceeded` is called once for each task. When `resultHandler` throws an exception, `taskSucceeded` rethrows it for each task, and DAGScheduler just catches it and reports it like this:

```Scala
try {
  job.listener.taskSucceeded(rt.outputId, event.result)
} catch {
  case e: Exception =>
    // TODO: Perhaps we want to mark the resultStage as failed?
    job.listener.jobFailed(new SparkDriverExecutionException(e))
}
```

Therefore `JobWaiter.jobFailed` may be called multiple times. So `JobWaiter.jobFailed` should use `Promise.tryFailure` instead of `Promise.failure`, because the latter does not support being called multiple times.

## How was this patch tested?

Jenkins tests.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #11280 from zsxwing/SPARK-13408.
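The `Promise` semantics the fix relies on can be sketched in isolation. This is not the actual `JobWaiter` code, just a minimal illustration of why `tryFailure` tolerates repeated calls while `failure` does not: a Scala `Promise` can be completed only once, `failure` throws `IllegalStateException` on a second completion attempt, whereas `tryFailure` simply returns `false`.

```scala
import scala.concurrent.Promise
import scala.util.Failure

object TryFailureSketch {
  def main(args: Array[String]): Unit = {
    // Stand-in for the promise JobWaiter completes when the job finishes.
    val jobPromise = Promise[Unit]()

    // First reported failure completes the promise and returns true.
    val first = jobPromise.tryFailure(new RuntimeException("task 0 failed"))
    assert(first)

    // A second failure (e.g. from another task hitting the same
    // resultHandler exception) is ignored: tryFailure returns false.
    // Promise.failure would instead throw IllegalStateException here.
    val second = jobPromise.tryFailure(new RuntimeException("task 1 failed"))
    assert(!second)

    // The promise keeps the first failure it saw.
    jobPromise.future.value match {
      case Some(Failure(e)) => assert(e.getMessage == "task 0 failed")
      case other            => sys.error(s"unexpected state: $other")
    }
    println("ok")
  }
}
```

Because each task's `taskSucceeded` can independently trigger `jobFailed`, only the first failure should complete the promise; the `tryFailure` return value lets later calls be dropped silently.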
Diffstat (limited to 'python/pyspark/ml/regression.py')
0 files changed, 0 insertions, 0 deletions