author     Josh Rosen <joshrosen@databricks.com>    2016-08-09 11:21:45 -0700
committer  Josh Rosen <joshrosen@databricks.com>    2016-08-09 11:21:45 -0700
commit     b89b3a5c8e391fcaebe7ef3c77ef16bb9431d6ab (patch)
tree       241fcd62a3279efad49d2631a475ae0729f4020d /docs
parent     92da22878bac07545cd946911dcb39a6bb2ee7e8 (diff)
[SPARK-16956] Make ApplicationState.MAX_NUM_RETRY configurable
## What changes were proposed in this pull request?

This patch introduces a new configuration, `spark.deploy.maxExecutorRetries`, to let users configure an obscure behavior in the standalone master where the master will kill Spark applications which have experienced too many back-to-back executor failures. The current setting is a hardcoded constant (10); this patch replaces that with a new cluster-wide configuration.

**Background:** This application-killing was added in 6b5980da796e0204a7735a31fb454f312bc9daac (from September 2012), and I believe that it was designed to prevent a faulty application whose executors could never launch from DOS'ing the Spark cluster via an infinite series of executor launch attempts. In a subsequent patch (#1360), this feature was refined so that applications which have running executors are not killed by this code path.

**Motivation for making this configurable:** Previously, if a Spark Standalone application experienced more than `ApplicationState.MAX_NUM_RETRY` executor failures in a row and was left with no executors running, the Spark master would kill that application. This behavior is problematic in environments where the Spark executors run on unstable infrastructure and can all die simultaneously. For instance, if your Spark driver runs on an on-demand EC2 instance while all workers run on ephemeral spot instances, it is possible for all executors to die at the same time while the driver stays alive. In this case, it may be desirable to keep the Spark application alive so that it can recover once new workers and executors are available. In order to accommodate this use case, this patch modifies the Master to never kill faulty applications if `spark.deploy.maxExecutorRetries` is negative.

I'd like to merge this patch into master, branch-2.0, and branch-1.6.

## How was this patch tested?

I tested this manually using `spark-shell` and `local-cluster` mode. This is a tricky feature to unit test, and historically this code has not changed very often, so I'd prefer to skip the additional effort of adding a testing framework and would rather rely on manual tests and review for now.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #14544 from JoshRosen/add-setting-for-max-executor-failures.
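To illustrate the behavior described above, here is a minimal, hypothetical Scala sketch of the retry-limit check; the configuration key and the "negative disables removal" rule come from the patch description, but the helper names (`shouldRemoveApplication`, `consecutiveFailures`, `hasRunningExecutors`) and the plain `Map`-based conf lookup are assumptions for illustration, not the actual `Master.scala` code.

```scala
// Hypothetical sketch of the retry-limit rule described in this patch.
// Names and the Map-based configuration lookup are illustrative only.
object RetryPolicySketch {

  /** Read the new setting, falling back to the historical hard-coded limit of 10. */
  def maxExecutorRetries(conf: Map[String, String]): Int =
    conf.getOrElse("spark.deploy.maxExecutorRetries", "10").toInt

  /**
   * Decide whether the standalone master should remove (fail) an application.
   *
   * @param consecutiveFailures back-to-back executor failures with no successful
   *                            executor start in between
   * @param hasRunningExecutors whether the application currently has any live executors
   */
  def shouldRemoveApplication(conf: Map[String, String],
                              consecutiveFailures: Int,
                              hasRunningExecutors: Boolean): Boolean = {
    val limit = maxExecutorRetries(conf)
    // A negative limit disables automatic removal entirely (the new behavior),
    // and an application that still has running executors is never removed.
    limit >= 0 && !hasRunningExecutors && consecutiveFailures > limit
  }
}
```

Since the docs hunk below lists this key under the properties read from `SPARK_MASTER_OPTS`, an operator who wants applications to survive a cluster-wide loss of executors would set `spark.deploy.maxExecutorRetries=-1` in the master's configuration.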
Diffstat (limited to 'docs')
-rw-r--r--  docs/spark-standalone.md  15
1 file changed, 15 insertions, 0 deletions
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index c864c90308..5ae63fe4e6 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -196,6 +196,21 @@ SPARK_MASTER_OPTS supports the following system properties:
</td>
</tr>
<tr>
+ <td><code>spark.deploy.maxExecutorRetries</code></td>
+ <td>10</td>
+ <td>
+ Limit on the maximum number of back-to-back executor failures that can occur before the
+ standalone cluster manager removes a faulty application. An application will never be removed
+ if it has any running executors. If an application experiences more than
+ <code>spark.deploy.maxExecutorRetries</code> failures in a row, no executors
+ successfully start running in between those failures, and the application has no running
+ executors, then the standalone cluster manager will remove the application and mark it as failed.
+ To disable this automatic removal, set <code>spark.deploy.maxExecutorRetries</code> to
+ <code>-1</code>.
+ <br/>
+ </td>
+</tr>
+<tr>
<td><code>spark.worker.timeout</code></td>
<td>60</td>
<td>