author     Andrew Or <andrew@databricks.com>  2014-12-10 12:41:36 -0800
committer  Andrew Or <andrew@databricks.com>  2014-12-10 12:43:20 -0800
commit     1da1937531f2e8ab37074ba6ef1a6f54c49c8ad1 (patch)
tree       50cc1c21532f2eda3e30ba56c2b99d40e983f4ec /docs
parent     d70c7298d9db1942ceae99bdc19fffa643f2490c (diff)
[SPARK-4771][Docs] Document standalone cluster supervise mode
tdas looks like streaming already refers to the supervise mode. The link from there is broken though.

Author: Andrew Or <andrew@databricks.com>

Closes #3627 from andrewor14/document-supervise and squashes the following commits:

9ca0908 [Andrew Or] Wording changes
2b55ed2 [Andrew Or] Document standalone cluster supervise mode
Diffstat (limited to 'docs')
-rw-r--r--  docs/spark-standalone.md  11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index ae7b81d5bb..5c6084fb46 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -257,7 +257,7 @@ To run an interactive Spark shell against the cluster, run the following command
You can also pass an option `--total-executor-cores <numCores>` to control the number of cores that spark-shell uses on the cluster.
-# Launching Compiled Spark Applications
+# Launching Spark Applications
The [`spark-submit` script](submitting-applications.html) provides the most straightforward way to
submit a compiled Spark application to the cluster. For standalone clusters, Spark currently
@@ -272,6 +272,15 @@ should specify them through the `--jars` flag using comma as a delimiter (e.g. `
To control the application's configuration or execution environment, see
[Spark Configuration](configuration.html).
+Additionally, standalone `cluster` mode supports restarting your application automatically if it
+exited with a non-zero exit code. To use this feature, you may pass in the `--supervise` flag to
+`spark-submit` when launching your application. Then, if you wish to kill an application that is
+failing repeatedly, you may do so through:
+
+ ./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
+
+You can find the driver ID through the standalone Master web UI at `http://<master url>:8080`.
+
# Resource Scheduling
The standalone cluster mode currently only supports a simple FIFO scheduler across applications.
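Taken together, the additions above describe a full supervise workflow. As a concrete sketch (the master hostname, example class, jar path, and driver ID here are hypothetical placeholders), launching a supervised application and later killing it might look like:

    # Submit in standalone cluster mode; --supervise restarts the driver
    # automatically if it exits with a non-zero code
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master spark://master.example.com:7077 \
      --deploy-mode cluster \
      --supervise \
      /path/to/examples.jar 1000

    # If the application keeps failing, look up its driver ID on the Master
    # web UI (http://master.example.com:8080) and kill it explicitly
    ./bin/spark-class org.apache.spark.deploy.Client kill \
      spark://master.example.com:7077 driver-20141210123456-0000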