author    Andrew Or <andrew@databricks.com>  2014-12-10 12:41:36 -0800
committer Andrew Or <andrew@databricks.com>  2014-12-10 12:41:36 -0800
commit    56212831c6436e287a19908e82c26117cbcb16b0 (patch)
tree      e204d60f97cf1c1f58d1ab9ce1e2d6ece7a574c4 /docs/spark-standalone.md
parent    0fc637b4c27f9afdf5c829d26c7a86efd8681490 (diff)
[SPARK-4771][Docs] Document standalone cluster supervise mode
tdas looks like streaming already refers to the supervise mode. The link from there is broken though.

Author: Andrew Or <andrew@databricks.com>

Closes #3627 from andrewor14/document-supervise and squashes the following commits:

9ca0908 [Andrew Or] Wording changes
2b55ed2 [Andrew Or] Document standalone cluster supervise mode
Diffstat (limited to 'docs/spark-standalone.md')
-rw-r--r--  docs/spark-standalone.md  11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index ae7b81d5bb..5c6084fb46 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -257,7 +257,7 @@ To run an interactive Spark shell against the cluster, run the following command
You can also pass an option `--total-executor-cores <numCores>` to control the number of cores that spark-shell uses on the cluster.
-# Launching Compiled Spark Applications
+# Launching Spark Applications
The [`spark-submit` script](submitting-applications.html) provides the most straightforward way to
submit a compiled Spark application to the cluster. For standalone clusters, Spark currently
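
For illustration, a minimal sketch of both commands above against a standalone master; the core count, jar path, and main class `com.example.MyApp` are placeholder values, not part of this commit:

    # Interactive shell, capping the total cores the shell uses on the cluster
    ./bin/spark-shell --master spark://<master url>:7077 --total-executor-cores 4

    # Submitting a compiled application in cluster mode
    ./bin/spark-submit \
      --class com.example.MyApp \
      --master spark://<master url>:7077 \
      --deploy-mode cluster \
      /path/to/my-app.jar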
@@ -272,6 +272,15 @@ should specify them through the `--jars` flag using comma as a delimiter (e.g. `
To control the application's configuration or execution environment, see
[Spark Configuration](configuration.html).
+Additionally, standalone `cluster` mode supports restarting your application automatically if it
+exited with a non-zero exit code. To use this feature, you may pass in the `--supervise` flag to
+`spark-submit` when launching your application. Then, if you wish to kill an application that is
+failing repeatedly, you may do so through:
+
+    ./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
+
+You can find the driver ID through the standalone Master web UI at `http://<master url>:8080`.
+
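
Putting the newly documented pieces together, a submission that opts into automatic driver restarts might look like the following sketch (the jar path and main class are placeholders):

    ./bin/spark-submit \
      --master spark://<master url>:7077 \
      --deploy-mode cluster \
      --supervise \
      --class com.example.MyApp \
      /path/to/my-app.jar

If the application then fails repeatedly, the kill command shown above takes the master URL together with the driver ID found in the Master web UI.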
# Resource Scheduling
The standalone cluster mode currently only supports a simple FIFO scheduler across applications.
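
Under FIFO scheduling, an application that should not monopolize the cluster can cap the cores it claims; a minimal sketch using the `spark.cores.max` property (the value `10` is arbitrary):

    ./bin/spark-submit \
      --master spark://<master url>:7077 \
      --conf spark.cores.max=10 \
      --class com.example.MyApp \
      /path/to/my-app.jar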