Diffstat (limited to 'docs'):
 docs/spark-standalone.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index ae7b81d5bb..5c6084fb46 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -257,7 +257,7 @@ To run an interactive Spark shell against the cluster, run the following command
You can also pass an option `--total-executor-cores <numCores>` to control the number of cores that spark-shell uses on the cluster.
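For example, an illustrative invocation (the master URL and the core count of 4 are placeholders,
not part of this change):

    ./bin/spark-shell --master spark://<master url>:7077 --total-executor-cores 4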
-# Launching Compiled Spark Applications
+# Launching Spark Applications
The [`spark-submit` script](submitting-applications.html) provides the most straightforward way to
submit a compiled Spark application to the cluster. For standalone clusters, Spark currently
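As a sketch of such a submission (the class name, application jar, and master URL below are
illustrative placeholders):

    ./bin/spark-submit --class org.example.MyApp --master spark://<master url>:7077 \
      --deploy-mode cluster /path/to/my-app.jar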
@@ -272,6 +272,15 @@ should specify them through the `--jars` flag using comma as a delimiter (e.g. `
To control the application's configuration or execution environment, see
[Spark Configuration](configuration.html).
+Additionally, standalone `cluster` mode supports restarting your application automatically if it
+exits with a non-zero exit code. To use this feature, pass the `--supervise` flag to
+`spark-submit` when launching your application. Then, if you wish to kill an application that is
+failing repeatedly, you may do so through:
+
+ ./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
+
+You can find the driver ID through the standalone Master web UI at `http://<master url>:8080`.
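+
+For example, a supervised launch might look like this (the class name, jar path, and master URL
+are illustrative placeholders):
+
+    ./bin/spark-submit --class org.example.MyApp --master spark://<master url>:7077 \
+      --deploy-mode cluster --supervise /path/to/my-app.jar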
+
# Resource Scheduling
The standalone cluster mode currently only supports a simple FIFO scheduler across applications.
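
Under this scheme you can bound how much of the cluster a single application takes; a minimal
sketch using the standard `spark.cores.max` property (the cap of 10 cores and the other values
are placeholders):

    ./bin/spark-submit --master spark://<master url>:7077 \
      --conf spark.cores.max=10 --class org.example.MyApp /path/to/my-app.jar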