path: root/docs/spark-standalone.md
author    andrewor14 <andrewor14@gmail.com>    2014-09-19 16:02:38 -0700
committer Andrew Or <andrewor14@gmail.com>    2014-09-19 16:02:38 -0700
commit    8af2370619a8a6bb1af7df43b8329ab319348ad8 (patch)
tree      3492f953754715ec4f5ddce0d19d19d57b17c5cd /docs/spark-standalone.md
parent    99b06b6fd2d79403ef4307ac6f3fa84176e7a622 (diff)
[Docs] Fix outdated docs for standalone cluster
This is now supported!

Author: andrewor14 <andrewor14@gmail.com>
Author: Andrew Or <andrewor14@gmail.com>

Closes #2461 from andrewor14/document-standalone-cluster and squashes the following commits:

85c8b9e [andrewor14] Wording change per Patrick
35e30ee [Andrew Or] Fix outdated docs for standalone cluster
Diffstat (limited to 'docs/spark-standalone.md')
 docs/spark-standalone.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 99a8e43a6b..29b5491861 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -248,8 +248,10 @@ You can also pass an option `--cores <numCores>` to control the number of cores
The [`spark-submit` script](submitting-applications.html) provides the most straightforward way to
submit a compiled Spark application to the cluster. For standalone clusters, Spark currently
-only supports deploying the driver inside the client process that is submitting the application
-(`client` deploy mode).
+supports two deploy modes. In `client` mode, the driver is launched in the same process as the
+client that submits the application. In `cluster` mode, however, the driver is launched from one
+of the Worker processes inside the cluster, and the client process exits as soon as it fulfills
+its responsibility of submitting the application without waiting for the application to finish.
If your application is launched through Spark submit, then the application jar is automatically
distributed to all worker nodes. For any additional jars that your application depends on, you
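
For context, here is a minimal sketch of what the two deploy modes described in this change look like when submitting against a standalone master. The master host, application jar, and main class below are placeholders, not part of this commit; the flags themselves (`--master`, `--deploy-mode`, `--class`) are standard `spark-submit` options.

```bash
# Client mode (the default): the driver runs inside this spark-submit
# process, which stays alive for the lifetime of the application.
./bin/spark-submit \
  --class org.example.MyApp \
  --master spark://<master-host>:7077 \
  --deploy-mode client \
  path/to/my-app.jar

# Cluster mode: the driver is launched on one of the Worker processes in
# the cluster, and this client process exits once submission succeeds.
./bin/spark-submit \
  --class org.example.MyApp \
  --master spark://<master-host>:7077 \
  --deploy-mode cluster \
  path/to/my-app.jar
```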