author    Matei Zaharia <matei@databricks.com>    2014-01-07 14:35:52 -0500
committer Matei Zaharia <matei@databricks.com>    2014-01-07 14:35:52 -0500
commit    d8bcc8e9a095c1b20dd7a17b6535800d39bff80e
tree      f3f5a1368a43b765b541be706921903cc6ac8da0 /docs/job-scheduling.md
parent    15d953450167c4ec45c9d0a2c7ab8ee71be2e576
Add way to limit default # of cores used by applications on standalone mode
Also documents the spark.deploy.spreadOut option.
Diffstat (limited to 'docs/job-scheduling.md')
-rw-r--r--    docs/job-scheduling.md    5
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 5951155fe3..df2faa5e41 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -32,9 +32,8 @@ Resource allocation can be configured as follows, based on the cluster type:
* **Standalone mode:** By default, applications submitted to the standalone mode cluster will run in
FIFO (first-in-first-out) order, and each application will try to use all available nodes. You can limit
- the number of nodes an application uses by setting the `spark.cores.max` configuration property in it. This
- will allow multiple users/applications to run concurrently. For example, you might launch a long-running
- server that uses 10 cores, and allow users to launch shells that use 20 cores each.
+ the number of nodes an application uses by setting the `spark.cores.max` configuration property in it,
+ or change the default for applications that don't set this property via `spark.deploy.defaultCores`.
Finally, in addition to controlling cores, each application's `spark.executor.memory` setting controls
its memory use.
* **Mesos:** To use static partitioning on Mesos, set the `spark.mesos.coarse` configuration property to `true`,
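
For illustration (not part of this commit), here is a minimal Scala sketch of the application-side settings discussed in the standalone-mode hunk above, assuming the `SparkConf` API available around the Spark 0.9 timeframe; the master URL, application name, and values are hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: cap this application's cluster-wide core usage on a
// standalone cluster so other applications can run concurrently.
// The master URL, app name, and numeric values are placeholders.
val conf = new SparkConf()
  .setMaster("spark://master:7077")    // hypothetical standalone master URL
  .setAppName("LongRunningServer")     // hypothetical application name
  .set("spark.cores.max", "10")        // use at most 10 cores across the cluster
  .set("spark.executor.memory", "4g")  // per-executor memory (example value)

val sc = new SparkContext(conf)
```

By contrast, `spark.deploy.defaultCores` is a cluster-side default rather than a per-application setting: it would be configured on the master (for example via `SPARK_MASTER_OPTS` in `spark-env.sh`), and applications that don't set `spark.cores.max` fall back to it.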