author | Matei Zaharia <matei@databricks.com> | 2014-01-07 14:35:52 -0500
---|---|---
committer | Matei Zaharia <matei@databricks.com> | 2014-01-07 14:35:52 -0500
commit | d8bcc8e9a095c1b20dd7a17b6535800d39bff80e |
tree | f3f5a1368a43b765b541be706921903cc6ac8da0 /docs/spark-standalone.md |
parent | 15d953450167c4ec45c9d0a2c7ab8ee71be2e576 |
Add a way to limit the default number of cores used by applications in standalone mode
Also document the `spark.deploy.spreadOut` option.
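For reference, a minimal sketch of how the two options could be set together on the standalone master, assuming the same `SPARK_JAVA_OPTS` mechanism in `conf/spark-env.sh` that the diff below documents; the values `4` and `false` are illustrative, not Spark defaults.

```bash
# Hypothetical conf/spark-env.sh on the standalone master (values are illustrative).
# spark.deploy.defaultCores caps cores for applications that don't set spark.cores.max;
# spark.deploy.spreadOut=false packs applications onto as few nodes as possible
# instead of spreading them across the cluster.
export SPARK_JAVA_OPTS="-Dspark.deploy.defaultCores=4 -Dspark.deploy.spreadOut=false"
```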
Diffstat (limited to 'docs/spark-standalone.md')
-rw-r--r-- | docs/spark-standalone.md | 10 |
1 file changed, 10 insertions, 0 deletions
```diff
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index c851833a18..f47d41f966 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -167,6 +167,16 @@ val conf = new SparkConf()
 val sc = new SparkContext(conf)
 {% endhighlight %}
 
+In addition, you can configure `spark.deploy.defaultCores` on the cluster master process to change the
+default for applications that don't set `spark.cores.max` to something less than infinite.
+Do this by adding the following to `conf/spark-env.sh`:
+
+{% highlight bash %}
+export SPARK_JAVA_OPTS="-Dspark.deploy.defaultCores=<value>"
+{% endhighlight %}
+
+This is useful on shared clusters where users might not have configured a maximum number of cores
+individually.
+
 # Monitoring and Logging
```
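Since `spark.deploy.defaultCores` is read by the master JVM, editing `conf/spark-env.sh` alone would not be enough; the master would need a restart to pick up the new option. A sketch, assuming the standard standalone launcher scripts and a `SPARK_HOME` environment variable:

```bash
# Restart the standalone master so the new SPARK_JAVA_OPTS are applied.
# Paths assume a stock Spark layout; adjust SPARK_HOME for your install.
$SPARK_HOME/sbin/stop-master.sh
$SPARK_HOME/sbin/start-master.sh
```

The new default then applies to applications submitted after the restart that leave `spark.cores.max` unset.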