path: root/docs/job-scheduling.md
author     felixcheung <felixcheung_m@hotmail.com>    2016-01-21 16:30:20 +0100
committer  Sean Owen <sowen@cloudera.com>             2016-01-21 16:30:20 +0100
commit  85200c09adc6eb98fadb8505f55cb44e3d8b3390 (patch)
tree    21321d39a9962c0c7525165773ef64fd98cbe8bf /docs/job-scheduling.md
parent  1b2a918e59addcdccdf8e011bce075cc9dd07b93 (diff)
[SPARK-12534][DOC] update documentation to list command line equivalent to properties
Several Spark properties equivalent to spark-submit command-line options were missing from the documentation.

Author: felixcheung <felixcheung_m@hotmail.com>

Closes #10491 from felixcheung/sparksubmitdoc.
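To illustrate the equivalence this change documents, here is a minimal sketch; the master, `app.jar` target, and resource values are placeholders, not taken from the patch:

```sh
# Resources requested via spark-submit command-line options...
spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-memory 2g \
  --executor-cores 2 \
  app.jar

# ...can equivalently be requested via the corresponding configuration properties.
spark-submit \
  --master yarn \
  --conf spark.executor.instances=4 \
  --conf spark.executor.memory=2g \
  --conf spark.executor.cores=2 \
  app.jar
```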
Diffstat (limited to 'docs/job-scheduling.md')
-rw-r--r--  docs/job-scheduling.md | 5 +++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 6c587b3f0d..95d47794ea 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -39,7 +39,10 @@ Resource allocation can be configured as follows, based on the cluster type:
and optionally set `spark.cores.max` to limit each application's resource share as in the standalone mode.
You should also set `spark.executor.memory` to control the executor memory.
* **YARN:** The `--num-executors` option to the Spark YARN client controls how many executors it will allocate
- on the cluster, while `--executor-memory` and `--executor-cores` control the resources per executor.
+ on the cluster (configuration property `spark.executor.instances`), while `--executor-memory`
+ (configuration property `spark.executor.memory`) and `--executor-cores` (configuration property
+ `spark.executor.cores`) control the resources per executor. For more information, see the
+ [YARN Spark Properties](running-on-yarn.html).
A second option available on Mesos is _dynamic sharing_ of CPU cores. In this mode, each Spark application
still has a fixed and independent memory allocation (set by `spark.executor.memory`), but when the
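As a companion sketch for the standalone and Mesos modes mentioned in the surrounding context (not part of this patch; the master URL and values are illustrative), the per-application limits can likewise be set as configuration properties:

```sh
# Standalone or Mesos coarse-grained mode: cap this application's total core
# share across the cluster and fix its per-executor memory.
spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=8 \
  --conf spark.executor.memory=2g \
  app.jar
```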