| author | Sandy Ryza <sandy@cloudera.com> | 2014-03-13 12:11:33 -0700 |
| --- | --- | --- |
| committer | Patrick Wendell <pwendell@gmail.com> | 2014-03-13 12:11:33 -0700 |
| commit | 698373211ef3cdf841c82d48168cd5dbe00a57b4 (patch) | |
| tree | a07edbe4835a7b01aa48cf9bd35c0d6939d21d78 /docs/job-scheduling.md | |
| parent | e4e8d8f395aea48f0cae00d7c381a863c48a2837 (diff) | |
SPARK-1183. Don't use "worker" to mean executor
Author: Sandy Ryza <sandy@cloudera.com>
Closes #120 from sryza/sandy-spark-1183 and squashes the following commits:
5066a4a [Sandy Ryza] Remove "worker" in a couple comments
0bd1e46 [Sandy Ryza] Remove --am-class from usage
bfc8fe0 [Sandy Ryza] Remove am-class from doc and fix yarn-alpha
607539f [Sandy Ryza] Address review comments
74d087a [Sandy Ryza] SPARK-1183. Don't use "worker" to mean executor
Diffstat (limited to 'docs/job-scheduling.md')
-rw-r--r-- | docs/job-scheduling.md | 4 |
1 file changed, 2 insertions, 2 deletions
```diff
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index df2faa5e41..94604f301d 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -39,8 +39,8 @@ Resource allocation can be configured as follows, based on the cluster type:
 * **Mesos:** To use static partitioning on Mesos, set the `spark.mesos.coarse` configuration property to `true`,
   and optionally set `spark.cores.max` to limit each application's resource share as in the standalone mode.
   You should also set `spark.executor.memory` to control the executor memory.
-* **YARN:** The `--num-workers` option to the Spark YARN client controls how many workers it will allocate
-  on the cluster, while `--worker-memory` and `--worker-cores` control the resources per worker.
+* **YARN:** The `--num-executors` option to the Spark YARN client controls how many executors it will allocate
+  on the cluster, while `--executor-memory` and `--executor-cores` control the resources per executor.
 
 A second option available on Mesos is _dynamic sharing_ of CPU cores. In this mode, each Spark application
 still has a fixed and independent memory allocation (set by `spark.executor.memory`), but when the
```
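For context, the sketch below is a minimal, hypothetical illustration (not part of this commit) of how the static-partitioning properties named in the doc text above might be set programmatically through `SparkConf` on standalone or coarse-grained Mesos deployments; on YARN, the equivalent resources are requested with the renamed `--num-executors`, `--executor-memory`, and `--executor-cores` flags. The application name and values are placeholders.

```scala
// Minimal sketch, assuming a standalone or coarse-grained Mesos deployment.
// Property names come from the doc text above; the values are made up.
import org.apache.spark.{SparkConf, SparkContext}

object StaticAllocationSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("static-allocation-sketch")   // hypothetical app name
      .set("spark.executor.memory", "2g")       // memory per executor
      .set("spark.cores.max", "8")              // cap this app's total core usage
      .set("spark.mesos.coarse", "true")        // static partitioning when on Mesos

    val sc = new SparkContext(conf)
    try {
      // Jobs submitted here run against the statically allocated resources.
      println(sc.parallelize(1 to 100).sum())
    } finally {
      sc.stop()
    }
  }
}
```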