author    Michael Gummelt <mgummelt@mesosphere.io>    2016-02-10 10:53:33 -0800
committer Andrew Or <andrew@databricks.com>    2016-02-10 10:53:33 -0800
commit    80cb963ad963e26c3a7f8388bdd4ffd5e99aad1a (patch)
tree      ebd19083ee53f66618a81280c6b6667db129b179 /docs
parent    c0b71e0b8f3c068f2f092bb118a16611b3d38d7a (diff)
[SPARK-5095][MESOS] Support launching multiple Mesos executors in coarse-grained Mesos mode.
This is the next iteration of tnachen's previous PR: https://github.com/apache/spark/pull/4027. In that PR, we resolved with andrewor14 and pwendell to implement the Mesos scheduler's support of `spark.executor.cores` consistently with YARN and Standalone. This PR implements that resolution.

This PR implements two high-level features. These two features are co-dependent, so they are both implemented here:
- Mesos support for spark.executor.cores
- Multiple executors per slave

We at Mesosphere have been working with Typesafe on a Spark/Mesos integration test suite (https://github.com/typesafehub/mesos-spark-integration-tests), which passes for this PR.

The contribution is my original work and I license the work to the project under the project's open source license.

Author: Michael Gummelt <mgummelt@mesosphere.io>

Closes #10993 from mgummelt/executor_sizing.
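The multiple-executors-per-slave behavior can be sketched as follows. This is an illustrative model only, not the actual `MesosCoarseGrainedSchedulerBackend` code; the function name and parameters are hypothetical. It assumes each Mesos resource offer is divided by `spark.executor.cores`, capped by whatever core budget remains under `spark.cores.max`:

```python
# Illustrative sketch (NOT Spark's actual scheduler code): with
# spark.executor.cores set, a single Mesos offer can host several executors.
def executors_for_offer(offer_cores, executor_cores, remaining_core_budget):
    """Number of executors that fit in one slave's resource offer.

    offer_cores:           cores in the Mesos resource offer
    executor_cores:        value of spark.executor.cores
    remaining_core_budget: cores still available under spark.cores.max
    """
    usable = min(offer_cores, remaining_core_budget)
    return usable // executor_cores

# A 16-core offer with spark.executor.cores=4 and 32 cores still available
# under spark.cores.max fits 4 executors on that slave.
print(executors_for_offer(16, 4, 32))  # 4
```

Under the old behavior (no `spark.executor.cores` support), the same offer would instead produce a single executor consuming all usable cores.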
Diffstat (limited to 'docs')
-rw-r--r--  docs/configuration.md    | 15
-rw-r--r--  docs/running-on-mesos.md |  8
2 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index cd9dc1bcfc..b07c69cd4c 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -825,13 +825,18 @@ Apart from these, the following properties are also available, and may be useful
</tr>
<tr>
<td><code>spark.executor.cores</code></td>
- <td>1 in YARN mode, all the available cores on the worker in standalone mode.</td>
<td>
- The number of cores to use on each executor. For YARN and standalone mode only.
+ 1 in YARN mode, all the available cores on the worker in
+ standalone and Mesos coarse-grained modes.
+ </td>
+ <td>
+ The number of cores to use on each executor.
- In standalone mode, setting this parameter allows an application to run multiple executors on
- the same worker, provided that there are enough cores on that worker. Otherwise, only one
- executor per application will run on each worker.
+ In standalone and Mesos coarse-grained modes, setting this
+ parameter allows an application to run multiple executors on the
+ same worker, provided that there are enough cores on that
+ worker. Otherwise, only one executor per application will run on
+ each worker.
</td>
</tr>
<tr>
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index e1c87a8d95..0df476d9b4 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -277,9 +277,11 @@ See the [configuration page](configuration.html) for information on Spark config
<td><code>spark.mesos.extra.cores</code></td>
<td><code>0</code></td>
<td>
- Set the extra amount of cpus to request per task. This setting is only used for Mesos coarse grain mode.
- The total amount of cores requested per task is the number of cores in the offer plus the extra cores configured.
- Note that total amount of cores the executor will request in total will not exceed the <code>spark.cores.max</code> setting.
+ Set the extra number of cores for an executor to advertise. This
+ does not result in more cores allocated. It instead means that an
+ executor will "pretend" it has more cores, so that the driver will
+ send it more tasks. Use this to increase parallelism. This
+ setting is only used for Mesos coarse-grained mode.
</td>
</tr>
<tr>
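The `spark.mesos.extra.cores` semantics described in the rewritten docs above can be sketched as follows. This is an illustrative model, not Spark's actual code; the function name and parameters are hypothetical. The key point is that the setting only inflates the core count the executor advertises to the driver, so the driver schedules more concurrent tasks on it; no additional cores are allocated from Mesos:

```python
# Illustrative sketch (NOT Spark's actual code): spark.mesos.extra.cores
# inflates the core count an executor *reports* to the driver for task
# scheduling. The executor does not actually receive more cores.
def advertised_cores(allocated_cores, extra_cores=0):
    """Cores the executor advertises to the driver.

    allocated_cores: cores actually allocated from the Mesos offer
    extra_cores:     value of spark.mesos.extra.cores
    """
    return allocated_cores + extra_cores

# With 4 allocated cores and spark.mesos.extra.cores=2, the driver will
# schedule up to 6 concurrent tasks (at spark.task.cpus=1) on the executor.
print(advertised_cores(4, 2))  # 6
```

This is a way to trade per-task CPU headroom for parallelism on workloads that are not CPU-bound, and it applies only in Mesos coarse-grained mode.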