author    Michael Gummelt <mgummelt@mesosphere.io>  2016-02-10 10:53:33 -0800
committer Andrew Or <andrew@databricks.com>  2016-02-10 10:53:33 -0800
commit    80cb963ad963e26c3a7f8388bdd4ffd5e99aad1a (patch)
tree      ebd19083ee53f66618a81280c6b6667db129b179 /docs/running-on-mesos.md
parent    c0b71e0b8f3c068f2f092bb118a16611b3d38d7a (diff)
[SPARK-5095][MESOS] Support launching multiple Mesos executors in coarse-grained Mesos mode.
This is the next iteration of tnachen's previous PR: https://github.com/apache/spark/pull/4027

In that PR, we agreed with andrewor14 and pwendell that the Mesos scheduler's support for `spark.executor.cores` should be consistent with YARN and Standalone. This PR implements that resolution.

This PR implements two high-level features. The two features are co-dependent, so they're both implemented here:

- Mesos support for `spark.executor.cores`
- Multiple executors per slave

We at Mesosphere have been working with Typesafe on a Spark/Mesos integration test suite, https://github.com/typesafehub/mesos-spark-integration-tests, which passes for this PR.

The contribution is my original work and I license the work to the project under the project's open source license.

Author: Michael Gummelt <mgummelt@mesosphere.io>

Closes #10993 from mgummelt/executor_sizing.
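For illustration, here is a minimal sketch of a spark-submit invocation that exercises the new behavior; the master URL, jar path, and resource values are hypothetical, not taken from this commit:

```bash
# Hypothetical values: cap the job at 8 cores total, 2 cores per executor.
# Before this change, coarse-grained mode launched at most one executor per
# slave, sized to the whole offer; with spark.executor.cores set, the
# scheduler can launch up to four 2-core executors, possibly several on the
# same slave.
./bin/spark-submit \
  --master mesos://zk://master.mesos:2181/mesos \
  --conf spark.cores.max=8 \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=2g \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples.jar 100
```

This mirrors how `spark.executor.cores` already behaves on YARN and Standalone, which is the consistency goal the PR describes.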
Diffstat (limited to 'docs/running-on-mesos.md')
-rw-r--r--  docs/running-on-mesos.md | 8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index e1c87a8d95..0df476d9b4 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -277,9 +277,11 @@ See the [configuration page](configuration.html) for information on Spark config
<td><code>spark.mesos.extra.cores</code></td>
<td><code>0</code></td>
<td>
- Set the extra amount of cpus to request per task. This setting is only used for Mesos coarse grain mode.
- The total amount of cores requested per task is the number of cores in the offer plus the extra cores configured.
- Note that total amount of cores the executor will request in total will not exceed the <code>spark.cores.max</code> setting.
+ Set the extra number of cores for an executor to advertise. This
+ does not result in more cores allocated. It instead means that an
+ executor will "pretend" it has more cores, so that the driver will
+ send it more tasks. Use this to increase parallelism. This
+ setting is only used for Mesos coarse-grained mode.
</td>
</tr>
<tr>
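The rewritten description of `spark.mesos.extra.cores` can be made concrete with a hedged sketch (the master URL, jar path, and values are hypothetical): an executor allocated 2 real cores but configured with 2 extra cores advertises 4 cores to the driver, so the driver schedules up to 4 concurrent tasks on it, while Mesos still reserves only the 2 real cores.

```bash
# Hypothetical sketch: the executor gets 2 real cores but advertises
# 2 extra ones, so the driver sends it up to 4 concurrent tasks.
# No additional CPU is reserved from Mesos; this only raises parallelism.
./bin/spark-submit \
  --master mesos://master.mesos:5050 \
  --conf spark.executor.cores=2 \
  --conf spark.mesos.extra.cores=2 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples.jar 100
```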