diff options
author | Timothy Chen <tnachen@gmail.com> | 2015-08-11 23:33:22 -0700 |
---|---|---|
committer | Andrew Or <andrew@databricks.com> | 2015-08-11 23:33:22 -0700 |
commit | 741a29f98945538a475579ccc974cd42c1613be4 (patch) | |
tree | 122699fda085dfef9f8b41edf444372129a25ea4 /docs | |
parent | 5c99d8bf98cbf7f568345d02a814fc318cbfca75 (diff) | |
download | spark-741a29f98945538a475579ccc974cd42c1613be4.tar.gz spark-741a29f98945538a475579ccc974cd42c1613be4.tar.bz2 spark-741a29f98945538a475579ccc974cd42c1613be4.zip |
[SPARK-9575] [MESOS] Add documentation around Mesos shuffle service.
andrewor14
Author: Timothy Chen <tnachen@gmail.com>
Closes #7907 from tnachen/mesos_shuffle.
Diffstat (limited to 'docs')
-rw-r--r-- | docs/running-on-mesos.md | 14 |
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 55e6d4e83a..cfd219ab02 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -216,6 +216,20 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).

 In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.

+# Dynamic Resource Allocation with Mesos
+
+Mesos supports dynamic allocation only in coarse-grained mode, which can resize the number of
+executors based on statistics of the application. While dynamic allocation supports both scaling
+up and scaling down the number of executors, the coarse-grained scheduler only supports scaling
+down, since it is already designed to run one executor per slave with the configured amount of
+resources. However, after scaling down the number of executors, the coarse-grained scheduler can
+scale back up to the same number of executors when Spark signals that more executors are needed.
+
+Users who wish to use this feature should launch the Mesos Shuffle Service, which provides shuffle
+data cleanup functionality on top of the Shuffle Service, since Mesos doesn't yet support
+notifying frameworks of another framework's termination. To launch or stop the Mesos Shuffle
+Service, use the provided sbin/start-mesos-shuffle-service.sh and
+sbin/stop-mesos-shuffle-service.sh scripts.
+
+The Shuffle Service is expected to be running on each slave node that will run Spark executors.
+One easy way to achieve this with Mesos is to launch the Shuffle Service with Marathon using a
+unique host constraint.
+
 # Configuration
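The added documentation describes enabling dynamic allocation together with the external shuffle service. As an illustrative sketch (not part of this commit), a Spark application would typically opt in via configuration properties such as the following in `conf/spark-defaults.conf`; `spark.dynamicAllocation.enabled`, `spark.shuffle.service.enabled`, and `spark.mesos.coarse` are real Spark properties, while the scaling bounds shown are arbitrary example values:

```
# Run on Mesos in coarse-grained mode (required for dynamic allocation on Mesos)
spark.mesos.coarse                true

# Enable dynamic executor allocation
spark.dynamicAllocation.enabled   true

# Executors must fetch shuffle data from the external shuffle service
# so that executors can be removed without losing shuffle output
spark.shuffle.service.enabled     true

# Example scaling bounds (illustrative values, tune per workload)
spark.dynamicAllocation.minExecutors  1
spark.dynamicAllocation.maxExecutors  10
```

With these settings, idle executors can be released back to Mesos, and the Mesos Shuffle Service started via `sbin/start-mesos-shuffle-service.sh` continues serving (and eventually cleaning up) their shuffle files.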
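The docs suggest deploying the shuffle service on every slave via Marathon with a unique host constraint. A minimal sketch of such a Marathon application definition follows; the app `id`, the `/opt/spark` install path, and the instance count are assumptions for illustration, while `[["hostname", "UNIQUE"]]` is Marathon's standard constraint syntax for one instance per host:

```
{
  "id": "spark-mesos-shuffle-service",
  "cmd": "/opt/spark/sbin/start-mesos-shuffle-service.sh",
  "cpus": 0.5,
  "mem": 1024,
  "instances": 5,
  "constraints": [["hostname", "UNIQUE"]]
}
```

Setting `instances` to the number of slaves expected to run Spark executors, combined with the `UNIQUE` hostname constraint, ensures at most one shuffle service per node.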