From d9956f86ad7a937c5f2cfe39eacdcbdad9356c30 Mon Sep 17 00:00:00 2001
From: Timothy Chen
Date: Thu, 18 Dec 2014 12:15:53 -0800
Subject: Add mesos specific configurations into doc

Author: Timothy Chen

Closes #3349 from tnachen/mesos_doc and squashes the following commits:

737ef49 [Timothy Chen] Add TOC
5ca546a [Timothy Chen] Update description around cores requested.
26283a5 [Timothy Chen] Add mesos specific configurations into doc
---
 docs/running-on-mesos.md | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 1073abb202..78358499fd 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -2,6 +2,8 @@
 layout: global
 title: Running Spark on Mesos
 ---
+* This will become a table of contents (this text will be scraped).
+{:toc}
 
 Spark can run on hardware clusters managed by
 [Apache Mesos](http://mesos.apache.org/).
@@ -183,6 +185,49 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
 
 In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled
 through Mesos.
 
+# Configuration
+
+See the [configuration page](configuration.html) for information on Spark configurations. The following configs are specific to Spark on Mesos.
+
+#### Spark Properties
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td><code>spark.mesos.coarse</code></td>
+  <td>false</td>
+  <td>
+    Set the run mode for Spark on Mesos. For more information about the run modes, refer to the Mesos Run Modes section above.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.extra.cores</code></td>
+  <td>0</td>
+  <td>
+    Set the extra number of CPU cores to request per task. This setting is only used in Mesos coarse-grained mode.
+    The total number of cores requested per task is the number of cores in the offer plus the extra cores configured.
+    Note that the total number of cores the executor requests will not exceed the <code>spark.cores.max</code> setting.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.executor.home</code></td>
+  <td>SPARK_HOME</td>
+  <td>
+    The location where the Mesos executor looks for the Spark binaries to execute, defaulting to the <code>SPARK_HOME</code> setting.
+    This setting is only used when no <code>spark.executor.uri</code> is provided, and it assumes Spark is installed
+    at the specified location on each slave.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.executor.memoryOverhead</code></td>
+  <td>384</td>
+  <td>
+    The amount of memory, in MB, that the Mesos executor requests for a task, to account for the overhead of running the executor itself.
+    The final amount of memory allocated is the maximum of the executor memory plus <code>memoryOverhead</code>, and the executor memory multiplied by the overhead fraction (1.07).
+  </td>
+</tr>
+</table>
+
 # Troubleshooting and Debugging
 
 A few places to look during debugging:
-- 
cgit v1.2.3
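
As a usage sketch for the properties documented in this patch (not part of the commit itself), they might be set programmatically when building a `SparkConf`; the master URL, application name, and values below are illustrative placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative values only; "mesos://host:5050" is a placeholder master URL.
val conf = new SparkConf()
  .setMaster("mesos://host:5050")
  .setAppName("MesosConfigExample")
  .set("spark.mesos.coarse", "true")                  // coarse-grained run mode
  .set("spark.mesos.extra.cores", "1")                // extra cores per task (coarse mode only)
  .set("spark.mesos.executor.home", "/opt/spark")     // assumed Spark location on each slave
  .set("spark.mesos.executor.memoryOverhead", "512")  // overhead in MB (default 384)

val sc = new SparkContext(conf)
```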
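
The `memoryOverhead` rule described in the table can be made concrete with a small sketch; the helper name is ours, not Spark's, and it assumes both quantities are in MB:

```scala
// Sketch of the documented rule: the executor requests the larger of
// (executor memory + memoryOverhead) and (executor memory * 1.07).
def totalExecutorMemoryMB(executorMemoryMB: Int, memoryOverheadMB: Int = 384): Int =
  math.max(executorMemoryMB + memoryOverheadMB, (executorMemoryMB * 1.07).toInt)

// Example: a 4096 MB executor with the default 384 MB overhead requests
// max(4096 + 384, 4382) = 4480 MB.
```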