---
layout: global
title: Running Spark on Mesos
---

* This will become a table of contents (this text will be scraped).
{:toc}

Spark can run on hardware clusters managed by [Apache Mesos](http://mesos.apache.org/).

The advantages of deploying Spark with Mesos include:

- dynamic partitioning between Spark and other [frameworks](https://mesos.apache.org/documentation/latest/mesos-frameworks/)
- scalable partitioning between multiple instances of Spark

# How it Works

In a standalone cluster deployment, the cluster manager in the below diagram is a Spark master
instance. When using Mesos, the Mesos master replaces the Spark master as the cluster manager.
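To make the connection concrete, here is a minimal sketch (the host, port, and application name are placeholders) of a driver that uses the Mesos master as its cluster manager by passing a `mesos://` URL when constructing its `SparkContext`:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

// Point the driver at the Mesos master instead of a standalone Spark master.
val conf = new SparkConf()
  .setMaster("mesos://host:5050") // placeholder Mesos master address
  .setAppName("SparkOnMesosExample")
val sc = new SparkContext(conf)
{% endhighlight %}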
Property Name | Default | Meaning |
---|---|---|
`spark.mesos.coarse` | false | Set the run mode for Spark on Mesos. For more information about the run modes, refer to the [Mesos Run Modes](#mesos-run-modes) section above. |
`spark.mesos.extra.cores` | 0 | The number of extra CPU cores to request per task. This setting is only used in Mesos coarse-grained mode. The total number of cores requested per task is the number of cores in the offer plus the extra cores configured. Note that the total number of cores the executor requests will not exceed the `spark.cores.max` setting. |
`spark.mesos.executor.home` | `SPARK_HOME` | The location where the Mesos executor will look for the Spark binaries to execute, defaulting to the `SPARK_HOME` setting. This setting is only used when no `spark.executor.uri` is provided, and it assumes Spark is installed at the specified location on each slave. |
`spark.mesos.executor.memoryOverhead` | 384 | The amount of memory, in MB, that the Mesos executor will request for the task, to account for the overhead of running the executor itself. The final total amount of memory allocated is the maximum of the executor memory plus `memoryOverhead` and the overhead fraction (1.07) times the executor memory; see the worked example below the table. |
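The memory formula can be made concrete with a quick calculation: with `spark.executor.memory` set to 4096 MB and the default overhead of 384 MB, the executor requests max(4096 + 384, 1.07 × 4096) = max(4480, 4383) = 4480 MB.

As an illustrative sketch only (the master address, install path, and values below are placeholders, not recommendations), these properties can also be set programmatically on a `SparkConf` before creating the context:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://host:5050")                    // placeholder Mesos master address
  .setAppName("MesosConfiguredApp")
  .set("spark.mesos.coarse", "true")                 // run in coarse-grained mode
  .set("spark.mesos.extra.cores", "1")               // request one extra core per task
  .set("spark.mesos.executor.home", "/opt/spark")    // hypothetical Spark install path on each slave
  .set("spark.mesos.executor.memoryOverhead", "512") // per-executor overhead in MB
val sc = new SparkContext(conf)
{% endhighlight %}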