---
layout: global
title: Running Spark on Mesos
---

* This will become a table of contents (this text will be scraped).
{:toc}

Spark can run on hardware clusters managed by [Apache Mesos](http://mesos.apache.org/).

The advantages of deploying Spark with Mesos include:

- dynamic partitioning between Spark and other [frameworks](https://mesos.apache.org/documentation/latest/mesos-frameworks/)
- scalable partitioning between multiple instances of Spark

# How it Works

In a standalone cluster deployment, the cluster manager in the below diagram is a Spark master
instance. When using Mesos, the Mesos master replaces the Spark master as the cluster manager.
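As a minimal sketch of what this looks like in practice, a driver points at the Mesos master rather than a Spark master by using a `mesos://` master URL. The hostname and port below are placeholders (5050 is the default Mesos master port):

```shell
# Submit an application to a Mesos-managed cluster instead of a
# standalone Spark master. Replace host with your Mesos master's address.
./bin/spark-submit \
  --master mesos://host:5050 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples.jar
```

Aside from the master URL, the submission workflow is the same as for a standalone cluster; Mesos handles resource offers to the Spark scheduler behind the scenes.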
Property Name | Default | Meaning
---|---|---
`spark.mesos.coarse` | `false` | If set to `true`, runs over Mesos clusters in "coarse-grained" sharing mode, where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per Spark task. This gives lower-latency scheduling for short queries, but leaves resources in use for the whole duration of the Spark job.
`spark.mesos.extra.cores` | `0` | Set the extra number of cores to request per task. This setting is only used in Mesos coarse-grained mode. The total number of cores requested per task is the number of cores in the offer plus the extra cores configured. Note that the total number of cores the executor requests will not exceed the `spark.cores.max` setting.
`spark.mesos.mesosExecutor.cores` | `1.0` | (Fine-grained mode only) Number of cores to give each Mesos executor. This does not include the cores used to run the Spark tasks. In other words, even if no Spark task is being run, each Mesos executor will occupy the number of cores configured here. The value can be a floating point number.
`spark.mesos.executor.home` | driver side `SPARK_HOME` | Set the directory in which Spark is installed on the executors in Mesos. By default, the executors will simply use the driver's Spark home directory, which may not be visible to them. Note that this is only relevant if a Spark binary package is not specified through `spark.executor.uri`.
`spark.mesos.executor.memoryOverhead` | executor memory * 0.10, with minimum of 384 | The amount of additional memory, specified in MB, to be allocated per executor. By default, the overhead will be the larger of 384 or 10% of `spark.executor.memory`. If it is set, the final overhead will be this value.
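As an illustration of how these properties are applied, they can be passed with `--conf` at submission time (or placed in `conf/spark-defaults.conf`). The master address and values below are placeholder assumptions, not recommendations:

```shell
# Sketch: enable coarse-grained mode and cap total cores,
# with an explicit per-executor memory overhead in MB.
./bin/spark-submit \
  --master mesos://host:5050 \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=8 \
  --conf spark.mesos.executor.memoryOverhead=512 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples.jar
```

Equivalently, each `--conf key=value` pair can be written as a `key value` line in `spark-defaults.conf`, which is convenient when the same settings apply to every job.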