path: root/docs/running-on-mesos.md
author    Reynold Xin <rxin@databricks.com>  2015-11-18 12:50:29 -0800
committer Reynold Xin <rxin@databricks.com>  2015-11-18 12:50:29 -0800
commit    a416e41e285700f861559d710dbf857405bfddf6 (patch)
tree      a0d2f70568985f6c98c3bbf1a7435ea7b731e726 /docs/running-on-mesos.md
parent    31921e0f0bd559d042148d1ea32f865fb3068f38 (diff)
[SPARK-11809] Switch the default Mesos mode to coarse-grained mode
Based on my conversations with people, I believe the consensus is that the coarse-grained mode is more stable and easier to reason about. It is best to use that as the default rather than the more flaky fine-grained mode.

Author: Reynold Xin <rxin@databricks.com>

Closes #9795 from rxin/SPARK-11809.
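The behavioral change this commit describes (an unset `spark.mesos.coarse` now resolving to coarse-grained) can be sketched as follows. This is an illustrative stand-in only: a plain `Map` models the user's SparkConf entries, since the real `SparkConf` class assumes a Spark classpath.

```scala
// Minimal stand-in for SparkConf lookup, illustrating the new default:
// after SPARK-11809, an unset "spark.mesos.coarse" behaves as "true".
val userConf = Map.empty[String, String] // user set nothing
val mesosCoarse = userConf.getOrElse("spark.mesos.coarse", "true")
println(mesosCoarse) // prints "true" (coarse-grained, the new default)

// Opting back into fine-grained mode now requires an explicit override:
val overridden = Map("spark.mesos.coarse" -> "false")
println(overridden.getOrElse("spark.mesos.coarse", "true")) // prints "false"
```

Flipping the fallback value rather than the lookup logic keeps existing explicit settings working unchanged; only users who relied on the old implicit fine-grained default see different behavior.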
Diffstat (limited to 'docs/running-on-mesos.md')
-rw-r--r--  docs/running-on-mesos.md  27
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 5be208cf34..a197d0e373 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -161,21 +161,15 @@ Note that jars or python files that are passed to spark-submit should be URIs re
# Mesos Run Modes
-Spark can run over Mesos in two modes: "fine-grained" (default) and "coarse-grained".
+Spark can run over Mesos in two modes: "coarse-grained" (default) and "fine-grained".
-In "fine-grained" mode (default), each Spark task runs as a separate Mesos task. This allows
-multiple instances of Spark (and other frameworks) to share machines at a very fine granularity,
-where each application gets more or fewer machines as it ramps up and down, but it comes with an
-additional overhead in launching each task. This mode may be inappropriate for low-latency
-requirements like interactive queries or serving web requests.
-
-The "coarse-grained" mode will instead launch only *one* long-running Spark task on each Mesos
+The "coarse-grained" mode will launch only *one* long-running Spark task on each Mesos
machine, and dynamically schedule its own "mini-tasks" within it. The benefit is much lower startup
overhead, but at the cost of reserving the Mesos resources for the complete duration of the
application.
-To run in coarse-grained mode, set the `spark.mesos.coarse` property in your
-[SparkConf](configuration.html#spark-properties):
+Coarse-grained is the default mode. You can also set the `spark.mesos.coarse` property to true
+to turn it on explicitly in [SparkConf](configuration.html#spark-properties):
{% highlight scala %}
conf.set("spark.mesos.coarse", "true")
@@ -186,6 +180,19 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
only makes sense if you run just one application at a time. You can cap the maximum number of cores
using `conf.set("spark.cores.max", "10")` (for example).
+In "fine-grained" mode, each Spark task runs as a separate Mesos task. This allows
+multiple instances of Spark (and other frameworks) to share machines at a very fine granularity,
+where each application gets more or fewer machines as it ramps up and down, but it comes with an
+additional overhead in launching each task. This mode may be inappropriate for low-latency
+requirements like interactive queries or serving web requests.
+
+To run in fine-grained mode, set the `spark.mesos.coarse` property to false in your
+[SparkConf](configuration.html#spark-properties):
+
+{% highlight scala %}
+conf.set("spark.mesos.coarse", "false")
+{% endhighlight %}
+
You may also make use of `spark.mesos.constraints` to set attribute-based constraints on Mesos resource offers. By default, all resource offers will be accepted.
{% highlight scala %}