Diffstat (limited to 'docs')
-rw-r--r--  docs/_config.yml          |  2 +-
-rw-r--r--  docs/running-on-mesos.md  | 12 ++++++++++++
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/docs/_config.yml b/docs/_config.yml
index be3d8a2fe6..bbb576e0e7 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -18,6 +18,6 @@ SPARK_VERSION: 2.1.0-SNAPSHOT
SPARK_VERSION_SHORT: 2.1.0
SCALA_BINARY_VERSION: "2.11"
SCALA_VERSION: "2.11.7"
-MESOS_VERSION: 0.21.0
+MESOS_VERSION: 0.22.0
SPARK_ISSUE_TRACKER_URL: https://issues.apache.org/jira/browse/SPARK
SPARK_GITHUB_URL: https://github.com/apache/spark
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 10dc9ce890..ce888b5445 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -260,6 +260,10 @@ have Mesos download Spark via the usual methods.
Requires Mesos version 0.20.1 or later.
+Note that by default Mesos agents will not pull the image if it already exists on the agent. If you use mutable image
+tags, you can set `spark.mesos.executor.docker.forcePullImage` to `true` in order to force the agent to always pull the
+image before running the executor. Force pulling images is only available in Mesos version 0.22 and above.
+
# Running Alongside Hadoop
You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as a
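For illustration only (not part of this patch), a minimal sketch of enabling force pulling from application code; the master URL, application name, and image tag below are placeholder values:

```scala
// Sketch only: master URL, app name, and image tag are placeholders.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://host:5050")        // hypothetical Mesos master
  .setAppName("ForcePullImageExample")
  // Executor Docker image with a mutable tag, so a stale cached copy is possible.
  .set("spark.mesos.executor.docker.image", "myrepo/spark-executor:latest")
  // Force the agent to re-pull the image before launching executors (Mesos 0.22+).
  .set("spark.mesos.executor.docker.forcePullImage", "true")

val sc = new SparkContext(conf)
```

The same properties can also be supplied with `--conf` on `spark-submit` or in `spark-defaults.conf`.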
@@ -335,6 +339,14 @@ See the [configuration page](configuration.html) for information on Spark config
</td>
</tr>
<tr>
+ <td><code>spark.mesos.executor.docker.forcePullImage</code></td>
+ <td>false</td>
+ <td>
+    Force Mesos agents to pull the image specified in <code>spark.mesos.executor.docker.image</code>.
+    By default, Mesos agents will not pull images they already have cached.
+ </td>
+</tr>
+<tr>
<td><code>spark.mesos.executor.docker.volumes</code></td>
<td>(none)</td>
<td>