path: root/docs/running-on-mesos.md
author Chris Heller <hellertime@gmail.com> 2015-05-01 18:41:22 -0700
committer Andrew Or <andrew@databricks.com> 2015-05-01 18:41:22 -0700
commit 8f50a07d2188ccc5315d979755188b1e5d5b5471 (patch)
tree 4f3fc389e598179c1774a1bfb25bcb4be4418651 /docs/running-on-mesos.md
parent b4b43df8a338a30c0eadcf10cbe3ba203dc3f861 (diff)
[SPARK-2691] [MESOS] Support for Mesos DockerInfo
This patch adds partial support for running spark on mesos inside of a docker container. Only fine-grained mode is presently supported, and there is no checking done to ensure that the version of libmesos is recent enough to have a DockerInfo structure in the protobuf (other than pinning a mesos version in the pom.xml).

Author: Chris Heller <hellertime@gmail.com>

Closes #3074 from hellertime/SPARK-2691 and squashes the following commits:

d504af6 [Chris Heller] Assist type inference
f64885d [Chris Heller] Fix errant line length
17c41c0 [Chris Heller] Base Dockerfile on mesosphere/mesos image
8aebda4 [Chris Heller] Simplfy Docker image docs
1ae7f4f [Chris Heller] Style points
974bd56 [Chris Heller] Convert map to flatMap
5d8bdf7 [Chris Heller] Factor out the DockerInfo construction.
7b75a3d [Chris Heller] Align to styleguide
80108e7 [Chris Heller] Bend to the will of RAT
ba77056 [Chris Heller] Explicit RAT exclude
abda5e5 [Chris Heller] Wildcard .rat-excludes
2f2873c [Chris Heller] Exclude spark-mesos from RAT
a589a5b [Chris Heller] Add example Dockerfile
b6825ce [Chris Heller] Remove use of EasyMock
eae1b86 [Chris Heller] Move properties under 'spark.mesos.'
c184d00 [Chris Heller] Use map on Option to be consistent with non-coarse code
fb9501a [Chris Heller] Bumped mesos version to current release
fa11879 [Chris Heller] Add listenerBus to EasyMock
882151e [Chris Heller] Changes to scala style
b22d42d [Chris Heller] Exclude template from RAT
db536cf [Chris Heller] Remove unneeded mocks
dea1bd5 [Chris Heller] Force default protocol
7dac042 [Chris Heller] Add test for DockerInfo
5456c0c [Chris Heller] Adjust syntax style
521c194 [Chris Heller] Adjust version info
6e38f70 [Chris Heller] Document Mesos Docker properties
29572ab [Chris Heller] Support all DockerInfo fields
b8c0dea [Chris Heller] Support for mesos DockerInfo in coarse-mode.
482a9fd [Chris Heller] Support for mesos DockerInfo in fine-grained mode.
Diffstat (limited to 'docs/running-on-mesos.md')
-rw-r--r--  docs/running-on-mesos.md | 42
1 file changed, 42 insertions(+), 0 deletions(-)
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 8f53d8201a..5f1d6daeb2 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -184,6 +184,16 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
only makes sense if you run just one application at a time. You can cap the maximum number of cores
using `conf.set("spark.cores.max", "10")` (for example).
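The core-capping pattern above can be sketched outside of Spark itself. The helper below is a hypothetical illustration (the `spark_submit_args` function, app name, and master URL are not from this patch); only the `spark.cores.max` property name comes from the text.

```python
# Hypothetical sketch: passing Spark properties such as spark.cores.max
# to spark-submit as --conf key=value pairs.

def spark_submit_args(app, master, conf):
    """Build a spark-submit argument list from a dict of Spark properties."""
    args = ["spark-submit", "--master", master]
    for key, value in sorted(conf.items()):
        args += ["--conf", "%s=%s" % (key, value)]
    args.append(app)
    return args

args = spark_submit_args(
    "my_app.py",
    "mesos://zk://host:2181/mesos",
    {"spark.cores.max": "10"},
)
print(" ".join(args))
```

The same properties can equally be set programmatically via `conf.set(...)` as shown above; the command-line form is just one common way to supply them.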
+# Mesos Docker Support
+
+Spark can make use of a Mesos Docker containerizer by setting the property `spark.mesos.executor.docker.image`
+in your [SparkConf](configuration.html#spark-properties).
+
+The Docker image used must have an appropriate version of Spark already installed in it, or you can
+have Mesos download Spark via the usual methods.
+
+Requires Mesos version 0.20.1 or later.
+
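A sketch of how the three Docker properties documented in this patch might be assembled and sanity-checked. The `docker_conf` helper, the image name, and the validation regexes are illustrative assumptions; only the property names and the mapping formats (`[host_path:]container_path[:ro|:rw]` and `host_port:container_port[:tcp|:udp]`) come from the table below.

```python
import re

# Illustrative only: regexes paraphrase the documented mapping formats.
VOLUME_RE = re.compile(r"^(?:[^:]+:)?[^:]+(?::(?:ro|rw))?$")  # [host_path:]container_path[:ro|:rw]
PORTMAP_RE = re.compile(r"^\d+:\d+(?::(?:tcp|udp))?$")        # host_port:container_port[:tcp|:udp]

def docker_conf(image, volumes=(), portmaps=()):
    """Build the spark.mesos.executor.docker.* properties,
    rejecting mappings that do not match the documented formats."""
    for v in volumes:
        if not VOLUME_RE.match(v):
            raise ValueError("bad volume mapping: %r" % v)
    for p in portmaps:
        if not PORTMAP_RE.match(p):
            raise ValueError("bad port mapping: %r" % p)
    conf = {"spark.mesos.executor.docker.image": image}
    if volumes:
        conf["spark.mesos.executor.docker.volumes"] = ",".join(volumes)
    if portmaps:
        conf["spark.mesos.executor.docker.portmaps"] = ",".join(portmaps)
    return conf

conf = docker_conf(
    "mesosphere/spark:latest",           # hypothetical image name
    volumes=["/var/data:/data:ro"],
    portmaps=["8080:8080:tcp"],
)
```

Each resulting key/value pair would then be set in your SparkConf, as with any other Spark property.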
# Running Alongside Hadoop
You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as a
@@ -238,6 +248,38 @@ See the [configuration page](configuration.html) for information on Spark config
</td>
</tr>
<tr>
+ <td><code>spark.mesos.executor.docker.image</code></td>
+ <td>(none)</td>
+ <td>
+    Set the name of the Docker image that the Spark executors will run in. The selected
+    image must have Spark installed, as well as a compatible version of the Mesos library.
+ The installed path of Spark in the image can be specified with <code>spark.mesos.executor.home</code>;
+ the installed path of the Mesos library can be specified with <code>spark.executorEnv.MESOS_NATIVE_LIBRARY</code>.
+ </td>
+</tr>
+<tr>
+ <td><code>spark.mesos.executor.docker.volumes</code></td>
+ <td>(none)</td>
+ <td>
+    Set the list of volumes which will be mounted into the Docker image, which was set using
+    <code>spark.mesos.executor.docker.image</code>. The format of this property is a comma-separated list of
+    mappings following the form passed to <tt>docker run -v</tt>. That is, they take the form:
+
+ <pre>[host_path:]container_path[:ro|:rw]</pre>
+ </td>
+</tr>
+<tr>
+ <td><code>spark.mesos.executor.docker.portmaps</code></td>
+ <td>(none)</td>
+ <td>
+ Set the list of incoming ports exposed by the Docker image, which was set using
+ <code>spark.mesos.executor.docker.image</code>. The format of this property is a comma-separated list of
+ mappings which take the form:
+
+ <pre>host_port:container_port[:tcp|:udp]</pre>
+ </td>
+</tr>
+<tr>
<td><code>spark.mesos.executor.home</code></td>
<td>driver side <code>SPARK_HOME</code></td>
<td>