path: root/docs/running-on-mesos.md
author     Michael Gummelt <mgummelt@mesosphere.io>    2016-11-14 23:46:54 -0800
committer  Reynold Xin <rxin@databricks.com>           2016-11-14 23:46:54 -0800
commit     d89bfc92302424406847ac7a9cfca714e6b742fc (patch)
tree       eb0e51ca03ab5285b8a0defba5b0815980d89435 /docs/running-on-mesos.md
parent     86430cc4e8dbc65a091a532fc9c5ec12b7be04f4 (diff)
[SPARK-18232][MESOS] Support CNI
## What changes were proposed in this pull request?

Adds support for CNI-isolated containers.

## How was this patch tested?

I launched SparkPi both with and without `spark.mesos.network.name`, and verified the job completed successfully.

Author: Michael Gummelt <mgummelt@mesosphere.io>

Closes #15740 from mgummelt/spark-342-cni.
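As a rough sketch of the test scenario described above (not part of the patch itself), a SparkPi run with CNI isolation might look like the following; the master URL, network name, and example-jar path are placeholders, not values taken from this commit.

```
# Hypothetical SparkPi launch on Mesos, attaching containers to a CNI network
# via the new spark.mesos.network.name property. The master address, the
# network name "mynet", and the jar path are assumptions for illustration.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.mesos.network.name=mynet \
  /path/to/spark-examples.jar 100
```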
Diffstat (limited to 'docs/running-on-mesos.md')
-rw-r--r--  docs/running-on-mesos.md  27
1 file changed, 15 insertions, 12 deletions
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 923d8dbebf..8d5ad12cb8 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -368,17 +368,6 @@ See the [configuration page](configuration.html) for information on Spark config
</td>
</tr>
<tr>
- <td><code>spark.mesos.executor.docker.portmaps</code></td>
- <td>(none)</td>
- <td>
- Set the list of incoming ports exposed by the Docker image, which was set using
- <code>spark.mesos.executor.docker.image</code>. The format of this property is a comma-separated list of
- mappings which take the form:
-
- <pre>host_port:container_port[:tcp|:udp]</pre>
- </td>
-</tr>
-<tr>
<td><code>spark.mesos.executor.home</code></td>
<td>driver side <code>SPARK_HOME</code></td>
<td>
@@ -505,12 +494,26 @@ See the [configuration page](configuration.html) for information on Spark config
    Set the maximum number of GPU resources to acquire for this job. Note that executors will still launch when no GPU resources are found
    since this configuration is just an upper limit and not a guaranteed amount.
</td>
+ </tr>
+<tr>
+ <td><code>spark.mesos.network.name</code></td>
+ <td><code>(none)</code></td>
+ <td>
+ Attach containers to the given named network. If this job is
+ launched in cluster mode, also launch the driver in the given named
+ network. See
+ <a href="http://mesos.apache.org/documentation/latest/cni/">the Mesos CNI docs</a>
+ for more details.
+ </td>
</tr>
<tr>
<td><code>spark.mesos.fetcherCache.enable</code></td>
<td><code>false</code></td>
<td>
- If set to `true`, all URIs (example: `spark.executor.uri`, `spark.mesos.uris`) will be cached by the [Mesos fetcher cache](http://mesos.apache.org/documentation/latest/fetcher/)
+ If set to `true`, all URIs (example: `spark.executor.uri`,
+ `spark.mesos.uris`) will be cached by the <a
+ href="http://mesos.apache.org/documentation/latest/fetcher/">Mesos
+ Fetcher Cache</a>
</td>
</tr>
</table>
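For context, a hedged `spark-defaults.conf` sketch combining the two properties this hunk touches; `mynet` is an illustrative network name, not something mandated by the patch.

```
# Attach executor containers (and, in cluster mode, the driver) to a named CNI network
spark.mesos.network.name         mynet
# Cache fetched URIs (e.g. spark.executor.uri) via the Mesos fetcher cache
spark.mesos.fetcherCache.enable  true
```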