author     Ankur Chauhan <achauhan@brightcove.com>    2015-07-06 16:04:57 -0700
committer  Andrew Or <andrew@databricks.com>          2015-07-06 16:04:57 -0700
commit     1165b17d24cdf1dbebb2faca14308dfe5c2a652c (patch)
tree       e3f1508fadb7e5a70b3039a707918227459fda96 /docs
parent     9ff203346ca4decf2999e33bfb8c400ec75313e6 (diff)
download   spark-1165b17d24cdf1dbebb2faca14308dfe5c2a652c.tar.gz
           spark-1165b17d24cdf1dbebb2faca14308dfe5c2a652c.tar.bz2
           spark-1165b17d24cdf1dbebb2faca14308dfe5c2a652c.zip
[SPARK-6707] [CORE] [MESOS] Mesos Scheduler should allow the user to specify constraints based on slave attributes
Currently, the Mesos scheduler only looks at the 'cpu' and 'mem' resources when determining the usability of a resource offer from a Mesos slave node. It may be preferable to let the user ensure that Spark jobs are only started on a certain set of nodes (based on attributes).
For example, if the user sets the property `spark.mesos.constraints` to `tachyon=true;us-east-1=false`, then resource offers will be checked against both of these constraints, and only offers satisfying them will be accepted to start new executors.
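The constraint string described above is a semicolon-separated list of `attribute=value` pairs. A minimal sketch of how such a string could be parsed follows; the object and method names here are illustrative, not the actual Spark internals introduced by this change:

```scala
// Sketch: parse a constraint string such as "tachyon=true;us-east-1=false"
// into a map from attribute name to the set of required values.
// An attribute with no "=value" part maps to an empty set, meaning
// "the attribute merely has to be present in the offer".
object ConstraintParser {
  def parseConstraintString(constraints: String): Map[String, Set[String]] = {
    if (constraints == null || constraints.isEmpty) {
      Map.empty
    } else {
      constraints.split(";").map { pair =>
        pair.split("=", 2) match {
          case Array(attr)         => attr -> Set.empty[String]
          case Array(attr, values) => attr -> values.split(",").toSet
        }
      }.toMap
    }
  }
}
```

For instance, `parseConstraintString("tachyon=true;us-east-1=false")` yields a two-entry map, one set of required values per attribute.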
Author: Ankur Chauhan <achauhan@brightcove.com>
Closes #5563 from ankurcha/mesos_attribs and squashes the following commits:
902535b [Ankur Chauhan] Fix line length
d83801c [Ankur Chauhan] Update code as per code review comments
8b73f2d [Ankur Chauhan] Fix imports
c3523e7 [Ankur Chauhan] Added docs
1a24d0b [Ankur Chauhan] Expand scope of attributes matching to include all data types
482fd71 [Ankur Chauhan] Update access modifier to private[this] for offer constraints
5ccc32d [Ankur Chauhan] Fix nit pick whitespace
1bce782 [Ankur Chauhan] Fix nit pick whitespace
c0cbc75 [Ankur Chauhan] Use offer id value for debug message
7fee0ea [Ankur Chauhan] Add debug statements
fc7eb5b [Ankur Chauhan] Fix import codestyle
00be252 [Ankur Chauhan] Style changes as per code review comments
662535f [Ankur Chauhan] Incorporate code review comments + use SparkFunSuite
fdc0937 [Ankur Chauhan] Decline offers that did not meet criteria
67b58a0 [Ankur Chauhan] Add documentation for spark.mesos.constraints
63f53f4 [Ankur Chauhan] Update codestyle - uniform style for config values
02031e4 [Ankur Chauhan] Fix scalastyle warnings in tests
c09ed84 [Ankur Chauhan] Fixed the access modifier on offerConstraints val to private[mesos]
0c64df6 [Ankur Chauhan] Rename overhead fractions to memory_*, fix spacing
8cc1e8f [Ankur Chauhan] Make exception message more explicit about the source of the error
addedba [Ankur Chauhan] Added test case for malformed constraint string
ec9d9a6 [Ankur Chauhan] Add tests for parse constraint string
72fe88a [Ankur Chauhan] Fix up tests + remove redundant method override, combine utility class into new mesos scheduler util trait
92b47fd [Ankur Chauhan] Add attributes based constraints support to MesosScheduler
Diffstat (limited to 'docs')
-rw-r--r-- | docs/running-on-mesos.md | 22 |
1 file changed, 22 insertions, 0 deletions
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 5f1d6daeb2..1f915d8ea1 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -184,6 +184,14 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap the maximum number of
 cores using `conf.set("spark.cores.max", "10")` (for example).
 
+You may also make use of `spark.mesos.constraints` to set attribute-based constraints on Mesos resource offers. By default, all resource offers will be accepted.
+
+{% highlight scala %}
+conf.set("spark.mesos.constraints", "tachyon=true;us-east-1=false")
+{% endhighlight %}
+
+For example, if `spark.mesos.constraints` is set to `tachyon=true;us-east-1=false`, the resource offers will be checked to see if they meet both of these constraints; only then will they be accepted to start new executors.
+
 # Mesos Docker Support
 
 Spark can make use of a Mesos Docker containerizer by setting the property `spark.mesos.executor.docker.image`
@@ -298,6 +306,20 @@ See the [configuration page](configuration.html) for information on Spark config
   the final overhead will be this value.
   </td>
 </tr>
+<tr>
+  <td><code>spark.mesos.constraints</code></td>
+  <td>Attribute-based constraints to be matched against when accepting resource offers.</td>
+  <td>
+    Attribute-based constraints on Mesos resource offers. By default, all resource offers will be accepted. Refer to <a href="http://mesos.apache.org/documentation/attributes-resources/">Mesos Attributes &amp; Resources</a> for more information on attributes.
+    <ul>
+      <li>Scalar constraints are matched with "less than or equal" semantics, i.e. the value in the constraint must be less than or equal to the value in the resource offer.</li>
+      <li>Range constraints are matched with "contains" semantics, i.e. the value in the constraint must be within the resource offer's value.</li>
+      <li>Set constraints are matched with "subset of" semantics, i.e. the value in the constraint must be a subset of the resource offer's value.</li>
+      <li>Text constraints are matched with "equality" semantics, i.e. the value in the constraint must be exactly equal to the resource offer's value.</li>
+      <li>If no value is given as part of a constraint, any offer with the corresponding attribute will be accepted (without a value check).</li>
+    </ul>
+  </td>
+</tr>
 </table>
 
 # Troubleshooting and Debugging
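The four matching rules listed in the documentation table can be sketched as follows. This is an illustrative model only: the attribute types and function names are assumptions for clarity, whereas the real scheduler matches against Mesos protobuf attribute values:

```scala
// Illustrative model of offer attribute values; the real implementation
// works with Mesos protobuf types, not these case classes.
sealed trait OfferAttr
case class ScalarAttr(value: Double) extends OfferAttr
case class RangeAttr(lo: Long, hi: Long) extends OfferAttr
case class SetAttr(values: Set[String]) extends OfferAttr
case class TextAttr(value: String) extends OfferAttr

// A constraint is a set of required values (empty = presence check only).
def matchesAttr(constraint: Set[String], offer: OfferAttr): Boolean = offer match {
  // No value in the constraint: any offer carrying the attribute matches.
  case _ if constraint.isEmpty => true
  // Scalar: constraint value must be <= the offered value.
  case ScalarAttr(v) => constraint.head.toDouble <= v
  // Range: constraint value must lie within the offered range.
  case RangeAttr(lo, hi) =>
    val c = constraint.head.toLong
    lo <= c && c <= hi
  // Set: constraint values must be a subset of the offered set.
  case SetAttr(vs) => constraint.subsetOf(vs)
  // Text: constraint value must equal the offered text exactly.
  case TextAttr(v) => constraint == Set(v)
}
```

An offer is usable only if every constrained attribute matches, so a `tachyon=true` constraint against a text attribute `tachyon` requires the offered value to be exactly `true`.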