author | Andrew Ash <andrew@andrewash.com> | 2014-01-06 09:01:46 -0800
---|---|---
committer | Andrew Ash <andrew@andrewash.com> | 2014-01-06 09:01:46 -0800
commit | 2dd4fb5698220bc33acb878254d41704221573bd (patch) |
tree | 3a8f7ca5d9da785405e94d82b6bd40e9160357eb |
parent | a2e7e0497484554f86bd71e93705eb0422b1512b (diff) |
download | spark-2dd4fb5698220bc33acb878254d41704221573bd.tar.gz spark-2dd4fb5698220bc33acb878254d41704221573bd.tar.bz2 spark-2dd4fb5698220bc33acb878254d41704221573bd.zip |
Clarify spark.cores.max
It controls the count of cores across the cluster, not on a per-machine basis.
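The cluster-wide behavior described above can be sketched with a small SparkConf snippet (a hypothetical illustration, not part of this commit; the master URL, app name, and core count are made-up values):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical example: cap this application at 8 cores *total*
// across the whole standalone cluster, not 8 cores per worker.
val conf = new SparkConf()
  .setMaster("spark://master:7077")  // assumed standalone master URL
  .setAppName("CoresMaxDemo")        // assumed application name
  .set("spark.cores.max", "8")       // cluster-wide cap, not per machine
val sc = new SparkContext(conf)
```

For instance, on a cluster of four workers with four cores each, this application would be offered at most 8 of the 16 available cores in total, however they happen to be spread across the machines.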
-rw-r--r-- | docs/configuration.md | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index 567aba07f0..09342fedfc 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -81,7 +81,8 @@ there are at least five properties that you will commonly want to control:
   <td>
     When running on a <a href="spark-standalone.html">standalone deploy cluster</a> or a
     <a href="running-on-mesos.html#mesos-run-modes">Mesos cluster in "coarse-grained"
-    sharing mode</a>, how many CPU cores to request at most. The default will use all available cores
+    sharing mode</a>, the maximum amount of CPU cores to request for the application from
+    across the cluster (not from each machine). The default will use all available cores
     offered by the cluster manager.
   </td>
 </tr>