author    Matei Zaharia <matei@eecs.berkeley.edu>    2012-09-26 22:54:39 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>    2012-09-26 22:54:39 -0700
commit    ea05fc130b64ce356ab7524a3d5bd1e022cf51b5 (patch)
tree      551ac8546cb21aa750a0967ef115e16639b0ef64 /docs/configuration.md
parent    1ef4f0fbd27e54803f14fed1df541fb341daced8 (diff)
Updates to standalone cluster, web UI and deploy docs.
Diffstat (limited to 'docs/configuration.md')
 docs/configuration.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index 93a644910c..0b6be26bba 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -80,9 +80,9 @@ there are at least four properties that you will commonly want to control:
   <td>spark.cores.max</td>
   <td>(infinite)</td>
   <td>
-    When running on a <a href="{{BASE_PATH}}spark-standalone.html">standalone deploy cluster</a> or a
-    <a href="{{BASE_PATH}}running-on-mesos.html">Mesos cluster in "coarse-grained" sharing mode</a>,
-    how many CPU cores to request at most. The default will use all available cores.
+    When running on a <a href="{{HOME_PATH}}spark-standalone.html">standalone deploy cluster</a> or a
+    <a href="{{HOME_PATH}}running-on-mesos.html#mesos-run-modes">Mesos cluster in "coarse-grained"
+    sharing mode</a>, how many CPU cores to request at most. The default will use all available cores.
   </td>
 </tr>
 </table>
@@ -97,7 +97,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>false</td>
   <td>
     If set to "true", runs over Mesos clusters in
-    <a href="{{BASE_PATH}}running-on-mesos.html">"coarse-grained" sharing mode</a>,
+    <a href="{{HOME_PATH}}running-on-mesos.html#mesos-run-modes">"coarse-grained" sharing mode</a>,
     where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per
     Spark task. This gives lower-latency scheduling for short queries, but leaves resources in use
     for the whole duration of the Spark job.
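For context, the two properties this patch documents were set, in Spark of this era (pre-SparkConf), as Java system properties before the SparkContext was constructed. A minimal sketch, assuming that mechanism and illustrative values (`"8"` and `"true"` are examples, not defaults):

```java
public class SparkPropsSketch {
    public static void main(String[] args) {
        // Cap the total CPU cores requested on a standalone deploy cluster
        // or a Mesos cluster in "coarse-grained" sharing mode.
        // (Default is unset, i.e. use all available cores.)
        System.setProperty("spark.cores.max", "8");

        // Opt in to "coarse-grained" Mesos mode: one long-lived Mesos task
        // per machine instead of one Mesos task per Spark task.
        System.setProperty("spark.mesos.coarse", "true");

        // A SparkContext created after this point would read these values.
        System.out.println(System.getProperty("spark.cores.max"));
        System.out.println(System.getProperty("spark.mesos.coarse"));
    }
}
```

Because these are plain system properties, they could equally be supplied on the command line via `-Dspark.cores.max=8 -Dspark.mesos.coarse=true`.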