author    Iulian Dragos <jaguarul@gmail.com>  2015-04-28 12:08:18 -0700
committer Andrew Or <andrew@databricks.com>  2015-04-28 12:08:18 -0700
commit    8aab94d8984e9d12194dbda47b2e7d9dbc036889 (patch)
tree      6ae73a4d711bb4a44db690390ab7626ad2c40f61 /docs
parent    52ccf1d3739694826915cdf01642bab02958eb78 (diff)
[SPARK-4286] Add an external shuffle service that can be run as a daemon.
This allows Mesos deployments to use the shuffle service (and implicitly
dynamic allocation). It does so by adding a new "main" class and two
corresponding scripts in `sbin`:

- `sbin/start-shuffle-service.sh`
- `sbin/stop-shuffle-service.sh`

Specific options can be passed in `SPARK_SHUFFLE_OPTS`.

This is picking up work from #3861 /cc tnachen

Author: Iulian Dragos <jaguarul@gmail.com>

Closes #4990 from dragos/feature/external-shuffle-service and squashes the following commits:

6c2b148 [Iulian Dragos] Import order and wrong name fixup.
07804ad [Iulian Dragos] Moved ExternalShuffleService to the `deploy` package + other minor tweaks.
4dc1f91 [Iulian Dragos] Reviewer’s comments:
8145429 [Iulian Dragos] Add an external shuffle service that can be run as a daemon.
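A minimal usage sketch of the scripts and environment variable this commit introduces. `spark.shuffle.service.port` is an illustrative option, not something the commit message specifies; the invocation assumes a standard `$SPARK_HOME` layout:

```sh
# Start the external shuffle service as a long-running daemon on this node,
# passing JVM options through SPARK_SHUFFLE_OPTS as the commit describes.
# (The port setting shown here is illustrative.)
export SPARK_SHUFFLE_OPTS="-Dspark.shuffle.service.port=7337"
./sbin/start-shuffle-service.sh

# Stop the daemon again when the node is being drained.
./sbin/stop-shuffle-service.sh
```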
Diffstat (limited to 'docs')
-rw-r--r--  docs/job-scheduling.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 963e88a3e1..8d9c2ba204 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -32,7 +32,7 @@ Resource allocation can be configured as follows, based on the cluster type:
* **Standalone mode:** By default, applications submitted to the standalone mode cluster will run in
FIFO (first-in-first-out) order, and each application will try to use all available nodes. You can limit
the number of nodes an application uses by setting the `spark.cores.max` configuration property in it,
- or change the default for applications that don't set this setting through `spark.deploy.defaultCores`.
+ or change the default for applications that don't set this setting through `spark.deploy.defaultCores`.
Finally, in addition to controlling cores, each application's `spark.executor.memory` setting controls
its memory use.
* **Mesos:** To use static partitioning on Mesos, set the `spark.mesos.coarse` configuration property to `true`,
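For context on the properties this doc passage names, here is a hedged sketch of how they are typically supplied at submit time. The master URL, application class, and jar name are hypothetical placeholders:

```sh
# Cap a standalone-mode application's total cores and set per-executor memory
# using the properties described above (spark.cores.max, spark.executor.memory).
./bin/spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=8 \
  --conf spark.executor.memory=2g \
  --class com.example.MyApp \
  myapp.jar
```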