Diffstat (limited to 'docs/job-scheduling.md')
-rw-r--r--  docs/job-scheduling.md  5
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 5951155fe3..df2faa5e41 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -32,9 +32,8 @@ Resource allocation can be configured as follows, based on the cluster type:
 * **Standalone mode:** By default, applications submitted to the standalone mode cluster will run in
   FIFO (first-in-first-out) order, and each application will try to use all available nodes. You can limit
-  the number of nodes an application uses by setting the `spark.cores.max` configuration property in it. This
-  will allow multiple users/applications to run concurrently. For example, you might launch a long-running
-  server that uses 10 cores, and allow users to launch shells that use 20 cores each.
+  the number of nodes an application uses by setting the `spark.cores.max` configuration property in it,
+  or change the default for applications that don't set this setting through `spark.deploy.defaultCores`.
   Finally, in addition to controlling cores, each application's `spark.executor.memory` setting controls its
   memory use.

* **Mesos:** To use static partitioning on Mesos, set the `spark.mesos.coarse` configuration property to `true`,
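For context, the two properties the new wording refers to are both standard Spark configuration keys. A minimal sketch of how a standalone-cluster deployment might use them (the specific values 10 and 1 are illustrative, not from the doc):

```properties
# conf/spark-defaults.conf (illustrative values)

# Cap this application at 10 cores across the cluster,
# so other applications can run concurrently.
spark.cores.max         10

# Per-executor memory, controlling the application's memory use.
spark.executor.memory   2g
```

`spark.deploy.defaultCores` is a cluster-side setting (on the standalone master, e.g. via `SPARK_MASTER_OPTS`) that supplies the default core cap for applications that do not set `spark.cores.max` themselves.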