author     Matei Zaharia <matei@databricks.com>  2013-12-30 22:17:28 -0500
committer  Matei Zaharia <matei@databricks.com>  2013-12-30 22:17:28 -0500
commit     0fa5809768cf60ec62b4277f04e23a44dc1582e2
tree       fee16620755769a70975c41d894db43633b18098 /docs/spark-standalone.md
parent     994f080f8ae3372366e6004600ba791c8a372ff0
Updated docs for SparkConf and handled review comments
Diffstat (limited to 'docs/spark-standalone.md')
-rw-r--r--  docs/spark-standalone.md | 15
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index b822265b5a..f7f0b78908 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -154,11 +154,18 @@ You can also pass an option `-c <numCores>` to control the number of cores that
 
 The standalone cluster mode currently only supports a simple FIFO scheduler across applications.
 However, to allow multiple concurrent users, you can control the maximum number of resources each
-application will acquire.
+application will use.
 By default, it will acquire *all* cores in the cluster, which only makes sense if you just run one
-application at a time. You can cap the number of cores using
-`System.setProperty("spark.cores.max", "10")` (for example).
-This value must be set *before* initializing your SparkContext.
+application at a time. You can cap the number of cores by setting `spark.cores.max` in your
+[SparkConf](configuration.html#spark-properties). For example:
+
+{% highlight scala %}
+val conf = new SparkConf()
+  .setMaster(...)
+  .setAppName(...)
+  .set("spark.cores.max", "10")
+val sc = new SparkContext(conf)
+{% endhighlight %}
 
 # Monitoring and Logging
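For context, a self-contained sketch of the configuration style this patch documents might look like the program below. The master URL, application name, and object name are hypothetical placeholders; only the `spark.cores.max` setting comes from the patch itself. Building the cap into the application's own SparkConf, rather than a global `System.setProperty` call, ties the limit to that SparkContext explicitly.

import org.apache.spark.{SparkConf, SparkContext}

object CoreCapExample {
  def main(args: Array[String]) {
    // Cap this application at 10 cores so other users of the
    // standalone cluster can acquire the remaining ones.
    val conf = new SparkConf()
      .setMaster("spark://master:7077") // placeholder master URL
      .setAppName("CoreCapExample")     // placeholder application name
      .set("spark.cores.max", "10")
    val sc = new SparkContext(conf)
    // ... submit jobs through sc as usual ...
    sc.stop()
  }
}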