author    felixcheung <felixcheung_m@hotmail.com>    2016-01-21 16:30:20 +0100
committer Sean Owen <sowen@cloudera.com>             2016-01-21 16:30:20 +0100
commit    85200c09adc6eb98fadb8505f55cb44e3d8b3390 (patch)
tree      21321d39a9962c0c7525165773ef64fd98cbe8bf /docs/running-on-yarn.md
parent    1b2a918e59addcdccdf8e011bce075cc9dd07b93 (diff)
[SPARK-12534][DOC] update documentation to list command line equivalent to properties
Several Spark properties equivalent to Spark submit command line options are missing.

Author: felixcheung <felixcheung_m@hotmail.com>

Closes #10491 from felixcheung/sparksubmitdoc.
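For context, a minimal spark-submit sketch showing the command-line flags that correspond to the properties this change documents (--driver-memory, --executor-memory, --executor-cores, --num-executors); the memory and core values here are arbitrary examples, not defaults:

    # --driver-memory   sets spark.driver.memory
    # --executor-memory sets spark.executor.memory
    # --executor-cores  sets spark.executor.cores
    # --num-executors   sets spark.executor.instances
    ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
        --master yarn \
        --deploy-mode cluster \
        --driver-memory 2g \
        --executor-memory 4g \
        --executor-cores 2 \
        --num-executors 3 \
        lib/spark-examples*.jar \
        10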
Diffstat (limited to 'docs/running-on-yarn.md')
-rw-r--r--  docs/running-on-yarn.md  27
1 file changed, 27 insertions, 0 deletions
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index a148c867eb..ad66b9f64a 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -114,6 +114,19 @@ If you need a reference to the proper location to put log files in the YARN so t
</td>
</tr>
<tr>
+ <td><code>spark.driver.memory</code></td>
+ <td>1g</td>
+ <td>
+ Amount of memory to use for the driver process, i.e. where SparkContext is initialized.
+ (e.g. <code>1g</code>, <code>2g</code>).
+
+ <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
+ directly in your application, because the driver JVM has already started at that point.
+ Instead, please set this through the <code>--driver-memory</code> command line option
+ or in your default properties file.
+ </td>
+</tr>
+<tr>
<td><code>spark.driver.cores</code></td>
<td><code>1</code></td>
<td>
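To illustrate the client-mode note on spark.driver.memory in the hunk above: because the driver JVM is already running when application code builds its SparkConf, the value has to come from outside the application. A sketch of the two options mentioned there (the 2g value is an arbitrary example):

    # Client mode: pass the driver memory to the launcher JVM itself ...
    ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
        --master yarn \
        --deploy-mode client \
        --driver-memory 2g \
        lib/spark-examples*.jar \
        10

    # ... or set it once in the defaults file read by spark-submit
    echo "spark.driver.memory 2g" >> conf/spark-defaults.conf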
@@ -203,6 +216,13 @@ If you need a reference to the proper location to put log files in the YARN so t
</td>
</tr>
<tr>
+ <td><code>spark.executor.cores</code></td>
+ <td>1 in YARN mode, all the available cores on the worker in standalone mode.</td>
+ <td>
+ The number of cores to use on each executor. For YARN and standalone mode only.
+ </td>
+</tr>
+<tr>
<td><code>spark.executor.instances</code></td>
<td><code>2</code></td>
<td>
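The two executor properties in this hunk also have spark-submit equivalents (--executor-cores and --num-executors); the properties-file form is sketched below with arbitrary example values:

    # Properties-file equivalents of the executor settings above
    # (spark.executor.cores <-> --executor-cores, spark.executor.instances <-> --num-executors)
    echo "spark.executor.cores 4"      >> conf/spark-defaults.conf
    echo "spark.executor.instances 10" >> conf/spark-defaults.conf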
@@ -210,6 +230,13 @@ If you need a reference to the proper location to put log files in the YARN so t
</td>
</tr>
<tr>
+ <td><code>spark.executor.memory</code></td>
+ <td>1g</td>
+ <td>
+ Amount of memory to use per executor process (e.g. <code>2g</code>, <code>8g</code>).
+ </td>
+</tr>
+<tr>
<td><code>spark.yarn.executor.memoryOverhead</code></td>
<td>executorMemory * 0.10, with minimum of 384 </td>
<td>