author     Ilya Ganelin <ilya.ganelin@capitalone.com>  2015-02-19 15:50:58 -0800
committer  Andrew Or <andrew@databricks.com>           2015-02-19 15:53:20 -0800
commit     6bddc40353057a562c78e75c5549c79a0d7d5f8b (patch)
tree       7ee1f06771ec6463273d7661aacdefdceffa616c
parent     34b7c35380c88569a1396fb4ed991a0bed4288e7 (diff)
SPARK-5570: No docs stating that `new SparkConf().set("spark.driver.memory", ...)` will not work
I've updated documentation to reflect the true behavior of this setting in client vs. cluster mode.

Author: Ilya Ganelin <ilya.ganelin@capitalone.com>

Closes #4665 from ilganeli/SPARK-5570 and squashes the following commits:

5d1c8dd [Ilya Ganelin] Added example configuration code
a51700a [Ilya Ganelin] Getting rid of extra spaces
85f7a08 [Ilya Ganelin] Reworded note
5889d43 [Ilya Ganelin] Formatting adjustment
f149ba1 [Ilya Ganelin] Minor updates
1fec7a5 [Ilya Ganelin] Updated to add clarification for other driver properties
db47595 [Ilya Ganelin] Slight formatting update
c899564 [Ilya Ganelin] Merge remote-tracking branch 'upstream/master' into SPARK-5570
17b751d [Ilya Ganelin] Updated documentation for driver-memory to reflect its true behavior in client vs cluster mode
-rw-r--r--  docs/configuration.md | 23
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index eb0d6d33c9..541695c83a 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -115,7 +115,11 @@ of the most common options to set are:
<td>
Amount of memory to use for the driver process, i.e. where SparkContext is initialized.
(e.g. <code>512m</code>, <code>2g</code>).
- </td>
+
+ <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
+ directly in your application, because the driver JVM has already started at that point.
+ Instead, please set this through the <code>--driver-memory</code> command line option
+ or in your default properties file.</td>
</tr>
<tr>
<td><code>spark.executor.memory</code></td>
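
A minimal sketch of the pitfall this first note describes (the object and app names are hypothetical): in client mode the driver JVM is the very process running this code, so its heap size is already fixed by the time the set call executes.

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical application illustrating the note above.
    object DriverMemoryExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("DriverMemoryExample")
          // Silently ineffective in client mode: this JVM has already
          // started, so its maximum heap size can no longer change.
          .set("spark.driver.memory", "4g")
        val sc = new SparkContext(conf)
        // ... job logic ...
        sc.stop()
      }
    }

Either way the value has to be known before the driver JVM launches, e.g. via spark-submit --driver-memory 4g or a spark.driver.memory entry in conf/spark-defaults.conf.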
@@ -214,6 +218,11 @@ Apart from these, the following properties are also available, and may be useful
<td>(none)</td>
<td>
A string of extra JVM options to pass to the driver. For instance, GC settings or other logging.
+
+ <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
+ directly in your application, because the driver JVM has already started at that point.
+ Instead, please set this through the <code>--driver-java-options</code> command line option or in
+ your default properties file.
</td>
</tr>
<tr>
@@ -221,6 +230,11 @@ Apart from these, the following properties are also available, and may be useful
<td>(none)</td>
<td>
Extra classpath entries to append to the classpath of the driver.
+
+ <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
+ directly in your application, because the driver JVM has already started at that point.
+ Instead, please set this through the <code>--driver-class-path</code> command line option or in
+ your default properties file.
</td>
</tr>
<tr>
@@ -228,6 +242,11 @@ Apart from these, the following properties are also available, and may be useful
<td>(none)</td>
<td>
Set a special library path to use when launching the driver JVM.
+
+ <br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
+ directly in your application, because the driver JVM has already started at that point.
+ Instead, please set this through the <code>--driver-library-path</code> command line option or in
+ your default properties file.
</td>
</tr>
<tr>
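
The three notes above all point at the same workaround, so one consolidated sketch may help; the values and paths below are placeholders, not recommendations.

    # conf/spark-defaults.conf -- read by spark-submit before the driver
    # JVM launches, so these settings take effect even in client mode.
    spark.driver.memory              4g
    spark.driver.extraJavaOptions    -XX:+PrintGCDetails
    spark.driver.extraClassPath      /opt/deps/extra.jar
    spark.driver.extraLibraryPath    /opt/native/lib

The equivalent command line flags are --driver-memory, --driver-java-options, --driver-class-path, and --driver-library-path, as each note states.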
@@ -237,6 +256,8 @@ Apart from these, the following properties are also available, and may be useful
(Experimental) Whether to give user-added jars precedence over Spark's own jars when loading
classes in the driver. This feature can be used to mitigate conflicts between Spark's
dependencies and user dependencies. It is currently an experimental feature.
+
+ This is used in cluster mode only.
</td>
</tr>
<tr>
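
The property name for this last row sits outside the diff context; assuming it is spark.driver.userClassPathFirst, which matches the description, it applies only in cluster mode, where the cluster manager launches the driver JVM and can honor the setting. A hypothetical submission (class and jar names are placeholders):

    spark-submit \
      --deploy-mode cluster \
      --conf spark.driver.userClassPathFirst=true \
      --class com.example.MyApp \
      my-app.jar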