Diffstat (limited to 'docs/configuration.md')
 docs/configuration.md | 50 +++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 37 insertions(+), 13 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index a3029837ff..5a4abca264 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -73,6 +73,9 @@ there are at least five properties that you will commonly want to control:
Directory to use for "scratch" space in Spark, including map output files and RDDs that get stored
on disk. This should be on a fast, local disk in your system. It can also be a comma-separated
list of multiple directories on different disks.
+
+ NOTE: In Spark 1.0 and later this will be overridden by SPARK_LOCAL_DIRS (Standalone, Mesos) or
+ LOCAL_DIRS (YARN) environment variables set by the cluster manager.
</td>
</tr>
<tr>
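
For context, a minimal sketch of setting `spark.local.dir` from application code (outside of any cluster-manager override) using `SparkConf`; the master URL, app name, and directory paths are placeholders, not values from this patch:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: point Spark's scratch space at two hypothetical local disks.
// In Spark 1.0+, SPARK_LOCAL_DIRS (Standalone/Mesos) or LOCAL_DIRS (YARN),
// when set by the cluster manager, take precedence over this property.
val conf = new SparkConf()
  .setMaster("local[2]")            // placeholder master
  .setAppName("LocalDirExample")    // placeholder app name
  .set("spark.local.dir", "/mnt/disk1/spark,/mnt/disk2/spark")  // placeholder paths

val sc = new SparkContext(conf)
```
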
@@ -578,7 +581,7 @@ Apart from these, the following properties are also available, and may be useful
to consolidate them onto as few nodes as possible. Spreading out is usually better for
data locality in HDFS, but consolidating is more efficient for compute-intensive workloads. <br/>
<b>Note:</b> this setting needs to be configured in the standalone cluster master, not in individual
-  applications; you can set it through <code>SPARK_JAVA_OPTS</code> in <code>spark-env.sh</code>.
+  applications; you can set it through <code>SPARK_MASTER_OPTS</code> in <code>spark-env.sh</code>.
</td>
</tr>
<tr>
@@ -591,7 +594,7 @@ Apart from these, the following properties are also available, and may be useful
Set this lower on a shared cluster to prevent users from grabbing
the whole cluster by default. <br/>
<b>Note:</b> this setting needs to be configured in the standalone cluster master, not in individual
- applications; you can set it through <code>SPARK_JAVA_OPTS</code> in <code>spark-env.sh</code>.
+ applications; you can set it through <code>SPARK_MASTER_OPTS</code> in <code>spark-env.sh</code>.
</td>
</tr>
<tr>
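
The per-application counterpart to this master-side default is `spark.cores.max`. A minimal sketch of an application capping its own core usage on a standalone cluster; the master URL, app name, and core count are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: limit this application's total core usage across the cluster,
// independent of the cluster-wide default configured on the master.
val conf = new SparkConf()
  .setMaster("spark://master:7077")  // placeholder standalone master URL
  .setAppName("SharedClusterApp")    // placeholder app name
  .set("spark.cores.max", "4")       // request at most 4 cores in total

val sc = new SparkContext(conf)
```
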
@@ -649,6 +652,34 @@ Apart from these, the following properties are also available, and may be useful
Number of cores to allocate for each task.
</td>
</tr>
+<tr>
+ <td>spark.executor.extraJavaOptions</td>
+ <td>(none)</td>
+ <td>
+ A string of extra JVM options to pass to executors. For instance, GC settings or other
+ logging. Note that it is illegal to set Spark properties or heap size settings with this
+ option. Spark properties should be set using a SparkConf object or the
+ spark-defaults.conf file used with the spark-submit script. Heap size settings can be set
+ with spark.executor.memory.
+ </td>
+</tr>
+<tr>
+ <td>spark.executor.extraClassPath</td>
+ <td>(none)</td>
+ <td>
+ Extra classpath entries to append to the classpath of executors. This exists primarily
+ for backwards-compatibility with older versions of Spark. Users typically should not need
+ to set this option.
+ </td>
+</tr>
+<tr>
+ <td>spark.executor.extraLibraryPath</td>
+ <td>(none)</td>
+ <td>
+    Set a special library path to use when launching executor JVMs.
+ </td>
+</tr>
+
</table>
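
To illustrate the three executor settings added above together, a minimal sketch using `SparkConf` (equivalently, they could go in the spark-defaults.conf file used with the spark-submit script, as the descriptions note); the GC flags and library path below are placeholder values, not recommendations:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: pass GC/logging flags and a native library path to executors.
// Heap size is set through spark.executor.memory, never through extraJavaOptions.
val conf = new SparkConf()
  .setMaster("local[2]")                 // placeholder master
  .setAppName("ExecutorOptionsExample")  // placeholder app name
  .set("spark.executor.memory", "2g")
  .set("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps")
  .set("spark.executor.extraLibraryPath", "/opt/native/lib")  // placeholder path

val sc = new SparkContext(conf)
```
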
## Viewing Spark Properties
@@ -659,10 +690,9 @@ This is a useful place to check to make sure that your properties have been set
# Environment Variables
Certain Spark settings can be configured through environment variables, which are read from the `conf/spark-env.sh`
-script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). These variables are meant to be for machine-specific settings, such
-as library search paths. While Spark properties can also be set there through `SPARK_JAVA_OPTS`, for per-application settings, we recommend setting
-these properties within the application instead of in `spark-env.sh` so that different applications can use different
-settings.
+script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). In Standalone and Mesos modes,
+this file can give machine-specific information such as hostnames. It is also sourced when running local
+Spark applications or submission scripts.
Note that `conf/spark-env.sh` does not exist by default when Spark is installed. However, you can copy
`conf/spark-env.sh.template` to create it. Make sure you make the copy executable.
@@ -672,13 +702,7 @@ The following variables can be set in `spark-env.sh`:
* `JAVA_HOME`, the location where Java is installed (if it's not on your default `PATH`)
* `PYSPARK_PYTHON`, the Python binary to use for PySpark
* `SPARK_LOCAL_IP`, to configure which IP address of the machine to bind to.
-* `SPARK_LIBRARY_PATH`, to add search directories for native libraries.
-* `SPARK_CLASSPATH`, to add elements to Spark's classpath that you want to be present for _all_ applications.
- Note that applications can also add dependencies for themselves through `SparkContext.addJar` -- we recommend
- doing that when possible.
-* `SPARK_JAVA_OPTS`, to add JVM options. This includes Java options like garbage collector settings and any system
- properties that you'd like to pass with `-D`. One use case is to set some Spark properties differently on this
- machine, e.g., `-Dspark.local.dir=/disk1,/disk2`.
+* `SPARK_PUBLIC_DNS`, the hostname your Spark program will advertise to other machines.
* Options for the Spark [standalone cluster scripts](spark-standalone.html#cluster-launch-scripts), such as number of cores
to use on each machine and maximum memory.