author    Patrick Wendell <pwendell@gmail.com>  2014-04-21 10:26:33 -0700
committer Patrick Wendell <pwendell@gmail.com>  2014-04-21 10:26:33 -0700
commit    fb98488fc8e68cc84f6e0750fd4e9e29029879d2 (patch)
tree      eba99b56bea8ec2e357020a413bf9cf04a4e3308 /docs/configuration.md
parent    3a390bfd80f80739b9d847780eccc443fc2dc0ea (diff)
Clean up and simplify Spark configuration
Over time, as we've added more deployment modes, things have gotten a bit unwieldy with user-facing configuration options in Spark. Going forward we'll advise all users to run `spark-submit` to launch applications. This is a WIP patch, but it makes the following improvements:

1. Improved `spark-env.sh.template`, which was missing a lot of things users now set in that file.
2. Removes the shipping of SPARK_CLASSPATH, SPARK_JAVA_OPTS, and SPARK_LIBRARY_PATH to the executors on the cluster. This was an ugly hack. Instead it introduces the config variables spark.executor.extraJavaOpts, spark.executor.extraLibraryPath, and spark.executor.extraClassPath.
3. Adds the ability to set these same variables for the driver using `spark-submit`.
4. Allows you to load system properties from a `spark-defaults.conf` file when running `spark-submit`. This allows setting both SparkConf options and other system properties utilized by `spark-submit`.
5. Made `SPARK_LOCAL_IP` an environment variable rather than a SparkConf property. This is more consistent with it being set on each node.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #299 from pwendell/config-cleanup and squashes the following commits:

127f301 [Patrick Wendell] Improvements to testing
a006464 [Patrick Wendell] Moving properties file template.
b4b496c [Patrick Wendell] spark-defaults.properties -> spark-defaults.conf
0086939 [Patrick Wendell] Minor style fixes
af09e3e [Patrick Wendell] Mention config file in docs and clean-up docs
b16e6a2 [Patrick Wendell] Cleanup of spark-submit script and Scala quick start guide
af0adf7 [Patrick Wendell] Automatically add user jar
a56b125 [Patrick Wendell] Responses to Tom's review
d50c388 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into config-cleanup
a762901 [Patrick Wendell] Fixing test failures
ffa00fe [Patrick Wendell] Review feedback
fda0301 [Patrick Wendell] Note
308f1f6 [Patrick Wendell] Properly escape quotes and other clean-up for YARN
e83cd8f [Patrick Wendell] Changes to allow re-use of test applications
be42f35 [Patrick Wendell] Handle case where SPARK_HOME is not set
c2a2909 [Patrick Wendell] Test compile fixes
4ee6f9d [Patrick Wendell] Making YARN doc changes consistent
afc9ed8 [Patrick Wendell] Cleaning up line limits and two compile errors.
b08893b [Patrick Wendell] Additional improvements.
ace4ead [Patrick Wendell] Responses to review feedback.
b72d183 [Patrick Wendell] Review feedback for spark env file
46555c1 [Patrick Wendell] Review feedback and import clean-ups
437aed1 [Patrick Wendell] Small fix
761ebcd [Patrick Wendell] Library path and classpath for drivers
7cc70e4 [Patrick Wendell] Clean up terminology inside of spark-env script
5b0ba8e [Patrick Wendell] Don't ship executor envs
84cc5e5 [Patrick Wendell] Small clean-up
1f75238 [Patrick Wendell] SPARK_JAVA_OPTS --> SPARK_MASTER_OPTS for master settings
4982331 [Patrick Wendell] Remove SPARK_LIBRARY_PATH
6eaf7d0 [Patrick Wendell] executorJavaOpts
0faa3b6 [Patrick Wendell] Stash of adding config options in submit script and YARN
ac2d65e [Patrick Wendell] Change spark.local.dir -> SPARK_LOCAL_DIRS
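A rough sketch of the workflow this patch moves users toward: writing properties to `conf/spark-defaults.conf` and launching with `spark-submit`, which loads that file. The property values, class name, and jar name below are illustrative placeholders, not part of the patch:

```
# Sketch only: create a spark-defaults.conf that spark-submit will load
# (whitespace-separated key/value pairs; all values illustrative)
cat > conf/spark-defaults.conf <<'EOF'
spark.master                     spark://master:7077
spark.executor.memory            2g
spark.executor.extraJavaOptions  -XX:+PrintGCDetails
EOF

# org.example.MyApp and myapp.jar are placeholders
./bin/spark-submit --class org.example.MyApp myapp.jar
```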
Diffstat (limited to 'docs/configuration.md')
-rw-r--r--  docs/configuration.md  50
1 file changed, 37 insertions, 13 deletions
diff --git a/docs/configuration.md b/docs/configuration.md
index a3029837ff..5a4abca264 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -73,6 +73,9 @@ there are at least five properties that you will commonly want to control:
Directory to use for "scratch" space in Spark, including map output files and RDDs that get stored
on disk. This should be on a fast, local disk in your system. It can also be a comma-separated
list of multiple directories on different disks.
+
+ NOTE: In Spark 1.0 and later this will be overridden by SPARK_LOCAL_DIRS (Standalone, Mesos) or
+ LOCAL_DIRS (YARN) environment variables set by the cluster manager.
</td>
</tr>
<tr>
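To illustrate the precedence described in the note added above, a minimal sketch for a standalone or Mesos node; the directory paths are placeholders:

```
# In conf/spark-env.sh on each node: when set by the cluster manager, this
# overrides any spark.local.dir configured by the application (Spark 1.0+)
export SPARK_LOCAL_DIRS=/disk1/spark,/disk2/spark
```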
@@ -578,7 +581,7 @@ Apart from these, the following properties are also available, and may be useful
to consolidate them onto as few nodes as possible. Spreading out is usually better for
data locality in HDFS, but consolidating is more efficient for compute-intensive workloads. <br/>
<b>Note:</b> this setting needs to be configured in the standalone cluster master, not in individual
- applications; you can set it through <code>SPARK_JAVA_OPTS</code> in <code>spark-env.sh</code>.
+ applications; you can set it through <code>SPARK_MASTER_OPTS</code> in <code>spark-env.sh</code>.
</td>
</tr>
<tr>
@@ -591,7 +594,7 @@ Apart from these, the following properties are also available, and may be useful
Set this lower on a shared cluster to prevent users from grabbing
the whole cluster by default. <br/>
<b>Note:</b> this setting needs to be configured in the standalone cluster master, not in individual
- applications; you can set it through <code>SPARK_JAVA_OPTS</code> in <code>spark-env.sh</code>.
+ applications; you can set it through <code>SPARK_MASTER_OPTS</code> in <code>spark-env.sh</code>.
</td>
</tr>
<tr>
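Both notes above defer these settings to `SPARK_MASTER_OPTS`. A hedged sketch of what that can look like on the standalone master, assuming the properties in question are the master's `spark.deploy.*` settings; the values are illustrative:

```
# In conf/spark-env.sh on the standalone master, not in applications
# (illustrative values for spark.deploy.* master settings)
export SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=false -Dspark.deploy.defaultCores=4"
```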
@@ -649,6 +652,34 @@ Apart from these, the following properties are also available, and may be useful
Number of cores to allocate for each task.
</td>
</tr>
+<tr>
+ <td>spark.executor.extraJavaOptions</td>
+ <td>(none)</td>
+ <td>
+ A string of extra JVM options to pass to executors. For instance, GC settings or
+ logging configuration. Note that it is illegal to set Spark properties or heap size settings with this
+ option. Spark properties should be set using a SparkConf object or the
+ spark-defaults.conf file used with the spark-submit script. Heap size settings can be set
+ with spark.executor.memory.
+ </td>
+</tr>
+<tr>
+ <td>spark.executor.extraClassPath</td>
+ <td>(none)</td>
+ <td>
+ Extra classpath entries to append to the classpath of executors. This exists primarily
+ for backwards-compatibility with older versions of Spark. Users typically should not need
+ to set this option.
+ </td>
+</tr>
+<tr>
+ <td>spark.executor.extraLibraryPath</td>
+ <td>(none)</td>
+ <td>
+ Set a special library path to use when launching executor JVMs.
+ </td>
+</tr>
+
</table>
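A sketch of how the three executor options added above might be set together through the `spark-defaults.conf` mechanism introduced by this patch; all paths and JVM flags below are illustrative:

```
# Append illustrative entries to conf/spark-defaults.conf
# (whitespace-separated key/value pairs read by spark-submit)
cat >> conf/spark-defaults.conf <<'EOF'
spark.executor.extraJavaOptions  -XX:+UseConcMarkSweepGC -verbose:gc
spark.executor.extraClassPath    /opt/legacy/lib/custom.jar
spark.executor.extraLibraryPath  /opt/native/lib
EOF
```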
## Viewing Spark Properties
@@ -659,10 +690,9 @@ This is a useful place to check to make sure that your properties have been set
# Environment Variables
Certain Spark settings can be configured through environment variables, which are read from the `conf/spark-env.sh`
-script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). These variables are meant to be for machine-specific settings, such
-as library search paths. While Spark properties can also be set there through `SPARK_JAVA_OPTS`, for per-application settings, we recommend setting
-these properties within the application instead of in `spark-env.sh` so that different applications can use different
-settings.
+script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). In Standalone and Mesos modes,
+this file can give machine-specific information such as hostnames. It is also sourced when running local
+Spark applications or submission scripts.
Note that `conf/spark-env.sh` does not exist by default when Spark is installed. However, you can copy
`conf/spark-env.sh.template` to create it. Make sure you make the copy executable.
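A minimal sketch of the two steps just described, run from the Spark installation directory:

```
# conf/spark-env.sh does not exist by default; create it from the template
cp conf/spark-env.sh.template conf/spark-env.sh
chmod +x conf/spark-env.sh
```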
@@ -672,13 +702,7 @@ The following variables can be set in `spark-env.sh`:
* `JAVA_HOME`, the location where Java is installed (if it's not on your default `PATH`)
* `PYSPARK_PYTHON`, the Python binary to use for PySpark
* `SPARK_LOCAL_IP`, to configure which IP address of the machine to bind to.
-* `SPARK_LIBRARY_PATH`, to add search directories for native libraries.
-* `SPARK_CLASSPATH`, to add elements to Spark's classpath that you want to be present for _all_ applications.
- Note that applications can also add dependencies for themselves through `SparkContext.addJar` -- we recommend
- doing that when possible.
-* `SPARK_JAVA_OPTS`, to add JVM options. This includes Java options like garbage collector settings and any system
- properties that you'd like to pass with `-D`. One use case is to set some Spark properties differently on this
- machine, e.g., `-Dspark.local.dir=/disk1,/disk2`.
+* `SPARK_PUBLIC_DNS`, the hostname your Spark program will advertise to other machines.
* Options for the Spark [standalone cluster scripts](spark-standalone.html#cluster-launch-scripts), such as number of cores
to use on each machine and maximum memory.
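To make the list above concrete, a hedged example `conf/spark-env.sh`. Every value is a placeholder for a machine's actual settings, and the `SPARK_WORKER_*` variables are examples of the standalone cluster script options mentioned in the last item:

```
# Machine-specific settings sourced by Spark's scripts (placeholder values)
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export PYSPARK_PYTHON=/usr/bin/python
export SPARK_LOCAL_IP=192.168.1.10          # IP address to bind to on this node
export SPARK_PUBLIC_DNS=node1.example.com   # hostname advertised to other machines
export SPARK_WORKER_CORES=4                 # standalone: cores to offer on this machine
export SPARK_WORKER_MEMORY=8g               # standalone: memory workers may allocate
```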