author	Matei Zaharia <matei.zaharia@gmail.com>	2013-08-27 19:50:32 -0700
committer	Matei Zaharia <matei.zaharia@gmail.com>	2013-08-27 19:50:32 -0700
commit	cd043cf922692aa493308cf1e6da6f7522d80b78 (patch)
tree	413d57cea54a0a0aae259b4c16a24a3d17704202 /docs/running-on-yarn.md
parent	898da7e42221572884e915545d248bad058ae915 (diff)
parent	63dc635de6e7a31095b3d246899a657c665e4ed7 (diff)
Merge pull request #867 from tgravescs/yarnenvconfigs
Spark on YARN: allow users to specify environment variables
Diffstat (limited to 'docs/running-on-yarn.md')
-rw-r--r--	docs/running-on-yarn.md	6
1 file changed, 6 insertions, 0 deletions
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 6bada9bdd7..cac9c5e4b6 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -42,6 +42,12 @@ This will build the shaded (consolidated) jar. Typically something like :
If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}` file can be generated by running `sbt/sbt package`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
+# Configuration
+
+Most of the configs are the same for Spark on YARN as for other deployment modes. See the configuration page for more information on those. The following are configs specific to Spark on YARN.
+
+* `SPARK_YARN_USER_ENV`, to add environment variables to the Spark processes launched on YARN. This can be a comma-separated list of environment variables, e.g. `SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"`.
+
# Launching Spark on YARN
Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory containing the (client-side) configuration files for the Hadoop cluster.
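
As an illustration only (not part of the patch above), here is a minimal sketch of how these variables might be set before launching a Spark-on-YARN job. The configuration directory path is a placeholder assumption, not a value taken from the documentation.

```bash
# Illustrative sketch only -- the config directory path below is an assumption.
export HADOOP_CONF_DIR=/etc/hadoop/conf                 # client-side Hadoop/YARN configuration files
export SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"   # comma-separated KEY=VALUE pairs, per the patch above
# ...then launch the YARN client as described under "Launching Spark on YARN".
```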