author    Patrick Wendell <pwendell@gmail.com>    2014-04-22 19:22:06 -0700
committer Patrick Wendell <pwendell@gmail.com>    2014-04-22 19:22:06 -0700
commit    995fdc96bcd2c540804401eaab009a777d7d7aa9 (patch)
tree      c57cf54c46a3a33cce065682fb26422a42c0ca56 /docs/hadoop-third-party-distributions.md
parent    ea8cea82a02099bb66f1e77b757e4d96cc31d6e2 (diff)
Assorted clean-up for Spark-on-YARN.
In particular when HADOOP_CONF_DIR is not specified.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #488 from pwendell/hadoop-cleanup and squashes the following commits:

fe95f13 [Patrick Wendell] Changes based on Andrew's feedback
18d09c1 [Patrick Wendell] Review comments from Andrew
17929cc [Patrick Wendell] Assorted clean-up for Spark-on-YARN.
Diffstat (limited to 'docs/hadoop-third-party-distributions.md')
-rw-r--r--  docs/hadoop-third-party-distributions.md  9
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/docs/hadoop-third-party-distributions.md b/docs/hadoop-third-party-distributions.md
index de6a2b0a43..454877a7fa 100644
--- a/docs/hadoop-third-party-distributions.md
+++ b/docs/hadoop-third-party-distributions.md
@@ -110,10 +110,5 @@ The location of these configuration files varies across CDH and HDP versions, but
 a common location is inside of `/etc/hadoop/conf`. Some tools, such as Cloudera Manager, create
 configurations on-the-fly, but offer a mechanism to download copies of them.
-There are a few ways to make these files visible to Spark:
-
-* You can copy these files into `$SPARK_HOME/conf` and they will be included in Spark's
-classpath automatically.
-* If you are running Spark on the same nodes as Hadoop _and_ your distribution includes both
-`hdfs-site.xml` and `core-site.xml` in the same directory, you can set `HADOOP_CONF_DIR`
-in `$SPARK_HOME/spark-env.sh` to that directory.
+To make these files visible to Spark, set `HADOOP_CONF_DIR` in `$SPARK_HOME/spark-env.sh`
+to a location containing the configuration files.
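
For reference, the change reduces the guidance to a single `spark-env.sh` setting. A minimal sketch of that entry is shown below; the path is illustrative, assuming the `/etc/hadoop/conf` location the surrounding doc names as common for CDH and HDP:

```sh
# Hypothetical $SPARK_HOME/conf/spark-env.sh entry; point HADOOP_CONF_DIR at the
# directory that holds your distribution's hdfs-site.xml and core-site.xml.
export HADOOP_CONF_DIR=/etc/hadoop/conf
```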