authorPatrick Wendell <patrick@databricks.com>2015-06-09 16:14:21 -0700
committerPatrick Wendell <patrick@databricks.com>2015-06-09 16:14:21 -0700
commit6e4fb0c9e8f03cf068c422777cfce82a89e8e738 (patch)
tree1ecaaf3e938f3ec0e30e4d603b913bb594825ccf /docs
parent0d5892dc723d203e7d892d3beacbaa97aedb1a24 (diff)
[SPARK-6511] [DOCUMENTATION] Explain how to use Hadoop provided builds
This provides preliminary documentation pointing out how to use the Hadoop free builds. I am hoping over time this list can grow to include most of the popular Hadoop distributions. Getting more people using these builds will help us long term reduce the number of binaries we build.

Author: Patrick Wendell <patrick@databricks.com>

Closes #6729 from pwendell/hadoop-provided and squashes the following commits:

1113b76 [Patrick Wendell] [SPARK-6511] [Documentation] Explain how to use Hadoop provided builds
Diffstat (limited to 'docs')
-rw-r--r--  docs/hadoop-provided.md | 26
-rw-r--r--  docs/index.md           | 10
2 files changed, 33 insertions(+), 3 deletions(-)
diff --git a/docs/hadoop-provided.md b/docs/hadoop-provided.md
new file mode 100644
index 0000000000..0ba5a58051
--- /dev/null
+++ b/docs/hadoop-provided.md
@@ -0,0 +1,26 @@
+---
+layout: global
+displayTitle: Using Spark's "Hadoop Free" Build
+title: Using Spark's "Hadoop Free" Build
+---
+
+Spark uses Hadoop client libraries for HDFS and YARN. Starting in Spark 1.4, the project packages "Hadoop free" builds that let you more easily connect a single Spark binary to any Hadoop version. To use these builds, you need to modify `SPARK_DIST_CLASSPATH` to include Hadoop's package jars. The most convenient way to do this is by adding an entry in `conf/spark-env.sh`.
+
+This page describes how to connect Spark to Hadoop for different types of distributions.
+
+# Apache Hadoop
+For Apache distributions, you can use Hadoop's `classpath` command. For instance:
+
+{% highlight bash %}
+### in conf/spark-env.sh ###
+
+# If 'hadoop' binary is on your PATH
+export SPARK_DIST_CLASSPATH=$(hadoop classpath)
+
+# With explicit path to 'hadoop' binary
+export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)
+
+# Passing a Hadoop configuration directory (--config must precede the subcommand)
+export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath)
+
+{% endhighlight %}
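For readers unfamiliar with what `hadoop classpath` emits: it is simply a colon-separated list of jar paths and directories, which `SPARK_DIST_CLASSPATH` passes through to Spark's JVM. The sketch below imitates that output by globbing a directory of jars; the directory and jar names are hypothetical stand-ins, not part of the patch.

```shell
#!/bin/sh
# Sketch only: build a colon-separated classpath from a jar directory,
# mimicking the shape of `hadoop classpath` output. Paths are hypothetical.
jars_dir=$(mktemp -d)
touch "$jars_dir/hadoop-common.jar" "$jars_dir/hadoop-hdfs.jar"

# Join every jar path with ':' as the separator.
SPARK_DIST_CLASSPATH=$(printf '%s:' "$jars_dir"/*.jar)
# Drop the trailing colon left by printf.
SPARK_DIST_CLASSPATH=${SPARK_DIST_CLASSPATH%:}
echo "$SPARK_DIST_CLASSPATH"
```

Setting the variable this way in `conf/spark-env.sh` means the same Spark binary can point at whichever Hadoop installation is on the machine, which is the whole point of the Hadoop-free build.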
diff --git a/docs/index.md b/docs/index.md
index 7939657915..d85cf12def 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -12,9 +12,13 @@ It also supports a rich set of higher-level tools including [Spark SQL](sql-prog
# Downloading
-Get Spark from the [downloads page](http://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page
-contains Spark packages for many popular HDFS versions. If you'd like to build Spark from
-scratch, visit [Building Spark](building-spark.html).
+Get Spark from the [downloads page](http://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. Spark uses Hadoop's client libraries for HDFS and YARN. Downloads are pre-packaged for a handful of popular Hadoop versions.
+Users can also download a "Hadoop free" binary and run Spark with any Hadoop version
+[by augmenting Spark's classpath](hadoop-provided.html).
+
+If you'd like to build Spark from
+source, visit [Building Spark](building-spark.html).
+
Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). It's easy to run
locally on one machine --- all you need is to have `java` installed on your system `PATH`,