commit 88006a62377d2b7c9886ba49ceef158737bc1b97
tree   6c43e2001aca00ecff0b628dd01fe2bf7f8d52cf
parent 628932b8d0dbbc6c68c61d4bca1c504f38684c2a
author    Patrick Wendell <pwendell@gmail.com>  2014-07-10 11:10:43 -0700
committer Patrick Wendell <pwendell@gmail.com>  2014-07-10 11:11:00 -0700
HOTFIX: Minor doc update for sbt change
Diffstat (limited to 'README.md')
 -rw-r--r--  README.md | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/README.md b/README.md
index 6211a5889a..01ef851f34 100644
--- a/README.md
+++ b/README.md
@@ -69,29 +69,28 @@ can be run using:
 Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
 storage systems. Because the protocols have changed in different versions of
 Hadoop, you must build Spark against the same version that your cluster runs.
-You can change the version by setting the `SPARK_HADOOP_VERSION` environment
-when building Spark.
+You can change the version by setting `-Dhadoop.version` when building Spark.
 
 For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop
 versions without YARN, use:
 
     # Apache Hadoop 1.2.1
-    $ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly
+    $ sbt/sbt -Dhadoop.version=1.2.1 assembly
 
     # Cloudera CDH 4.2.0 with MapReduce v1
-    $ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly
+    $ sbt/sbt -Dhadoop.version=2.0.0-mr1-cdh4.2.0 assembly
 
 For Apache Hadoop 2.2.X, 2.1.X, 2.0.X, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions
 with YARN, also set `SPARK_YARN=true`:
 
     # Apache Hadoop 2.0.5-alpha
-    $ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
+    $ sbt/sbt -Dhadoop.version=2.0.5-alpha -Pyarn assembly
 
     # Cloudera CDH 4.2.0 with MapReduce v2
-    $ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly
+    $ sbt/sbt -Dhadoop.version=2.0.0-cdh4.2.0 -Pyarn assembly
 
     # Apache Hadoop 2.2.X and newer
-    $ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly
+    $ sbt/sbt -Dhadoop.version=2.2.0 -Pyarn assembly
 
 When developing a Spark application, specify the Hadoop version by adding the
 "hadoop-client" artifact to your project's dependencies. For example, if you're