author    Matei Zaharia <matei@eecs.berkeley.edu>    2013-08-31 17:40:33 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>    2013-08-31 17:40:33 -0700
commit    9ddad0dcb47e3326151a53e270448b5135805ae5
tree      76f17bcb3af42b67b2e0ee93e1367d8e6dff8398 /docs/index.md
parent    4819baa658a6c8a3e4c5c504af284ea6091e4c35
Fixes suggested by Patrick
Diffstat (limited to 'docs/index.md')
-rw-r--r--  docs/index.md | 9
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/docs/index.md b/docs/index.md
index bcd7dad6ae..0ea0e103e4 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -40,12 +40,13 @@ Python interpreter (`./pyspark`). These are a great way to learn Spark.
Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
storage systems. Because the HDFS protocol has changed in different versions of
Hadoop, you must build Spark against the same version that your cluster uses.
-You can do this by setting the `SPARK_HADOOP_VERSION` variable when compiling:
+By default, Spark links to Hadoop 1.0.4. You can change this by setting the
+`SPARK_HADOOP_VERSION` variable when compiling:
SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly
-In addition, if you wish to run Spark on [YARN](running-on-yarn.md), you should also
-set `SPARK_YARN`:
+In addition, if you wish to run Spark on [YARN](running-on-yarn.md), set
+`SPARK_YARN` to `true`:
SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
@@ -94,7 +95,7 @@ set `SPARK_YARN`:
exercises about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
[slides](http://ampcamp.berkeley.edu/agenda-2012) and [exercises](http://ampcamp.berkeley.edu/exercises-2012) are
available online for free.
-* [Code Examples](http://spark.incubator.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/mesos/spark/tree/master/examples/src/main/scala/spark/examples) of Spark
+* [Code Examples](http://spark.incubator.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/mesos/spark/tree/master/examples/src/main/scala/) of Spark
* [Paper Describing Spark](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
* [Paper Describing Spark Streaming](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf)
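
For reference, a minimal sketch of the assembly invocations described in the first hunk above, assuming the `sbt/sbt` launcher script shipped in this Spark tree and reusing the Hadoop version strings from the diff (substitute whichever version your cluster actually runs):

    # Default build: links against Hadoop 1.0.4, no variables needed
    sbt/sbt assembly

    # Build against the Hadoop version your HDFS cluster uses, e.g. 1.2.1
    SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

    # Build against a Hadoop 2.x release and enable YARN support
    SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly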