-rwxr-xr-x  docs/_layouts/global.html      15
-rw-r--r--  docs/img/incubator-logo.png    bin 0 -> 11651 bytes
-rw-r--r--  docs/index.md                  53
-rw-r--r--  docs/running-on-yarn.md        39
-rwxr-xr-x  pyspark                         2
-rwxr-xr-x  run-example                     2
-rwxr-xr-x  spark-class                     4
7 files changed, 58 insertions, 57 deletions
diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html
index a014554462..91a4a2eaee 100755
--- a/docs/_layouts/global.html
+++ b/docs/_layouts/global.html
@@ -66,6 +66,7 @@
<li><a href="python-programming-guide.html">Spark in Python</a></li>
<li class="divider"></li>
<li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
+ <li><a href="mllib-programming-guide.html">MLlib (Machine Learning)</a></li>
<li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
</ul>
</li>
@@ -77,8 +78,8 @@
<li><a href="api/pyspark/index.html">Spark Core for Python</a></li>
<li class="divider"></li>
<li><a href="api/streaming/index.html">Spark Streaming</a></li>
- <li><a href="api/bagel/index.html">Bagel (Pregel on Spark)</a></li>
<li><a href="api/mllib/index.html">MLlib (Machine Learning)</a></li>
+ <li><a href="api/bagel/index.html">Bagel (Pregel on Spark)</a></li>
</ul>
</li>
@@ -140,9 +141,15 @@
<hr>-->
- <!--<footer>
- <p></p>
- </footer>-->
+ <footer>
+ <hr>
+          <p style="text-align: center; vertical-align: middle; color: #999;">
+ Apache Spark is an effort undergoing incubation at the Apache Software Foundation.
+ <a href="http://incubator.apache.org">
+ <img style="margin-left: 20px;" src="img/incubator-logo.png" />
+ </a>
+ </p>
+ </footer>
</div> <!-- /container -->
diff --git a/docs/img/incubator-logo.png b/docs/img/incubator-logo.png
new file mode 100644
index 0000000000..33ca7f6227
--- /dev/null
+++ b/docs/img/incubator-logo.png
Binary files differ
diff --git a/docs/index.md b/docs/index.md
index ec9c7dd4f3..5aa7f74059 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -3,13 +3,13 @@ layout: global
title: Spark Overview
---
-Spark is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an interpreter.
-It provides clean, language-integrated APIs in [Scala](scala-programming-guide.html), [Java](java-programming-guide.html), and [Python](python-programming-guide.html), with a rich array of parallel operators.
+Apache Spark is a cluster computing engine that aims to make data analytics both easier and faster.
+It provides rich, language-integrated APIs in [Scala](scala-programming-guide.html), [Java](java-programming-guide.html), and [Python](python-programming-guide.html), and a powerful execution engine that supports general operator graphs.
Spark can run on the Apache Mesos cluster manager, Hadoop YARN, Amazon EC2, or without an independent resource manager ("standalone mode").
# Downloading
-Get Spark by visiting the [downloads page](http://spark-project.org/downloads.html) of the Spark website. This documentation is for Spark version {{site.SPARK_VERSION}}.
+Get Spark from the [downloads page](http://spark.incubator.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
# Building
@@ -42,11 +42,17 @@ Finally, Spark can be used interactively from a modified version of the Scala in
# A Note About Hadoop Versions
-Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
+Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
storage systems. Because the HDFS protocol has changed in different versions of
-Hadoop, you must build Spark against the same version that your cluster runs.
-You can change the version by setting the `HADOOP_VERSION` variable at the top
-of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
+Hadoop, you must build Spark against the same version that your cluster uses.
+You can do this by setting the `SPARK_HADOOP_VERSION` variable when compiling:
+
+ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly
+
+In addition, if you wish to run Spark on [YARN](running-on-yarn.html), you should also
+set `SPARK_YARN` to `true`:
+
+ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
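+
+If you are not sure which version your cluster runs, a quick way to check (a minimal sketch, assuming your cluster's `hadoop` client command is on your `PATH`) is:
+
+    # Print the Hadoop version reported by the installed client
+    hadoop version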
# Where to Go from Here
@@ -54,15 +60,20 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
* [Quick Start](quick-start.html): a quick introduction to the Spark API; start here!
* [Spark Programming Guide](scala-programming-guide.html): an overview of Spark concepts, and details on the Scala API
-* [Java Programming Guide](java-programming-guide.html): using Spark from Java
-* [Python Programming Guide](python-programming-guide.html): using Spark from Python
-* [Spark Streaming Guide](streaming-programming-guide.html): using the alpha release of Spark Streaming
+ * [Java Programming Guide](java-programming-guide.html): using Spark from Java
+ * [Python Programming Guide](python-programming-guide.html): using Spark from Python
+* [Spark Streaming](streaming-programming-guide.html): using the alpha release of Spark Streaming
+* [MLlib (Machine Learning)](mllib-programming-guide.html): Spark's built-in machine learning library
+* [Bagel (Pregel on Spark)](bagel-programming-guide.html): simple graph processing model
**API Docs:**
-* [Spark Java/Scala (Scaladoc)](api/core/index.html)
-* [Spark Python (Epydoc)](api/pyspark/index.html)
-* [Spark Streaming Java/Scala (Scaladoc)](api/streaming/index.html)
+* [Spark for Java/Scala (Scaladoc)](api/core/index.html)
+* [Spark for Python (Epydoc)](api/pyspark/index.html)
+* [Spark Streaming for Java/Scala (Scaladoc)](api/streaming/index.html)
+* [MLlib (Machine Learning) for Java/Scala (Scaladoc)](api/mllib/index.html)
+* [Bagel (Pregel on Spark) for Scala (Scaladoc)](api/bagel/index.html)
+
**Deployment guides:**
@@ -74,27 +85,27 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
**Other documents:**
-* [Building Spark With Maven](building-with-maven.html): Build Spark using the Maven build tool
* [Configuration](configuration.html): customize Spark via its configuration system
* [Tuning Guide](tuning.html): best practices to optimize performance and memory use
-* [Bagel](bagel-programming-guide.html): an implementation of Google's Pregel on Spark
+* [Hardware Provisioning](hardware-provisioning.html): recommendations for cluster hardware
+* [Building Spark with Maven](building-with-maven.html): Build Spark using the Maven build tool
* [Contributing to Spark](contributing-to-spark.html)
**External resources:**
-* [Spark Homepage](http://www.spark-project.org)
-* [Mailing List](http://groups.google.com/group/spark-users): ask questions about Spark here
-* [AMP Camp](http://ampcamp.berkeley.edu/): a two-day training camp at UC Berkeley that featured talks and exercises
- about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
+* [Spark Homepage](http://spark.incubator.apache.org)
+* [Mailing Lists](http://spark.incubator.apache.org/mailing-lists.html): ask questions about Spark here
+* [AMP Camps](http://ampcamp.berkeley.edu/): a series of training camps at UC Berkeley that featured talks and
+ exercises about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
[slides](http://ampcamp.berkeley.edu/agenda-2012) and [exercises](http://ampcamp.berkeley.edu/exercises-2012) are
available online for free.
-* [Code Examples](http://spark-project.org/examples.html): more are also available in the [examples subfolder](https://github.com/mesos/spark/tree/master/examples/src/main/scala/spark/examples) of Spark
+* [Code Examples](http://spark.incubator.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/mesos/spark/tree/master/examples/src/main/scala/spark/examples) of Spark
* [Paper Describing Spark](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
* [Paper Describing Spark Streaming](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf)
# Community
-To get help using Spark or keep up with Spark development, sign up for the [spark-users mailing list](http://groups.google.com/group/spark-users).
+To get help using Spark or keep up with Spark development, sign up for the [user mailing list](http://spark.incubator.apache.org/mailing-lists.html).
If you're in the San Francisco Bay Area, there's a regular [Spark meetup](http://www.meetup.com/spark-users/) every few weeks. Come by to meet the developers and other users.
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 678cd57aba..fe5334ffdc 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -3,50 +3,33 @@ layout: global
title: Launching Spark on YARN
---
-Experimental support for running over a [YARN (Hadoop
+Support for running on [YARN (Hadoop
NextGen)](http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html)
-cluster was added to Spark in version 0.6.0. This was merged into master as part of 0.7 effort.
-To build spark with YARN support, please use the hadoop2-yarn profile.
-Ex: mvn -Phadoop2-yarn clean install
+was added to Spark in version 0.6.0, and improved in 0.7.0 and 0.8.0.
-# Building spark core consolidated jar.
+# Building a YARN-Enabled Assembly JAR
-We need a consolidated spark core jar (which bundles all the required dependencies) to run Spark jobs on a yarn cluster.
-This can be built either through sbt or via maven.
+We need a consolidated Spark JAR (which bundles all the required dependencies) to run Spark jobs on a YARN cluster.
+This can be built by setting the `SPARK_HADOOP_VERSION` and `SPARK_YARN` environment variables, as follows:
-- Building spark assembled jar via sbt.
-Enable YARN support by setting `SPARK_YARN=true` when invoking sbt:
+ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true ./sbt/sbt assembly
- SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true ./sbt/sbt clean assembly
-
-The assembled jar would typically be something like :
-`./yarn/target/spark-yarn-assembly-0.8.0-SNAPSHOT.jar`
-
-
-- Building spark assembled jar via Maven.
- Use the hadoop2-yarn profile and execute the package target.
-
-Something like this. Ex:
-
- mvn -Phadoop2-yarn -Dhadoop.version=2.0.5-alpha clean package -DskipTests=true
-
-
-This will build the shaded (consolidated) jar. Typically something like :
-`./yarn/target/spark-yarn-bin-<VERSION>-shaded.jar`
+The assembled JAR will be something like this:
+`./assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly_{{site.SPARK_VERSION}}-hadoop2.0.5.jar`.
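+
+As a quick sanity check after the build (a sketch; the exact file name depends on the Scala, Spark and Hadoop versions you built against), you can list the assembly JAR that the launch scripts look for:
+
+    # The YARN-enabled assembly JAR produced by sbt/sbt assembly
+    ls assembly/target/scala-*/spark-assembly*hadoop*.jar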
# Preparations
-- Building spark-yarn assembly (see above).
+- Building a YARN-enabled assembly (see above).
- Your application code must be packaged into a separate JAR file.
-If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}` file can be generated by running `sbt/sbt package`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
+If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}` file can be generated by running `sbt/sbt assembly`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt assembly command will obviously be different.
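+
+For example, after `sbt/sbt assembly` you could locate the examples JAR roughly like this (a sketch; the glob mirrors what the `run-example` script searches for under `examples/target`):
+
+    # The examples JAR built alongside the main assembly
+    ls examples/target/spark-examples*.jar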
# Configuration
Most of the configs are the same for Spark on YARN as other deploys. See the Configuration page for more information on those. These are configs that are specific to Spark on YARN.
-* `SPARK_YARN_USER_ENV`, to add environment variables to the Spark processes launched on YARN. This can be a comma separated list of environment variables. ie SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"
+* `SPARK_YARN_USER_ENV`, to add environment variables to the Spark processes launched on YARN. This can be a comma-separated list of environment variables, e.g. `SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"`.
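+
+For example, you could export it in the shell you launch your application from (a sketch; adjust the values for your cluster):
+
+    export SPARK_YARN_USER_ENV="JAVA_HOME=/jdk64,FOO=bar"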
# Launching Spark on YARN
diff --git a/pyspark b/pyspark
index 2dba2ceb21..4941a36d0d 100755
--- a/pyspark
+++ b/pyspark
@@ -31,7 +31,7 @@ if [ ! -f "$FWDIR/RELEASE" ]; then
ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/spark-assembly*hadoop*.jar >& /dev/null
if [[ $? != 0 ]]; then
echo "Failed to find Spark assembly in $FWDIR/assembly/target" >&2
- echo "You need to compile Spark before running this program" >&2
+ echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
exit 1
fi
fi
diff --git a/run-example b/run-example
index ccd4356bdf..24d83ba5cf 100755
--- a/run-example
+++ b/run-example
@@ -50,7 +50,7 @@ if [ -e "$EXAMPLES_DIR"/target/spark-examples*[0-9T].jar ]; then
fi
if [[ -z $SPARK_EXAMPLES_JAR ]]; then
echo "Failed to find Spark examples assembly in $FWDIR/examples/target" >&2
- echo "You need to compile Spark before running this program" >&2
+ echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
exit 1
fi
diff --git a/spark-class b/spark-class
index 5ef3de9773..244b78b4e1 100755
--- a/spark-class
+++ b/spark-class
@@ -102,10 +102,10 @@ export JAVA_OPTS
if [ ! -f "$FWDIR/RELEASE" ]; then
# Exit if the user hasn't compiled Spark
- ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/spark-assembly*.jar >& /dev/null
+ ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/spark-assembly*hadoop*.jar >& /dev/null
if [[ $? != 0 ]]; then
echo "Failed to find Spark assembly in $FWDIR/assembly/target" >&2
- echo "You need to compile Spark before running this program" >&2
+ echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
exit 1
fi
fi