author    Matei Zaharia <matei@eecs.berkeley.edu>  2012-09-12 19:38:15 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>  2012-09-12 19:38:15 -0700
commit  35e17be8408d126e8daa2ba6a42508074917e681 (patch)
tree    8813df080f1c04e276c0134c97f24c55d4d43cb7 /docs/index.md
parent  b4dfa25c8a6dc242cf36b5558ed19672f0ea99c3 (diff)
parent  c92e6169cf83d0fb87220999db993869912e6438 (diff)
Merge branch 'dev' of github.com:mesos/spark into dev
Diffstat (limited to 'docs/index.md')
-rw-r--r--  docs/index.md  17
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/docs/index.md b/docs/index.md
index a1fe3b2e56..48ab151e41 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -14,7 +14,7 @@ Get Spark by checking out the master branch of the Git repository, using `git cl
Spark requires [Scala 2.9](http://www.scala-lang.org/).
In addition, to run Spark on a cluster, you will need to install [Mesos](http://incubator.apache.org/mesos/), using the steps in
-[[Running Spark on Mesos]]. However, if you just want to run Spark on a single machine (possibly using multiple cores),
+[Running Spark on Mesos]({{HOME_PATH}}running-on-mesos.html). However, if you just want to run Spark on a single machine (possibly using multiple cores),
you do not need Mesos.
To build and run Spark, you will need to have Scala's `bin` directory in your `PATH`,
@@ -51,17 +51,18 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
# Where to Go from Here
-* [Spark Programming Guide](/programming-guide.html): how to get started using Spark, and details on the API
-* [Running Spark on Amazon EC2](/running-on-amazon-ec2.html): scripts that let you launch a cluster on EC2 in about 5 minutes
-* [Running Spark on Mesos](/running-on-mesos.html): instructions on how to deploy to a private cluster
-* [Configuration](/configuration.html)
-* [Bagel Programming Guide](/bagel-programming-guide.html): implementation of Google's Pregel on Spark
-* [Spark Debugger](/spark-debugger.html): experimental work on a debugger for Spark jobs
+* [Spark Programming Guide]({{HOME_PATH}}programming-guide.html): how to get started using Spark, and details on the API
+* [Running Spark on Amazon EC2]({{HOME_PATH}}running-on-amazon-ec2.html): scripts that let you launch a cluster on EC2 in about 5 minutes
+* [Running Spark on Mesos]({{HOME_PATH}}running-on-mesos.html): instructions on how to deploy to a private cluster
+* [Configuration]({{HOME_PATH}}configuration.html)
+* [Bagel Programming Guide]({{HOME_PATH}}bagel-programming-guide.html): implementation of Google's Pregel on Spark
+* [Spark Debugger]({{HOME_PATH}}spark-debugger.html): experimental work on a debugger for Spark jobs
* [Contributing to Spark](contributing-to-spark.html)
# Other Resources
* [Spark Homepage](http://www.spark-project.org)
+* [AMPCamp](http://ampcamp.berkeley.edu/): All AMPCamp presentation videos are available online. Going through the videos and exercises is a great way to sharpen your Spark skills.
* [Paper describing the programming model](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
* [Code Examples](http://spark-project.org/examples.html) (more also available in the [examples subfolder](https://github.com/mesos/spark/tree/master/examples/src/main/scala/spark/examples) of the Spark codebase)
* [Mailing List](http://groups.google.com/group/spark-users)
@@ -72,4 +73,4 @@ To keep up with Spark development or get help, sign up for the [spark-users mail
If you're in the San Francisco Bay Area, there's a regular [Spark meetup](http://www.meetup.com/spark-users/) every few weeks. Come by to meet the developers and other users.
-If you'd like to contribute code to Spark, read [how to contribute](Contributing to Spark).
+If you'd like to contribute code to Spark, read [how to contribute]({{HOME_PATH}}contributing-to-spark.html).
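The setup this page's doc describes (Scala's `bin` directory on `PATH`, then building Spark with `sbt/sbt clean compile`) can be sketched as shell commands. This is a hedged sketch: the `SCALA_HOME` install location below is an assumed placeholder, not something stated in the diff.

```shell
# Sketch of the setup described in docs/index.md: put Scala's `bin`
# directory on PATH, then build Spark with sbt. The Scala install path
# here is an assumed placeholder, not taken from the document.
SCALA_HOME=/opt/scala-2.9.2
export PATH="$SCALA_HOME/bin:$PATH"

# Confirm Scala's bin directory now leads the search path:
echo "$PATH" | cut -d: -f1   # prints /opt/scala-2.9.2/bin

# Cloning and building (commented out: needs network access and sbt):
# git clone git://github.com/mesos/spark.git
# cd spark
# sbt/sbt clean compile
```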