author    Matei Zaharia <matei@eecs.berkeley.edu>  2013-09-06 00:29:37 -0400
committer Matei Zaharia <matei@eecs.berkeley.edu>  2013-09-08 00:29:11 -0700
commit    98fb69822cf780160bca51abeaab7c82e49fab54 (patch)
tree      524a1e75519f6a5cc65d004501d6237228db97f2 /docs/index.md
parent    38488aca8a67d5d2749b82e3fd5f3dc50873d09a (diff)
Work in progress:
- Add job scheduling docs
- Rename some fair scheduler properties
- Organize intro page better
- Link to Apache wiki for "contributing to Spark"
Diffstat (limited to 'docs/index.md')
-rw-r--r--  docs/index.md  26
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/docs/index.md b/docs/index.md
index d3aacc629f..1814cb19c8 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -21,7 +21,7 @@ Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with
For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_VERSION}}. If you write applications in Scala, you will need to use this same version of Scala in your own program -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
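As a hedged illustration of that requirement (the artifact name and version numbers below are placeholders, not taken from this page), an application's sbt build would pin the matching Scala release roughly like this:

```scala
// Hypothetical build.sbt excerpt -- coordinates and versions are placeholders.
// The point is that scalaVersion must match the Scala release Spark was built with.
scalaVersion := "2.9.3"

libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.0-incubating"
```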
-# Testing the Build
+# Running the Examples and Shell
Spark comes with several sample programs in the `examples` directory.
To run one of the samples, use `./run-example <class> <params>` in the top-level Spark directory
@@ -34,14 +34,16 @@ to connect to. This can be a [URL for a distributed cluster](scala-programming-g
or `local` to run locally with one thread, or `local[N]` to run locally with N threads. You should start by using
`local` for testing.
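For illustration, a standalone program makes the same choice by passing the master URL to its SparkContext. The sketch below is an assumption based on the Scala API of this era (class name and constructor usage are not part of this patch); it estimates Pi using `local[2]`:

```scala
import org.apache.spark.SparkContext

// Hypothetical example: the master URL ("local[2]" here) plays the same role
// as the <master> argument that ./run-example passes to the sample programs.
object PiSketch {
  def main(args: Array[String]) {
    val sc = new SparkContext("local[2]", "Pi Sketch")
    val n = 100000
    // Sample random points in the unit square and count those inside the unit circle.
    val inside = sc.parallelize(1 to n).map { _ =>
      val x = math.random * 2 - 1
      val y = math.random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * inside / n)
    sc.stop()
  }
}
```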
-Finally, Spark can be used interactively through modified versions of the Scala shell (`./spark-shell`) or
-Python interpreter (`./pyspark`). These are a great way to learn Spark.
+Finally, you can run Spark interactively through modified versions of the Scala shell (`./spark-shell`) or
+Python interpreter (`./pyspark`). These are a great way to learn the framework.
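As a small, hedged illustration of such a session (the data is made up; `sc` is the SparkContext the shell predefines):

```scala
// Entered at the spark-shell prompt; sc is provided by the shell.
val nums = sc.parallelize(1 to 1000)
nums.filter(_ % 2 == 0).count()   // runs on the chosen master and returns 500
```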
-# Running on a Cluster
+# Launching on a Cluster
-Spark supports several options for deployment:
+The Spark [cluster mode overview](cluster-overview.html) explains the key concepts in running on a cluster.
+Spark can run either by itself or on top of several existing cluster managers. It currently provides several
+options for deployment:
-* [Amazon EC2](ec2-scripts.html): our scripts let you launch a cluster in about 5 minutes
+* [Amazon EC2](ec2-scripts.html): our EC2 scripts let you launch a cluster in about 5 minutes
* [Standalone Deploy Mode](spark-standalone.html): simplest way to deploy Spark on a private cluster
* [Apache Mesos](running-on-mesos.html)
* [Hadoop YARN](running-on-yarn.html)
@@ -91,19 +93,21 @@ In addition, if you wish to run Spark on [YARN](running-on-yarn.md), set
**Deployment guides:**
-* [Running Spark on Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
+* [Cluster Overview](cluster-overview.html): overview of concepts and components when running on a cluster
+* [Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
* [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
-* [Running Spark on Mesos](running-on-mesos.html): deploy a private cluster using
+* [Mesos](running-on-mesos.html): deploy a private cluster using
[Apache Mesos](http://incubator.apache.org/mesos)
-* [Running Spark on YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
+* [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
**Other documents:**
* [Configuration](configuration.html): customize Spark via its configuration system
* [Tuning Guide](tuning.html): best practices to optimize performance and memory use
* [Hardware Provisioning](hardware-provisioning.html): recommendations for cluster hardware
-* [Building Spark with Maven](building-with-maven.html): Build Spark using the Maven build tool
-* [Contributing to Spark](contributing-to-spark.html)
+* [Job Scheduling](job-scheduling.html): scheduling resources across and within Spark applications
+* [Building Spark with Maven](building-with-maven.html): build Spark using the Maven system
+* [Contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark)
**External resources:**