author    Matei Zaharia <matei@eecs.berkeley.edu>  2012-09-25 23:59:04 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>  2012-09-25 23:59:04 -0700
commit    d51d5e0582c0605deae7497cd95a055698dc9383
tree      6c402339d4aeae723f4985f7444a60fd0416be38 /docs
parent    c5754bb9399a59c4a83d28e618fea87900aa8f8a
Doc fixes
Diffstat (limited to 'docs')
-rwxr-xr-x  docs/_layouts/global.html   2
-rw-r--r--  docs/index.md               4
-rw-r--r--  docs/running-on-yarn.md    28
3 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html
index 578814017e..dee7f65d0d 100755
--- a/docs/_layouts/global.html
+++ b/docs/_layouts/global.html
@@ -6,7 +6,7 @@
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
- <title>{{ page.title }}</title>
+ <title>{{ page.title }} - Spark 0.6.0 Documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="{{HOME_PATH}}css/bootstrap.min.css">
diff --git a/docs/index.md b/docs/index.md
index cdc96200a8..26b2cc0840 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -55,10 +55,12 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
# Where to Go from Here
**Programming guides:**
+
* [Spark Programming Guide]({{HOME_PATH}}scala-programming-guide.html): how to get started using Spark, and details on the Scala API
* [Java Programming Guide]({{HOME_PATH}}java-programming-guide.html): using Spark from Java
**Deployment guides:**
+
* [Running Spark on Amazon EC2]({{HOME_PATH}}ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
* [Standalone Deploy Mode]({{HOME_PATH}}spark-standalone.html): launch a standalone cluster quickly without Mesos
* [Running Spark on Mesos]({{HOME_PATH}}running-on-mesos.html): deploy a private cluster using
@@ -66,12 +68,14 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
* [Running Spark on YARN]({{HOME_PATH}}running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
**Other documents:**
+
* [Configuration]({{HOME_PATH}}configuration.html): customize Spark via its configuration system
* [API docs (Scaladoc)]({{HOME_PATH}}api/core/index.html)
* [Bagel]({{HOME_PATH}}bagel-programming-guide.html): an implementation of Google's Pregel on Spark
* [Contributing to Spark](contributing-to-spark.html)
**External resources:**
+
* [Spark Homepage](http://www.spark-project.org)
* [AMP Camp](http://ampcamp.berkeley.edu/): a two-day training camp at UC Berkeley that featured talks and exercises
about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 7cd46da940..19e7aede27 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -16,23 +16,23 @@ If you want to test out the YARN deployment mode, you can use the current spark
The command to launch the YARN Client is as follows:
- SPARK_JAR=<SPARK_YAR_FILE> ./run spark.deploy.yarn.Client
- --jar <YOUR_APP_JAR_FILE>
- --class <APP_MAIN_CLASS>
- --args <APP_MAIN_ARGUMENTS>
- --num-workers <NUMBER_OF_WORKER_MACHINES>
- --worker-memory <MEMORY_PER_WORKER>
- --worker-cores <CORES_PER_WORKER>
+ SPARK_JAR=<SPARK_YAR_FILE> ./run spark.deploy.yarn.Client \
+ --jar <YOUR_APP_JAR_FILE> \
+ --class <APP_MAIN_CLASS> \
+ --args <APP_MAIN_ARGUMENTS> \
+ --num-workers <NUMBER_OF_WORKER_MACHINES> \
+ --worker-memory <MEMORY_PER_WORKER> \
+ --worker-cores <CORES_PER_WORKER>
For example:
- SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client
- --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar
- --class spark.examples.SparkPi
- --args standalone
- --num-workers 3
- --worker-memory 2g
- --worker-cores 2
+ SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
+ --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar \
+ --class spark.examples.SparkPi \
+ --args standalone \
+ --num-workers 3 \
+ --worker-memory 2g \
+ --worker-cores 2
The above starts a YARN client program which periodically polls the Application Master for status updates and displays them in the console. The client exits once your application has finished running.
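For reference outside the diff, the launch invocation added in this patch can be sketched as a small shell snippet. The jar paths, class name, and flag values simply mirror the example in the patch; they are illustrative and not verified against any particular checkout:

```shell
# Sketch of the YARN client invocation from the patch above.
# SPARK_JAR names the Spark assembly jar; the remaining flags are
# passed through to spark.deploy.yarn.Client. Paths are illustrative.
SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar
CLIENT_ARGS="--jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar \
  --class spark.examples.SparkPi \
  --args standalone \
  --num-workers 3 \
  --worker-memory 2g \
  --worker-cores 2"
# Echo the command instead of executing it, so it can be inspected
# (or piped to sh) once the paths match a real build.
echo SPARK_JAR="$SPARK_JAR" ./run spark.deploy.yarn.Client $CLIENT_ARGS
```

Dropping the `echo` runs the client directly, provided the assembly and examples jars exist at those paths.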