Diffstat (limited to 'docs/running-on-yarn.md')
 docs/running-on-yarn.md | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 501b19b79e..dd094ab131 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -3,10 +3,11 @@ layout: global
 title: Launching Spark on YARN
 ---
-Spark 0.6 adds experimental support for running over a [YARN (Hadoop
-NextGen)](http://hadoop.apache.org/docs/r2.0.1-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster.
-Because YARN depends on version 2.0 of the Hadoop libraries, this currently requires checking out a
-separate branch of Spark, called `yarn`, which you can do as follows:
+Experimental support for running over a [YARN (Hadoop
+NextGen)](http://hadoop.apache.org/docs/r2.0.1-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html)
+cluster was added to Spark in version 0.6.0. Because YARN depends on version
+2.0 of the Hadoop libraries, this currently requires checking out a separate
+branch of Spark, called `yarn`, which you can do as follows:
 
     git clone git://github.com/mesos/spark
     cd spark
@@ -18,7 +19,7 @@ separate branch of Spark, called `yarn`, which you can do as follows:
 - In order to distribute Spark within the cluster, it must be packaged into a single JAR file. This can be done by running `sbt/sbt assembly`
 - Your application code must be packaged into a separate JAR file.
 
-If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_2.9.2-0.6.0-SNAPSHOT.jar` file can be generated by running `sbt/sbt package`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
+If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}-SNAPSHOT.jar` file can be generated by running `sbt/sbt package`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
 
 # Launching Spark on YARN
@@ -34,8 +35,8 @@ The command to launch the YARN Client is as follows:
 
 For example:
 
-    SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
-      --jar examples/target/scala-2.9.2/spark-examples_2.9.2-0.6.0-SNAPSHOT.jar \
+    SPARK_JAR=./core/target/spark-core-assembly-{{site.SPARK_VERSION}}-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
+      --jar examples/target/scala-{{site.SCALA_VERSION}}/spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}-SNAPSHOT.jar \
       --class spark.examples.SparkPi \
       --args standalone \
       --num-workers 3 \
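The change above replaces hardcoded version strings with Jekyll template variables, so the docs stay correct as the site's Scala and Spark versions change. As a minimal sketch of how those placeholders expand — assuming illustrative site values of `2.9.2` and `0.6.0`, which in the real docs build come from Jekyll's `_config.yml`:

```shell
# Hypothetical site values; the real ones are defined in the docs' _config.yml.
SCALA_VERSION="2.9.2"
SPARK_VERSION="0.6.0"

# The templated jar name introduced by the diff.
template='spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}-SNAPSHOT.jar'

# Substitute the template variables the way Jekyll would at build time.
jar=$(echo "$template" \
  | sed -e "s/{{site.SCALA_VERSION}}/$SCALA_VERSION/g" \
        -e "s/{{site.SPARK_VERSION}}/$SPARK_VERSION/g")

echo "$jar"
```

With those sample values the rendered name matches the jar name the old hardcoded text spelled out, `spark-examples_2.9.2-0.6.0-SNAPSHOT.jar`, which is why the substitution is a pure documentation refactor.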