path: root/docs/running-on-yarn.md
author Reynold Xin <rxin@cs.berkeley.edu> 2013-05-17 18:05:46 -0700
committer Reynold Xin <rxin@cs.berkeley.edu> 2013-05-17 18:05:46 -0700
commit 0eab7a78b90e2593075c479282f631a5a20e77a9 (patch)
tree 90852eed295bb1e7009812d1c11b7ac010f5dfdb /docs/running-on-yarn.md
parent 7760d78b3aca8d9f6563ff1f7c2d88905b70e941 (diff)
Fixed a couple of typos and formatting problems in the YARN documentation.
Diffstat (limited to 'docs/running-on-yarn.md')
-rw-r--r-- docs/running-on-yarn.md | 20
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 3946100247..66fb8d73e8 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -14,29 +14,31 @@ Ex: mvn -Phadoop2-yarn clean install
We need a consolidated spark core jar (which bundles all the required dependencies) to run Spark jobs on a yarn cluster.
This can be built either through sbt or via maven.
-- Building spark assembled jar via sbt.
-It is a manual process of enabling it in project/SparkBuild.scala.
+- Building spark assembled jar via sbt.
+ It is a manual process of enabling it in project/SparkBuild.scala.
Please comment out the
HADOOP_VERSION, HADOOP_MAJOR_VERSION and HADOOP_YARN
variables before the line 'For Hadoop 2 YARN support'
Next, uncomment the subsequent 3 variable declaration lines (for these three variables) which enable hadoop yarn support.
-Assembly of the jar Ex:
-./sbt/sbt clean assembly
+Assembly of the jar Ex:
+
+ ./sbt/sbt clean assembly
The assembled jar would typically be something like :
-./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar
+`./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar`
-- Building spark assembled jar via sbt.
-Use the hadoop2-yarn profile and execute the package target.
+- Building spark assembled jar via Maven.
+ Use the hadoop2-yarn profile and execute the package target.
Something like this. Ex:
-$ mvn -Phadoop2-yarn clean package -DskipTests=true
+
+ mvn -Phadoop2-yarn clean package -DskipTests=true
This will build the shaded (consolidated) jar. Typically something like :
-./repl-bin/target/spark-repl-bin-<VERSION>-shaded-hadoop2-yarn.jar
+`./repl-bin/target/spark-repl-bin-<VERSION>-shaded-hadoop2-yarn.jar`
# Preparations
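Taken together, the patched section describes two equivalent ways to produce the consolidated Spark jar for YARN. A condensed sketch of the commands, for reference (jar paths and the `0.8.0-SNAPSHOT` version are illustrative, taken from the diff above; the sbt path also requires the manual edits to `project/SparkBuild.scala` that the documentation describes):

```shell
# Option 1: build the assembled jar via sbt
# (first comment out HADOOP_VERSION, HADOOP_MAJOR_VERSION and HADOOP_YARN
#  in project/SparkBuild.scala and uncomment the YARN variants, per the docs)
./sbt/sbt clean assembly
# produces something like ./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar

# Option 2: build the shaded jar via Maven, using the hadoop2-yarn profile
mvn -Phadoop2-yarn clean package -DskipTests=true
# produces something like
# ./repl-bin/target/spark-repl-bin-<VERSION>-shaded-hadoop2-yarn.jar
```

Either jar bundles all required dependencies, which is what allows a Spark job to be shipped to a YARN cluster as a single artifact.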