author    Matei Zaharia <matei@eecs.berkeley.edu>    2012-09-26 23:53:38 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>    2012-09-26 23:53:38 -0700
commit    bf18e0994e48d6e9ef5f4feb2d3dff8e7c719954 (patch)
tree      5049a366ef1b24c85faf33c4ec865ca528f5a4dd /docs
parent    a4093f75637910524f501d36b268584006455d9b (diff)
Minor typos
Diffstat (limited to 'docs')
-rw-r--r--  docs/java-programming-guide.md | 2
-rw-r--r--  docs/running-on-yarn.md        | 4
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/java-programming-guide.md b/docs/java-programming-guide.md
index 2411e07849..4a36934553 100644
--- a/docs/java-programming-guide.md
+++ b/docs/java-programming-guide.md
@@ -33,7 +33,7 @@ There are a few key differences between the Java and Scala APIs:
* RDD methods like `collect()` and `countByKey()` return Java collections types,
such as `java.util.List` and `java.util.Map`.
* Key-value pairs, which are simply written as `(key, value)` in Scala, are represented
- by the `scala.Tuple2` class, and need to be created using `new Tuple2<K, V>(key, value)`
+ by the `scala.Tuple2` class, and need to be created using `new Tuple2<K, V>(key, value)`.
## RDD Classes
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 081b67ae1e..501b19b79e 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -18,7 +18,7 @@ separate branch of Spark, called `yarn`, which you can do as follows:
- In order to distribute Spark within the cluster, it must be packaged into a single JAR file. This can be done by running `sbt/sbt assembly`
- Your application code must be packaged into a separate JAR file.
-If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_2.9.1-0.6.0-SNAPSHOT.jar` file can be generated by running `sbt/sbt package`.
+If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_2.9.2-0.6.0-SNAPSHOT.jar` file can be generated by running `sbt/sbt package`.
# Launching Spark on YARN
@@ -35,7 +35,7 @@ The command to launch the YARN Client is as follows:
For example:
SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
- --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar \
+ --jar examples/target/scala-2.9.2/spark-examples_2.9.2-0.6.0-SNAPSHOT.jar \
--class spark.examples.SparkPi \
--args standalone \
--num-workers 3 \