author    Andy Konwinski <andyk@berkeley.edu>  2013-03-13 02:02:34 -0700
committer Andy Konwinski <andyk@berkeley.edu>  2013-03-13 02:02:34 -0700
commit  b63109763ba695725f8fd2d4078c2ff6e2134d19 (patch)
tree    9c46f0ec1bec375e0eb9bb9664aa86bbc2a2c18b /docs/quick-start.md
parent  00c4d238ddb368a9ef4c6251e232bd5808ba80f4 (diff)
Fix broken link in Quick Start.
Diffstat (limited to 'docs/quick-start.md')

-rw-r--r--  docs/quick-start.md | 2 +-

1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 36d024f13a..de304cdaff 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -189,7 +189,7 @@ public class SimpleJob {
}
{% endhighlight %}
-This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. Note that like in the Scala example, we initialize a SparkContext, though we use the special `JavaSparkContext` class to get a Java-friendly one. We also create RDDs (represented by `JavaRDD`) and run transformations on them. Finally, we pass functions to Spark by creating classes that extend `spark.api.java.function.Function`. The [Java programming guide]("java-programming-guide") describes these differences in more detail.
+This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. Note that like in the Scala example, we initialize a SparkContext, though we use the special `JavaSparkContext` class to get a Java-friendly one. We also create RDDs (represented by `JavaRDD`) and run transformations on them. Finally, we pass functions to Spark by creating classes that extend `spark.api.java.function.Function`. The [Java programming guide](java-programming-guide.html) describes these differences in more detail.
To build the job, we also write a Maven `pom.xml` file that lists Spark as a dependency. Note that Spark artifacts are tagged with a Scala version.