author    Matei Zaharia <matei@eecs.berkeley.edu>    2012-09-25 23:51:04 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>    2012-09-25 23:51:04 -0700
commit    c5754bb9399a59c4a83d28e618fea87900aa8f8a (patch)
tree      98e64c44646814907ec87d7f93e89f96653705a8 /docs/java-programming-guide.md
parent    f1246cc7c18bd0c155f920f4dc593e88147a94e4 (diff)
Fixes to Java guide
Diffstat (limited to 'docs/java-programming-guide.md')
-rw-r--r--  docs/java-programming-guide.md  90
1 file changed, 53 insertions, 37 deletions
diff --git a/docs/java-programming-guide.md b/docs/java-programming-guide.md
index 546d69bfe5..2411e07849 100644
--- a/docs/java-programming-guide.md
+++ b/docs/java-programming-guide.md
@@ -3,22 +3,22 @@ layout: global
title: Java Programming Guide
---
-The Spark Java API
-([spark.api.java]({{HOME_PATH}}api/core/index.html#spark.api.java.package)) defines
-[`JavaSparkContext`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaSparkContext) and
-[`JavaRDD`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaRDD) classes,
-which support
-the same methods as their Scala counterparts but take Java functions and return
-Java data and collection types.
-
-Because Java API is similar to the Scala API, this programming guide only
-covers Java-specific features;
-the [Scala Programming Guide]({{HOME_PATH}}scala-programming-guide.html)
-provides a more general introduction to Spark concepts and should be read
-first.
-
-
-# Key differences in the Java API
+The Spark Java API gives Java programs access to all of the Spark features available in the Scala version.
+To learn the basics of Spark, we recommend reading through the
+[Scala Programming Guide]({{HOME_PATH}}scala-programming-guide.html) first; it should be
+easy to follow even if you don't know Scala.
+This guide will show how to use the Spark features described there in Java.
+
+The Spark Java API is defined in the
+[`spark.api.java`]({{HOME_PATH}}api/core/index.html#spark.api.java.package) package, and includes
+a [`JavaSparkContext`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaSparkContext) for
+initializing Spark and [`JavaRDD`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaRDD) classes,
+which support the same methods as their Scala counterparts but take Java functions and return
+Java data and collection types. The main differences have to do with passing functions to RDD
+operations (e.g. `map`) and handling RDDs of different types, as discussed next.
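+
+For instance, a minimal sketch of getting started from Java might look like this (the
+`local` master URL and the input file name here are just placeholders):
+
+{% highlight java %}
+import spark.api.java.JavaRDD;
+import spark.api.java.JavaSparkContext;
+
+// Create a context the same way as in Scala: a master URL and a job name
+JavaSparkContext sc = new JavaSparkContext("local", "JavaGuideExample");
+
+// Load a text file as a JavaRDD<String>
+JavaRDD<String> lines = sc.textFile("data.txt");
+{% endhighlight %}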
+
+# Key Differences in the Java API
+
There are a few key differences between the Java and Scala APIs:
* Java does not support anonymous or first-class functions, so functions must
@@ -27,21 +27,25 @@ There are a few key differences between the Java and Scala APIs:
[`Function2`]({{HOME_PATH}}api/core/index.html#spark.api.java.function.Function2), etc.
classes.
* To maintain type safety, the Java API defines specialized Function and RDD
- classes for key-value pairs and doubles.
-* RDD methods like `collect` and `countByKey` return Java collections types,
+ classes for key-value pairs and doubles. For example,
+ [`JavaPairRDD`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaPairRDD)
+ stores key-value pairs.
+* RDD methods like `collect()` and `countByKey()` return Java collection types,
such as `java.util.List` and `java.util.Map`.
-
+* Key-value pairs, which are simply written as `(key, value)` in Scala, are represented
+ by the `scala.Tuple2` class and need to be created using `new Tuple2<K, V>(key, value)`
+ (see the sketch after this list).
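+
+As a sketch of these points together, here is the `lines` RDD from the earlier example
+mapped to key-value pairs (the pairing logic is just for illustration):
+
+{% highlight java %}
+import scala.Tuple2;
+import spark.api.java.JavaPairRDD;
+import spark.api.java.function.PairFunction;
+
+// Functions are concrete subclasses of the spark.api.java.function classes;
+// map with a PairFunction yields a JavaPairRDD of scala.Tuple2 pairs
+JavaPairRDD<String, Integer> pairs = lines.map(
+  new PairFunction<String, String, Integer>() {
+    public Tuple2<String, Integer> call(String s) {
+      return new Tuple2<String, Integer>(s, 1);
+    }
+  });
+{% endhighlight %}
+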
## RDD Classes
-Spark defines additional operations on RDDs of doubles and key-value pairs, such
-as `stdev` and `join`.
+
+Spark defines additional operations on RDDs of key-value pairs and doubles, such
+as `reduceByKey`, `join`, and `stdev`.
In the Scala API, these methods are automatically added using Scala's
[implicit conversions](http://www.scala-lang.org/node/130) mechanism.
-In the Java API, the extra methods are defined in
-[`JavaDoubleRDD`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaDoubleRDD) and
+In the Java API, the extra methods are defined in the
[`JavaPairRDD`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaPairRDD)
+and [`JavaDoubleRDD`]({{HOME_PATH}}api/core/index.html#spark.api.java.JavaDoubleRDD)
classes. RDD methods like `map` are overloaded by specialized `PairFunction`
and `DoubleFunction` classes, allowing them to return RDDs of the appropriate
types. Common methods like `filter` and `sample` are implemented by
@@ -57,22 +61,25 @@ class has a single abstract method, `call()`, that must be implemented.
<table class="table">
<tr><th>Class</th><th>Function Type</th></tr>
-<tr><td>Function&lt;T, R&gt;</td><td>T -&gt; R </td></tr>
-<tr><td>DoubleFunction&lt;T&gt;</td><td>T -&gt; Double </td></tr>
-<tr><td>PairFunction&lt;T, K, V&gt;</td><td>T -&gt; Tuple2&lt;K, V&gt; </td></tr>
+<tr><td>Function&lt;T, R&gt;</td><td>T =&gt; R </td></tr>
+<tr><td>DoubleFunction&lt;T&gt;</td><td>T =&gt; Double </td></tr>
+<tr><td>PairFunction&lt;T, K, V&gt;</td><td>T =&gt; Tuple2&lt;K, V&gt; </td></tr>
-<tr><td>FlatMapFunction&lt;T, R&gt;</td><td>T -&gt; Iterable&lt;R&gt; </td></tr>
-<tr><td>DoubleFlatMapFunction&lt;T&gt;</td><td>T -&gt; Iterable&lt;Double&gt; </td></tr>
-<tr><td>PairFlatMapFunction&lt;T, K, V&gt;</td><td>T -&gt; Iterable&lt;Tuple2&lt;K, V&gt;&gt; </td></tr>
+<tr><td>FlatMapFunction&lt;T, R&gt;</td><td>T =&gt; Iterable&lt;R&gt; </td></tr>
+<tr><td>DoubleFlatMapFunction&lt;T&gt;</td><td>T =&gt; Iterable&lt;Double&gt; </td></tr>
+<tr><td>PairFlatMapFunction&lt;T, K, V&gt;</td><td>T =&gt; Iterable&lt;Tuple2&lt;K, V&gt;&gt; </td></tr>
-<tr><td>Function2&lt;T1, T2, R&gt;</td><td>T1, T2 -&gt; R (function of two arguments)</td></tr>
+<tr><td>Function2&lt;T1, T2, R&gt;</td><td>T1, T2 =&gt; R (function of two arguments)</td></tr>
</table>
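+
+For example, mapping with a `DoubleFunction` gives back a `JavaDoubleRDD` with the extra
+numeric methods (a sketch, reusing `lines` from the example above):
+
+{% highlight java %}
+import spark.api.java.JavaDoubleRDD;
+import spark.api.java.function.DoubleFunction;
+
+// The DoubleFunction overload of map returns a JavaDoubleRDD...
+JavaDoubleRDD lengths = lines.map(
+  new DoubleFunction<String>() {
+    public Double call(String s) {
+      return (double) s.length();
+    }
+  });
+
+// ...which supports double-specific operations such as stdev()
+double sd = lengths.stdev();
+{% endhighlight %}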
+
# Other Features
+
The Java API supports other Spark features, including
[accumulators]({{HOME_PATH}}scala-programming-guide.html#accumulators),
-[broadcast variables]({{HOME_PATH}}scala-programming-guide.html#broadcast_variables), and
-[caching]({{HOME_PATH}}scala-programming-guide.html#caching).
+[broadcast variables]({{HOME_PATH}}scala-programming-guide.html#broadcast-variables), and
+[caching]({{HOME_PATH}}scala-programming-guide.html#rdd-persistence).
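+
+As a brief sketch of these from Java (we assume `intAccumulator` and `broadcast` methods
+on `JavaSparkContext` mirroring the Scala API; `sc` and `lines` come from the earlier
+examples):
+
+{% highlight java %}
+import spark.Accumulator;
+import spark.broadcast.Broadcast;
+
+// An accumulator the driver can read back after jobs add to it
+Accumulator<Integer> counter = sc.intAccumulator(0);
+
+// A broadcast variable: read-only data shipped once to each node
+Broadcast<int[]> lookup = sc.broadcast(new int[] {1, 2, 3});
+int first = lookup.value()[0];
+
+// Caching is available directly on the Java RDD classes
+lines.cache();
+{% endhighlight %}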
+
# Example
@@ -130,8 +137,6 @@ JavaPairRDD<String, Integer> ones = words.map(
Note that `map` was passed a `PairFunction<String, String, Integer>` and
returned a `JavaPairRDD<String, Integer>`.
-
-
To finish the word count program, we will use `reduceByKey` to count the
occurrences of each word:
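+
+For instance, this step might look like the following sketch, which passes a `Function2`
+from the table above to `reduceByKey` (`ones` is the pair RDD built by `map` earlier):
+
+{% highlight java %}
+import spark.api.java.function.Function2;
+
+// Sum the two counts being merged for each key
+JavaPairRDD<String, Integer> counts = ones.reduceByKey(
+  new Function2<Integer, Integer, Integer>() {
+    public Integer call(Integer i1, Integer i2) {
+      return i1 + i2;
+    }
+  });
+{% endhighlight %}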
@@ -161,12 +166,23 @@ JavaPairRDD<String, Integer> counts = lines.flatMap(
...
);
{% endhighlight %}
+
There is no performance difference between these approaches; the choice is
-a matter of style.
+just a matter of style.
+
+# Javadoc
+
+We currently provide documentation for the Java API as Scaladoc, in the
+[`spark.api.java` package]({{HOME_PATH}}api/core/index.html#spark.api.java.package), because
+some of the classes are implemented in Scala. The main downside is that the types and function
+definitions show Scala syntax (for example, `def reduce(func: Function2[T, T, T]): T` instead of
+`T reduce(Function2<T, T, T> func)`).
+We hope to generate documentation with Java-style syntax in the future.
+
+# Where to Go from Here
-# Where to go from here
-Spark includes several sample jobs using the Java API in
+Spark includes several sample programs using the Java API in
`examples/src/main/java`. You can run them by passing the class name to the
`run` script included in Spark -- for example, `./run
spark.examples.JavaWordCount`. Each example program prints usage help when run