author    Patrick Wendell <pwendell@gmail.com>    2012-10-02 23:54:03 -0700
committer Patrick Wendell <pwendell@gmail.com>    2012-10-02 23:54:03 -0700
commit 35b767f478e641ea4dccd174f618442a9082e4ae (patch)
tree   4e34970cefb4c7b02c9e234f5a2dba389014cba0 /docs/quick-start.md
parent f78edf94cff1b7ba49f400bef9fa741a1dc468da (diff)
Responding to Matei's comments
Diffstat (limited to 'docs/quick-start.md')
-rw-r--r--  docs/quick-start.md  |  74
1 file changed, 57 insertions, 17 deletions
diff --git a/docs/quick-start.md b/docs/quick-start.md
index aaef1b20f0..f9356afe9a 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -8,7 +8,13 @@ title: Spark Quick Start
# Introduction
-This document provides a quick-and-dirty look at Spark's API. See the [programming guide]({{HOME_PATH}}/scala-programming-guide.html) for a complete reference. To follow along with this guide, you only need to have successfully [built spark]({{HOME_PATH}}) on one machine -- all operations are demonstrated locally.
+This document provides a quick-and-dirty look at Spark's API. See the [programming guide]({{HOME_PATH}}/scala-programming-guide.html) for a complete reference. To follow along with this guide, you only need to have successfully [built Spark]({{HOME_PATH}}) on one machine. Building Spark is as simple as running
+
+{% highlight bash %}
+$ sbt/sbt package
+{% endhighlight %}
+
+from within the Spark directory.
# Interactive Data Analysis with the Spark Shell
@@ -23,7 +29,7 @@ scala> val textFile = sc.textFile("README.md")
textFile: spark.RDD[String] = spark.MappedRDD@2ee9b6e3
{% endhighlight %}
-RDD's have _actions_, which return values, and _transformations_, which return pointers to new RDD's. Let's start with a few actions:
+RDDs have _[actions]({{HOME_PATH}}/scala-programming-guide.html#actions)_, which return values, and _[transformations]({{HOME_PATH}}/scala-programming-guide.html#transformations)_, which return pointers to new RDDs. Let's start with a few actions:
{% highlight scala %}
scala> textFile.count() // Number of items in this RDD
@@ -33,7 +39,7 @@ scala> textFile.first() // First item in this RDD
res1: String = # Spark
{% endhighlight %}
-Now let's use a transformation. We will use the `filter()` function to return a new RDD with a subset of the items in the file.
+Now let's use a transformation. We will use the [filter]({{HOME_PATH}}/scala-programming-guide.html#transformations)() transformation to return a new RDD with a subset of the items in the file.
{% highlight scala %}
scala> val sparkLinesOnly = textFile.filter(line => line.contains("Spark"))
@@ -55,7 +61,7 @@ scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a < b) {b
res4: Long = 16
{% endhighlight %}
-This first maps a line to an integer value, creating a new RDD. `reduce` is called on that RDD to find the largest line count. The arguments to map() and reduce() are scala closures. We can easily include functions declared elsewhere, or include existing functions in our anonymous closures. For instance, we can use `Math.max()` to make this code easier to understand.
+This first maps a line to an integer value, creating a new RDD. `reduce` is called on that RDD to find the largest line count. The arguments to [map]({{HOME_PATH}}/scala-programming-guide.html#transformations)() and [reduce]({{HOME_PATH}}/scala-programming-guide.html#actions)() are Scala closures. We can easily include functions declared elsewhere, or include existing functions in our anonymous closures. For instance, we can use `Math.max()` to make this code easier to understand.
{% highlight scala %}
scala> import java.lang.Math;
@@ -65,6 +71,20 @@ scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b
res5: Int = 16
{% endhighlight %}
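+To illustrate including a function declared elsewhere, here is a quick sketch that names the mapping function instead of writing it inline (the helper `lineLength` is introduced just for this example):
+
+{% highlight scala %}
+scala> def lineLength(line: String): Int = line.split(" ").size
+scala> textFile.map(lineLength).reduce((a, b) => Math.max(a, b))
+{% endhighlight %}
+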
+One common data flow pattern is MapReduce, as popularized by Hadoop. Spark can implement MapReduce flows easily:
+
+{% highlight scala %}
+scala> val wordCountRDD = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((c1, c2) => c1 + c2)
+wordCountRDD: spark.RDD[(java.lang.String, Int)] = spark.ShuffledAggregatedRDD@71f027b8
+{% endhighlight %}
+
+Here, we combined the [flatMap]({{HOME_PATH}}/scala-programming-guide.html#transformations)(), [map]({{HOME_PATH}}/scala-programming-guide.html#transformations)() and [reduceByKey]({{HOME_PATH}}/scala-programming-guide.html#transformations)() transformations to create per-word counts in the file. To collect the word counts in our shell, we can use the [collect]({{HOME_PATH}}/scala-programming-guide.html#actions)() action:
+
+{% highlight scala %}
+scala> wordCountRDD.collect()
+res6: Array[(java.lang.String, Int)] = Array((need,2), ("",43), (Extra,3), (using,1), (passed,1), (etc.,1), (its,1), (`/usr/local/lib/libmesos.so`,1), (`SCALA_HOME`,1), (option,1), (these,1), (#,1), (`PATH`,,2), (200,1), (To,3),...
+{% endhighlight %}
+
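+Since `wordCountRDD` is itself just an RDD, we can keep transforming it. For example, a sketch that combines it with the `filter()` transformation from earlier to keep only the frequent words (the threshold of 5 is arbitrary):
+
+{% highlight scala %}
+scala> wordCountRDD.filter(pair => pair._2 > 5).collect()
+{% endhighlight %}
+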
## Caching
Spark also supports pulling data sets into a cluster-wide cache. This is very useful when data is accessed iteratively, such as in machine learning jobs, or when a small "hot" dataset is queried repeatedly. As a simple example, let's pull part of our file into memory:
@@ -74,22 +94,21 @@ scala> val linesWithSparkCached = linesWithSpark.cache()
linesWithSparkCached: spark.RDD[String] = spark.FilteredRDD@17e51082
scala> linesWithSparkCached.count()
-res6: Long = 15
+res7: Long = 15
scala> linesWithSparkCached.count()
-res7: Long = 15
+res8: Long = 15
{% endhighlight %}
It may seem silly to use Spark to explore and cache a 30-line text file. The interesting part is that these same functions can be used on very large data sets, even when they are striped across tens or hundreds of nodes.
-# A Spark Job
-Now say we wanted to write custom job using the Spark API. We will walk through a simple job in both Scala (with sbt) and Java (with maven). If you using other build systems, please reference the Spark assembly jar in the developer guide. The first step is to publish spark to our local Ivy/Maven repositories. From the spark directory
+# A Spark Job in Scala
+Now say we wanted to write a custom job using the Spark API. We will walk through a simple job in both Scala (with sbt) and Java (with Maven). If you are using other build systems, please reference the Spark assembly jar in the developer guide. The first step is to publish Spark to our local Ivy/Maven repositories. From the Spark directory:
{% highlight bash %}
$ sbt/sbt publish-local
{% endhighlight %}
-## In Scala
Next, we'll create a very simple Spark job in Scala. So simple, in fact, that it's named `SimpleJob.scala`:
{% highlight scala %}
@@ -99,7 +118,8 @@ import SparkContext._
object SimpleJob extends Application {
val logFile = "/var/log/syslog" // Should be some log file on your system
- val sc = new SparkContext("local", "Simple Job")
+ val sc = new SparkContext("local", "Simple Job", "$YOUR_SPARK_HOME",
+ "target/scala-2.9.2/simple-project_2.9.2-1.0.jar")
val logData = sc.textFile(logFile, 2).cache()
val numAs = logData.filter(line => line.contains("a")).count()
val numBs = logData.filter(line => line.contains("b")).count()
@@ -107,6 +127,8 @@ object SimpleJob extends Application {
}
{% endhighlight %}
+This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. Unlike the earlier examples with the Spark Shell, which initializes its own SparkContext, we initialize a SparkContext as part of the job. We pass the SparkContext constructor four arguments: the type of scheduler we want to use (in this case, a local scheduler), a name for the job, the directory where Spark is installed, and a name for the jar file containing the job's sources. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes.
+
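+The first argument does not have to be `local`. As a quick sketch of alternatives (the standalone master URL below is hypothetical; see the [Standalone Mode]({{HOME_PATH}}/spark-standalone.html) documentation):
+
+{% highlight scala %}
+// Run locally with 4 worker threads instead of 1:
+val sc = new SparkContext("local[4]", "Simple Job", "$YOUR_SPARK_HOME",
+  "target/scala-2.9.2/simple-project_2.9.2-1.0.jar")
+
+// Or connect to a standalone cluster (hypothetical master host and port):
+// val sc = new SparkContext("spark://masterhost:7077", "Simple Job", ...)
+{% endhighlight %}
+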
This file depends on the Spark API, so we'll also include an sbt configuration file, `simple.sbt`, which explains that Spark is a dependency:
{% highlight scala %}
@@ -114,10 +136,12 @@ name := "Simple Project"
version := "1.0"
+scalaVersion := "2.9.2"
+
libraryDependencies += "org.spark-project" %% "spark-core" % "0.6.0-SNAPSHOT"
{% endhighlight %}
-Of course, for sbt to work correctly, we'll need to layout `SimpleJob.scala` and `simple.sbt` according to the typical directory structure. Once that is in place, we can use `sbt run` to execute our example job.
+Of course, for sbt to work correctly, we'll need to lay out `SimpleJob.scala` and `simple.sbt` according to the typical directory structure. Once that is in place, we can create a jar package containing the job's code, then use `sbt run` to execute our example job.
{% highlight bash %}
$ find .
@@ -128,13 +152,21 @@ $ find .
./src/main/scala
./src/main/scala/SimpleJob.scala
-$ sbt clean run
+$ sbt clean package
+$ sbt run
...
Lines with a: 8422, Lines with b: 1836
{% endhighlight %}
-## In Java
-Our simple job in Java (`SimpleJob.java`) looks very similar:
+This example only runs the job locally; for a tutorial on running jobs across several machines, see the [Standalone Mode]({{HOME_PATH}}/spark-standalone.html) documentation and consider using a distributed input source, such as HDFS.
+
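+For instance, a sketch of what the HDFS variant might look like (the namenode host, port, and path here are hypothetical):
+
+{% highlight scala %}
+// Read the input from HDFS instead of the local filesystem:
+val logData = sc.textFile("hdfs://namenode:9000/var/log/syslog", 2).cache()
+{% endhighlight %}
+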
+# A Spark Job in Java
+Now we will walk through the same simple job in Java (with Maven). If you are using other build systems, please reference the Spark assembly jar in the developer guide. As in the Scala example, the first step is to publish Spark to our local Ivy/Maven repositories. From the Spark directory:
+
+{% highlight bash %}
+$ sbt/sbt publish-local
+{% endhighlight %}
+Next, we'll create a very simple Spark job in Java. So simple, in fact, that it's named `SimpleJob.java`:
{% highlight java %}
/*** SimpleJob.java ***/
@@ -144,7 +176,8 @@ import spark.api.java.function.Function;
public class SimpleJob {
public static void main(String[] args) {
String logFile = "/var/log/syslog"; // Should be some log file on your system
- JavaSparkContext sc = new JavaSparkContext("local", "Simple Job");
+ JavaSparkContext sc = new JavaSparkContext("local", "Simple Job",
+ "$YOUR_SPARK_HOME", "target/simple-project-1.0.jar");
JavaRDD<String> logData = sc.textFile(logFile).cache();
long numAs = logData.filter(new Function<String, Boolean>() {
@@ -161,6 +194,8 @@ public class SimpleJob {
}
{% endhighlight %}
+This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. As in the Scala example, we initialize a SparkContext as part of the job, though here it is a JavaSparkContext. We pass the constructor four arguments: the type of scheduler we want to use (in this case, a local scheduler), a name for the job, the directory where Spark is installed, and a name for the jar file containing the job's sources. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes.
+
Our Maven `pom.xml` file will list Spark as a dependency. Note that Spark artifacts are tagged with a Scala version.
{% highlight xml %}
@@ -191,9 +226,14 @@ $ find .
./src/main/java/SimpleJob.java
{% endhighlight %}
-Now, we can execute the job using Maven. Of course, in practice, we would typically compile or package this job and run it outside of Maven.
+Now, we can execute the job using Maven:
+
{% highlight bash %}
-$ mvn clean exec:java -Dexec.mainClass="SimpleJob"
+$ mvn clean package
+$ mvn exec:java -Dexec.mainClass="SimpleJob"
...
Lines with a: 8422, Lines with b: 1836
{% endhighlight %}
+
+This example only runs the job locally; for a tutorial on running jobs across several machines, see the [Standalone Mode]({{HOME_PATH}}/spark-standalone.html) documentation and consider using a distributed input source, such as HDFS.
+