path: root/docs/streaming-programming-guide.md
author     Tathagata Das <tathagata.das1565@gmail.com>    2012-12-29 18:31:51 -0800
committer  Tathagata Das <tathagata.das1565@gmail.com>    2012-12-29 18:31:51 -0800
commit     9e644402c155b5fc68794a17c36ddd19d3242f4f (patch)
tree       0fd01d0fb798d9cf2764f1ed666694fabdbb942a /docs/streaming-programming-guide.md
parent     0bc0a60d3001dd231e13057a838d4b6550e5a2b9 (diff)
Improved jekyll and scala docs. Made many classes and methods private to remove them from scala docs.
Diffstat (limited to 'docs/streaming-programming-guide.md')
-rw-r--r--  docs/streaming-programming-guide.md  |  56
1 file changed, 30 insertions(+), 26 deletions(-)
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 90916545bc..7c421ac70f 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -2,33 +2,44 @@
layout: global
title: Streaming (Alpha) Programming Guide
---
+
+{:toc}
+
+# Overview
+A Spark Streaming application is very similar to a Spark application; it consists of a *driver program* that runs the user's `main` function and continuously executes various *parallel operations* on input streams of data. The main abstraction Spark Streaming provides is a *discretized stream* (DStream), which is a continuous sequence of RDDs (distributed collections of elements) representing a continuous stream of data. DStreams can be created from live incoming data (such as data from a socket, Kafka, etc.) or generated by transforming existing DStreams using parallel operators like map, reduce, and window. The basic processing model is as follows:
+(i) While a Spark Streaming driver program is running, the system receives data from various sources and divides the data into batches. Each batch of data is treated as an RDD, that is, an immutable parallel collection of data. These input data RDDs are automatically persisted in memory (serialized by default) and replicated to two nodes for fault-tolerance. This sequence of RDDs is collectively referred to as an InputDStream.
+(ii) Data received by InputDStreams are processed using DStream operations. Since all data is represented as RDDs and all DStream operations as RDD operations, data is automatically recovered in the event of node failures.
+
+This guide shows how to start programming with DStreams.
+
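As a rough preview, here is a minimal sketch of a streaming word count. The master string, application name, host, and port below are placeholder values, and most of the individual calls are explained in the sections that follow:

{% highlight scala %}
// Count words arriving on a TCP socket, in 2 second batches (all values are placeholders).
// Assumes the spark.streaming classes (StreamingContext, Milliseconds) are imported.
val ssc = new StreamingContext("local", "WordCountPreview", Milliseconds(2000))
val lines = ssc.networkStream("localhost", 9999)        // stream of text lines from the socket
val wordCounts = lines.flatMap(_.split(" "))            // split each line into words
                      .map(word => (word, 1))           // pair each word with a count of 1
                      .reduceByKey(_ + _)                // sum the counts within each batch
wordCounts.print()                                       // print the first ten counts of every batch
ssc.start()                                              // start receiving and processing data
{% endhighlight %}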
# Initializing Spark Streaming
The first thing a Spark Streaming program must do is create a `StreamingContext` object, which tells Spark how to access a cluster. A `StreamingContext` can be created from an existing `SparkContext`, or directly:
{% highlight scala %}
-new StreamingContext(master, jobName, [sparkHome], [jars])
-new StreamingContext(sparkContext)
-{% endhighlight %}
-
-Once a context is instantiated, the batch interval must be set:
+import spark.SparkContext
+import SparkContext._
-{% highlight scala %}
-context.setBatchDuration(Milliseconds(2000))
+new StreamingContext(master, frameworkName, batchDuration)
+new StreamingContext(sparkContext, batchDuration)
{% endhighlight %}
+The `master` parameter is either the [Mesos master URL](running-on-mesos.html) (for running on a cluster) or the special "local" string (for local mode), the same master string that is used to create a SparkContext. For more information, please refer to the [Spark programming guide](scala-programming-guide.html).
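For example, a context that runs in local mode and groups incoming data into 2 second batches could be created as follows (the framework name here is just a placeholder):

{% highlight scala %}
// "local" selects local mode; "MyStreamingApp" is a placeholder framework name;
// Milliseconds(2000) is the batch duration.
val ssc = new StreamingContext("local", "MyStreamingApp", Milliseconds(2000))
{% endhighlight %}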
-# DStreams - Discretized Streams
-The primary abstraction in Spark Streaming is a DStream. A DStream represents distributed collection which is computed periodically according to a specified batch interval. DStream's can be chained together to create complex chains of transformation on streaming data. DStreams can be created by operating on existing DStreams or from an input source. To creating DStreams from an input source, use the StreamingContext:
+
+# Creating Input Sources - InputDStreams
+The StreamingContext is used to create InputDStreams from input sources:
{% highlight scala %}
-context.neworkStream(host, port) // A stream that reads from a socket
-context.flumeStream(hosts, ports) // A stream populated by a Flume flow
+context.networkStream(host, port) // Creates a stream that uses a TCP socket to read data from <host>:<port>
+context.flumeStream(host, port) // Creates a stream populated by a Flume flow
{% endhighlight %}
-# DStream Operators
+A complete list of input sources is available in the [StreamingContext API docs](api/streaming/index.html#spark.streaming.StreamingContext).
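For example, a stream reading from a TCP socket can be captured as a value and transformed later (the host and port below are placeholders, and the data is assumed to arrive as lines of text):

{% highlight scala %}
// A DStream of the text lines received on localhost:9999 (placeholder host and port)
val lines = ssc.networkStream("localhost", 9999)
{% endhighlight %}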
+
+## DStream Operations
Once an input stream has been created, you can transform it using _stream operators_. Most of these operators return new DStreams which you can further transform. Eventually, you'll need to call an _output operator_, which forces evaluation of the stream by writing data out to an external source.
-## Transformations
+### Transformations
DStreams support many of the transformations available on normal Spark RDD's:
@@ -73,20 +84,13 @@ DStreams support many of the transformations available on normal Spark RDD's:
<td> <b>cogroup</b>(<i>otherStream</i>, [<i>numTasks</i>]) </td>
<td> When called on streams of type (K, V) and (K, W), returns a stream of (K, Seq[V], Seq[W]) tuples. This operation is also called <code>groupWith</code>. </td>
</tr>
-</table>
-
-DStreams also support the following additional transformations:
-
-<table class="table">
<tr>
<td> <b>reduce</b>(<i>func</i>) </td>
<td> Create a new single-element stream by aggregating the elements of the stream using a function func (which takes two arguments and returns one). The function should be associative so that it can be computed correctly in parallel. </td>
</tr>
</table>
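For example, a per-batch word count can be expressed by chaining a few of these transformations. This is only a sketch; it assumes `lines` is a DStream of text lines, such as the socket stream created earlier:

{% highlight scala %}
val words = lines.flatMap(_.split(" "))           // one element per word
val wordCounts = words.map(word => (word, 1))     // (word, 1) pairs
                      .reduceByKey(_ + _)         // count of each word within a batch
{% endhighlight %}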
-
-## Windowed Transformations
-Spark streaming features windowed computations, which allow you to report statistics over a sliding window of data. All window functions take a <i>windowTime</i>, which represents the width of the window and a <i>slideTime</i>, which represents the frequency during which the window is calculated.
+Spark Streaming features windowed computations, which allow you to report statistics over a sliding window of data. All window functions take a <i>windowTime</i>, which represents the width of the window, and a <i>slideTime</i>, which represents how frequently the window is computed.
<table class="table">
<tr><th style="width:25%">Transformation</th><th>Meaning</th></tr>
@@ -128,7 +132,7 @@ Spark streaming features windowed computations, which allow you to report statis
</table>
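For example, assuming a basic `window(windowTime, slideTime)` transformation is among the operators listed above, the earlier word count can be computed over the last 30 seconds of data, recomputed every 10 seconds. This sketch reuses the `words` stream from the previous example:

{% highlight scala %}
// Sliding window containing the last 30 seconds of words, recomputed every 10 seconds
val windowedWords = words.window(Milliseconds(30000), Milliseconds(10000))
val windowedWordCounts = windowedWords.map(word => (word, 1)).reduceByKey(_ + _)
{% endhighlight %}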
-## Output Operators
+### Output Operators
When an output operator is called, it triggers the computation of a stream. Currently the following output operators are defined:
<table class="table">
@@ -140,22 +144,22 @@ When an output operator is called, it triggers the computation of a stream. Curr
<tr>
<td> <b>print</b>() </td>
- <td> Prints the contents of this DStream on the driver. At each interval, this will take at most ten elements from the DStream's RDD and print them. </td>
+ <td> Prints the first ten elements of every batch of data in a DStream on the driver. </td>
</tr>
<tr>
- <td> <b>saveAsObjectFile</b>(<i>prefix</i>, [<i>suffix</i>]) </td>
+ <td> <b>saveAsObjectFiles</b>(<i>prefix</i>, [<i>suffix</i>]) </td>
<td> Save this DStream's contents as a <code>SequenceFile</code> of serialized objects. The file name at each batch interval is calculated based on <i>prefix</i> and <i>suffix</i>: <i>"prefix-TIME_IN_MS[.suffix]"</i>.
</td>
</tr>
<tr>
- <td> <b>saveAsTextFile</b>(<i>prefix</i>, <i>suffix</i>) </td>
+ <td> <b>saveAsTextFiles</b>(<i>prefix</i>, [<i>suffix</i>]) </td>
<td> Save this DStream's contents as text files. The file name at each batch interval is calculated based on <i>prefix</i> and <i>suffix</i>: <i>"prefix-TIME_IN_MS[.suffix]"</i>. </td>
</tr>
<tr>
- <td> <b>saveAsHadoopFiles</b>(<i>prefix</i>, <i>suffix</i>) </td>
+ <td> <b>saveAsHadoopFiles</b>(<i>prefix</i>, [<i>suffix</i>]) </td>
<td> Save this DStream's contents as a Hadoop file. The file name at each batch interval is calculated based on <i>prefix</i> and <i>suffix</i>: <i>"prefix-TIME_IN_MS[.suffix]"</i>. </td>
</tr>