author    | Andrew Ash <andrew@andrewash.com> | 2013-04-10 13:44:10 -0300
committer | Andrew Ash <andrew@andrewash.com> | 2013-04-10 13:44:10 -0300
commit    | 6efc8cae8f4497d431e2a861778e2e120e774990 (patch)
tree      | 5aeefde8e2375e4498a69c4f01a06394d239f017 /docs/streaming-programming-guide.md
parent    | 7cd83bf0f8546e7ed5b999b6c8b3ac2667211c47 (diff)
Typos: cluser -> cluster
Diffstat (limited to 'docs/streaming-programming-guide.md')
-rw-r--r-- | docs/streaming-programming-guide.md | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index b30699cf3d..f5788dc467 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -83,7 +83,7 @@ DStreams support many of the transformations available on normal Spark RDD's:
 <tr>
 <td> <b>groupByKey</b>([<i>numTasks</i>]) </td>
 <td> When called on a DStream of (K, V) pairs, returns a new DStream of (K, Seq[V]) pairs by grouping together all the values of each key in the RDDs of the source DStream. <br />
-<b>Note:</b> By default, this uses Spark's default number of parallel tasks (2 for local machine, 8 for a cluser) to do the grouping. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.
+<b>Note:</b> By default, this uses Spark's default number of parallel tasks (2 for local machine, 8 for a cluster) to do the grouping. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.
 </td>
 </tr>
 <tr>
@@ -132,7 +132,7 @@ Spark Streaming features windowed computations, which allow you to apply transfo
 <td> <b>groupByKeyAndWindow</b>(<i>windowDuration</i>, <i>slideDuration</i>, [<i>numTasks</i>]) </td>
 <td> When called on a DStream of (K, V) pairs, returns a new DStream of (K, Seq[V]) pairs by grouping together values of each key over batches in a sliding window. <br />
-<b>Note:</b> By default, this uses Spark's default number of parallel tasks (2 for local machine, 8 for a cluser) to do the grouping. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.</td>
+<b>Note:</b> By default, this uses Spark's default number of parallel tasks (2 for local machine, 8 for a cluster) to do the grouping. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.</td>
 </tr>
 <tr>
 <td> <b>reduceByKeyAndWindow</b>(<i>func</i>, <i>windowDuration</i>, <i>slideDuration</i>, [<i>numTasks</i>]) </td>
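The guide text touched by this diff describes the optional <code>numTasks</code> argument to <code>groupByKey</code> and <code>groupByKeyAndWindow</code>. As a rough Scala sketch only (not part of this commit): the package layout below follows the current Apache Spark API, and the socket source, batch interval, window sizes, and task count of 16 are illustrative assumptions.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object GroupByKeyExample {
  def main(args: Array[String]): Unit = {
    // Local streaming context with a 1-second batch interval (illustrative values).
    val conf = new SparkConf().setAppName("GroupByKeyExample").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(1))

    // A DStream of (K, V) pairs built from a hypothetical socket text source.
    val pairs = ssc.socketTextStream("localhost", 9999).map(word => (word, 1))

    // Passing an explicit task count overrides the default parallelism
    // (2 in local mode, 8 on a cluster, per the guide text in this diff).
    val grouped = pairs.groupByKey(16)

    // Group values of each key over a 30-second window sliding every 10 seconds,
    // again with an explicit number of tasks.
    val windowed = pairs.groupByKeyAndWindow(Seconds(30), Seconds(10), 16)

    windowed.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```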