path: root/docs/streaming-programming-guide.md
author    Nishkam Ravi <nravi@cloudera.com>    2015-06-01 21:34:41 +0100
committer Sean Owen <sowen@cloudera.com>    2015-06-01 21:36:50 +0100
commit    e7c7e51f2ec158d12a8429f753225c746f92d513 (patch)
tree      095db89387f210002d30678a0157f20c3621490d /docs/streaming-programming-guide.md
parent    3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf (diff)
[DOC] Minor modification to Streaming docs with regards to parallel data receiving
pwendell tdas

Author: Nishkam Ravi <nravi@cloudera.com>
Author: nishkamravi2 <nishkamravi@gmail.com>
Author: nravi <nravi@c1704.halxg.cloudera.com>

Closes #6544 from nishkamravi2/master_nravi and squashes the following commits:

46e8c03 [Nishkam Ravi] Slight modification to streaming docs
Diffstat (limited to 'docs/streaming-programming-guide.md')
-rw-r--r-- docs/streaming-programming-guide.md | 8
1 file changed, 4 insertions, 4 deletions
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index bd863d48d5..42b3394787 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -1946,10 +1946,10 @@ creates a single receiver (running on a worker machine) that receives a single s
Receiving multiple data streams can therefore be achieved by creating multiple input DStreams
and configuring them to receive different partitions of the data stream from the source(s).
For example, a single Kafka input DStream receiving two topics of data can be split into two
-Kafka input streams, each receiving only one topic. This would run two receivers on two workers,
-thus allowing data to be received in parallel, and increasing overall throughput. These multiple
-DStream can be unioned together to create a single DStream. Then the transformations that was
-being applied on the single input DStream can applied on the unified stream. This is done as follows.
+Kafka input streams, each receiving only one topic. This would run two receivers,
+allowing data to be received in parallel, and increasing overall throughput. These multiple
+DStreams can be unioned together to create a single DStream. Then the transformations that were
+being applied on a single input DStream can be applied on the unified stream. This is done as follows.
<div class="codetabs">
<div data-lang="scala" markdown="1">
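For context, below is a minimal Scala sketch (not part of this patch) of the parallel-receiving pattern the changed paragraph describes: one receiver-based Kafka input DStream per topic, unioned into a single DStream. The topic names, ZooKeeper quorum, consumer group, and batch interval are illustrative placeholders, and the receiver-based KafkaUtils.createStream API is the one available in Spark 1.x at the time of this commit.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object ParallelReceiveSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ParallelReceiveSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // One input DStream (and therefore one receiver) per topic, instead of a
    // single DStream receiving both topics through one receiver.
    // Topic names, ZooKeeper quorum, and group id are placeholder assumptions.
    val topics = Seq("topicA", "topicB")
    val kafkaStreams = topics.map { topic =>
      KafkaUtils.createStream(ssc, "zkhost:2181", "example-group", Map(topic -> 1))
    }

    // Union the per-topic streams into a single DStream, then apply the same
    // transformations that were previously applied to the single input DStream.
    val unifiedStream = ssc.union(kafkaStreams)
    unifiedStream.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Each receiver occupies one core on an executor, so running two receivers this way spreads ingestion across the cluster and raises aggregate receive throughput, as the revised paragraph notes.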