Diffstat:
 docs/streaming-programming-guide.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 4d0a1122dc..d7eafff38f 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -612,7 +612,7 @@ as well as to run the receiver(s).
- When running a Spark Streaming program locally, do not use "local" or "local[1]" as the master URL.
Either of these means that only one thread will be used for running tasks locally. If you are using
- a input DStream based on a receiver (e.g. sockets, Kafka, Flume, etc.), then the single thread will
+ an input DStream based on a receiver (e.g. sockets, Kafka, Flume, etc.), then the single thread will
be used to run the receiver, leaving no thread for processing the received data. Hence, when
running locally, always use "local[*n*]" as the master URL, where *n* > number of receivers to run
(see [Spark Properties](configuration.html#spark-properties) for information on how to set
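
For concreteness, a minimal Java sketch of a local setup that satisfies this rule (the app name, host, and port are illustrative): "local[2]" gives the single socket receiver one thread and leaves one free for processing the received data.

{% highlight java %}
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// "local" or "local[1]" would starve processing: the receiver would
// occupy the only available thread
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(1000));

// This receiver-based input DStream occupies one of the two local threads
JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);
{% endhighlight %}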
@@ -1788,7 +1788,7 @@ This example appends the word counts of network data into a file.
This behavior is made simple by using `JavaStreamingContext.getOrCreate`. This is used as follows.
{% highlight java %}
-// Create a factory object that can create a and setup a new JavaStreamingContext
+// Create a factory object that can create and setup a new JavaStreamingContext
JavaStreamingContextFactory contextFactory = new JavaStreamingContextFactory() {
@Override public JavaStreamingContext create() {
JavaStreamingContext jssc = new JavaStreamingContext(...); // new context
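    // Sketch of how the factory is typically completed and wired into
    // getOrCreate (checkpointDirectory is a hypothetical path variable;
    // the "..." placeholders follow the guide's own convention)
    JavaDStream<String> lines = jssc.socketTextStream(...);  // create DStreams
    jssc.checkpoint(checkpointDirectory);                    // set checkpoint directory
    return jssc;
  }
};

// Recover a context from existing checkpoint data, or invoke the
// factory's create() to build a fresh one if none exists
JavaStreamingContext context =
  JavaStreamingContext.getOrCreate(checkpointDirectory, contextFactory);
context.start();
context.awaitTermination();
{% endhighlight %}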