path: root/docs/structured-streaming-kafka-integration.md
author	Niranjan Padmanabhan <niranjan.padmanabhan@gmail.com>	2017-01-04 15:07:29 +0000
committer	Sean Owen <sowen@cloudera.com>	2017-01-04 15:07:29 +0000
commit	a1e40b1f5d651305bbd0ba05779263a44f607498 (patch)
tree	f70fcf889a0c6f366bc44f5d012ec7f3e91ffbcc /docs/structured-streaming-kafka-integration.md
parent	7a82505817d479007adff6424473063d2003fcc1 (diff)
download	spark-a1e40b1f5d651305bbd0ba05779263a44f607498.tar.gz
	spark-a1e40b1f5d651305bbd0ba05779263a44f607498.tar.bz2
	spark-a1e40b1f5d651305bbd0ba05779263a44f607498.zip
[MINOR][DOCS] Remove consecutive duplicated words/typo in Spark Repo
## What changes were proposed in this pull request?

There are many locations in the Spark repo where the same word occurs consecutively. Sometimes they are appropriately placed, but many times they are not. This PR removes the inappropriately duplicated words.

## How was this patch tested?

N/A since only docs or comments were updated.

Author: Niranjan Padmanabhan <niranjan.padmanabhan@gmail.com>

Closes #16455 from neurons/np.structure_streaming_doc.
Diffstat (limited to 'docs/structured-streaming-kafka-integration.md')
-rw-r--r--	docs/structured-streaming-kafka-integration.md	2
1 file changed, 1 insertion, 1 deletion
diff --git a/docs/structured-streaming-kafka-integration.md b/docs/structured-streaming-kafka-integration.md
index 2458bb5ffa..9b82e8e744 100644
--- a/docs/structured-streaming-kafka-integration.md
+++ b/docs/structured-streaming-kafka-integration.md
@@ -244,7 +244,7 @@ Note that the following Kafka params cannot be set and the Kafka source will thr
- **group.id**: Kafka source will create a unique group id for each query automatically.
- **auto.offset.reset**: Set the source option `startingOffsets` to specify
where to start instead. Structured Streaming manages which offsets are consumed internally, rather
- than rely on the kafka Consumer to do it. This will ensure that no data is missed when when new
+ than rely on the kafka Consumer to do it. This will ensure that no data is missed when new
topics/partitions are dynamically subscribed. Note that `startingOffsets` only applies when a new
Streaming query is started, and that resuming will always pick up from where the query left off.
- **key.deserializer**: Keys are always deserialized as byte arrays with ByteArrayDeserializer. Use
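The doc text touched by this diff describes how the Kafka source replaces `auto.offset.reset` with the `startingOffsets` option and always reads keys/values as byte arrays. As a minimal sketch of what that looks like in practice (the broker address `host1:9092` and topic name `topic1` are assumed placeholders, not from this patch):

```scala
import org.apache.spark.sql.SparkSession

// Assumed placeholder session; in a real job this already exists.
val spark = SparkSession.builder.appName("KafkaStartingOffsetsSketch").getOrCreate()

val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:9092")   // assumed broker address
  .option("subscribe", "topic1")                     // assumed topic
  .option("startingOffsets", "earliest")             // used instead of auto.offset.reset;
                                                     // only applies when a new query starts
  .load()

// Keys and values arrive as binary (ByteArrayDeserializer); cast them explicitly.
val lines = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
```

Offsets for a resumed query are recovered from the checkpoint, so `startingOffsets` is ignored on restart, matching the note in the patched paragraph.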