author      CodingCat <zhunansjtu@gmail.com>    2014-02-19 15:54:03 -0800
committer   Patrick Wendell <pwendell@gmail.com>    2014-02-19 15:54:03 -0800
commit      7b012c93973201a1cbb4fc9a02e322152e5185a9 (patch)
tree        e4042829f0e73bf662844a2a6f4fd5c945b6779e /docs/streaming-programming-guide.md
parent      b61435c7ff620a05bee65607aed249541ab54b13 (diff)
download    spark-7b012c93973201a1cbb4fc9a02e322152e5185a9.tar.gz
            spark-7b012c93973201a1cbb4fc9a02e322152e5185a9.tar.bz2
            spark-7b012c93973201a1cbb4fc9a02e322152e5185a9.zip
[SPARK-1105] fix site scala version error in docs
https://spark-project.atlassian.net/browse/SPARK-1105

fix site scala version error

Author: CodingCat <zhunansjtu@gmail.com>

Closes #618 from CodingCat/doc_version and squashes the following commits:

39bb8aa [CodingCat] more fixes
65bedb0 [CodingCat] fix site scala version error in doc
Diffstat (limited to 'docs/streaming-programming-guide.md')
-rw-r--r--    docs/streaming-programming-guide.md    16
1 file changed, 8 insertions, 8 deletions
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 924f0f4306..57e8858161 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -275,23 +275,23 @@ To write your own Spark Streaming program, you will have to add the following de
SBT or Maven project:
groupId = org.apache.spark
- artifactId = spark-streaming_{{site.SCALA_VERSION}}
+ artifactId = spark-streaming_{{site.SCALA_BINARY_VERSION}}
version = {{site.SPARK_VERSION}}
For ingesting data from sources like Kafka and Flume that are not present in the Spark
Streaming core
API, you will have to add the corresponding
-artifact `spark-streaming-xyz_{{site.SCALA_VERSION}}` to the dependencies. For example,
+artifact `spark-streaming-xyz_{{site.SCALA_BINARY_VERSION}}` to the dependencies. For example,
some of the common ones are as follows.
<table class="table">
<tr><th>Source</th><th>Artifact</th></tr>
-<tr><td> Kafka </td><td> spark-streaming-kafka_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> Flume </td><td> spark-streaming-flume_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> Twitter </td><td> spark-streaming-twitter_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> ZeroMQ </td><td> spark-streaming-zeromq_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> MQTT </td><td> spark-streaming-mqtt_{{site.SCALA_VERSION}} </td></tr>
+<tr><td> Kafka </td><td> spark-streaming-kafka_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> Flume </td><td> spark-streaming-flume_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> Twitter </td><td> spark-streaming-twitter_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> ZeroMQ </td><td> spark-streaming-zeromq_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> MQTT </td><td> spark-streaming-mqtt_{{site.SCALA_BINARY_VERSION}} </td></tr>
<tr><td> </td><td></td></tr>
</table>
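
The point of the change above is that published artifact names are suffixed with the Scala *binary* version (for example `2.10`), not the full Scala version (for example `2.10.3`), so the docs should render `{{site.SCALA_BINARY_VERSION}}`. As a minimal sketch of what the rendered coordinates look like in an sbt build, assuming the binary version resolves to `2.10` and the Spark version to `0.9.0-incubating` (illustrative values, not taken from this commit):

```scala
// build.sbt -- hypothetical rendered values for {{site.SCALA_BINARY_VERSION}} and {{site.SPARK_VERSION}}
libraryDependencies ++= Seq(
  // core Spark Streaming dependency
  "org.apache.spark" % "spark-streaming_2.10" % "0.9.0-incubating",
  // extra artifact for a Kafka source, following the spark-streaming-xyz_<scala binary version> pattern
  "org.apache.spark" % "spark-streaming-kafka_2.10" % "0.9.0-incubating"
)
```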
@@ -410,7 +410,7 @@ Scala and [JavaStreamingContext](api/streaming/index.html#org.apache.spark.strea
Additional functionality for creating DStreams from sources such as Kafka, Flume, and Twitter
can be imported by adding the right dependencies as explained in an
[earlier](#linking) section. To take the
-case of Kafka, after adding the artifact `spark-streaming-kafka_{{site.SCALA_VERSION}}` to the
+case of Kafka, after adding the artifact `spark-streaming-kafka_{{site.SCALA_BINARY_VERSION}}` to the
project dependencies, you can create a DStream from Kafka as
<div class="codetabs">
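
The guide's own code tabs follow at this point in the document. As a hedged illustration of the step being described, here is a minimal Scala sketch of creating a DStream from Kafka once `spark-streaming-kafka_{{site.SCALA_BINARY_VERSION}}` is on the classpath; the ZooKeeper address, group id, and topic name are placeholder assumptions, not values from the guide:

```scala
// Minimal sketch (not the guide's own example): consuming from Kafka via the external artifact.
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaStreamSketch {
  def main(args: Array[String]) {
    // Local StreamingContext with a 1-second batch interval
    val ssc = new StreamingContext("local[2]", "KafkaStreamSketch", Seconds(1))

    // createStream(ssc, zkQuorum, groupId, topics) returns a DStream of (key, message) pairs
    val lines = KafkaUtils.createStream(ssc, "localhost:2181", "sketch-group", Map("topic" -> 1))

    // Print the message payloads of each batch
    lines.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```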