path: root/docs
author    Patrick Wendell <patrick@databricks.com>    2015-02-16 20:33:33 -0800
committer Patrick Wendell <patrick@databricks.com>    2015-02-16 20:33:33 -0800
commit   a51d51ffac00931c80ce93889a98c2f77aef8953 (patch)
tree     5a5c315af9f7f1f5eacfca85265e23dde83b4c01 /docs
parent   ac6fe67e1d8bf01ee565f9cc09ad48d88a275829 (diff)
download spark-a51d51ffac00931c80ce93889a98c2f77aef8953.tar.gz
         spark-a51d51ffac00931c80ce93889a98c2f77aef8953.tar.bz2
         spark-a51d51ffac00931c80ce93889a98c2f77aef8953.zip
SPARK-5850: Remove experimental label for Scala 2.11 and FlumePollingStream
Author: Patrick Wendell <patrick@databricks.com>

Closes #4638 from pwendell/SPARK-5850 and squashes the following commits:

386126f [Patrick Wendell] SPARK-5850: Remove experimental label for Scala 2.11 and FlumePollingStream.
Diffstat (limited to 'docs')
-rw-r--r--  docs/building-spark.md               | 6
-rw-r--r--  docs/streaming-flume-integration.md  | 2
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 088da7da49..4c3988e819 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -111,9 +111,9 @@ To produce a Spark package compiled with Scala 2.11, use the `-Dscala-2.11` prop
dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
-Scala 2.11 support in Spark is experimental and does not support a few features.
-Specifically, Spark's external Kafka library and JDBC component are not yet
-supported in Scala 2.11 builds.
+Scala 2.11 support in Spark does not support a few features due to dependencies
+which are themselves not Scala 2.11 ready. Specifically, Spark's external
+Kafka library and JDBC component are not yet supported in Scala 2.11 builds.
# Spark Tests in Maven
diff --git a/docs/streaming-flume-integration.md b/docs/streaming-flume-integration.md
index ac01dd3d80..40e17246fe 100644
--- a/docs/streaming-flume-integration.md
+++ b/docs/streaming-flume-integration.md
@@ -64,7 +64,7 @@ configuring Flume agents.
3. **Deploying:** Package `spark-streaming-flume_{{site.SCALA_BINARY_VERSION}}` and its dependencies (except `spark-core_{{site.SCALA_BINARY_VERSION}}` and `spark-streaming_{{site.SCALA_BINARY_VERSION}}` which are provided by `spark-submit`) into the application JAR. Then use `spark-submit` to launch your application (see [Deploying section](streaming-programming-guide.html#deploying-applications) in the main programming guide).
-## Approach 2 (Experimental): Pull-based Approach using a Custom Sink
+## Approach 2: Pull-based Approach using a Custom Sink
Instead of Flume pushing data directly to Spark Streaming, this approach runs a custom Flume sink that allows the following.
- Flume pushes data into the sink, and the data stays buffered.
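For context, the pull-based approach whose experimental label this commit removes is consumed on the Spark side with `FlumeUtils.createPollingStream` from the `spark-streaming-flume` module. The sketch below is illustrative only: the hostname, port, batch interval, and app name are placeholders, and it assumes the Spark Streaming and Flume integration artifacts of that era (Spark 1.x) are on the classpath.

```scala
// Minimal sketch of Approach 2 (pull-based): Spark Streaming polls a custom
// Flume sink rather than having Flume push events directly to a receiver.
// "sink-host" and 9999 are placeholders for wherever the custom sink runs.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumePollingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FlumePollingSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Pull buffered events from the custom Flume sink; delivery is
    // transactional, so events are removed only after Spark receives them.
    val stream = FlumeUtils.createPollingStream(ssc, "sink-host", 9999)

    // Report how many events arrived in each batch.
    stream.count().map(c => s"Received $c Flume events").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Because the sink buffers data until Spark pulls and acknowledges it, this approach gives stronger reliability than the push-based Approach 1, which is why the label could be dropped once the sink had proven stable.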