author     Tathagata Das <tathagata.das1565@gmail.com>    2014-09-03 17:38:01 -0700
committer  Tathagata Das <tathagata.das1565@gmail.com>    2014-09-03 17:38:01 -0700
commit     a5224079286d1777864cf9fa77330aadae10cd7b (patch)
tree       b44c8672b86a6b38769b62484772c6f237c39480 /docs/streaming-flume-integration.md
parent     996b7434ee0d0c7c26987eb9cf050c139fdd2db2 (diff)
[SPARK-2419][Streaming][Docs] Updates to the streaming programming guide
Updated the main streaming programming guide, and also added source-specific guides for Kafka, Flume, Kinesis.

Author: Tathagata Das <tathagata.das1565@gmail.com>
Author: Jacek Laskowski <jacek@japila.pl>

Closes #2254 from tdas/streaming-doc-fix and squashes the following commits:

e45c6d7 [Jacek Laskowski] More fixes from an old PR
5125316 [Tathagata Das] Fixed links
dc02f26 [Tathagata Das] Refactored streaming kinesis guide and made many other changes.
acbc3e3 [Tathagata Das] Fixed links between streaming guides.
cb7007f [Tathagata Das] Added Streaming + Flume integration guide.
9bd9407 [Tathagata Das] Updated streaming programming guide with additional information from SPARK-2419.
Diffstat (limited to 'docs/streaming-flume-integration.md')
-rw-r--r--    docs/streaming-flume-integration.md    132
1 files changed, 132 insertions, 0 deletions
diff --git a/docs/streaming-flume-integration.md b/docs/streaming-flume-integration.md
new file mode 100644
index 0000000000..d57c3e0ef9
--- /dev/null
+++ b/docs/streaming-flume-integration.md
@@ -0,0 +1,132 @@
+---
+layout: global
+title: Spark Streaming + Flume Integration Guide
+---
+
+[Apache Flume](https://flume.apache.org/) is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Here we explain how to configure Flume and Spark Streaming to receive data from Flume. There are two approaches to this.
+
+## Approach 1: Flume-style Push-based Approach
+Flume is designed to push data between Flume agents. In this approach, Spark Streaming essentially sets up a receiver that acts as an Avro agent for Flume, to which Flume can push the data. Here are the configuration steps.
+
+#### General Requirements
+Choose a machine in your cluster such that
+
+- When your Flume + Spark Streaming application is launched, one of the Spark workers must run on that machine.
+
+- Flume can be configured to push data to a port on that machine.
+
+Due to the push model, the streaming application needs to be up, with the receiver scheduled and listening on the chosen port, for Flume to be able to push data.
+
+#### Configuring Flume
+Configure the Flume agent to send data to the chosen machine and port through an Avro sink, by adding the following to its configuration file.
+
+ agent.sinks = avroSink
+ agent.sinks.avroSink.type = avro
+ agent.sinks.avroSink.channel = memoryChannel
+ agent.sinks.avroSink.hostname = <chosen machine's hostname>
+ agent.sinks.avroSink.port = <chosen port on the machine>
+
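+For reference, a complete minimal agent definition might look like the following sketch. Only the sink lines above come from this guide; the `netcat` source and the memory channel settings here are illustrative assumptions, added to show that the `memoryChannel` referenced by the sink (and some source feeding it) must also be defined in the same file.
+
+    agent.sources = netcatSource
+    agent.channels = memoryChannel
+    agent.sinks = avroSink
+
+    agent.sources.netcatSource.type = netcat
+    agent.sources.netcatSource.bind = localhost
+    agent.sources.netcatSource.port = 44444
+    agent.sources.netcatSource.channels = memoryChannel
+
+    agent.channels.memoryChannel.type = memory
+    agent.channels.memoryChannel.capacity = 10000
+
+    agent.sinks.avroSink.type = avro
+    agent.sinks.avroSink.channel = memoryChannel
+    agent.sinks.avroSink.hostname = <chosen machine's hostname>
+    agent.sinks.avroSink.port = <chosen port on the machine>
+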
+See [Flume's documentation](https://flume.apache.org/documentation.html) for more information about
+configuring Flume agents.
+
+#### Configuring Spark Streaming Application
+1. **Linking:** In your SBT/Maven project definition, link your streaming application against the following artifact (see [Linking section](streaming-programming-guide.html#linking) in the main programming guide for further information).
+
+ groupId = org.apache.spark
+ artifactId = spark-streaming-flume_{{site.SCALA_BINARY_VERSION}}
+ version = {{site.SPARK_VERSION_SHORT}}
+
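+    For example, with SBT the dependency might be declared as follows (a sketch; the `%%` form assumes your build appends the Scala binary version automatically):
+
+        libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "{{site.SPARK_VERSION_SHORT}}"
+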
+2. **Programming:** In the streaming application code, import `FlumeUtils` and create input DStream as follows.
+
+ <div class="codetabs">
+ <div data-lang="scala" markdown="1">
+ import org.apache.spark.streaming.flume._
+
+ val flumeStream = FlumeUtils.createStream(streamingContext, [chosen machine's hostname], [chosen port])
+
+ See the [API docs](api/scala/index.html#org.apache.spark.streaming.flume.FlumeUtils$)
+ and the [example]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples/streaming/FlumeEventCount.scala).
+ </div>
+ <div data-lang="java" markdown="1">
+ import org.apache.spark.streaming.flume.*;
+
+ JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
+ FlumeUtils.createStream(streamingContext, [chosen machine's hostname], [chosen port]);
+
+ See the [API docs](api/java/index.html?org/apache/spark/streaming/flume/FlumeUtils.html)
+ and the [example]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaFlumeEventCount.java).
+ </div>
+ </div>
+
+    Note that the hostname should be the same as the one used by the resource manager in the
+    cluster (Mesos, YARN or Spark Standalone), so that resource allocation can match the names and launch
+    the receiver on the right machine.
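+
+    Once created, this DStream can be used like any other DStream. For instance, a minimal Scala sketch that simply counts the Flume events received in each batch (mirroring the `FlumeEventCount` example linked above) would be:
+
+        // Count the events in each batch and print the count.
+        flumeStream.count().map(cnt => "Received " + cnt + " flume events.").print()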
+
+3. **Deploying:** Package `spark-streaming-flume_{{site.SCALA_BINARY_VERSION}}` and its dependencies (except `spark-core_{{site.SCALA_BINARY_VERSION}}` and `spark-streaming_{{site.SCALA_BINARY_VERSION}}` which are provided by `spark-submit`) into the application JAR. Then use `spark-submit` to launch your application (see [Deploying section](streaming-programming-guide.html#deploying-applications) in the main programming guide).
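+
+Putting the pieces together, a complete push-based application might look like the following Scala sketch. The application name, the 2-second batch interval, and the object/argument handling are illustrative assumptions, not prescribed by this guide.
+
+    import org.apache.spark.SparkConf
+    import org.apache.spark.streaming.{Seconds, StreamingContext}
+    import org.apache.spark.streaming.flume._
+
+    object FlumePushExample {
+      def main(args: Array[String]): Unit = {
+        // Hostname and port must match the Avro sink configured in the Flume agent above.
+        val Array(host, port) = args
+
+        val sparkConf = new SparkConf().setAppName("FlumePushExample")
+        val ssc = new StreamingContext(sparkConf, Seconds(2))
+
+        // The receiver acts as an Avro agent listening on host:port for events pushed by Flume.
+        val flumeStream = FlumeUtils.createStream(ssc, host, port.toInt)
+
+        // Report how many events arrived in each batch.
+        flumeStream.count().map(cnt => "Received " + cnt + " flume events.").print()
+
+        ssc.start()
+        ssc.awaitTermination()
+      }
+    }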
+
+## Approach 2 (Experimental): Pull-based Approach using a Custom Sink
+Instead of Flume pushing data directly to Spark Streaming, this approach runs a custom Flume sink that allows the following.
+
+- Flume pushes data into the sink, and the data stays buffered.
+- Spark Streaming uses transactions to pull data from the sink. Transactions succeed only after data is received and replicated by Spark Streaming.
+
+This ensures stronger reliability and fault-tolerance than the previous approach. However, it requires configuring Flume to run a custom sink. Here are the configuration steps.
+
+#### General Requirements
+Choose a machine that will run the custom sink in a Flume agent. The rest of the Flume pipeline is configured to send data to that agent. Machines in the Spark cluster should have access to the chosen machine running the custom sink.
+
+#### Configuring Flume
+Configuring Flume on the chosen machine requires the following two steps.
+
+1. **Sink JARs**: Add the following JARs to Flume's classpath (see [Flume's documentation](https://flume.apache.org/documentation.html) for how to do so) on the machine designated to run the custom sink.
+
+ (i) *Custom sink JAR*: Download the JAR corresponding to the following artifact (or [direct link](http://search.maven.org/remotecontent?filepath=org/apache/spark/spark-streaming-flume-sink_{{site.SCALA_BINARY_VERSION}}/{{site.SPARK_VERSION_SHORT}}/spark-streaming-flume-sink_{{site.SCALA_BINARY_VERSION}}-{{site.SPARK_VERSION_SHORT}}.jar)).
+
+ groupId = org.apache.spark
+ artifactId = spark-streaming-flume-sink_{{site.SCALA_BINARY_VERSION}}
+ version = {{site.SPARK_VERSION_SHORT}}
+
+ (ii) *Scala library JAR*: Download the Scala library JAR for Scala {{site.SCALA_VERSION}}. It can be found with the following artifact detail (or, [direct link](http://search.maven.org/remotecontent?filepath=org/scala-lang/scala-library/{{site.SCALA_VERSION}}/scala-library-{{site.SCALA_VERSION}}.jar)).
+
+ groupId = org.scala-lang
+ artifactId = scala-library
+ version = {{site.SCALA_VERSION}}
+
+2. **Configuration file**: On that machine, configure the Flume agent to send data to the custom sink by adding the following to the configuration file.
+
+ agent.sinks = spark
+ agent.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
+ agent.sinks.spark.hostname = <hostname of the local machine>
+ agent.sinks.spark.port = <port to listen on for connection from Spark>
+ agent.sinks.spark.channel = memoryChannel
+
+    Also make sure that the upstream Flume pipeline is configured to send data to the Flume agent running this sink (an example of such an upstream configuration is sketched below).
+
+See [Flume's documentation](https://flume.apache.org/documentation.html) for more information about
+configuring Flume agents.
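+
+For example, an upstream agent might forward its events through an ordinary Avro sink to an Avro source defined on the agent that hosts the custom sink (the `SparkSink` port above is reserved for Spark's own connections). The agent, sink and channel names below are illustrative assumptions:
+
+    upstream.sinks = forwardToSpark
+    upstream.sinks.forwardToSpark.type = avro
+    upstream.sinks.forwardToSpark.channel = memoryChannel
+    upstream.sinks.forwardToSpark.hostname = <hostname of the machine running the custom sink>
+    upstream.sinks.forwardToSpark.port = <port of an Avro source on that machine>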
+
+#### Configuring Spark Streaming Application
+1. **Linking:** In your SBT/Maven project definition, link your streaming application against the `spark-streaming-flume_{{site.SCALA_BINARY_VERSION}}` artifact (see [Linking section](streaming-programming-guide.html#linking) in the main programming guide).
+
+2. **Programming:** In the streaming application code, import `FlumeUtils` and create input DStream as follows.
+
+ <div class="codetabs">
+ <div data-lang="scala" markdown="1">
+ import org.apache.spark.streaming.flume._
+
+ val flumeStream = FlumeUtils.createPollingStream(streamingContext, [sink machine hostname], [sink port])
+ </div>
+ <div data-lang="java" markdown="1">
+ import org.apache.spark.streaming.flume.*;
+
+        JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
+            FlumeUtils.createPollingStream(streamingContext, [sink machine hostname], [sink port]);
+ </div>
+ </div>
+
+ See the Scala example [FlumePollingEventCount]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples/streaming/FlumePollingEventCount.scala).
+
+ Note that each input DStream can be configured to receive data from multiple sinks.
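+
+    For example, to poll several sinks from one DStream, `FlumeUtils.createPollingStream` also has a variant that accepts a sequence of `InetSocketAddress`es (check the API docs for the exact signature). A sketch with placeholder hostnames:
+
+        import java.net.InetSocketAddress
+        import org.apache.spark.storage.StorageLevel
+
+        val addresses = Seq(
+          new InetSocketAddress("sink-host-1", 9999),
+          new InetSocketAddress("sink-host-2", 9999))
+        val flumeStream =
+          FlumeUtils.createPollingStream(streamingContext, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)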
+
+3. **Deploying:** Package `spark-streaming-flume_{{site.SCALA_BINARY_VERSION}}` and its dependencies (except `spark-core_{{site.SCALA_BINARY_VERSION}}` and `spark-streaming_{{site.SCALA_BINARY_VERSION}}` which are provided by `spark-submit`) into the application JAR. Then use `spark-submit` to launch your application (see [Deploying section](streaming-programming-guide.html#deploying-applications) in the main programming guide).
+
+
+