path: root/docs/streaming-programming-guide.md
author     Tathagata Das <tathagata.das1565@gmail.com>  2013-02-24 16:24:52 -0800
committer  Tathagata Das <tathagata.das1565@gmail.com>  2013-02-24 16:24:52 -0800
commit  5ab37be9831e8a70b2502b14aed1c87cb002a189 (patch)
tree    52f37dddce0179a41a7855248d970e7fe6513719 /docs/streaming-programming-guide.md
parent  28f8b721f65fc8e699f208c5dc64d90822a85d91 (diff)
Fixed class paths and dependencies based on Matei's comments.
Diffstat (limited to 'docs/streaming-programming-guide.md')
-rw-r--r--  docs/streaming-programming-guide.md  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index ded43e67cd..0e618a06c7 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -365,14 +365,14 @@ There are two failure behaviors based on which input sources are used.
Since all data is modeled as RDDs with their lineage of deterministic operations, any recomputation always leads to the same result. As a result, all DStream transformations are guaranteed to have _exactly-once_ semantics. That is, the final transformed result will be the same even if there was a worker node failure. However, output operations (like `foreach`) have _at-least-once_ semantics, that is, the transformed data may get written to an external entity more than once in the event of a worker failure. While this is acceptable for saving to HDFS using the `saveAs*Files` operations (as the file will simply get over-written by the same data), additional transaction-like mechanisms may be necessary to achieve exactly-once semantics for output operations.
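The reason idempotent outputs (like `saveAs*Files`) tolerate at-least-once delivery can be shown with a small sketch. The `IdempotentStore` object below is a hypothetical stand-in for an external sink, not part of the Spark API: re-writing the same key/value pair after a simulated redelivery leaves the sink's state unchanged.

{% highlight scala %}
import scala.collection.mutable

// Hypothetical external sink for illustration only. Writes are keyed, so a
// repeated write of the same record (a redelivery) is a no-op on the state.
object IdempotentStore {
  private val data = mutable.Map[String, String]()

  def write(key: String, value: String): Unit = data(key) = value

  def contents: Map[String, String] = data.toMap
}

object RedeliveryDemo extends App {
  IdempotentStore.write("batch-1", "count=7")
  // The same record arrives again after a worker failure and recomputation.
  IdempotentStore.write("batch-1", "count=7")
  // Despite two deliveries, the sink holds exactly one record.
  println(IdempotentStore.contents.size) // prints 1
}
{% endhighlight %}

A non-idempotent sink (for example, one that appends rather than overwrites) would need the transaction-like mechanism mentioned above, such as recording the IDs of batches already written.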
-## Failure of a Driver Node
-A system that is required to operate 24/7 needs to be able tolerate the failure of the drive node as well. Spark Streaming does this by saving the state of the DStream computation periodically to a HDFS file, that can be used to restart the streaming computation in the event of a failure of the driver node. To elaborate, the following state is periodically saved to a file.
+## Failure of the Driver Node
+A system that is required to operate 24/7 needs to be able tolerate the failure of the driver node as well. Spark Streaming does this by saving the state of the DStream computation periodically to a HDFS file, that can be used to restart the streaming computation in the event of a failure of the driver node. This checkpointing is enabled by setting a HDFS directory for checkpointing using `ssc.checkpoint(<checkpoint directory>)` as described [earlier](#rdd-checkpointing-within-dstreams). To elaborate, the following state is periodically saved to a file.
1. The DStream operator graph (input streams, output streams, etc.)
1. The configuration of each DStream (checkpoint interval, etc.)
1. The RDD checkpoint files of each DStream
-All this is periodically saved in the file `<checkpoint directory>/graph` where `<checkpoint directory>` is the HDFS path set using `ssc.checkpoint(...)` as described earlier. To recover, a new Streaming Context can be created with this directory by using
+All this is periodically saved in the file `<checkpoint directory>/graph`. To recover, a new Streaming Context can be created with this directory by using
{% highlight scala %}
val ssc = new StreamingContext(checkpointDirectory)