author    David Martin <dmartinpro@users.noreply.github.com>  2015-09-28 10:41:39 +0100
committer Sean Owen <sowen@cloudera.com>  2015-09-28 10:41:39 +0100
commit    b58249930d58e2de238c05aaf5fa9315b4c3cbab (patch)
tree      56b5b7c119c978d31cfb948b791988b179bc3fd9
parent    fb4c7be747a35ea733df6010194cd76db449adc9 (diff)
Fix two mistakes in programming-guide page
seperate -> separate
sees -> see

Author: David Martin <dmartinpro@users.noreply.github.com>

Closes #8928 from dmartinpro/patch-1.
-rw-r--r--  docs/programming-guide.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 8ad238315f..22656fd791 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -805,9 +805,9 @@ print("Counter value: " + counter)
The primary challenge is that the behavior of the above code is undefined. In local mode with a single JVM, the above code will sum the values within the RDD and store it in **counter**. This is because both the RDD and the variable **counter** are in the same memory space on the driver node.
-However, in `cluster` mode, what happens is more complicated, and the above may not work as intended. To execute jobs, Spark breaks up the processing of RDD operations into tasks - each of which is operated on by an executor. Prior to execution, Spark computes the **closure**. The closure is those variables and methods which must be visible for the executor to perform its computations on the RDD (in this case `foreach()`). This closure is serialized and sent to each executor. In `local` mode, there is only the one executors so everything shares the same closure. In other modes however, this is not the case and the executors running on seperate worker nodes each have their own copy of the closure.
+However, in `cluster` mode, what happens is more complicated, and the above may not work as intended. To execute jobs, Spark breaks up the processing of RDD operations into tasks - each of which is operated on by an executor. Prior to execution, Spark computes the **closure**. The closure is those variables and methods which must be visible for the executor to perform its computations on the RDD (in this case `foreach()`). This closure is serialized and sent to each executor. In `local` mode, there is only the one executors so everything shares the same closure. In other modes however, this is not the case and the executors running on separate worker nodes each have their own copy of the closure.
-What is happening here is that the variables within the closure sent to each executor are now copies and thus, when **counter** is referenced within the `foreach` function, it's no longer the **counter** on the driver node. There is still a **counter** in the memory of the driver node but this is no longer visible to the executors! The executors only sees the copy from the serialized closure. Thus, the final value of **counter** will still be zero since all operations on **counter** were referencing the value within the serialized closure.
+What is happening here is that the variables within the closure sent to each executor are now copies and thus, when **counter** is referenced within the `foreach` function, it's no longer the **counter** on the driver node. There is still a **counter** in the memory of the driver node but this is no longer visible to the executors! The executors only see the copy from the serialized closure. Thus, the final value of **counter** will still be zero since all operations on **counter** were referencing the value within the serialized closure.
To ensure well-defined behavior in these sorts of scenarios one should use an [`Accumulator`](#AccumLink). Accumulators in Spark are used specifically to provide a mechanism for safely updating a variable when execution is split up across worker nodes in a cluster. The Accumulators section of this guide discusses these in more detail.
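
For reference, the Accumulator pattern that the revised paragraphs point to can be sketched in PySpark roughly as follows (a minimal illustration, not taken from this patch; the RDD contents and app name are made up):

```python
from pyspark import SparkContext

sc = SparkContext("local", "AccumulatorExample")

# A driver-side accumulator: tasks may only add to it, and the
# merged result is read back on the driver via accum.value.
accum = sc.accumulator(0)

rdd = sc.parallelize([1, 2, 3, 4])

# Each executor adds to its own copy of the accumulator; Spark merges
# the updates back to the driver, unlike a plain closed-over variable.
rdd.foreach(lambda x: accum.add(x))

print("Counter value: " + str(accum.value))  # prints 10
```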