author    Xiangrui Meng <meng@databricks.com>   2014-02-21 22:44:45 -0800
committer Patrick Wendell <pwendell@gmail.com>  2014-02-21 22:44:45 -0800
commit aaec7d4a80ed370847671e9e29ce2e92f1cff2c7 (patch)
tree   f4396ab0c4985b9f383dd5327878b1a9c6b697e6 /docs/scala-programming-guide.md
parent fefd22f4c3e95d904cb6f4f3fd88b89050907ae9 (diff)
SPARK-1117: update accumulator docs
The current doc hints that Spark doesn't support accumulators of type `Long`, which is wrong.

JIRA: https://spark-project.atlassian.net/browse/SPARK-1117

Author: Xiangrui Meng <meng@databricks.com>

Closes #631 from mengxr/acc and squashes the following commits:

45ecd25 [Xiangrui Meng] update accumulator docs
Diffstat (limited to 'docs/scala-programming-guide.md')
-rw-r--r--  docs/scala-programming-guide.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index cd847e07f9..506d3faa76 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -344,7 +344,7 @@ After the broadcast variable is created, it should be used instead of the value
## Accumulators
-Accumulators are variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type Int and Double, and programmers can add support for new types.
+Accumulators are variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric value types and standard mutable collections, and programmers can add support for new types.
An accumulator is created from an initial value `v` by calling `SparkContext.accumulator(v)`. Tasks running on the cluster can then add to it using the `+=` operator. However, they cannot read its value. Only the driver program can read the accumulator's value, using its `value` method.
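For context, here is a minimal sketch of the accumulator workflow the patched paragraph describes, using the Spark 0.9-era Scala API. The app name, master URL, and sample data are illustrative assumptions, not part of the commit:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AccumulatorDemo {
  def main(args: Array[String]): Unit = {
    // Local SparkContext for illustration only; config values are assumptions.
    val sc = new SparkContext(
      new SparkConf().setAppName("accumulator-demo").setMaster("local[*]"))

    // Created on the driver from an initial value, via SparkContext.accumulator(v).
    val accum = sc.accumulator(0)

    // Tasks running on the cluster may only add to it with +=; they cannot read it.
    sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)

    // Only the driver program can read the result, using the value method.
    println(accum.value) // 10

    sc.stop()
  }
}
```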
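The updated line also says programmers can add support for new value types. In the Scala API of this era that is done by implementing `AccumulatorParam`; the `(sum, count)` pair type below is a hypothetical example, not one shipped with Spark:

```scala
import org.apache.spark.AccumulatorParam

// Hypothetical custom value type: accumulate a running (sum, count) pair.
object SumCountParam extends AccumulatorParam[(Long, Long)] {
  // Zero element used when merging partial results from tasks.
  def zero(initial: (Long, Long)): (Long, Long) = (0L, 0L)
  // Associative merge of two partial results.
  def addInPlace(a: (Long, Long), b: (Long, Long)): (Long, Long) =
    (a._1 + b._1, a._2 + b._2)
}

// Usage on the driver (sc as in the previous sketch):
// val sumCount = sc.accumulator((0L, 0L))(SumCountParam)
// sc.parallelize(1 to 100).foreach(x => sumCount += ((x.toLong, 1L)))
// val (sum, count) = sumCount.value  // mean = sum.toDouble / count
```

Because `addInPlace` only needs to be associative, Spark can merge per-task partial results in any grouping, which is what makes accumulators efficient in parallel.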