author     Xiangrui Meng <meng@databricks.com>    2014-02-21 22:44:45 -0800
committer  Patrick Wendell <pwendell@gmail.com>   2014-02-21 22:44:45 -0800
commit     aaec7d4a80ed370847671e9e29ce2e92f1cff2c7 (patch)
tree       f4396ab0c4985b9f383dd5327878b1a9c6b697e6
parent     fefd22f4c3e95d904cb6f4f3fd88b89050907ae9 (diff)
SPARK-1117: update accumulator docs
The current doc hints that Spark doesn't support accumulators of type `Long`, which is wrong.

JIRA: https://spark-project.atlassian.net/browse/SPARK-1117

Author: Xiangrui Meng <meng@databricks.com>

Closes #631 from mengxr/acc and squashes the following commits:

45ecd25 [Xiangrui Meng] update accumulator docs
-rw-r--r--  core/src/main/scala/org/apache/spark/Accumulators.scala  4
-rw-r--r--  docs/scala-programming-guide.md                          2
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/Accumulators.scala b/core/src/main/scala/org/apache/spark/Accumulators.scala
index 73dd471ab1..d5f3e3f6ec 100644
--- a/core/src/main/scala/org/apache/spark/Accumulators.scala
+++ b/core/src/main/scala/org/apache/spark/Accumulators.scala
@@ -189,8 +189,8 @@ class GrowableAccumulableParam[R <% Growable[T] with TraversableOnce[T] with Ser
* A simpler value of [[Accumulable]] where the result type being accumulated is the same
* as the types of elements being merged, i.e. variables that are only "added" to through an
* associative operation and can therefore be efficiently supported in parallel. They can be used
- * to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type
- * `Int` and `Double`, and programmers can add support for new types.
+ * to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric
+ * value types, and programmers can add support for new types.
*
* An accumulator is created from an initial value `v` by calling [[SparkContext#accumulator]].
* Tasks running on the cluster can then add to it using the [[Accumulable#+=]] operator.
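To make the new "numeric value types" wording concrete, here is a minimal sketch against the Spark API of this era, using `SparkContext.accumulator` with the implicit `AccumulatorParam` instances brought in via `import SparkContext._`. The object name, app name, and `local[2]` master are illustrative choices, not part of the patch:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // implicit AccumulatorParams for numeric types

object NumericAccumulators {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("acc-sketch").setMaster("local[2]"))

    // Int, Long, and Double all work out of the box -- including Long,
    // the case SPARK-1117 clarifies.
    val count = sc.accumulator(0)   // Accumulator[Int]
    val sum   = sc.accumulator(0L)  // Accumulator[Long]
    val half  = sc.accumulator(0.0) // Accumulator[Double]

    sc.parallelize(1 to 100).foreach { x =>
      count += 1 // tasks may only add to an accumulator...
      sum   += x.toLong
      half  += x / 2.0
    }

    // ...while only the driver may read its value.
    println(s"count=${count.value} sum=${sum.value} half=${half.value}")
    sc.stop()
  }
}
```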
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index cd847e07f9..506d3faa76 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -344,7 +344,7 @@ After the broadcast variable is created, it should be used instead of the value
## Accumulators
-Accumulators are variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type Int and Double, and programmers can add support for new types.
+Accumulators are variables that are only "added" to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric value types and standard mutable collections, and programmers can add support for new types.
An accumulator is created from an initial value `v` by calling `SparkContext.accumulator(v)`. Tasks running on the cluster can then add to it using the `+=` operator. However, they cannot read its value. Only the driver program can read the accumulator's value, using its `value` method.
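To ground the guide's claims about "standard mutable collections" and adding "support for new types", a sketch of both extension points this file documents: `AccumulatorParam` for a user-defined value type, and `SparkContext.accumulableCollection` for any `Growable` collection. `SumCountParam`, the bucket labels, and the local master are our own illustrative assumptions:

```scala
import scala.collection.mutable

import org.apache.spark.{AccumulatorParam, SparkConf, SparkContext}

// Illustrative custom type: accumulate a (sum, count) pair so the
// driver can compute a mean. The name SumCountParam is ours, not Spark's.
object SumCountParam extends AccumulatorParam[(Double, Long)] {
  def zero(initial: (Double, Long)): (Double, Long) = (0.0, 0L)
  def addInPlace(a: (Double, Long), b: (Double, Long)): (Double, Long) =
    (a._1 + b._1, a._2 + b._2)
}

object CustomAccumulators {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("custom-acc").setMaster("local[2]"))

    // New value type: pass the AccumulatorParam explicitly.
    val stats = sc.accumulator((0.0, 0L))(SumCountParam)

    // Mutable collection: accumulableCollection wraps any Growable,
    // backed by the GrowableAccumulableParam seen in the hunk above.
    val seen = sc.accumulableCollection(mutable.HashSet[String]())

    sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0)).foreach { x =>
      stats += ((x, 1L)) // extra parens: += takes the tuple as one argument
      seen  += s"bucket-${(x / 2).toInt}"
    }

    val (total, n) = stats.value // readable only on the driver
    println(s"mean=${total / n}, seen=${seen.value}")
    sc.stop()
  }
}
```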