author:    Xiangrui Meng <meng@databricks.com> (2014-02-21 22:44:45 -0800)
committer: Patrick Wendell <pwendell@gmail.com> (2014-02-21 22:44:45 -0800)
commit:    aaec7d4a80ed370847671e9e29ce2e92f1cff2c7
tree:      f4396ab0c4985b9f383dd5327878b1a9c6b697e6 /core/src
parent:    fefd22f4c3e95d904cb6f4f3fd88b89050907ae9
SPARK-1117: update accumulator docs
The current doc implies that Spark doesn't support accumulators of type `Long`, which is wrong.
JIRA: https://spark-project.atlassian.net/browse/SPARK-1117
Author: Xiangrui Meng <meng@databricks.com>
Closes #631 from mengxr/acc and squashes the following commits:
45ecd25 [Xiangrui Meng] update accumulator docs
Diffstat (limited to 'core/src')
 core/src/main/scala/org/apache/spark/Accumulators.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/Accumulators.scala b/core/src/main/scala/org/apache/spark/Accumulators.scala
index 73dd471ab1..d5f3e3f6ec 100644
--- a/core/src/main/scala/org/apache/spark/Accumulators.scala
+++ b/core/src/main/scala/org/apache/spark/Accumulators.scala
@@ -189,8 +189,8 @@ class GrowableAccumulableParam[R <% Growable[T] with TraversableOnce[T] with Ser
  * A simpler value of [[Accumulable]] where the result type being accumulated is the same
  * as the types of elements being merged, i.e. variables that are only "added" to through an
  * associative operation and can therefore be efficiently supported in parallel. They can be used
- * to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type
- * `Int` and `Double`, and programmers can add support for new types.
+ * to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric
+ * value types, and programmers can add support for new types.
  *
  * An accumulator is created from an initial value `v` by calling [[SparkContext#accumulator]].
  * Tasks running on the cluster can then add to it using the [[Accumulable#+=]] operator.
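The corrected doc makes two claims: numeric value types (including `Long`) are supported natively, and programmers can add support for new types by supplying an `AccumulatorParam`. A minimal sketch of both, assuming a live `SparkContext` named `sc`; the `Vector2` type and its names are illustrative, not part of the patch:

```scala
import org.apache.spark.AccumulatorParam

// Hypothetical custom type used only for illustration.
case class Vector2(x: Double, y: Double)

// Adding accumulator support for a new type means providing an
// AccumulatorParam: a zero value plus an associative addInPlace.
implicit object Vector2AccumulatorParam extends AccumulatorParam[Vector2] {
  def zero(initial: Vector2): Vector2 = Vector2(0.0, 0.0)
  def addInPlace(a: Vector2, b: Vector2): Vector2 =
    Vector2(a.x + b.x, a.y + b.y)
}

// Numeric value types such as Long work out of the box -- the point of
// this doc fix. Tasks add to the accumulator with +=; only the driver
// reads the final value.
val longAcc = sc.accumulator(0L)
sc.parallelize(1 to 100).foreach(i => longAcc += i.toLong)

val vecAcc = sc.accumulator(Vector2(0.0, 0.0))
sc.parallelize(Seq(Vector2(1.0, 2.0), Vector2(3.0, 4.0))).foreach(v => vecAcc += v)
```

Because `addInPlace` must be associative (and in practice commutative), partial sums computed on different partitions can be merged in any order, which is what lets accumulators be supported efficiently in parallel.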