Diffstat (limited to 'site/docs/0.9.0/scala-programming-guide.html')
-rw-r--r--  site/docs/0.9.0/scala-programming-guide.html | 16
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/site/docs/0.9.0/scala-programming-guide.html b/site/docs/0.9.0/scala-programming-guide.html
index da95dc8a4..dc355b3d4 100644
--- a/site/docs/0.9.0/scala-programming-guide.html
+++ b/site/docs/0.9.0/scala-programming-guide.html
@@ -170,7 +170,7 @@
<h1 id="linking-with-spark">Linking with Spark</h1>
-<p>Spark 0.9.0-incubating uses Scala 2.10. If you write applications in Scala, you&#8217;ll need to use this same version of Scala in your program &#8211; newer major versions may not work.</p>
+<p>Spark 0.9.0-incubating uses Scala 2.10. If you write applications in Scala, you will need to use a compatible Scala version (e.g. 2.10.X) &#8211; newer major versions may not work.</p>
<p>To write a Spark application, you need to add a dependency on Spark. If you use SBT or Maven, Spark is available through Maven Central at:</p>
@@ -496,7 +496,7 @@ let you continue running tasks on the RDD without waiting to recompute a lost pa
<h2 id="accumulators">Accumulators</h2>
-<p>Accumulators are variables that are only &#8220;added&#8221; to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of type Int and Double, and programmers can add support for new types.</p>
+<p>Accumulators are variables that are only &#8220;added&#8221; to through an associative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric value types and standard mutable collections, and programmers can add support for new types.</p>
<p>An accumulator is created from an initial value <code>v</code> by calling <code>SparkContext.accumulator(v)</code>. Tasks running on the cluster can then add to it using the <code>+=</code> operator. However, they cannot read its value. Only the driver program can read the accumulator&#8217;s value, using its <code>value</code> method.</p>
@@ -515,7 +515,7 @@ let you continue running tasks on the RDD without waiting to recompute a lost pa
<h1 id="where-to-go-from-here">Where to Go from Here</h1>
-<p>You can see some <a href="http://spark.incubator.apache.org/examples.html">example Spark programs</a> on the Spark website.
+<p>You can see some <a href="http://spark.apache.org/examples.html">example Spark programs</a> on the Spark website.
In addition, Spark includes several samples in <code>examples/src/main/scala</code>. Some of them have both Spark versions and local (non-parallel) versions, allowing you to see what had to be changed to make the program run on a cluster. You can run them by passing the class name to the <code>bin/run-example</code> script included in Spark; for example:</p>
<pre><code>./bin/run-example org.apache.spark.examples.SparkPi
@@ -555,16 +555,6 @@ making sure that your data is stored in memory in an efficient format.</p>
<hr>-->
- <footer>
- <hr>
- <p style="text-align: center; veritcal-align: middle; color: #999;">
- Apache Spark is an effort undergoing incubation at the Apache Software Foundation.
- <a href="http://incubator.apache.org">
- <img style="margin-left: 20px;" src="img/incubator-logo.png" />
- </a>
- </p>
- </footer>
-
</div> <!-- /container -->
<script src="js/vendor/jquery-1.8.0.min.js"></script>