path: root/site/releases/spark-release-0-8-0.html
author     Patrick Wendell <pwendell@apache.org>    2013-09-25 23:03:27 +0000
committer  Patrick Wendell <pwendell@apache.org>    2013-09-25 23:03:27 +0000
commit     b43f9bd235652530bd715dfbb6ccb7b6012dddf1 (patch)
tree       9d9f3d3ac78a30ee8334d2ce64698d0931bfb884 /site/releases/spark-release-0-8-0.html
parent     97f4aff3a2fc3583087822255fad5e34aa86a749 (diff)
download   spark-website-b43f9bd235652530bd715dfbb6ccb7b6012dddf1.tar.gz
           spark-website-b43f9bd235652530bd715dfbb6ccb7b6012dddf1.tar.bz2
           spark-website-b43f9bd235652530bd715dfbb6ccb7b6012dddf1.zip
Minor changes to release description.
Diffstat (limited to 'site/releases/spark-release-0-8-0.html')
-rw-r--r--   site/releases/spark-release-0-8-0.html   4
1 file changed, 2 insertions, 2 deletions
diff --git a/site/releases/spark-release-0-8-0.html b/site/releases/spark-release-0-8-0.html
index 4a6801c1f..d284729d9 100644
--- a/site/releases/spark-release-0-8-0.html
+++ b/site/releases/spark-release-0-8-0.html
@@ -112,7 +112,7 @@
<p>Spark 0.8.0 is a major release that includes many new capabilities and usability improvements. It’s also our first release under the Apache incubator. It is the largest Spark release yet, with contributions from 67 developers and 24 companies.</p>
-<p>You can download Spark 0.8.0 as either a <a href="http://spark-project.org/download/spark-0.8.0-incubating.tgz">source package</a> (4 MB tar.gz) or a prebuilt package for <a href="http://spark-project.org/download/spark-0.8.0-incubating-bin-hadoop1.tgz">Hadoop 1 / CDH3</a> or <a href="http://spark-project.org/download/spark-0.8.0-incubating-bin-cdh4.tgz">CDH4</a> (125 MB tar.gz).</p>
+<p>You can download Spark 0.8.0 as either a <a href="http://spark-project.org/download/spark-0.8.0-incubating.tgz">source package</a> (4 MB tar.gz) or a prebuilt package for <a href="http://spark-project.org/download/spark-0.8.0-incubating-bin-hadoop1.tgz">Hadoop 1 / CDH3</a> or <a href="http://spark-project.org/download/spark-0.8.0-incubating-bin-cdh4.tgz">CDH4</a> (125 MB tar.gz). Download signatures and checksums are available at the official <a href="http://www.apache.org/dist/incubator/spark/spark-0.8.0-incubating/">Apache download site</a>.</p>
<h3 id="monitoring-ui-and-metrics">Monitoring UI and Metrics</h3>
<p>Spark now displays a variety of monitoring data in a web UI (by default at port 4040 on the driver node). A new job dashboard contains information about running, succeeded, and failed jobs, including percentile statistics covering task runtime, shuffled data, and garbage collection. The existing storage dashboard has been extended, and additional pages have been added to display total storage and task information per executor. Finally, a new metrics library exposes internal Spark metrics through various APIs, including JMX and Ganglia.</p>
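
As an illustrative aside (not part of the commit above), the Scala sketch below simply runs a small local job so the new dashboard has something to display. It relies on the 0.8-era two-argument SparkContext constructor and the default UI port 4040 described in the release notes; the object and application names are arbitrary placeholders.

import org.apache.spark.SparkContext

object MonitoringUiSketch {
  def main(args: Array[String]): Unit = {
    // Local two-thread context; "monitoring-ui-sketch" is just an example app name.
    val sc = new SparkContext("local[2]", "monitoring-ui-sketch")

    // Run a small job so the dashboard at http://<driver-host>:4040
    // shows the job, its stages, and per-task runtime statistics.
    sc.parallelize(1 to 100000).map(_ * 2).count()

    sc.stop()
  }
}

While the job runs, the web UI on the driver node also exposes the extended storage dashboard and per-executor pages mentioned above.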
@@ -122,7 +122,7 @@
</p>
<h3 id="machine-learning-library">Machine Learning Library</h3>
-<p>This release introduces MLlib, a standard library of high-quality machine learning and optimization algorithms for Spark. MLlib was developed in collaboration with the <a href="http://www.mlbase.org/">U.C. Berkeley MLBase project</a>. The current library contains seven algorithms, including support vector machines (SVMs), logistic regression, several regularized variants of linear regression, a clustering algorithm (KMeans), and alternating least squares collaborative filtering.</p>
+<p>This release introduces MLlib, a standard library of high-quality machine learning and optimization algorithms for Spark. MLlib was developed in collaboration with the <a href="http://www.mlbase.org/">UC Berkeley MLbase project</a>. The current library contains seven algorithms, including support vector machines (SVMs), logistic regression, several regularized variants of linear regression, a clustering algorithm (KMeans), and alternating least squares collaborative filtering.</p>
<h3 id="python-improvements">Python Improvements</h3>
<p>The Python API has been extended with many previously missing features. This includes support for different storage levels, sampling, and various missing RDD operators. We’ve also added support for running Spark in <a href="http://ipython.org/">IPython</a>, including the IPython Notebook, and for running PySpark on Windows.</p>
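
Returning to the MLlib paragraph above, the following is a minimal Scala sketch of the kind of code the new library enables, clustering a toy dataset with KMeans. It is only an illustration: the Array[Double] input type, the KMeans.train(data, k, maxIterations) signature, and the clusterCenters field are assumptions about the 0.8-era API and may differ in detail from the shipped release.

import org.apache.spark.SparkContext
import org.apache.spark.mllib.clustering.KMeans

object MLlibKMeansSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[2]", "mllib-kmeans-sketch")

    // Two well-separated toy clusters; points are plain Array[Double]
    // (assumed input type for the 0.8-era KMeans API).
    val points = sc.parallelize(Seq(
      Array(0.0, 0.0), Array(0.1, 0.1),
      Array(9.0, 9.0), Array(9.1, 9.1)
    ))

    // Fit k = 2 clusters with at most 20 iterations, then print the centers
    // (clusterCenters is assumed to return Array[Array[Double]] here).
    val model = KMeans.train(points, 2, 20)
    model.clusterCenters.foreach(c => println(c.mkString("(", ", ", ")")))

    sc.stop()
  }
}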