author      Reynold Xin <rxin@apache.org>    2016-01-14 18:30:11 +0000
committer   Reynold Xin <rxin@apache.org>    2016-01-14 18:30:11 +0000
commit      ca9adef30c96f9f948bf689f80b83a2b17c12a10 (patch)
tree        b2281cd62e6bea1318cc2d13ac7a55a2140359e7 /site/releases/spark-release-0-8-0.html
parent      e40429c0bc9ee74121a6327be9b2fba3abd1ec43 (diff)
spark summit east agenda
Diffstat (limited to 'site/releases/spark-release-0-8-0.html')
-rw-r--r--  site/releases/spark-release-0-8-0.html  10
1 file changed, 5 insertions, 5 deletions
diff --git a/site/releases/spark-release-0-8-0.html b/site/releases/spark-release-0-8-0.html
index 8819c96e9..51fd7045b 100644
--- a/site/releases/spark-release-0-8-0.html
+++ b/site/releases/spark-release-0-8-0.html
@@ -134,6 +134,9 @@
<h5>Latest News</h5>
<ul class="list-unstyled">
+ <li><a href="/news/spark-summit-east-agenda-posted.html">Spark Summit East (Feb 16, 2016, New York) agenda posted</a>
+ <span class="small">(Jan 14, 2016)</span></li>
+
<li><a href="/news/spark-1-6-0-released.html">Spark 1.6.0 released</a>
<span class="small">(Jan 04, 2016)</span></li>
@@ -143,9 +146,6 @@
<li><a href="/news/spark-1-5-2-released.html">Spark 1.5.2 released</a>
<span class="small">(Nov 09, 2015)</span></li>
- <li><a href="/news/submit-talks-to-spark-summit-east-2016.html">Submission is open for Spark Summit East 2016</a>
- <span class="small">(Oct 14, 2015)</span></li>
-
</ul>
<p class="small" style="text-align: right;"><a href="/news/index.html">Archive</a></p>
</div>
@@ -194,13 +194,13 @@
<p>Spark’s internal job scheduler has been refactored and extended to include more sophisticated scheduling policies. In particular, a <a href="http://spark.incubator.apache.org/docs/0.8.0/job-scheduling.html#scheduling-within-an-application">fair scheduler</a> implementation now allows multiple users to share an instance of Spark, which helps users running shorter jobs to achieve good performance, even when longer-running jobs are running in parallel. Support for topology-aware scheduling has been extended, including the ability to take into account rack locality and support for multiple executors on a single machine.</p>
<h3 id="easier-deployment-and-linking">Easier Deployment and Linking</h3>
-<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code>spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>. </p>
+<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code>spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>.</p>
<h3 id="expanded-ec2-capabilities">Expanded EC2 Capabilities</h3>
<p>Spark’s EC2 scripts now support launching in any availability zone. Support has also been added for EC2 instance types which use the newer “HVM” architecture. This includes the cluster compute (cc1/cc2) family of instance types. We’ve also added support for running newer versions of HDFS alongside Spark. Finally, we’ve added the ability to launch clusters with maintenance releases of Spark in addition to launching the newest release.</p>
<h3 id="improved-documentation">Improved Documentation</h3>
-<p>This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark. </p>
+<p>This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark.</p>
<h3 id="other-improvements">Other Improvements</h3>
<ul>