From ac807a0867562c68243f3d93df6c2f9600d2d799 Mon Sep 17 00:00:00 2001
From: Patrick Wendell
Date: Sat, 12 Jul 2014 00:41:27 +0000
Subject: Adding 1.0.1 release of Spark.

---
 site/releases/spark-release-0-3.html   |  10 +-
 site/releases/spark-release-0-5-0.html |  16 +--
 site/releases/spark-release-0-5-1.html |  10 +-
 site/releases/spark-release-0-5-2.html |   8 +-
 site/releases/spark-release-0-6-0.html |  14 +-
 site/releases/spark-release-0-6-1.html |   8 +-
 site/releases/spark-release-0-6-2.html |   8 +-
 site/releases/spark-release-0-7-0.html |  12 +-
 site/releases/spark-release-0-7-2.html |   8 +-
 site/releases/spark-release-0-7-3.html |   8 +-
 site/releases/spark-release-0-8-0.html | 146 +++++++++----
 site/releases/spark-release-0-8-1.html |  94 ++++++-------
 site/releases/spark-release-0-9-0.html | 170 +++++++++------
 site/releases/spark-release-0-9-1.html |  16 +--
 site/releases/spark-release-1-0-0.html | 246 ++++++++++++-----------
 15 files changed, 387 insertions(+), 387 deletions(-)

(limited to 'site/releases')

diff --git a/site/releases/spark-release-0-3.html b/site/releases/spark-release-0-3.html
index 3315edb14..abe3e2913 100644
--- a/site/releases/spark-release-0-3.html
+++ b/site/releases/spark-release-0-3.html
@@ -96,7 +96,7 @@
@@ -124,6 +124,9 @@
Latest News

Archive

@@ -176,7 +176,7 @@

Native Types for SequenceFiles

When working with SequenceFiles, which store objects that implement Hadoop's Writable interface, Spark now lets you use native types for certain common Writable types, such as IntWritable and Text. For example:

// Will read a SequenceFile of (IntWritable, Text); path below is a placeholder
val seq = sc.sequenceFile[Int, String]("hdfs://...")
diff --git a/site/releases/spark-release-0-5-0.html b/site/releases/spark-release-0-5-0.html
index 8e857b451..480b95649 100644
--- a/site/releases/spark-release-0-5-0.html
+++ b/site/releases/spark-release-0-5-0.html
@@ -96,7 +96,7 @@
@@ -124,6 +124,9 @@
@@ -164,10 +164,10 @@

Mesos 0.9 Support

This release runs on Apache Mesos 0.9, the first Apache Incubator release of Mesos, which contains significant usability and stability improvements. Most notable are better memory accounting for applications with long-term memory use, easier access to old jobs' traces and logs (by keeping a history of executed tasks on the web UI), and simpler installation.

Performance Improvements

Spark's scheduling is more communication-efficient when sending out operations on RDDs with large lineage graphs. In addition, the cache replacement policy now replaces data more intelligently when an RDD does not fit in the cache, shuffles are more efficient, and the serializer used for shipping closures is now configurable, making it possible to use libraries faster than Java serialization there.

Debug Improvements

@@ -179,11 +179,11 @@

EC2 Launch Script Improvements

Spark's EC2 launch scripts are now included in the main package, and can automatically discover and use the latest Spark AMI instead of launching a hardcoded machine image ID.

New Hadoop API Support

You can now use Spark to read and write data to storage formats in the new org.apache.hadoop.mapreduce packages (the "new Hadoop" API). In addition, this release fixes an issue caused by an HDFS initialization bug in some recent versions of HDFS.

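As an illustrative sketch (not taken from the release notes) of reading through the new Hadoop API, assuming spark-core and hadoop-client are on the classpath; the file name and app name are placeholders:

```scala
import java.nio.file.{Files, Paths}
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.{SparkConf, SparkContext}

// Create a small sample file so the sketch is self-contained.
Files.write(Paths.get("data.txt"), "hello\nworld\n".getBytes)

val sc = new SparkContext(
  new SparkConf().setAppName("new-hadoop-api-sketch").setMaster("local[1]"))

// Read the file through the new Hadoop API (org.apache.hadoop.mapreduce);
// the input format yields (byte offset, line) pairs.
val lines = sc
  .newAPIHadoopFile[LongWritable, Text, TextInputFormat]("data.txt")
  .map { case (_, text) => text.toString }

println(lines.count()) // prints 2
sc.stop()
```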
diff --git a/site/releases/spark-release-0-5-1.html b/site/releases/spark-release-0-5-1.html
index 287ffaf7b..7bb3a8939 100644
--- a/site/releases/spark-release-0-5-1.html
+++ b/site/releases/spark-release-0-5-1.html
@@ -96,7 +96,7 @@

@@ -124,6 +124,9 @@
@@ -193,7 +193,7 @@

EC2 Improvements

Spark's EC2 launch script now configures Spark's memory limit automatically based on the machine's available RAM.

diff --git a/site/releases/spark-release-0-5-2.html b/site/releases/spark-release-0-5-2.html
index fd464d613..e0302ead4 100644
--- a/site/releases/spark-release-0-5-2.html
+++ b/site/releases/spark-release-0-5-2.html
@@ -96,7 +96,7 @@

@@ -124,6 +124,9 @@
diff --git a/site/releases/spark-release-0-6-0.html b/site/releases/spark-release-0-6-0.html
index ab4cc0c16..df2615647 100644
--- a/site/releases/spark-release-0-6-0.html
+++ b/site/releases/spark-release-0-6-0.html
@@ -96,7 +96,7 @@
@@ -124,6 +124,9 @@
@@ -172,11 +172,11 @@

Java API

Java programmers can now use Spark through a new Java API layer. This layer makes available all of Spark's features, including parallel transformations, distributed datasets, broadcast variables, and accumulators, in a Java-friendly manner.

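A minimal sketch of what using the Java API layer looks like (the class and method names here are illustrative, not from the release notes; assumes spark-core on the classpath):

```java
import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaApiSketch {
  // Count the even numbers in a small distributed dataset.
  static long countEvens() {
    JavaSparkContext sc = new JavaSparkContext("local[1]", "java-api-sketch");
    try {
      JavaRDD<Integer> nums = sc.parallelize(Arrays.asList(1, 2, 3, 4));
      // A parallel transformation expressed with a Java lambda.
      return nums.filter(x -> x % 2 == 0).count();
    } finally {
      sc.stop();
    }
  }

  public static void main(String[] args) {
    System.out.println(countEvens()); // prints 2
  }
}
```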
Expanded Documentation

Spark's documentation has been expanded with a new quick start guide, additional deployment instructions, configuration guide, tuning guide, and improved Scaladoc API documentation.

Engine Changes

@@ -199,7 +199,7 @@

Enhanced Debugging

Spark's logs now indicate which operation in your program each RDD and job belongs to, making it easier to trace problems back to the part of your code that caused them.

Maven Artifacts

diff --git a/site/releases/spark-release-0-6-1.html b/site/releases/spark-release-0-6-1.html
index 2e0de3150..0984d9504 100644
--- a/site/releases/spark-release-0-6-1.html
+++ b/site/releases/spark-release-0-6-1.html
@@ -96,7 +96,7 @@
@@ -124,6 +124,9 @@
diff --git a/site/releases/spark-release-0-6-2.html b/site/releases/spark-release-0-6-2.html
index 0a57d5adb..4e67d35ce 100644
--- a/site/releases/spark-release-0-6-2.html
+++ b/site/releases/spark-release-0-6-2.html
@@ -96,7 +96,7 @@
@@ -124,6 +124,9 @@
diff --git a/site/releases/spark-release-0-7-0.html b/site/releases/spark-release-0-7-0.html
index 562e42918..2bc4b2b25 100644
--- a/site/releases/spark-release-0-7-0.html
+++ b/site/releases/spark-release-0-7-0.html
@@ -96,7 +96,7 @@
@@ -124,6 +124,9 @@
@@ -186,7 +186,7 @@

New Operations

This release adds several RDD transformations, including keys, values, keyBy, subtract, coalesce, and zip. It also adds SparkContext.hadoopConfiguration, which allows programs to configure Hadoop input/output settings globally across operations. Finally, it adds the RDD.toDebugString() method, which prints an RDD's lineage graph for troubleshooting.

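The operations above can be sketched as follows (an illustrative example, not from the release notes; app name and the Hadoop property chosen are placeholders, and spark-core is assumed on the classpath):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("rdd-ops-sketch").setMaster("local[1]"))

// Configure a Hadoop I/O setting globally for all operations.
sc.hadoopConfiguration.set("io.file.buffer.size", "65536")

val a = sc.parallelize(Seq(1, 2, 3, 4))
val b = sc.parallelize(Seq(3, 4, 5))

val byParity = a.keyBy(_ % 2)       // RDD[(Int, Int)] keyed by parity
val parities = byParity.keys        // just the keys: 1, 0, 1, 0
val onlyInA  = a.subtract(b)        // elements of a absent from b: 1, 2
val zipped   = a.zip(a.map(_ * 10)) // (1,10), (2,20), (3,30), (4,40)
val fewer    = a.coalesce(1)        // shrink to a single partition

println(onlyInA.toDebugString)      // prints onlyInA's lineage graph
sc.stop()
```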
EC2 Improvements

@@ -223,7 +223,7 @@

Credits

Spark 0.7 was the work of many contributors from Berkeley and beyond: 31 different contributors in total, 20 of whom were from outside Berkeley. Here are the people who contributed, along with the areas they worked on:

Compatibility