From 9700f2f4afe566412bdb73b443b3aad99b375af1 Mon Sep 17 00:00:00 2001
From: Matei Zaharia

Spark’s internal job scheduler has been refactored and extended to include more sophisticated scheduling policies. In particular, a fair scheduler implementation now allows multiple users to share an instance of Spark, which helps users running shorter jobs to achieve good performance, even when longer-running jobs are running in parallel. Support for topology-aware scheduling has been extended, including the ability to take into account rack locality and support for multiple executors on a single machine.

Easier Deployment and Linking

User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of spark-core specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided here.

Expanded EC2 Capabilities

Spark’s EC2 scripts now support launching in any availability zone. Support has also been added for EC2 instance types which use the newer “HVM” architecture. This includes the cluster compute (cc1/cc2) family of instance types. We’ve also added support for running newer versions of HDFS alongside Spark. Finally, we’ve added the ability to launch clusters with maintenance releases of Spark in addition to launching the newest release.

Improved Documentation

This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark.

Spark SQL adds a number of new features and performance improvements in this release. A JDBC/ODBC server allows users to connect to Spark SQL from many different applications and provides shared access to cached tables. A new module provides support for loading JSON data directly into Spark’s SchemaRDD format, including automatic schema inference. Spark SQL introduces dynamic bytecode generation in this release, a technique which significantly speeds up execution for queries that perform complex expression evaluation. This release also adds support for registering Python, Scala, and Java lambda functions as UDFs, which can then be called directly in SQL. Spark 1.1 adds a public types API to allow users to create SchemaRDDs from custom data sources. Finally, many optimizations have been added to the native Parquet support as well as throughout the engine.
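As a rough illustration of the JSON loading and UDF registration described above, here is a minimal sketch against the Spark 1.1-era Scala API; the file path, table name, and UDF name are placeholders, not anything from the release notes:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("sql-1.1-sketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)

    // jsonFile infers the schema automatically and yields a SchemaRDD.
    val people = sqlContext.jsonFile("people.json")
    people.registerTempTable("people")

    // Register a Scala lambda as a UDF and call it directly from SQL.
    sqlContext.registerFunction("strLen", (s: String) => s.length)
    sqlContext.sql("SELECT name, strLen(name) FROM people").collect().foreach(println)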
MLlib adds several new algorithms and optimizations in this release. 1.1 introduces a new library of statistical packages which provides exploratory analytic functions. These include stratified sampling, correlations, chi-squared tests and support for creating random datasets. This release adds utilities for feature extraction (Word2Vec and TF-IDF) and feature transformation (normalization and standard scaling). Also new is support for nonnegative matrix factorization and SVD via Lanczos. The decision tree algorithm has been added in Python and Java. A tree aggregation primitive has been added to help optimize many existing algorithms. Performance improves across the board in MLlib 1.1, with improvements of around 2-3X for many algorithms and up to 5X for large-scale decision tree problems.

Spark Streaming adds a new data source, Amazon Kinesis. For Apache Flume, a new mode is supported which pulls data from Flume, simplifying deployment and providing high availability. The first of a set of streaming machine learning algorithms is introduced with streaming linear regression. Finally, rate limiting has been added for streaming inputs.

GraphX adds custom storage levels for vertices and edges, along with improved numerical precision across the board. Finally, GraphX adds a new label propagation algorithm.

In 1.2 Spark core upgrades two major subsystems to improve the performance and stability of very large scale shuffles. The first is Spark’s communication manager used during bulk transfers, which upgrades to a netty-based implementation. The second is Spark’s shuffle mechanism, which upgrades to the “sort based” shuffle initially released in Spark 1.1. Spark also adds an elastic scaling mechanism designed to improve cluster utilization during long-running ETL-style jobs. This is currently supported on YARN and will make its way to other cluster managers in future versions. Finally, Spark 1.2 adds support for Scala 2.11. For instructions on building for Scala 2.11 see the build documentation.

This release includes two major feature additions to Spark’s streaming library: a Python API and a write ahead log for full driver H/A. The Python API covers almost all the DStream transformations and output operations. Input sources based on text files and text over sockets are currently supported. Support for Kafka and Flume input streams in Python will be added in the next release. Second, Spark Streaming now features H/A driver support through a write ahead log (WAL). In Spark 1.1 and earlier, some buffered (received but not yet processed) data can be lost during driver restarts. To prevent this, Spark 1.2 adds an optional WAL, which buffers received data into a fault-tolerant file system (e.g. HDFS). See the streaming programming guide for more details.
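The WAL is opt-in. A minimal sketch of enabling it, assuming the spark.streaming.receiver.writeAheadLog.enable property and an HDFS checkpoint path (both illustrative here, not taken from the release notes):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setAppName("wal-sketch")
      .setMaster("local[2]")
      // Durably buffer received data before it is processed.
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")

    val ssc = new StreamingContext(conf, Seconds(1))
    // The WAL requires a checkpoint directory on a fault-tolerant file system.
    ssc.checkpoint("hdfs:///tmp/streaming-checkpoints")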
Spark 1.2 previews a new set of machine learning APIs in a package called spark.ml that supports learning pipelines, where multiple algorithms are run in sequence with varying parameters. This type of pipeline is common in practical machine learning deployments. The new ML package uses Spark’s SchemaRDD to represent ML datasets, providing direct interoperability with Spark SQL. In addition to the new API, Spark 1.2 extends decision trees with two tree ensemble methods: random forests and gradient-boosted trees, among the most successful tree-based models for classification and regression. Finally, MLlib’s Python implementation receives a major update in 1.2 to simplify the process of adding Python APIs, along with better Python API coverage.

To download Spark 1.3 visit the downloads page.

Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports multi-level aggregation trees to help speed up expensive reduce operations. Improved error reporting has been added for certain gotcha operations. Spark’s Jetty dependency is now shaded to help avoid conflicts with user programs. Spark now supports SSL encryption for some communication endpoints. Finally, real-time GC metrics and record counts have been added to the UI.

Spark 1.3 adds a new DataFrames API that provides powerful and convenient operators when working with structured datasets. The DataFrame is an evolution of the base RDD API that includes named fields along with schema information. It’s easy to construct a DataFrame from sources such as Hive tables, JSON data, a JDBC database, or any implementation of Spark’s new data source API. DataFrames will become a common interchange format between Spark components and when importing and exporting data to other systems. DataFrames are supported in Python, Scala, and Java.

In this release Spark MLlib introduces several new algorithms: latent Dirichlet allocation (LDA) for topic modeling, multinomial logistic regression for multiclass classification, Gaussian mixture model (GMM) and power iteration clustering for clustering, FP-growth for frequent pattern mining, and a block matrix abstraction for distributed linear algebra. Initial support has been added for model import/export in an exchangeable format, which will be expanded in future versions to cover more model types in Java/Python/Scala. The implementations of k-means and ALS receive updates that lead to significant performance gains. PySpark now supports the ML pipeline API added in Spark 1.2, as well as gradient-boosted trees and Gaussian mixture models. Finally, the ML pipeline API has been ported to support the new DataFrames abstraction.
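A minimal sketch of the Spark 1.3 DataFrame API described above, assuming a local people.json file with name and age fields (all placeholders, not from the release notes):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("df-sketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)

    // Construct a DataFrame from a JSON source; the schema is inferred.
    val df = sqlContext.jsonFile("people.json")

    // Named fields plus schema information enable relational-style operators.
    df.printSchema()
    df.select("name", "age").filter(df("age") > 21).show()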
Spark 1.3 introduces a new direct Kafka API (docs) which enables exactly-once delivery without the use of write ahead logs. It also adds a Python Kafka API along with infrastructure for additional Python APIs in future releases. An online version of logistic regression and the ability to read binary records have also been added. For stateful operations, support has been added for loading an initial state RDD. Finally, the streaming programming guide has been updated to include information about SQL and DataFrame operations within streaming applications, and important clarifications to the fault-tolerance semantics.

GraphX adds a handful of utility functions in this release, including conversion into a canonical edge graph.
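A minimal sketch of the direct Kafka API described above; the broker address and topic name are placeholders:

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val ssc = new StreamingContext(new SparkConf().setAppName("direct-kafka-sketch"), Seconds(2))

    // The direct stream tracks Kafka offsets itself, so no receiver or write
    // ahead log is needed for exactly-once delivery.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, Map("metadata.broker.list" -> "broker1:9092"), Set("events"))

    stream.map(_._2).count().print()
    ssc.start()
    ssc.awaitTermination()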
@@ -331,7 +331,7 @@ We’d especially like to thank Patrick Wendell for acting as the release manage
diff --git a/site/releases/spark-release-0-8-1.html b/site/releases/spark-release-0-8-1.html
index ad1bd3cec..8c93ac72b 100644
--- a/site/releases/spark-release-0-8-1.html
+++ b/site/releases/spark-release-0-8-1.html
@@ -295,7 +295,7 @@
diff --git a/site/releases/spark-release-0-9-0.html b/site/releases/spark-release-0-9-0.html
index 4c7eda2c9..beff7a554 100644
--- a/site/releases/spark-release-0-9-0.html
+++ b/site/releases/spark-release-0-9-0.html
@@ -389,7 +389,7 @@
diff --git a/site/releases/spark-release-0-9-1.html b/site/releases/spark-release-0-9-1.html
index 60766b3eb..386b37200 100644
--- a/site/releases/spark-release-0-9-1.html
+++ b/site/releases/spark-release-0-9-1.html
@@ -201,9 +201,9 @@
Improvements to other deployment scenarios
@@ -230,19 +230,19 @@
Optimizations to MLLib
-
Bug fixes and better API parity for PySpark
@@ -274,13 +274,13 @@
MLlib
-GraphX and Spark Streaming
@@ -275,7 +275,7 @@
The default value of spark.io.compression.codec is now snappy for improved memory usage. Old behavior can be restored by switching to lzf.
The default value of spark.broadcast.factory is now org.apache.spark.broadcast.TorrentBroadcastFactory for improved efficiency of broadcasts. Old behavior can be restored by switching to org.apache.spark.broadcast.HttpBroadcastFactory.
PySpark now performs external spilling during aggregations. Old behavior can be restored by setting spark.shuffle.spill to false.
PySpark uses a new heuristic for determining the parallelism of shuffle operations. Old behavior can be restored by setting spark.default.parallelism to the number of cores in the cluster.
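These properties can be set in spark-defaults.conf or programmatically; a minimal sketch restoring the old defaults listed above:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("pre-1.1-defaults")
      .setMaster("local[2]")
      .set("spark.io.compression.codec", "lzf")
      .set("spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")
      .set("spark.shuffle.spill", "false")
      // spark.default.parallelism would be set to the cluster’s core count.

    val sc = new SparkContext(conf)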
Spark Streaming
-MLLib
Spark Core
-DataFrame API
Spark Streaming
-GraphX
@@ -415,7 +415,7 @@
diff --git a/site/releases/spark-release-1-3-1.html b/site/releases/spark-release-1-3-1.html
index e13b6acf5..5c444b632 100644
--- a/site/releases/spark-release-1-3-1.html
+++ b/site/releases/spark-release-1-3-1.html
@@ -196,10 +196,10 @@
collect().

Spark SQL
Spark Streaming
@@ -302,7 +302,7 @@
diff --git a/site/releases/spark-release-1-4-0.html b/site/releases/spark-release-1-4-0.html
index c5ba82024..e6e1f0286 100644
--- a/site/releases/spark-release-1-4-0.html
+++ b/site/releases/spark-release-1-4-0.html
@@ -250,7 +250,7 @@ Python coverage. MLlib also adds several new algorithms.
-Spark streaming adds visual instrumentation graphs and significantly improved debugging information in the UI. It also enhances support for both Kafka and Kinesis.
+Spark streaming adds visual instrumentation graphs and significantly improved debugging information in the UI. It also enhances support for both Kafka and Kinesis.
Thanks to the following organizations, who helped benchmark or integration test release candidates:
Intel, Palantir, Cloudera, Mesosphere, Huawei, Shopify, Netflix, Yahoo, UC Berkeley and Databricks.
You can consult JIRA for the detailed changes. We have curated a list of high level changes here:
Joins using null-safe equality (<=>) will now execute using SortMergeJoin instead of computing a cartesian product.

mapWithState - a DStream transformation for stateful stream processing, supersedes updateStateByKey in functionality and performance.

To download Apache Spark 2.0.0, visit the downloads page. You can consult JIRA for the detailed changes. We have curated a list of high level changes here, grouped by major modules.
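A minimal sketch of mapWithState keeping a running count per key; the socket source, port, and checkpoint path are placeholders:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

    val ssc = new StreamingContext(
      new SparkConf().setAppName("map-with-state-sketch").setMaster("local[2]"), Seconds(1))
    ssc.checkpoint("/tmp/map-with-state")  // stateful operations require checkpointing

    val counts = ssc.socketTextStream("localhost", 9999)
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .mapWithState(StateSpec.function((key: String, value: Option[Int], state: State[Int]) => {
        val sum = state.getOption.getOrElse(0) + value.getOrElse(0)
        state.update(sum)  // carry the per-key count forward across batches
        (key, sum)
      }))

    counts.print()
    ssc.start()
    ssc.awaitTermination()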