author     Sean Owen <sowen@cloudera.com>     2016-11-03 17:27:23 -0700
committer  Reynold Xin <rxin@databricks.com>  2016-11-03 17:27:23 -0700
commit     dc4c60098641cf64007e2f0e36378f000ad5f6b1 (patch)
tree       fad72496e3f06613484fdac6c8c13353c79eb838 /docs
parent     f22954ad49bf5a32c7b6d8487cd38ffe0da904ca (diff)
[SPARK-18138][DOCS] Document that Java 7, Python 2.6, Scala 2.10, Hadoop < 2.6 are deprecated in Spark 2.1.0
## What changes were proposed in this pull request?

Document that Java 7, Python 2.6, Scala 2.10, and Hadoop < 2.6 are deprecated in Spark 2.1.0. This does not actually implement any of the changes in SPARK-18138; it just peppers the documentation with notices about them.

## How was this patch tested?

Doc build

Author: Sean Owen <sowen@cloudera.com>

Closes #15733 from srowen/SPARK-18138.
Diffstat (limited to 'docs')
-rw-r--r--  docs/building-spark.md     6
-rw-r--r--  docs/index.md              4
-rw-r--r--  docs/programming-guide.md  4
3 files changed, 14 insertions, 0 deletions
diff --git a/docs/building-spark.md b/docs/building-spark.md
index ebe46a42a1..2b404bd3e1 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -13,6 +13,7 @@ redirect_from: "building-with-maven.html"
The Maven-based build is the build of reference for Apache Spark.
Building Spark using Maven requires Maven 3.3.9 or newer and Java 7+.
+Note that support for Java 7 is deprecated as of Spark 2.0.0 and may be removed in Spark 2.2.0.
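For reference, the build itself is a single Maven invocation; a minimal sketch, run from a Spark source checkout:

    # build Spark, skipping tests; requires Maven 3.3.9+ and Java 7+
    ./build/mvn -DskipTests clean package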
### Setting up Maven's Memory Usage
@@ -79,6 +80,9 @@ Because HDFS is not protocol-compatible across versions, if you want to read fro
</tbody>
</table>
+Note that support for versions of Hadoop before 2.6 is deprecated as of Spark 2.1.0 and may be
+removed in Spark 2.2.0.
+
You can enable the `yarn` profile and optionally set the `yarn.version` property if it is different from `hadoop.version`. Spark only supports YARN versions 2.2.0 and later.
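As an illustrative sketch (the profile name and version values here are examples, not requirements), a YARN-enabled build pinned to a particular Hadoop release might look like:

    # build with YARN support against a specific Hadoop version
    ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package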
@@ -129,6 +133,8 @@ To produce a Spark package compiled with Scala 2.10, use the `-Dscala-2.10` prop
./dev/change-scala-version.sh 2.10
./build/mvn -Pyarn -Phadoop-2.4 -Dscala-2.10 -DskipTests clean package
+
+Note that support for Scala 2.10 is deprecated as of Spark 2.1.0 and may be removed in Spark 2.2.0.
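To switch back afterwards, the same helper script applies; a usage sketch, assuming 2.11 is the default Scala version for this release line:

    # revert the build to Scala 2.11
    ./dev/change-scala-version.sh 2.11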
## Building submodules individually
diff --git a/docs/index.md b/docs/index.md
index a7a92f6c4f..fe51439ae0 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -28,6 +28,10 @@ Spark runs on Java 7+, Python 2.6+/3.4+ and R 3.1+. For the Scala API, Spark {{s
uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
({{site.SCALA_BINARY_VERSION}}.x).
+Note that support for Java 7 and Python 2.6 is deprecated as of Spark 2.0.0, and support for
+Scala 2.10 and versions of Hadoop before 2.6 is deprecated as of Spark 2.1.0; all of these may be
+removed in Spark 2.2.0.
+
# Running the Examples and Shell
Spark comes with several sample programs. Scala, Java, Python and R examples are in the
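For instance, a bundled example can be run straight from the Spark directory; a minimal sketch using the shipped SparkPi example:

    # run the SparkPi example locally
    ./bin/run-example SparkPi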
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 7516579ec6..b9a2110b60 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -59,6 +59,8 @@ Spark {{site.SPARK_VERSION}} works with Java 7 and higher. If you are using Java
for concisely writing functions, otherwise you can use the classes in the
[org.apache.spark.api.java.function](api/java/index.html?org/apache/spark/api/java/function/package-summary.html) package.
+Note that support for Java 7 is deprecated as of Spark 2.0.0 and may be removed in Spark 2.2.0.
+
To write a Spark application in Java, you need to add a dependency on Spark. Spark is available through Maven Central at:
groupId = org.apache.spark
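The coordinates continue with an artifact and version; a sketch assuming the Scala 2.11 build of Spark 2.1.0:

    artifactId = spark-core_2.11
    version = 2.1.0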
@@ -87,6 +89,8 @@ import org.apache.spark.SparkConf
Spark {{site.SPARK_VERSION}} works with Python 2.6+ or Python 3.4+. It can use the standard CPython interpreter,
so C libraries like NumPy can be used. It also works with PyPy 2.3+.
+Note that support for Python 2.6 is deprecated as of Spark 2.0.0 and may be removed in Spark 2.2.0.
+
To run Spark applications in Python, use the `bin/spark-submit` script located in the Spark directory.
This script will load Spark's Java/Scala libraries and allow you to submit applications to a cluster.
You can also use `bin/pyspark` to launch an interactive Python shell.
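As a usage sketch (the application file name `my_app.py` is hypothetical):

    # submit a Python application to a local master with two worker threads
    ./bin/spark-submit --master local[2] my_app.py

    # or launch the interactive Python shell instead
    ./bin/pyspark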