From 643c49c75ee95243fd19ae73b5170e6e6e212b8d Mon Sep 17 00:00:00 2001
From: Sean Owen
Date: Sun, 1 Nov 2015 12:25:49 +0000
Subject: [SPARK-11305][DOCS] Remove Third-Party Hadoop Distributions Doc Page

Remove Hadoop third party distro page, and move Hadoop cluster config info to
configuration page

CC pwendell

Author: Sean Owen

Closes #9298 from srowen/SPARK-11305.
---
 docs/programming-guide.md | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 22656fd791..f823b89a4b 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -34,8 +34,7 @@ To write a Spark application, you need to add a Maven dependency on Spark. Spark
     version = {{site.SPARK_VERSION}}
 
 In addition, if you wish to access an HDFS cluster, you need to add a dependency on
-`hadoop-client` for your version of HDFS. Some common HDFS version tags are listed on the
-[third party distributions](hadoop-third-party-distributions.html) page.
+`hadoop-client` for your version of HDFS.
 
     groupId = org.apache.hadoop
     artifactId = hadoop-client
@@ -66,8 +65,7 @@ To write a Spark application in Java, you need to add a dependency on Spark. Spa
     version = {{site.SPARK_VERSION}}
 
 In addition, if you wish to access an HDFS cluster, you need to add a dependency on
-`hadoop-client` for your version of HDFS. Some common HDFS version tags are listed on the
-[third party distributions](hadoop-third-party-distributions.html) page.
+`hadoop-client` for your version of HDFS.
 
     groupId = org.apache.hadoop
     artifactId = hadoop-client
@@ -93,8 +91,7 @@ This script will load Spark's Java/Scala libraries and allow you to submit appli
 You can also use `bin/pyspark` to launch an interactive Python shell.
 
 If you wish to access HDFS data, you need to use a build of PySpark linking
-to your version of HDFS. Some common HDFS version tags are listed on the
-[third party distributions](hadoop-third-party-distributions.html) page.
+to your version of HDFS.
 
 [Prebuilt packages](http://spark.apache.org/downloads.html) are also available on the Spark homepage
 for common HDFS versions.
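
For context, the `groupId`/`artifactId` coordinates retained by this patch correspond
to a build declaration along the following lines. This is a minimal sbt sketch, not
part of the patch itself; the version strings are assumed placeholders and should be
replaced with your actual Spark release and the HDFS version of your cluster.

    // build.sbt -- illustrative sketch only; the versions below are assumptions.
    // spark-core is the core Spark dependency named in the programming guide.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"

    // hadoop-client is needed only when the application reads from or writes
    // to HDFS; its version should match the HDFS cluster you connect to.
    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.1"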