author     Sean Owen <sowen@cloudera.com>    2015-11-01 12:25:49 +0000
committer  Sean Owen <sowen@cloudera.com>    2015-11-01 12:25:49 +0000
commit     643c49c75ee95243fd19ae73b5170e6e6e212b8d (patch)
tree       ff52206281101054824fa1152b2cb8cff53e196d /docs/programming-guide.md
parent     aa494a9c2ebd59baec47beb434cd09bf3f188218 (diff)
[SPARK-11305][DOCS] Remove Third-Party Hadoop Distributions Doc Page
Remove Hadoop third party distro page, and move Hadoop cluster config info to configuration page.

CC pwendell

Author: Sean Owen <sowen@cloudera.com>

Closes #9298 from srowen/SPARK-11305.
Diffstat (limited to 'docs/programming-guide.md')
-rw-r--r--  docs/programming-guide.md | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 22656fd791..f823b89a4b 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -34,8 +34,7 @@ To write a Spark application, you need to add a Maven dependency on Spark. Spark
version = {{site.SPARK_VERSION}}
In addition, if you wish to access an HDFS cluster, you need to add a dependency on
-`hadoop-client` for your version of HDFS. Some common HDFS version tags are listed on the
-[third party distributions](hadoop-third-party-distributions.html) page.
+`hadoop-client` for your version of HDFS.
groupId = org.apache.hadoop
artifactId = hadoop-client
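
The hunk above trims the Scala section's dependency instructions down to the `hadoop-client` coordinates. For readers following along, a minimal sbt sketch of what the surviving text describes; the version numbers below are illustrative placeholders, not values from this commit, and should be matched to your actual Spark release and HDFS cluster:

```scala
// Hypothetical build.sbt entries: depend on Spark, plus hadoop-client
// matched to your cluster's HDFS version (both versions are placeholders).
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0"
```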
@@ -66,8 +65,7 @@ To write a Spark application in Java, you need to add a dependency on Spark. Spa
version = {{site.SPARK_VERSION}}
In addition, if you wish to access an HDFS cluster, you need to add a dependency on
-`hadoop-client` for your version of HDFS. Some common HDFS version tags are listed on the
-[third party distributions](hadoop-third-party-distributions.html) page.
+`hadoop-client` for your version of HDFS.
groupId = org.apache.hadoop
artifactId = hadoop-client
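
The Java section receives the same edit. Expressed as Maven coordinates, the dependency the guide refers to would look roughly like this sketch; again, the versions shown are stand-ins for your Spark and HDFS versions:

```xml
<!-- Illustrative pom.xml entries; versions are placeholders. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0</version>
</dependency>
```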
@@ -93,8 +91,7 @@ This script will load Spark's Java/Scala libraries and allow you to submit appli
You can also use `bin/pyspark` to launch an interactive Python shell.
If you wish to access HDFS data, you need to use a build of PySpark linking
-to your version of HDFS. Some common HDFS version tags are listed on the
-[third party distributions](hadoop-third-party-distributions.html) page.
+to your version of HDFS.
[Prebuilt packages](http://spark.apache.org/downloads.html) are also available on the Spark homepage
for common HDFS versions.
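
Once PySpark is linked against the right HDFS client, reading HDFS data needs no extra configuration in user code. A minimal sketch using the Spark 1.x RDD API; the NameNode address and file path are hypothetical:

```python
# Hypothetical example: count the lines of an HDFS file with PySpark.
# The hdfs:// host, port, and path below are placeholders.
from pyspark import SparkContext

sc = SparkContext("local[2]", "HDFSLineCount")
lines = sc.textFile("hdfs://namenode:8020/user/alice/input.txt")
print(lines.count())
sc.stop()
```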