author     Andrew Or <andrewor14@gmail.com>        2014-05-12 19:44:14 -0700
committer  Patrick Wendell <pwendell@gmail.com>    2014-05-12 19:44:14 -0700
commit     2ffd1eafd28635dcecc0ac738d4a62c05d740925
tree       0c2b30a97dfd24fc6268d4f429111fe6c7348bbe /docs/hadoop-third-party-distributions.md
parent     ba96bb3d591130075763706526f86fb2aaffa3ae
[SPARK-1753 / 1773 / 1814] Update outdated docs for spark-submit, YARN, standalone etc.
YARN

- SparkPi was updated to no longer take the master as an argument; we should update the docs to reflect that.
- The default YARN build guide should use Maven, not SBT.
- This PR also adds a paragraph on steps to debug a YARN application.

Standalone

- Emphasize spark-submit more. Right now it's one small paragraph preceding the legacy way of launching through `org.apache.spark.deploy.Client`.
- The way the old docs set configurations / environment variables is outdated. This needs to reflect the recent changes to Spark's configuration system.

In general, this PR also adds a little more documentation on the new spark-shell, spark-submit, spark-defaults.conf, etc. here and there.

Author: Andrew Or <andrewor14@gmail.com>

Closes #701 from andrewor14/yarn-docs and squashes the following commits:

e2c2312 [Andrew Or] Merge in changes in #752 (SPARK-1814)
25cfe7b [Andrew Or] Merge in the warning from SPARK-1753
a8c39c5 [Andrew Or] Minor changes
336bbd9 [Andrew Or] Tabs -> spaces
4d9d8f7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
041017a [Andrew Or] Abstract Spark submit documentation to cluster-overview.html
3cc0649 [Andrew Or] Detail how to set configurations + remove legacy instructions
5b7140a [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
85a51fc [Andrew Or] Update run-example, spark-shell, configuration etc.
c10e8c7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
381fe32 [Andrew Or] Update docs for standalone mode
757c184 [Andrew Or] Add a note about the requirements for the debugging trick
f8ca990 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
924f04c [Andrew Or] Revert addition of --deploy-mode
d5fe17b [Andrew Or] Update the YARN docs
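(For context, a minimal sketch of the spark-submit invocation style this commit's docs emphasize; the examples jar path and the arguments here are illustrative, not taken from the patch:)

    # Hypothetical example: launching the SparkPi example on YARN via spark-submit.
    # The jar path depends on how Spark was built; "10" is the number of slices.
    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn-cluster \
      examples/target/scala-2.10/spark-examples-*.jar \
      10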
Diffstat (limited to 'docs/hadoop-third-party-distributions.md')
-rw-r--r--  docs/hadoop-third-party-distributions.md | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/docs/hadoop-third-party-distributions.md b/docs/hadoop-third-party-distributions.md
index 454877a7fa..a0aeab5727 100644
--- a/docs/hadoop-third-party-distributions.md
+++ b/docs/hadoop-third-party-distributions.md
@@ -9,12 +9,14 @@ with these distributions:
# Compile-time Hadoop Version
-When compiling Spark, you'll need to
-[set the SPARK_HADOOP_VERSION flag](index.html#a-note-about-hadoop-versions):
+When compiling Spark, you'll need to specify the Hadoop version by defining the `hadoop.version`
+property. For certain versions, you will need to specify additional profiles. For more detail,
+see the guide on [building with maven](building-with-maven.html#specifying-the-hadoop-version):
- SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
+ mvn -Dhadoop.version=1.0.4 -DskipTests clean package
+ mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
-The table below lists the corresponding `SPARK_HADOOP_VERSION` code for each CDH/HDP release. Note that
+The table below lists the corresponding `hadoop.version` code for each CDH/HDP release. Note that
some Hadoop releases are binary compatible across client versions. This means the pre-built Spark
distribution may "just work" without you needing to compile. That said, we recommend compiling with
the _exact_ Hadoop version you are running to avoid any compatibility errors.
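(As a hypothetical illustration of using an exact version string from the table below, a build against a CDH 4.2.0 MRv1 release might look like this; the version string shown is an assumption for illustration:)

    # Illustrative only: take the exact hadoop.version string from the CDH/HDP table
    mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 -DskipTests clean package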
@@ -46,6 +48,10 @@ the _exact_ Hadoop version you are running to avoid any compatibility errors.
</tr>
</table>
+In SBT, the equivalent can be achieved by setting the SPARK_HADOOP_VERSION flag:
+
+ SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
+
# Linking Applications to the Hadoop Version
In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that
version of `hadoop-client` to any Spark applications you run, so they can also talk to the HDFS version on the cluster.
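(Not part of the original page, but a quick way to sanity-check which `hadoop-client` version your application actually resolved is Maven's dependency plugin:)

    # Show the resolved hadoop-client version in your application's dependency tree
    mvn dependency:tree -Dincludes=org.apache.hadoop:hadoop-client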