author    Holden Karau <holden@pigscanfly.ca>    2016-04-06 16:00:29 -0700
committer Andrew Or <andrew@databricks.com>      2016-04-06 16:00:29 -0700
commit    457e58befe8cb7c346e54b344a45fa357b68cfc0 (patch)
tree      7f5be1eb5fbbbbd01bd5d3754e765dd39b3e1c5b /docs/building-spark.md
parent    9af5423ec28258becf27dbe89833b4f7d324d26a (diff)
[SPARK-14424][BUILD][DOCS] Update the build docs to switch from assembly to package and add a no…
## What changes were proposed in this pull request?

Change our build docs & shell scripts so that developers are aware of the change from "assembly" to "package".

## How was this patch tested?

Manually ran ./bin/spark-shell after ./build/sbt assembly and verified that the error message was printed, then ran the newly suggested build target and verified that ./bin/spark-shell runs afterwards.

Author: Holden Karau <holden@pigscanfly.ca>
Author: Holden Karau <holden@us.ibm.com>

Closes #12197 from holdenk/SPARK-1424-spark-class-broken-fix-build-docs.
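A rough sketch of the manual verification described above, assuming a checkout of this branch (the exact error text printed by `./bin/spark-shell` is not reproduced here):

    # Old target: per the commit message, spark-shell should now print an
    # error after an assembly-only build
    ./build/sbt assembly
    ./bin/spark-shell

    # Newly suggested target: spark-shell should start normally afterwards
    ./build/sbt package
    ./bin/spark-shell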
Diffstat (limited to 'docs/building-spark.md')
-rw-r--r--  docs/building-spark.md | 13
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 13aa80496e..40661604af 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -190,13 +190,6 @@ or
Java 8 tests are automatically enabled when a Java 8 JDK is detected.
If you have JDK 8 installed but it is not the system default, you can set JAVA_HOME to point to JDK 8 before running the tests.
-# Building for PySpark on YARN
-
-PySpark on YARN is only supported if the jar is built with Maven. Further, there is a known problem
-with building this assembly jar on Red Hat based operating systems (see [SPARK-1753](https://issues.apache.org/jira/browse/SPARK-1753)). If you wish to
-run PySpark on a YARN cluster with Red Hat installed, we recommend that you build the jar elsewhere,
-then ship it over to the cluster. We are investigating the exact cause for this.
-
# Packaging without Hadoop Dependencies for YARN
The assembly jar produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
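For illustration, a build using the `hadoop-provided` profile described above might look like the following sketch (not part of this patch; the remaining profiles are illustrative):

    # Build Spark without bundling Hadoop-ecosystem jars; at runtime they are
    # expected to come from each node via yarn.application.classpath
    ./build/mvn -Pyarn -Phadoop-provided -DskipTests package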
@@ -210,7 +203,7 @@ compilation. More advanced developers may wish to use SBT.
The SBT build is derived from the Maven POM files, and so the same Maven profiles and variables
can be set to control the SBT build. For example:
- build/sbt -Pyarn -Phadoop-2.3 assembly
+ build/sbt -Pyarn -Phadoop-2.3 package
To avoid the overhead of launching sbt each time you need to re-compile, you can launch sbt
in interactive mode by running `build/sbt`, and then run all build commands at the command
@@ -219,9 +212,9 @@ prompt. For more recommendations on reducing build time, refer to the
# Testing with SBT
-Some of the tests require Spark to be packaged first, so always run `build/sbt assembly` the first time. The following is an example of a correct (build, test) sequence:
+Some of the tests require Spark to be packaged first, so always run `build/sbt package` the first time. The following is an example of a correct (build, test) sequence:
- build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver assembly
+ build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver package
build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver test
To run only a specific test suite, use a command like the following:
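A sketch of such an invocation using sbt's `test-only` task (the suite name is illustrative, not part of this patch):

    build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver "test-only org.apache.spark.repl.ReplSuite"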