author    Matei Zaharia <matei@eecs.berkeley.edu>  2013-08-27 19:39:54 -0700
committer Matei Zaharia <matei@eecs.berkeley.edu>  2013-08-29 21:19:06 -0700
commit    2de756ff195e580007d5d96d49fa27634e04c765 (patch)
tree      691edcf7e36889f350ad3bb4b6d6ff38a868fa9a /docs
parent    666d93c294458cb056cb590eb11bb6cf979861e5 (diff)
Update some build instructions because only sbt assembly and mvn package
are now needed
Diffstat (limited to 'docs')
-rw-r--r--  docs/building-with-maven.md       14
-rw-r--r--  docs/index.md                      2
-rw-r--r--  docs/python-programming-guide.md   2
-rw-r--r--  docs/quick-start.md                2
4 files changed, 10 insertions, 10 deletions
diff --git a/docs/building-with-maven.md b/docs/building-with-maven.md
index a9f2cb8a7a..72d37fec0a 100644
--- a/docs/building-with-maven.md
+++ b/docs/building-with-maven.md
@@ -15,18 +15,18 @@ To enable support for HDFS and other Hadoop-supported storage systems, specify t
For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:
# Apache Hadoop 1.2.1
- $ mvn -Dhadoop.version=1.2.1 clean install
+ $ mvn -Dhadoop.version=1.2.1 clean package
# Cloudera CDH 4.2.0 with MapReduce v1
- $ mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 clean install
+ $ mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 clean package
For Apache Hadoop 2.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, enable the "hadoop2-yarn" profile:
# Apache Hadoop 2.0.5-alpha
- $ mvn -Phadoop2-yarn -Dhadoop.version=2.0.5-alpha clean install
+ $ mvn -Phadoop2-yarn -Dhadoop.version=2.0.5-alpha clean package
# Cloudera CDH 4.2.0 with MapReduce v2
- $ mvn -Phadoop2-yarn -Dhadoop.version=2.0.0-cdh4.2.0 clean install
+ $ mvn -Phadoop2-yarn -Dhadoop.version=2.0.0-cdh4.2.0 clean package
## Spark Tests in Maven ##
@@ -35,7 +35,7 @@ Tests are run by default via the scalatest-maven-plugin. With this you can do th
Skip test execution (but not compilation):
- $ mvn -Dhadoop.version=... -DskipTests clean install
+ $ mvn -Dhadoop.version=... -DskipTests clean package
To run a specific test suite:
@@ -72,8 +72,8 @@ This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the
## Building Spark Debian Packages ##
-It includes support for building a Debian package containing a 'fat-jar' which includes the repl, the examples and bagel. This can be created by specifying the deb profile:
+It includes support for building a Debian package containing a 'fat-jar' which includes the repl, the examples and bagel. This can be created by specifying the following profiles:
- $ mvn -Pdeb clean install
+ $ mvn -Prepl-bin -Pdeb clean package
The debian package can then be found under repl/target. We added the short commit hash to the file name so that we can distinguish individual packages build for SNAPSHOT versions.
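Every hunk in this file swaps `clean install` for `clean package`: only the assembled artifact is needed now, not an install into the local Maven repository. As a hedged sketch of the version-to-flags mapping the docs describe (the `build_cmd` helper is hypothetical and only prints the command, it does not run Maven):

```shell
#!/bin/sh
# Hypothetical helper: print the Maven invocation for a given Hadoop version.
# Per the docs above: MRv1/CDH-mr1 builds use no profile; Hadoop 2.x and
# 0.23.x (YARN) builds add the hadoop2-yarn profile. All use `clean package`.
build_cmd() {
  version="$1"
  case "$version" in
    *mr1*)      echo "mvn -Dhadoop.version=$version clean package" ;;
    2.*|0.23.*) echo "mvn -Phadoop2-yarn -Dhadoop.version=$version clean package" ;;
    *)          echo "mvn -Dhadoop.version=$version clean package" ;;
  esac
}

build_cmd 1.2.1        # -> mvn -Dhadoop.version=1.2.1 clean package
build_cmd 2.0.5-alpha  # -> mvn -Phadoop2-yarn -Dhadoop.version=2.0.5-alpha clean package
```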
diff --git a/docs/index.md b/docs/index.md
index e51a6998f6..ec9c7dd4f3 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -20,7 +20,7 @@ of these methods on slave nodes on your cluster.
Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with it. To compile the code, go into the top-level Spark directory and run
- sbt/sbt package
+ sbt/sbt assembly
Spark also supports building using Maven. If you would like to build using Maven, see the [instructions for building Spark with Maven](building-with-maven.html).
diff --git a/docs/python-programming-guide.md b/docs/python-programming-guide.md
index 794bff5647..15d3ebfcae 100644
--- a/docs/python-programming-guide.md
+++ b/docs/python-programming-guide.md
@@ -70,7 +70,7 @@ The script automatically adds the `pyspark` package to the `PYTHONPATH`.
The `pyspark` script launches a Python interpreter that is configured to run PySpark jobs. To use `pyspark` interactively, first build Spark, then launch it directly from the command line without any options:
{% highlight bash %}
-$ sbt/sbt package
+$ sbt/sbt assembly
$ ./pyspark
{% endhighlight %}
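The guide notes that the `pyspark` script adds the `pyspark` package to `PYTHONPATH` automatically. A minimal sketch of that mechanism, assuming the package lives under `$SPARK_HOME/python` (the function name and layout here are illustrative, not the script's actual code):

```shell
# Illustrative sketch: prepend Spark's python/ directory to PYTHONPATH,
# roughly what the pyspark launcher does before starting the interpreter.
add_pyspark_path() {
  spark_home="$1"
  if [ -n "$PYTHONPATH" ]; then
    PYTHONPATH="$spark_home/python:$PYTHONPATH"
  else
    PYTHONPATH="$spark_home/python"
  fi
  export PYTHONPATH
  echo "$PYTHONPATH"
}
```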
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 335643536a..4e9deadbaa 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -12,7 +12,7 @@ See the [programming guide](scala-programming-guide.html) for a more complete re
To follow along with this guide, you only need to have successfully built Spark on one machine. Simply go into your Spark directory and run:
{% highlight bash %}
-$ sbt/sbt package
+$ sbt/sbt assembly
{% endhighlight %}
# Interactive Analysis with the Spark Shell