author    Andy Konwinski <andyk@berkeley.edu>  2013-03-17 14:47:44 -0700
committer Andy Konwinski <andyk@berkeley.edu>  2013-03-17 15:02:40 -0700
commit    ad7f0452ab27b71ede3eea67d03ebd6d1710ee90 (patch)
tree      7e03a5b420b2fa40da61fa8dff6a4b215d062ef5
parent    c1e9cdc49f89222b366a14a20ffd937ca0fb9adc (diff)
Adds page to docs about building using Maven.
Adds links to new instructions in:
* The main Spark project README.md
* The docs nav menu called "More"
* The docs Overview page under the "Building" and "Where to Go from Here" sections
-rw-r--r--   README.md                     2
-rwxr-xr-x   docs/_layouts/global.html     1
-rw-r--r--   docs/building-with-maven.md  66
-rw-r--r--   docs/index.md                 3
4 files changed, 72 insertions, 0 deletions
diff --git a/README.md b/README.md
index b0fc3524fa..1f8f7b6876 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,8 @@ which is packaged with it. To build Spark and its example programs, run:
sbt/sbt package
+Spark also supports building using Maven. If you would like to build with Maven, see the [instructions for building Spark with Maven](http://spark-project.org/docs/latest/building-with-maven.html) in the Spark documentation.
+
To run Spark, you will need to have Scala's bin directory in your `PATH`, or
you will need to set the `SCALA_HOME` environment variable to point to where
you've installed Scala. Scala must be accessible through one of these
diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html
index 280ead0323..f06ab2d5b0 100755
--- a/docs/_layouts/global.html
+++ b/docs/_layouts/global.html
@@ -90,6 +90,7 @@
<li class="dropdown">
<a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
<ul class="dropdown-menu">
+ <li><a href="building-with-maven.html">Building Spark with Maven</a></li>
<li><a href="configuration.html">Configuration</a></li>
<li><a href="tuning.html">Tuning Guide</a></li>
<li><a href="bagel-programming-guide.html">Bagel (Pregel on Spark)</a></li>
diff --git a/docs/building-with-maven.md b/docs/building-with-maven.md
new file mode 100644
index 0000000000..bbf89cf17a
--- /dev/null
+++ b/docs/building-with-maven.md
@@ -0,0 +1,66 @@
+---
+layout: global
+title: Building Spark with Maven
+---
+
+* This will become a table of contents (this text will be scraped).
+{:toc}
+
+Building Spark using Maven requires Maven 3 (the build process is tested with Maven 3.0.4) and Java 1.6 or newer.
+
+Building with Maven requires that a Hadoop profile be specified explicitly on the command line; there is no default. There are two profiles to choose from: one for Hadoop 1 and one for Hadoop 2.
+
+For Hadoop 1 (using 0.20.205.0), use:
+
+ $ mvn -Phadoop1 clean install
+
+
+For Hadoop 2 (using 2.0.0-mr1-cdh4.1.1), use:
+
+ $ mvn -Phadoop2 clean install
+
+The Maven build uses the scala-maven-plugin, which supports incremental and continuous compilation. For example:
+
+ $ mvn -Phadoop2 scala:cc
+
+…should run continuous compilation (i.e. wait for changes). However, this has not been tested extensively.
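+
+If you want to watch just one module rather than the whole project, Maven's standard `-pl` (project list) flag can scope the goal. A minimal sketch, untested here and assuming `core` is the module directory you are working in:
+
+    # watch and recompile only the core module (module name is an assumption)
+    $ mvn -Phadoop2 -pl core scala:cc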
+
+## Spark Tests in Maven ##
+
+Tests are run by default via the scalatest-maven-plugin.
+
+To skip test execution (but not compilation):
+
+ $ mvn -DskipTests -Phadoop2 clean install
+
+To run a specific test suite:
+
+ $ mvn -Phadoop2 -Dsuites=spark.repl.ReplSuite test
+
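+To run only the tests of a single module, Maven's standard `-pl` flag works as well. A sketch, assuming you have already done a full `install` so sibling modules are in your local repository, and using `repl` as a hypothetical example module:
+
+    # run only the repl module's tests (module name is an example)
+    $ mvn -Phadoop2 -pl repl test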
+
+## Setting up JVM Memory Usage via Maven ##
+
+You might run into the following errors if you're using a vanilla installation of Maven:
+
+ [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/andyk/Development/spark/core/target/scala-2.9.2/classes...
+ [ERROR] PermGen space -> [Help 1]
+
+ [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/andyk/Development/spark/core/target/scala-2.9.2/classes...
+ [ERROR] Java heap space -> [Help 1]
+
+To fix these, set the `MAVEN_OPTS` environment variable:
+
+ export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=128M"
+
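+To make the setting persist across shells, you can append it to your shell profile; for example (assuming bash, adjust for your shell):
+
+    # append the export to your profile so new shells pick it up
+    $ echo 'export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=128M"' >> ~/.bash_profile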
+
+## Using with IntelliJ IDEA ##
+
+This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the pom.xml file in the project root folder, you only need to activate either the hadoop1 or hadoop2 profile in the "Maven Properties" pop-out. We have not tried Eclipse/Scala IDE with this.
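+
+If you prefer to generate the IDEA project files from the command line instead, the maven-idea-plugin offers one (untested here) alternative:
+
+    # generate .ipr/.iml project files with the chosen Hadoop profile active
+    $ mvn -Phadoop2 idea:idea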
+
+## Building Spark Debian Packages ##
+
+The Maven build includes support for building a Debian package containing a 'fat-jar' which includes the repl, the examples, and bagel. This can be created by specifying the deb profile:
+
+ $ mvn -Phadoop2,deb clean install
+
+The Debian package can then be found under repl/target. We added the short commit hash to the file name so that we can distinguish individual packages built for SNAPSHOT versions.
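+
+Once built, the package can be installed with dpkg as usual; a glob is used below because the exact file name varies with the version and commit hash:
+
+    # install the fat-jar package system-wide
+    $ sudo dpkg -i repl/target/*.deb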
diff --git a/docs/index.md b/docs/index.md
index 45facd8e63..51d505e1fa 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -22,6 +22,8 @@ Spark uses [Simple Build Tool](https://github.com/harrah/xsbt/wiki), which is bu
sbt/sbt package
+Spark also supports building using Maven. If you would like to build with Maven, see the [instructions for building Spark with Maven](building-with-maven.html).
+
# Testing the Build
Spark comes with a number of sample programs in the `examples` directory.
@@ -72,6 +74,7 @@ of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
**Other documents:**
+* [Building Spark with Maven](building-with-maven.html): Build Spark using the Maven build tool
* [Configuration](configuration.html): customize Spark via its configuration system
* [Tuning Guide](tuning.html): best practices to optimize performance and memory use
* [Bagel](bagel-programming-guide.html): an implementation of Google's Pregel on Spark