author     Dongjoon Hyun <dongjoon@apache.org>    2016-03-18 21:32:48 -0700
committer  Reynold Xin <rxin@databricks.com>      2016-03-18 21:32:48 -0700
commit     c11ea2e4138acdd8d4ed487049ded35346bca528 (patch)
tree       85f112e9d24eea294fb93fa82383b15b9eabfa77 /docs
parent     f43a26ef9260396761e28aafd5c7b9600c2b04d9 (diff)
[MINOR][DOCS] Update build descriptions and commands
## What changes were proposed in this pull request?

This PR updates Scala and Hadoop versions in the build description and commands in the `Building Spark` documents.

## How was this patch tested?

N/A

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #11838 from dongjoon-hyun/fix_doc_building_spark.
Diffstat (limited to 'docs')
-rw-r--r--  docs/building-spark.md    11
-rw-r--r--  docs/index.md              4
-rw-r--r--  docs/running-on-mesos.md   4
-rw-r--r--  docs/running-on-yarn.md    4
4 files changed, 13 insertions, 10 deletions
diff --git a/docs/building-spark.md b/docs/building-spark.md
index e478954c62..1e202acb9e 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -98,8 +98,11 @@ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package
# Apache Hadoop 2.4.X or 2.5.X
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=VERSION -DskipTests clean package
-Versions of Hadoop after 2.5.X may or may not work with the -Phadoop-2.4 profile (they were
-released after this version of Spark).
+# Apache Hadoop 2.6.X
+mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
+
+# Apache Hadoop 2.7.X and later
+mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=VERSION -DskipTests clean package
# Different versions of HDFS and YARN.
mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Dyarn.version=2.2.0 -DskipTests clean package
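(Aside, not part of the patch: a quick way to confirm which `hadoop.version` a given profile resolves to is the maven-help-plugin's `evaluate` goal; the value is printed in the build output, and the exact formatting depends on the Maven version in use.)

    # prints the resolved hadoop.version for the hadoop-2.7 profile
    mvn -Pyarn -Phadoop-2.7 help:evaluate -Dexpression=hadoop.version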
@@ -140,10 +143,10 @@ It's possible to build Spark sub-modules using the `mvn -pl` option.
For instance, you can build the Spark Streaming module using:
{% highlight bash %}
-mvn -pl :spark-streaming_2.10 clean install
+mvn -pl :spark-streaming_2.11 clean install
{% endhighlight %}
-where `spark-streaming_2.10` is the `artifactId` as defined in `streaming/pom.xml` file.
+where `spark-streaming_2.11` is the `artifactId` as defined in `streaming/pom.xml` file.
# Continuous Compilation
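(Aside, not part of the patch: the same `-pl` pattern works for any other module by substituting its `artifactId`, and Maven's `-am` ("also make") flag builds the module's in-tree dependencies first. The artifactId below is an assumption based on `sql/core/pom.xml`; check the module's own `pom.xml` for the exact name.)

    # build the SQL module plus the modules it depends on
    mvn -pl :spark-sql_2.11 -am -DskipTests clean install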
diff --git a/docs/index.md b/docs/index.md
index 9dfc52a2bd..20eab567a5 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -130,8 +130,8 @@ options for deployment:
* [StackOverflow tag `apache-spark`](http://stackoverflow.com/questions/tagged/apache-spark)
* [Mailing Lists](http://spark.apache.org/mailing-lists.html): ask questions about Spark here
* [AMP Camps](http://ampcamp.berkeley.edu/): a series of training camps at UC Berkeley that featured talks and
- exercises about Spark, Spark Streaming, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/3/),
- [slides](http://ampcamp.berkeley.edu/3/) and [exercises](http://ampcamp.berkeley.edu/3/exercises/) are
+ exercises about Spark, Spark Streaming, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/6/),
+ [slides](http://ampcamp.berkeley.edu/6/) and [exercises](http://ampcamp.berkeley.edu/6/exercises/) are
available online for free.
* [Code Examples](http://spark.apache.org/examples.html): more are also available in the `examples` subfolder of Spark ([Scala]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples),
[Java]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/examples),
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 3a832de95f..293a82882e 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -167,8 +167,8 @@ For example:
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master mesos://207.184.161.138:7077 \
- --deploy-mode cluster
- --supervise
+ --deploy-mode cluster \
+ --supervise \
--executor-memory 20G \
--total-executor-cores 100 \
http://path/to/examples.jar \
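(Aside, not part of the patch: the trailing backslashes added here are what make the multi-line invocation a single shell command; without them the shell stops after `--deploy-mode cluster` and then tries to run `--supervise` as a command of its own. A minimal shell illustration:)

    # one command, equivalent to: echo one two
    echo one \
      two

    # two commands: "echo one", then the shell fails to find a program named "two"
    echo one
      two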
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 8045f8c5b8..c775fe710f 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -49,8 +49,8 @@ In `cluster` mode, the driver runs on a different machine than the client, so `S
$ ./bin/spark-submit --class my.main.Class \
--master yarn \
--deploy-mode cluster \
- --jars my-other-jar.jar,my-other-other-jar.jar
- my-main-jar.jar
+ --jars my-other-jar.jar,my-other-other-jar.jar \
+ my-main-jar.jar \
app_arg1 app_arg2
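(Aside, not part of the patch: for comparison, the same application can be submitted in `client` mode, which keeps the driver on the submitting machine; only the `--deploy-mode` value changes. A sketch reusing the hypothetical jar and class names from the example above:)

    $ ./bin/spark-submit --class my.main.Class \
        --master yarn \
        --deploy-mode client \
        --jars my-other-jar.jar,my-other-other-jar.jar \
        my-main-jar.jar \
        app_arg1 app_arg2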