author    Evan Chan <ev@ooyala.com>  2013-09-07 08:56:24 -0700
committer Evan Chan <ev@ooyala.com>  2013-09-07 08:56:24 -0700
commit    be1ee28ca630e663f54f0ca043e7f1877ccb3da8 (patch)
tree      f10c879a16a0e6cbccb7d13b9bdb9906aefb9db2 /docs
parent    ff1dbf210691988cbe8b09aafa37815060fdd7ac (diff)
CR feedback from Matei
Diffstat (limited to 'docs')
-rw-r--r--  docs/index.md            | 3 ---
-rw-r--r--  docs/spark-standalone.md | 5 +----
2 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/docs/index.md b/docs/index.md
index ee82c207d7..d3aacc629f 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -21,9 +21,6 @@ Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with
For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_VERSION}}. If you write applications in Scala, you will need to use this same version of Scala in your own program -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
-Note: if you are building a binary distribution using `./make-distribution.sh`, you will not need to run
-`sbt/sbt assembly`.
-
# Testing the Build
Spark comes with several sample programs in the `examples` directory.
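For context, a minimal sketch of the build-and-test flow the surrounding docs describe. The `run-example` script name and the `SparkPi` example class are assumptions based on Spark checkouts of this era, not part of this commit:

```
# Build the Spark assembly with the bundled sbt
sbt/sbt assembly

# Smoke-test the build by running one of the bundled sample programs locally
# (script name and example class are assumptions; adjust to your checkout)
./run-example org.apache.spark.examples.SparkPi local
```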
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 30641bd777..69e1291580 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -5,7 +5,7 @@ title: Spark Standalone Mode
In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or use our provided [launch scripts](#cluster-launch-scripts). It is also possible to run these daemons on a single machine for testing.
-# Deploying Spark Standalone to a Cluster
+# Installing Spark Standalone to a Cluster
The easiest way to deploy Spark is by running the `./make-distribution.sh` script to create a binary distribution.
This distribution can be deployed to any machine with the Java runtime installed; there is no need to install Scala.
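As a rough illustration of that deploy flow (the hostname and target path here are hypothetical, and no script flags beyond the defaults are assumed):

```
# Build a binary distribution into dist/
./make-distribution.sh

# Ship dist/ to another node; only a Java runtime is needed there, not Scala
scp -r dist/ user@worker-node:/opt/spark
```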
@@ -13,9 +13,6 @@ This distribution can be deployed to any machine with the Java runtime installed
The recommended procedure is to deploy and start the master on one node first, get the master spark URL,
then modify `conf/spark-env.sh` in the `dist/` directory before deploying to all the other nodes.
-It is also possible to deploy the source directory once you have built it with `sbt assembly`. Scala 2.9.3
-will need to be deployed on all the machines as well, and SCALA_HOME will need to point to the Scala installation.
-
# Starting a Cluster Manually
You can start a standalone master server by executing:
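The command itself falls outside this hunk; as a hedged sketch, in standalone docs of this period it is the start-master script, and the worker invocation and example URL below are likewise assumptions rather than part of this diff:

```
# Start a standalone master (it reports a spark://HOST:PORT URL)
./bin/start-master.sh

# On each worker node, connect to the master using that URL
# (worker class and script name are assumptions based on Spark of this era)
./spark-class org.apache.spark.deploy.worker.Worker spark://MASTER-HOST:7077
```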