author    Evan Chan <ev@ooyala.com>    2013-09-06 14:03:44 -0700
committer Evan Chan <ev@ooyala.com>    2013-09-06 14:03:44 -0700
commit 88d53f0dff133920fe14e40a2c4e36dd1c241ec6 (patch)
tree   3663fe3330886ea64d4d8a7188d5d8c441d95fe0 /docs/spark-standalone.md
parent 5a18b854a704fc37dc268f9183552da8655d5b1d (diff)
"launch" scripts is more accurate terminology
Diffstat (limited to 'docs/spark-standalone.md')
-rw-r--r--  docs/spark-standalone.md  4
1 file changed, 2 insertions, 2 deletions
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index c54de082f9..d81b4cd0eb 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -3,7 +3,7 @@ layout: global
title: Spark Standalone Mode
---
-In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or use our provided [deploy scripts](#cluster-launch-scripts). It is also possible to run these daemons on a single machine for testing.
+In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or use our provided [launch scripts](#cluster-launch-scripts). It is also possible to run these daemons on a single machine for testing.
# Starting a Cluster Manually
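As a sketch of the manual route this heading introduces (the script and worker-class invocations below match the Spark standalone docs of this era; the master URL is a placeholder):

```bash
# On the master machine: start a standalone master. It logs a
# spark://HOST:PORT URL that workers and applications connect to.
./bin/start-master.sh

# On each worker machine: start a worker and point it at the master.
# spark://master.example.com:7077 is a placeholder URL.
./spark-class org.apache.spark.deploy.worker.Worker spark://master.example.com:7077
```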
@@ -55,7 +55,7 @@ Finally, the following configuration options can be passed to the master and wor
# Cluster Launch Scripts
-To launch a Spark standalone cluster with the deploy scripts, you need to create a file called `conf/slaves` in your Spark directory, which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. The master machine must be able to access each of the slave machines via password-less `ssh` (using a private key). For testing, you can just put `localhost` in this file.
+To launch a Spark standalone cluster with the launch scripts, you need to create a file called `conf/slaves` in your Spark directory, which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. The master machine must be able to access each of the slave machines via password-less `ssh` (using a private key). For testing, you can just put `localhost` in this file.
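A minimal sketch of that setup, assuming two workers; the hostnames are placeholders, and the key commands shown are one common way to arrange password-less `ssh`, not the only one:

```bash
# conf/slaves: one worker hostname per line.
cat > conf/slaves <<'EOF'
worker1.example.com
worker2.example.com
EOF

# Give the master password-less ssh access to each worker:
# generate a key (if none exists) and install the public key.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id worker1.example.com
ssh-copy-id worker2.example.com
```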
Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/bin`:
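For illustration, a typical launch/stop cycle with those scripts (the names below are the ones the documentation of this period lists in `SPARK_HOME/bin`; verify them against your version):

```bash
# Run on the master machine: starts a master locally plus one
# worker on every host listed in conf/slaves.
./bin/start-all.sh

# Shut down the workers and the master started above.
./bin/stop-all.sh
```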