From fcfe4f920484b64b01e4e22219d59c78ffd17054 Mon Sep 17 00:00:00 2001
From: shane-huang
Date: Mon, 23 Sep 2013 12:42:34 +0800
Subject: add admin scripts to sbin

Signed-off-by: shane-huang
---
 docs/spark-standalone.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

(limited to 'docs/spark-standalone.md')

diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 9d4ad1ec8d..b3f9160673 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -67,12 +67,12 @@ To launch a Spark standalone cluster with the launch scripts, you need to create
 
 Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/bin`:
 
-- `bin/start-master.sh` - Starts a master instance on the machine the script is executed on.
-- `bin/start-slaves.sh` - Starts a slave instance on each machine specified in the `conf/slaves` file.
-- `bin/start-all.sh` - Starts both a master and a number of slaves as described above.
-- `bin/stop-master.sh` - Stops the master that was started via the `bin/start-master.sh` script.
-- `bin/stop-slaves.sh` - Stops the slave instances that were started via `bin/start-slaves.sh`.
-- `bin/stop-all.sh` - Stops both the master and the slaves as described above.
+- `sbin/start-master.sh` - Starts a master instance on the machine the script is executed on.
+- `sbin/start-slaves.sh` - Starts a slave instance on each machine specified in the `conf/slaves` file.
+- `sbin/start-all.sh` - Starts both a master and a number of slaves as described above.
+- `sbin/stop-master.sh` - Stops the master that was started via the `sbin/start-master.sh` script.
+- `sbin/stop-slaves.sh` - Stops the slave instances that were started via `sbin/start-slaves.sh`.
+- `sbin/stop-all.sh` - Stops both the master and the slaves as described above.
 
 Note that these scripts must be executed on the machine you want to run the Spark master on, not your local machine.
--
cgit v1.2.3
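For reference, a minimal sketch of driving the relocated scripts from `SPARK_HOME` on the intended master machine; the worker hostnames below are placeholders, not part of the patch:

```sh
# Run from SPARK_HOME on the machine that should host the master.

# List one worker per line in conf/slaves (placeholder hostnames).
cat > conf/slaves <<'EOF'
worker1.example.com
worker2.example.com
EOF

# Start a master here and a slave instance on each host in conf/slaves.
sbin/start-all.sh

# Later, shut the master and all slaves back down.
sbin/stop-all.sh
```

The finer-grained pairs (`sbin/start-master.sh`/`sbin/stop-master.sh`, `sbin/start-slaves.sh`/`sbin/stop-slaves.sh`) work the same way when only one side of the cluster needs cycling.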