author     shane-huang <shengsheng.huang@intel.com>  2013-09-27 09:28:33 +0800
committer  shane-huang <shengsheng.huang@intel.com>  2013-09-27 09:28:33 +0800
commit     84849baf88d31cfaaeee158a947c4db1abe94ce6 (patch)
tree       364ffa6e77f252264e1ecaf9e517ff387920b8ab /docs
parent     714fdabd99bbff3a0cdec5dcf06b021a3a3f2da8 (diff)
parent     3a5aa920fc9839aa99ea1befc467cc1f60230f3d (diff)
Merge branch 'reorgscripts' into scripts-reorg
Diffstat (limited to 'docs')
-rw-r--r--  docs/running-on-yarn.md    4
-rw-r--r--  docs/spark-standalone.md  14
2 files changed, 9 insertions, 9 deletions
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index c611db0af4..767eb5cdac 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -42,7 +42,7 @@ This would be used to connect to the cluster, write to the dfs and submit jobs t
The command to launch the YARN Client is as follows:
- SPARK_JAR=<SPARK_ASSEMBLY_JAR_FILE> ./spark-class org.apache.spark.deploy.yarn.Client \
+ SPARK_JAR=<SPARK_ASSEMBLY_JAR_FILE> ./sbin/spark-class org.apache.spark.deploy.yarn.Client \
--jar <YOUR_APP_JAR_FILE> \
--class <APP_MAIN_CLASS> \
--args <APP_MAIN_ARGUMENTS> \
@@ -62,7 +62,7 @@ For example:
# Submit Spark's ApplicationMaster to YARN's ResourceManager, and instruct Spark to run the SparkPi example
$ SPARK_JAR=./assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
- ./spark-class org.apache.spark.deploy.yarn.Client \
+ ./sbin/spark-class org.apache.spark.deploy.yarn.Client \
--jar examples/target/scala-{{site.SCALA_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
--class org.apache.spark.examples.SparkPi \
--args yarn-standalone \
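For context, the command above submits Spark's ApplicationMaster to YARN's ResourceManager. A minimal way to confirm the submission from the same machine is the standard Hadoop YARN CLI; the application ID below is hypothetical:

    # list applications currently known to the ResourceManager
    $ yarn application -list
    # show the state and tracking URL of one application (ID is a made-up example)
    $ yarn application -status application_1380240000000_0001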
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 81cdbefd0c..b3f9160673 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -25,7 +25,7 @@ the master's web UI, which is [http://localhost:8080](http://localhost:8080) by
Similarly, you can start one or more workers and connect them to the master via:
- ./spark-class org.apache.spark.deploy.worker.Worker spark://IP:PORT
+ ./sbin/spark-class org.apache.spark.deploy.worker.Worker spark://IP:PORT
Once you have started a worker, look at the master's web UI ([http://localhost:8080](http://localhost:8080) by default).
You should see the new node listed there, along with its number of CPUs and memory (minus one gigabyte left for the OS).
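As a minimal sketch of the renamed layout, assuming Spark has been built from this branch and `spark-master-host` is a hypothetical hostname, a master and worker can be brought up by hand like so:

    # on the master machine: start a standalone master (web UI at http://localhost:8080)
    $ ./sbin/spark-class org.apache.spark.deploy.master.Master
    # on each worker machine: register a worker with it
    # (the hostname is hypothetical; 7077 is the master's default service port)
    $ ./sbin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master-host:7077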
@@ -67,12 +67,12 @@ To launch a Spark standalone cluster with the launch scripts, you need to create
Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/bin`:
-- `bin/start-master.sh` - Starts a master instance on the machine the script is executed on.
-- `bin/start-slaves.sh` - Starts a slave instance on each machine specified in the `conf/slaves` file.
-- `bin/start-all.sh` - Starts both a master and a number of slaves as described above.
-- `bin/stop-master.sh` - Stops the master that was started via the `bin/start-master.sh` script.
-- `bin/stop-slaves.sh` - Stops the slave instances that were started via `bin/start-slaves.sh`.
-- `bin/stop-all.sh` - Stops both the master and the slaves as described above.
+- `sbin/start-master.sh` - Starts a master instance on the machine the script is executed on.
+- `sbin/start-slaves.sh` - Starts a slave instance on each machine specified in the `conf/slaves` file.
+- `sbin/start-all.sh` - Starts both a master and a number of slaves as described above.
+- `sbin/stop-master.sh` - Stops the master that was started via the `sbin/start-master.sh` script.
+- `sbin/stop-slaves.sh` - Stops the slave instances that were started via `sbin/start-slaves.sh`.
+- `sbin/stop-all.sh` - Stops both the master and the slaves as described above.
Note that these scripts must be executed on the machine you want to run the Spark master on, not your local machine.
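To illustrate the relocated launch scripts end to end, here is a minimal sketch, assuming passwordless ssh from the master machine to two workers with hypothetical hostnames:

    # conf/slaves -- one worker hostname per line (hostnames are made-up examples)
    worker1.example.com
    worker2.example.com

    # then, on the master machine:
    $ ./sbin/start-all.sh
    # and later, to shut the master and all workers down:
    $ ./sbin/stop-all.sh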