From 3db404a43a90a9cca37090381857dc955496385a Mon Sep 17 00:00:00 2001
From: Matei Zaharia
Date: Sun, 1 Sep 2013 23:45:57 +0000
Subject: Run script fixes for Windows after package & assembly change

---
 docs/spark-standalone.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 9ab6ba0830..994a96f2c9 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -9,7 +9,7 @@ In addition to running on the Mesos or YARN cluster managers, Spark also provide
 
 You can start a standalone master server by executing:
 
-    ./spark-class spark.deploy.master.Master
+    ./spark-class org.apache.spark.deploy.master.Master
 
 Once started, the master will print out a `spark://HOST:PORT` URL for itself, which you
 can use to connect workers to it, or pass as the "master" argument to `SparkContext`. You can also find this URL on
@@ -17,7 +17,7 @@ the master's web UI, which is [http://localhost:8080](http://localhost:8080) by
 
 Similarly, you can start one or more workers and connect them to the master via:
 
-    ./spark-class spark.deploy.worker.Worker spark://IP:PORT
+    ./spark-class org.apache.spark.deploy.worker.Worker spark://IP:PORT
 
 Once you have started a worker, look at the master's web UI ([http://localhost:8080](http://localhost:8080) by default).
 You should see the new node listed there, along with its number of CPUs and memory (minus one gigabyte left for the OS).
-- 
cgit v1.2.3
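
As a quick sanity check of the renamed entry points, here is a minimal local walkthrough; it assumes the commands are run from the root of a Spark checkout where `spark-class` lives, and `spark://localhost:7077` is an example master address (use the `spark://HOST:PORT` URL the master actually prints on startup):

    # Start a master; it prints a spark://HOST:PORT URL for workers to connect to
    # and serves a web UI on http://localhost:8080 by default.
    ./spark-class org.apache.spark.deploy.master.Master

    # In a second shell, start a worker and point it at the master's URL
    # (spark://localhost:7077 is an example value, not guaranteed for your setup).
    ./spark-class org.apache.spark.deploy.worker.Worker spark://localhost:7077

The only change from the pre-assembly commands is the package prefix: the deploy classes moved from `spark.deploy.*` to `org.apache.spark.deploy.*`, so the fully qualified class names passed to `spark-class` must be updated accordingly.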