From 76d5d2d3c5881782100571fa2976017fa8df4f57 Mon Sep 17 00:00:00 2001
From: Evan Chan
Date: Fri, 6 Sep 2013 13:53:00 -0700
Subject: Add notes about starting spark-shell

---
 docs/spark-standalone.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 994a96f2c9..7d4bb1d8be 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -22,7 +22,7 @@ Similarly, you can start one or more workers and connect them to the master via:
 
 Once you have started a worker, look at the master's web UI ([http://localhost:8080](http://localhost:8080) by default). You should see the new node listed there, along with its number of CPUs and memory (minus one gigabyte left for the OS).
 
-Finally, the following configuration options can be passed to the master and worker: 
+Finally, the following configuration options can be passed to the master and worker:
 
 <table class="table">
   <tr><th style="width:21%">Argument</th><th>Meaning</th></tr>
@@ -134,6 +134,10 @@ To run an interactive Spark shell against the cluster, run the following command:
 
     MASTER=spark://IP:PORT ./spark-shell
 
+Note that if you are running spark-shell from one of the Spark cluster machines, the `spark-shell` script will
+automatically set MASTER from the `SPARK_MASTER_IP` and `SPARK_MASTER_PORT` variables in `conf/spark-env.sh`.
+
+You can also pass an option `-c <numCores>` to control the number of cores that spark-shell uses on the cluster.
 
 # Job Scheduling
-- 
cgit v1.2.3