Diffstat (limited to 'docs')
-rw-r--r--  docs/scala-programming-guide.md      9
-rw-r--r--  docs/spark-debugger.md               2
-rw-r--r--  docs/spark-standalone.md             4
-rw-r--r--  docs/streaming-programming-guide.md  6
4 files changed, 10 insertions, 11 deletions
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index a3171709ff..b8d89cf00f 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -60,17 +60,18 @@ which avoids hard-coding the master name in your application.
In the Spark shell, a special interpreter-aware SparkContext is already created for you, in the
variable called `sc`. Making your own SparkContext will not work. You can set which master the
-context connects to using the `MASTER` environment variable, and you can add JARs to the classpath
-with the `ADD_JARS` variable. For example, to run `bin/spark-shell` on exactly four cores, use
+context connects to using the `--master` argument, and you can add JARs to the classpath
+by passing a comma-separated list to the `--jars` argument. For example, to run
+`bin/spark-shell` on exactly four cores, use
{% highlight bash %}
-$ MASTER=local[4] ./bin/spark-shell
+$ ./bin/spark-shell --master local[4]
{% endhighlight %}
Or, to also add `code.jar` to its classpath, use:
{% highlight bash %}
-$ MASTER=local[4] ADD_JARS=code.jar ./bin/spark-shell
+$ ./bin/spark-shell --master local[4] --jars code.jar
{% endhighlight %}
### Master URLs
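
For comparison, a rough application-side equivalent of the two shell invocations above, as a sketch rather than part of this patch: `setMaster` plays the role of `--master` and `setJars` the role of `--jars` when you construct your own SparkContext (the app name and jar path are placeholders).

{% highlight scala %}
// Sketch only: build a SparkContext with the same master and jar
// settings that the shell flags above would apply.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("shell-flags-equivalent")  // placeholder app name
  .setMaster("local[4]")                 // same as --master local[4]
  .setJars(Seq("code.jar"))              // same as --jars code.jar
val sc = new SparkContext(conf)
{% endhighlight %}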
diff --git a/docs/spark-debugger.md b/docs/spark-debugger.md
index 891c2bfa89..35d06c51aa 100644
--- a/docs/spark-debugger.md
+++ b/docs/spark-debugger.md
@@ -39,7 +39,7 @@ where `path/to/event-log` is where you want the event log to go relative to `$SP
### Loading the event log into the debugger
-1. Run a Spark shell with `MASTER=<i>host</i> ./bin/spark-shell`.
+1. Run a Spark shell with `./bin/spark-shell --master <i>host</i>`.
2. Use `EventLogReader` to load the event log as follows:
{% highlight scala %}
spark> val r = new spark.EventLogReader(sc, Some("path/to/event-log"))
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 7e4eea323a..dc7f206e03 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -139,12 +139,12 @@ constructor](scala-programming-guide.html#initializing-spark).
To run an interactive Spark shell against the cluster, run the following command:
- MASTER=spark://IP:PORT ./bin/spark-shell
+ ./bin/spark-shell --master spark://IP:PORT
Note that if you are running spark-shell from one of the spark cluster machines, the `bin/spark-shell` script will
automatically set MASTER from the `SPARK_MASTER_IP` and `SPARK_MASTER_PORT` variables in `conf/spark-env.sh`.
-You can also pass an option `-c <numCores>` to control the number of cores that spark-shell uses on the cluster.
+You can also pass an option `--cores <numCores>` to control the number of cores that spark-shell uses on the cluster.
# Launching Compiled Spark Applications
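
As a sketch (not part of this patch), the same standalone connection can be made from application code; the `spark.cores.max` configuration property caps the cores an application takes on the cluster, playing roughly the role of the `--cores` flag above (the master URL and core count are placeholders).

{% highlight scala %}
// Sketch only: connect an application to a standalone master and
// limit how many cores it may use across the cluster.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("standalone-example")      // placeholder app name
  .setMaster("spark://IP:PORT")          // standalone master URL
  .set("spark.cores.max", "4")           // cap total cores used
val sc = new SparkContext(conf)
{% endhighlight %}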
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 946d6c4879..7ad06427ca 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -272,12 +272,10 @@ Time: 1357008430000 ms
</td>
</table>
-If you plan to run the Scala code for Spark Streaming-based use cases in the Spark
-shell, you should start the shell with the SparkConfiguration pre-configured to
-discard old batches periodically:
+You can also use Spark Streaming directly from the Spark shell:
{% highlight bash %}
-$ SPARK_JAVA_OPTS=-Dspark.cleaner.ttl=10000 bin/spark-shell
+$ bin/spark-shell
{% endhighlight %}
... and create your StreamingContext by wrapping the existing interactive shell
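
A minimal sketch of that step (the one-second batch interval is an assumption, not taken from this commit): wrap the shell's existing SparkContext `sc` in a StreamingContext.

{% highlight scala %}
// Sketch only: reuse the shell's SparkContext `sc` to create a
// StreamingContext with an assumed 1-second batch interval.
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(1))
{% endhighlight %}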