Diffstat (limited to 'docs/scala-programming-guide.md')
 docs/scala-programming-guide.md | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index f25e9cca88..3ed86e460c 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -56,7 +56,7 @@ The `master` parameter is a string specifying a [Spark, Mesos or YARN cluster UR
to connect to, or a special "local" string to run in local mode, as described below. `appName` is
a name for your application, which will be shown in the cluster web UI. It's also possible to set
these variables [using a configuration file](cluster-overview.html#loading-configurations-from-a-file)
-which avoids hard-coding the master name in your application.
+which avoids hard-coding the master URL in your application.
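+
+For illustration, here is a minimal sketch of creating a context this way; the application name
+`MyApp` is a placeholder, and the master URL is left to be supplied at launch time:
+
+{% highlight scala %}
+import org.apache.spark.{SparkConf, SparkContext}
+
+// Only the application name is hard-coded here; the master URL can be
+// injected externally, e.g. through spark-submit or a configuration file.
+val conf = new SparkConf().setAppName("MyApp")
+val sc = new SparkContext(conf)
+{% endhighlight %}
+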
In the Spark shell, a special interpreter-aware SparkContext is already created for you, in the
variable called `sc`. Making your own SparkContext will not work. You can set which master the
@@ -74,6 +74,11 @@ Or, to also add `code.jar` to its classpath, use:
$ ./bin/spark-shell --master local[4] --jars code.jar
{% endhighlight %}
+For a complete list of options, run the Spark shell with the `--help` option. Behind the scenes,
+the Spark shell invokes the more general [Spark submit script](cluster-overview.html#launching-applications-with-spark-submit)
+used for launching applications, and passes all of its parameters through to it. As a result, the
+two scripts accept the same set of options.
+
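+For instance, a flag such as `--jars` means the same thing to both scripts; submitting a packaged
+application directly might look like the following (the class and jar names are placeholders):
+
+{% highlight bash %}
+$ ./bin/spark-submit --master local[4] --jars code.jar --class my.main.App app.jar
+{% endhighlight %}
+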
### Master URLs
The master URL passed to Spark can be in one of the following formats:
@@ -98,7 +103,7 @@ cluster mode. The cluster location will be inferred based on the local Hadoop co
</td></tr>
</table>
-If no master URL is specified, the spark shell defaults to "local[*]".
+If no master URL is specified, the Spark shell defaults to `local[*]`.
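+
+The same URLs can also be set programmatically through `SparkConf.setMaster`; in this sketch the
+standalone master address is a made-up placeholder:
+
+{% highlight scala %}
+import org.apache.spark.SparkConf
+
+// Run locally with as many worker threads as logical cores (the shell's default).
+val localConf = new SparkConf().setAppName("Demo").setMaster("local[*]")
+
+// Connect to a standalone cluster; host and port are placeholders.
+val clusterConf = new SparkConf().setAppName("Demo").setMaster("spark://host:7077")
+{% endhighlight %}
+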
# Resilient Distributed Datasets (RDDs)
@@ -432,9 +437,7 @@ res2: Int = 10
You can see some [example Spark programs](http://spark.apache.org/examples.html) on the Spark website.
In addition, Spark includes several samples in `examples/src/main/scala`. Some of them have both Spark versions and local (non-parallel) versions, allowing you to see what had to be changed to make the program run on a cluster. You can run them by passing the class name to the `bin/run-example` script included in Spark; for example:
- ./bin/run-example org.apache.spark.examples.SparkPi
-
-Each example program prints usage help when run without any arguments.
+ ./bin/run-example SparkPi
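+
+To give a flavor of what such an example program contains, here is a minimal, self-contained
+sketch in the spirit of `SparkPi` (not the bundled source itself); it assumes the master URL is
+supplied externally, e.g. by `spark-submit`:
+
+{% highlight scala %}
+import org.apache.spark.{SparkConf, SparkContext}
+
+// A toy Monte Carlo estimate of pi: sample random points in the unit
+// square and count how many fall inside the unit circle.
+object MiniPi {
+  def main(args: Array[String]) {
+    val sc = new SparkContext(new SparkConf().setAppName("MiniPi"))
+    val n = 100000
+    val inside = sc.parallelize(1 to n).map { _ =>
+      val x = math.random * 2 - 1
+      val y = math.random * 2 - 1
+      if (x * x + y * y < 1) 1 else 0
+    }.reduce(_ + _)
+    println("Pi is roughly " + 4.0 * inside / n)
+    sc.stop()
+  }
+}
+{% endhighlight %}
+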
For help on optimizing your program, the [configuration](configuration.html) and
[tuning](tuning.html) guides provide information on best practices. They are especially important for