path: root/docs/scala-programming-guide.md
author    Matei Zaharia <matei@eecs.berkeley.edu>    2013-02-25 15:13:30 -0800
committer Matei Zaharia <matei@eecs.berkeley.edu>    2013-02-25 15:13:30 -0800
commit    490f056cddc3dc02066a1e2414be6576d6441d51 (patch)
tree      f97a548bc4bd37d57e10b6c232015fdb32e27d2a /docs/scala-programming-guide.md
parent    568bdaf8ae784c9b832f564cb99f1b81ad487f73 (diff)
Allow passing sparkHome and JARs to StreamingContext constructor
Also warns if spark.cleaner.ttl is not set in the constructor variant where you pass in your own SparkContext.
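For illustration, a minimal sketch of how the extended StreamingContext constructor might be called, assuming it mirrors SparkContext's (master, appName, ...) parameter order; the host, paths, and JAR name below are hypothetical, not taken from this commit:

{% highlight scala %}
import spark.streaming.{Seconds, StreamingContext}

// Hypothetical values: sparkHome and jars can now be passed directly
// to StreamingContext instead of constructing a SparkContext first.
val ssc = new StreamingContext(
  "spark://host:7077",        // master URL
  "MyStreamingApp",           // appName shown in the cluster web UI
  Seconds(1),                 // batch interval
  "/path/to/spark",           // sparkHome on cluster nodes
  Seq("target/my-app.jar"))   // JARs to ship to the cluster
{% endhighlight %}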
Diffstat (limited to 'docs/scala-programming-guide.md')
-rw-r--r--   docs/scala-programming-guide.md   4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index b98718a553..2315aadbdf 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -38,10 +38,10 @@ The first thing a Spark program must do is to create a `SparkContext` object, wh
This is done through the following constructor:
{% highlight scala %}
-new SparkContext(master, jobName, [sparkHome], [jars])
+new SparkContext(master, appName, [sparkHome], [jars])
{% endhighlight %}
-The `master` parameter is a string specifying a [Mesos](running-on-mesos.html) cluster to connect to, or a special "local" string to run in local mode, as described below. `jobName` is a name for your job, which will be shown in the Mesos web UI when running on a cluster. Finally, the last two parameters are needed to deploy your code to a cluster if running in distributed mode, as described later.
+The `master` parameter is a string specifying a [Spark or Mesos cluster URL](#master-urls) to connect to, or a special "local" string to run in local mode, as described below. `appName` is a name for your application, which will be shown in the cluster web UI. Finally, the last two parameters are needed to deploy your code to a cluster if running in distributed mode, as described later.
In the Spark shell, a special interpreter-aware SparkContext is already created for you, in the variable called `sc`. Making your own SparkContext will not work. You can set which master the context connects to using the `MASTER` environment variable. For example, to run on four cores, use
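For context, a minimal sketch of the renamed constructor parameter in use; the application name, sparkHome path, and JAR below are illustrative, not taken from the guide:

{% highlight scala %}
import spark.SparkContext

// appName (formerly jobName) appears in the cluster web UI; sparkHome and
// jars are only needed when deploying to a cluster in distributed mode.
val sc = new SparkContext(
  "local[4]",                 // master: the special "local" string, here with four cores
  "My App",                   // appName
  "/path/to/spark",           // sparkHome (optional)
  Seq("target/my-app.jar"))   // jars (optional)
{% endhighlight %}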