From 0fa5809768cf60ec62b4277f04e23a44dc1582e2 Mon Sep 17 00:00:00 2001
From: Matei Zaharia
Date: Mon, 30 Dec 2013 22:17:28 -0500
Subject: Updated docs for SparkConf and handled review comments

---
 docs/scala-programming-guide.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index 56d2a3a4a0..1db255ca53 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -49,6 +49,9 @@ This is done through the following constructor:
 new SparkContext(master, appName, [sparkHome], [jars])
 {% endhighlight %}
 
+or through `new SparkContext(conf)`, which takes a [SparkConf](api/core/index.html#org.apache.spark.SparkConf)
+object for more advanced configuration.
+
 The `master` parameter is a string specifying a [Spark or Mesos cluster URL](#master-urls) to connect to, or a special "local" string to run in local mode, as described below. `appName` is a name for your application, which will be shown in the cluster web UI. Finally, the last two parameters are needed to deploy your code to a cluster if running in distributed mode, as described later.
 
 In the Spark shell, a special interpreter-aware SparkContext is already created for you, in the variable called `sc`. Making your own SparkContext will not work. You can set which master the context connects to using the `MASTER` environment variable, and you can add JARs to the classpath with the `ADD_JARS` variable. For example, to run `spark-shell` on four cores, use
@@ -94,7 +97,6 @@ If you want to run your application on a cluster, you will need to specify the t
 
 If you run `spark-shell` on a cluster, you can add JARs to it by specifying the `ADD_JARS` environment variable before you launch it. This variable should contain a comma-separated list of JARs. For example, `ADD_JARS=a.jar,b.jar ./spark-shell` will launch a shell with `a.jar` and `b.jar` on its classpath. In addition, any new classes you define in the shell will automatically be distributed.
 
-
 # Resilient Distributed Datasets (RDDs)
 
 Spark revolves around the concept of a _resilient distributed dataset_ (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are currently two types of RDDs: *parallelized collections*, which take an existing Scala collection and run functions on it in parallel, and *Hadoop datasets*, which run functions on each record of a file in Hadoop distributed file system or any other storage system supported by Hadoop. Both types of RDDs can be operated on through the same methods.
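
A minimal sketch of the `SparkConf`-based construction this patch documents. The app name, `local[4]` master, and jar paths below are illustrative placeholders, not values from the patch; the builder methods shown (`setMaster`, `setAppName`, `setJars`) are part of SparkConf's standard API.

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

object SparkConfExample {
  def main(args: Array[String]): Unit = {
    // Build a SparkConf for more advanced configuration than the
    // positional (master, appName, [sparkHome], [jars]) constructor allows.
    val conf = new SparkConf()
      .setMaster("local[4]")            // placeholder; use a real cluster URL in distributed mode
      .setAppName("SparkConf Example")  // shown in the cluster web UI
      .setJars(Seq("a.jar", "b.jar"))   // hypothetical JARs to ship to the cluster

    // The alternative constructor described in the patch: new SparkContext(conf)
    val sc = new SparkContext(conf)

    // A parallelized collection, one of the two RDD types mentioned in the guide
    val rdd = sc.parallelize(1 to 100)
    println(rdd.reduce(_ + _))

    sc.stop()
  }
}
{% endhighlight %}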