author    Sandy Ryza <sandy@cloudera.com>          2014-07-23 23:09:25 -0700
committer Patrick Wendell <pwendell@gmail.com>     2014-07-23 23:11:26 -0700
commit    e34922a221738bae1195d8ace90369c9ddc3a48d (patch)
tree      644023570fd785b835094002cad336aa2bfe0733 /docs
parent    78d18fdbaa62d8ed235c29b2e37fd6607263c639 (diff)
SPARK-2310. Support arbitrary Spark properties on the command line with spark-submit

The PR allows invocations like
spark-submit --class org.MyClass --spark.shuffle.spill false myjar.jar

Author: Sandy Ryza <sandy@cloudera.com>

Closes #1253 from sryza/sandy-spark-2310 and squashes the following commits:

1dc9855 [Sandy Ryza] More doc and cleanup
00edfb9 [Sandy Ryza] Review comments
91b244a [Sandy Ryza] Change format to --conf PROP=VALUE
8fabe77 [Sandy Ryza] SPARK-2310. Support arbitrary Spark properties on the command line with spark-submit
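Note that the invocation in the PR description predates the review change (commit 91b244a switched the format to `--conf PROP=VALUE`). With the merged syntax, an equivalent call would look like the sketch below; `org.MyClass` and `myjar.jar` are the placeholder names from the description, not real artifacts.

{% highlight bash %}
# Sketch of the merged --conf PROP=VALUE syntax; org.MyClass and
# myjar.jar are placeholders from the PR description.
./bin/spark-submit \
  --class org.MyClass \
  --conf spark.shuffle.spill=false \
  myjar.jar
{% endhighlight %}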
Diffstat (limited to 'docs')
-rw-r--r--  docs/configuration.md            8
-rw-r--r--  docs/submitting-applications.md  2
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/docs/configuration.md b/docs/configuration.md
index 02af461267..cb0c65e2d2 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -42,13 +42,15 @@ val sc = new SparkContext(new SparkConf())
Then, you can supply configuration values at runtime:
{% highlight bash %}
-./bin/spark-submit --name "My fancy app" --master local[4] myApp.jar
+./bin/spark-submit --name "My app" --master local[4] --conf spark.shuffle.spill=false \
+  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
{% endhighlight %}
The Spark shell and [`spark-submit`](cluster-overview.html#launching-applications-with-spark-submit)
tool support two ways to load configurations dynamically. The first is command line options,
-such as `--master`, as shown above. Running `./bin/spark-submit --help` will show the entire list
-of options.
+such as `--master`, as shown above. `spark-submit` can accept any Spark property using the `--conf`
+flag, but uses special flags for properties that play a part in launching the Spark application.
+Running `./bin/spark-submit --help` will show the entire list of these options.
`bin/spark-submit` will also read configuration options from `conf/spark-defaults.conf`, in which
each line consists of a key and a value separated by whitespace. For example:
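The hunk ends before the file's example block; for context, entries in `conf/spark-defaults.conf` might look like the following sketch (the property names are real Spark properties of the era, but the values are illustrative, not recommendations).

{% highlight bash %}
# Illustrative conf/spark-defaults.conf contents: one property per line,
# key and value separated by whitespace.
spark.master            spark://5.6.7.8:7077
spark.executor.memory   512m
spark.serializer        org.apache.spark.serializer.KryoSerializer
{% endhighlight %}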
diff --git a/docs/submitting-applications.md b/docs/submitting-applications.md
index e05883072b..45b70b1a54 100644
--- a/docs/submitting-applications.md
+++ b/docs/submitting-applications.md
@@ -33,6 +33,7 @@ dependencies, and can support different cluster managers and deploy modes that Spark supports:
--class <main-class> \
--master <master-url> \
--deploy-mode <deploy-mode> \
+ --conf <key>=<value> \
... # other options
<application-jar> \
[application-arguments]
@@ -43,6 +44,7 @@ Some of the commonly used options are:
* `--class`: The entry point for your application (e.g. `org.apache.spark.examples.SparkPi`)
* `--master`: The [master URL](#master-urls) for the cluster (e.g. `spark://23.195.26.187:7077`)
* `--deploy-mode`: Whether to deploy your driver on the worker nodes (`cluster`) or locally as an external client (`client`) (default: `client`)*
+* `--conf`: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes (as shown).
* `application-jar`: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an `hdfs://` path or a `file://` path that is present on all nodes.
* `application-arguments`: Arguments passed to the main method of your main class, if any
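Putting the options above together, a complete invocation might look like the following sketch; the class and master URL are taken from the bullets above, while the jar path, application argument, and chosen `--conf` properties are placeholders.

{% highlight bash %}
# Hypothetical end-to-end invocation. Note the quotes around the --conf
# pair whose value contains spaces, per the --conf bullet above.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://23.195.26.187:7077 \
  --deploy-mode client \
  --conf spark.shuffle.spill=false \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  /path/to/examples.jar \
  100
{% endhighlight %}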