author    Matei Zaharia <matei@databricks.com>    2014-05-30 00:34:33 -0700
committer Patrick Wendell <pwendell@gmail.com>    2014-05-30 00:34:33 -0700
commit    c8bf4131bc2a2e147e977159fc90e94b85738830 (patch)
tree      a2f885df8fb6654bd7750bb344b97a6cb6889bf3 /docs/cluster-overview.md
parent    eeee978a348ec2a35cc27865cea6357f9db75b74 (diff)
[SPARK-1566] consolidate programming guide, and general doc updates
This is a fairly large PR to clean up and update the docs for 1.0. The major changes are:

* A unified programming guide for all languages replaces language-specific ones and shows language-specific info in tabs
* New programming guide sections on key-value pairs, unit testing, input formats beyond text, migrating from 0.9, and passing functions to Spark
* Spark-submit guide moved to a separate page and expanded slightly
* Various cleanups of the menu system, security docs, and others
* Updated look of title bar to differentiate the docs from previous Spark versions

You can find the updated docs at http://people.apache.org/~matei/1.0-docs/_site/ and in particular http://people.apache.org/~matei/1.0-docs/_site/programming-guide.html.

Author: Matei Zaharia <matei@databricks.com>

Closes #896 from mateiz/1.0-docs and squashes the following commits:

03e6853 [Matei Zaharia] Some tweaks to configuration and YARN docs
0779508 [Matei Zaharia] tweak
ef671d4 [Matei Zaharia] Keep frames in JavaDoc links, and other small tweaks
1bf4112 [Matei Zaharia] Review comments
4414f88 [Matei Zaharia] tweaks
d04e979 [Matei Zaharia] Fix some old links to Java guide
a34ed33 [Matei Zaharia] tweak
541bb3b [Matei Zaharia] miscellaneous changes
fcefdec [Matei Zaharia] Moved submitting apps to separate doc
61d72b4 [Matei Zaharia] stuff
181f217 [Matei Zaharia] migration guide, remove old language guides
e11a0da [Matei Zaharia] Add more API functions
6a030a9 [Matei Zaharia] tweaks
8db0ae3 [Matei Zaharia] Added key-value pairs section
318d2c9 [Matei Zaharia] tweaks
1c81477 [Matei Zaharia] New section on basics and function syntax
e38f559 [Matei Zaharia] Actually added programming guide to Git
a33d6fe [Matei Zaharia] First pass at updating programming guide to support all languages, plus other tweaks throughout
3b6a876 [Matei Zaharia] More CSS tweaks
01ec8bf [Matei Zaharia] More CSS tweaks
e6d252e [Matei Zaharia] Change color of doc title bar to differentiate from 0.9.0
Diffstat (limited to 'docs/cluster-overview.md')
-rw-r--r--  docs/cluster-overview.md  108
1 file changed, 6 insertions, 102 deletions
diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index f05a755de7..6a75d5c457 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -4,7 +4,8 @@ title: Cluster Mode Overview
---
This document gives a short overview of how Spark runs on clusters, to make it easier to understand
-the components involved.
+the components involved. Read through the [application submission guide](submitting-applications.html)
+to learn how to submit applications to a cluster.
# Components
@@ -50,107 +51,10 @@ The system currently supports three cluster managers:
In addition, Spark's [EC2 launch scripts](ec2-scripts.html) make it easy to launch a standalone
cluster on Amazon EC2.
-# Bundling and Launching Applications
-
-### Bundling Your Application's Dependencies
-If your code depends on other projects, you will need to package them alongside
-your application in order to distribute the code to a Spark cluster. To do this,
-create an assembly jar (or "uber" jar) containing your code and its dependencies. Both
-[sbt](https://github.com/sbt/sbt-assembly) and
-[Maven](http://maven.apache.org/plugins/maven-shade-plugin/)
-have assembly plugins. When creating assembly jars, list Spark and Hadoop
-as `provided` dependencies; these need not be bundled since they are provided by
-the cluster manager at runtime. Once you have an assembled jar, you can call the `bin/spark-submit`
-script as shown below, passing your jar.
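-
-As an illustration only (the application class, project name, and output path below are
-hypothetical, and this assumes the sbt-assembly plugin is already configured in your build),
-building and submitting an assembly jar might look like this:
-
-{% highlight bash %}
-# Build the assembly jar with sbt (or run `mvn package` with the shade plugin instead)
-sbt assembly
-
-# Pass the resulting jar to spark-submit; the exact path depends on your project layout
-./bin/spark-submit \
-  --class com.example.MyApp \
-  --master local[4] \
-  target/scala-2.10/my-app-assembly-0.1.jar
-{% endhighlight %}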
-
-For Python, you can use the `pyFiles` argument of SparkContext
-or its `addPyFile` method to add `.py`, `.zip` or `.egg` files to be distributed.
-
-### Launching Applications with `spark-submit`
-
-Once a user application is bundled, it can be launched using the `spark-submit` script located in
-the `bin` directory. This script takes care of setting up the classpath with Spark and its
-dependencies, and can work with all the cluster managers and deploy modes that Spark supports:
-
-    ./bin/spark-submit \
-      --class <main-class> \
-      --master <master-url> \
-      --deploy-mode <deploy-mode> \
-      ... # other options
-      <application-jar> \
-      [application-arguments]
-
-    main-class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)
-    master-url: The URL of the master node (e.g. spark://23.195.26.187:7077)
-    deploy-mode: Whether to deploy this application within the cluster or from an external client (e.g. client)
-    application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an `hdfs://` path or a `file://` path that is present on all nodes.
-    application-arguments: Space-delimited arguments passed to the main method of <main-class>, if any
-
-To enumerate all options available to `spark-submit` run it with the `--help` flag. Here are a few
-examples of common options:
-
-{% highlight bash %}
-# Run application locally
-./bin/spark-submit \
- --class org.apache.spark.examples.SparkPi \
- --master local[8] \
- /path/to/examples.jar \
- 100
-
-# Run on a Spark standalone cluster
-./bin/spark-submit \
- --class org.apache.spark.examples.SparkPi \
- --master spark://207.184.161.138:7077 \
- --executor-memory 20G \
- --total-executor-cores 100 \
- /path/to/examples.jar \
- 1000
-
-# Run on a YARN cluster (use `yarn-client` as the master URL for client mode)
-HADOOP_CONF_DIR=XX ./bin/spark-submit \
- --class org.apache.spark.examples.SparkPi \
- --master yarn-cluster \
- --executor-memory 20G \
- --num-executors 50 \
- /path/to/examples.jar \
- 1000
-{% endhighlight %}
-
-### Loading Configurations from a File
-
-The `spark-submit` script can load default [Spark configuration values](configuration.html) from a
-properties file and pass them on to your application. By default it will read configuration options
-from `conf/spark-defaults.conf`. For more detail, see the section on
-[loading default configurations](configuration.html#loading-default-configurations).
-
-Loading default Spark configurations this way can obviate the need for certain flags to
-`spark-submit`. For instance, if the `spark.master` property is set, you can safely omit the
-`--master` flag from `spark-submit`. In general, configuration values explicitly set on a
-`SparkConf` take the highest precedence, then flags passed to `spark-submit`, then values in the
-defaults file.
-
-If you are ever unclear where configuration options are coming from, you can print out fine-grained
-debugging information by running `spark-submit` with the `--verbose` option.
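-
-As an illustrative sketch (the property values below are placeholders, not recommendations),
-setting defaults in `conf/spark-defaults.conf` lets you drop the corresponding flags from the
-command line, and `--verbose` shows where each effective setting came from:
-
-{% highlight bash %}
-# conf/spark-defaults.conf contains whitespace-separated key/value pairs, e.g.:
-#   spark.master            spark://207.184.161.138:7077
-#   spark.executor.memory   2g
-
-# With spark.master set in the defaults file, --master can be omitted here
-./bin/spark-submit \
-  --verbose \
-  --class org.apache.spark.examples.SparkPi \
-  /path/to/examples.jar \
-  100
-{% endhighlight %}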
-
-### Advanced Dependency Management
-When using `spark-submit`, the application jar along with any jars included with the `--jars` option
-will be automatically transferred to the cluster. Spark uses the following URL schemes to allow
-different strategies for disseminating jars:
-
-- **file:** - Absolute paths and `file:/` URIs are served by the driver's HTTP file server, and
- every executor pulls the file from the driver HTTP server.
-- **hdfs:**, **http:**, **https:**, **ftp:** - these pull down files and JARs from the URI as expected
-- **local:** - a URI starting with local:/ is expected to exist as a local file on each worker node. This
- means that no network IO will be incurred, and works well for large files/JARs that are pushed to each worker,
- or shared via NFS, GlusterFS, etc.
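-
-For illustration (the jar and file names below are hypothetical), mixing these schemes in a
-single submission might look like this:
-
-{% highlight bash %}
-# Executors fetch extra-lib.jar from HDFS, expect native-wrapper.jar to already exist locally,
-# and pull app.conf from the driver's HTTP file server
-./bin/spark-submit \
-  --class com.example.MyApp \
-  --master spark://207.184.161.138:7077 \
-  --jars hdfs://namenode:8020/libs/extra-lib.jar,local:/opt/libs/native-wrapper.jar \
-  --files file:///etc/myapp/app.conf \
-  /path/to/my-app-assembly.jar
-{% endhighlight %}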
-
-Note that JARs and files are copied to the working directory for each SparkContext on the executor nodes.
-This can use up a significant amount of space over time and will need to be cleaned up. With YARN, cleanup
-is handled automatically, and with Spark standalone, automatic cleanup can be configured with the
-`spark.worker.cleanup.appDataTtl` property.
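-
-As a sketch (the TTL value is arbitrary, and this assumes the cleanup properties are passed to
-the standalone worker through `SPARK_WORKER_OPTS` in `conf/spark-env.sh`), cleanup could be
-configured like this:
-
-{% highlight bash %}
-# conf/spark-env.sh on each standalone worker:
-# enable periodic cleanup and keep application work directories for 7 days (in seconds)
-SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=604800"
-{% endhighlight %}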
-
-For Python, the equivalent `--py-files` option can be used to distribute `.egg` and `.zip` libraries
-to executors.
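-
-For instance (the file names below are hypothetical), a Python application with packaged
-dependencies might be submitted as:
-
-{% highlight bash %}
-# deps.zip and helpers.egg hold the application's Python dependencies
-./bin/spark-submit \
-  --master spark://207.184.161.138:7077 \
-  --py-files deps.zip,helpers.egg \
-  my_script.py
-{% endhighlight %}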
+# Submitting Applications
+
+Applications can be submitted to a cluster of any type using the `spark-submit` script.
+The [application submission guide](submitting-applications.html) describes how to do this.
# Monitoring