author    Evan Chan <ev@ooyala.com>    2013-11-01 10:58:11 -0700
committer Evan Chan <ev@ooyala.com>    2013-11-01 10:58:11 -0700
commit    e54a37fe15a8fa8daec6c00fde4d191680b004c4 (patch)
tree      03fe4e48a4a4ba0131fd2d9cc71d169f7c408a97 /docs/cluster-overview.md
parent    8f1098a3f0de8c9b2eb9ede91a1b01da10a525ea (diff)
Document all the URIs for addJar/addFile
Diffstat (limited to 'docs/cluster-overview.md')
-rw-r--r--  docs/cluster-overview.md | 14
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index f679cad713..5927f736f3 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -13,7 +13,7 @@ object in your main program (called the _driver program_).
 Specifically, to run on a cluster, the SparkContext can connect to several types of _cluster managers_
 (either Spark's own standalone cluster manager or Mesos/YARN), which allocate resources across
 applications. Once connected, Spark acquires *executors* on nodes in the cluster, which are
-worker processes that run computations and store data for your application. 
+worker processes that run computations and store data for your application.
 Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to
 the executors. Finally, SparkContext sends *tasks* for the executors to run.
@@ -57,6 +57,18 @@ which takes a list of JAR files (Java/Scala) or .egg and .zip libraries (Python)
 worker nodes. You can also dynamically add new files to be sent to executors with `SparkContext.addJar`
 and `addFile`.
+## URIs for addJar / addFile
+
+- **file:** - Absolute paths and `file:/` URIs are served by the driver's HTTP file server, and every
+  executor pulls the file from the driver's HTTP server.
+- **hdfs:**, **http:**, **https:**, **ftp:** - These pull down files and JARs from the URI as expected.
+- **local:** - A URI starting with `local:/` is expected to exist as a local file on each worker node. This
+  means that no network I/O is incurred, which works well for large files/JARs that have already been
+  pushed to each worker or shared via NFS, GlusterFS, etc.
+
+Note that JARs and files are copied to the working directory of each SparkContext on the executor nodes.
+Over time this can use up a significant amount of space and will need to be cleaned up.
+
 # Monitoring
 
 Each driver program has a web UI, typically on port 4040, that displays information about running
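The scheme-based dispatch documented in the added section can be sketched as follows. This is an illustrative model only, not Spark's actual implementation; the function name `fetch_strategy` and the returned labels are invented for this sketch.

```python
from urllib.parse import urlparse

def fetch_strategy(uri: str) -> str:
    """Illustrative sketch (not Spark's code) of how the URI schemes
    documented for addJar/addFile map to fetch behavior."""
    scheme = urlparse(uri).scheme
    if scheme in ("", "file"):
        # Absolute paths and file:/ URIs are served by the driver's
        # HTTP file server; every executor pulls the file from it.
        return "driver-http-server"
    if scheme in ("hdfs", "http", "https", "ftp"):
        # Each executor pulls the file or JAR down from the URI directly.
        return "remote-fetch"
    if scheme == "local":
        # Expected to already exist on every worker node; no network I/O.
        return "local-file"
    raise ValueError(f"unsupported scheme: {scheme}")

print(fetch_strategy("local:/opt/libs/dep.jar"))  # local-file
```

A `local:/` path avoids the driver becoming a distribution bottleneck, which is why the added docs recommend it for large files pre-pushed to workers or shared via NFS/GlusterFS.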