From 98fb69822cf780160bca51abeaab7c82e49fab54 Mon Sep 17 00:00:00 2001 From: Matei Zaharia Date: Fri, 6 Sep 2013 00:29:37 -0400 Subject: Work in progress: - Add job scheduling docs - Rename some fair scheduler properties - Organize intro page better - Link to Apache wiki for "contributing to Spark" --- docs/_layouts/global.html | 4 ++- docs/cluster-overview.md | 5 +++ docs/configuration.md | 32 +++++++++++------ docs/contributing-to-spark.md | 24 ++----------- docs/index.md | 26 ++++++++------ docs/job-scheduling.md | 81 +++++++++++++++++++++++++++++++++++++++++++ 6 files changed, 128 insertions(+), 44 deletions(-) create mode 100644 docs/cluster-overview.md create mode 100644 docs/job-scheduling.md (limited to 'docs') diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html index 90928c8021..5034111ecb 100755 --- a/docs/_layouts/global.html +++ b/docs/_layouts/global.html @@ -86,6 +86,7 @@ diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md new file mode 100644 index 0000000000..9e781bbf1f --- /dev/null +++ b/docs/cluster-overview.md @@ -0,0 +1,5 @@ +--- +layout: global +title: Cluster Mode Overview +--- + diff --git a/docs/configuration.md b/docs/configuration.md index 310e78a9eb..d4f85538b2 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -81,17 +81,6 @@ Apart from these, the following properties are also available, and may be useful - - - - - @@ -109,6 +98,17 @@ Apart from these, the following properties are also available, and may be useful it if you configure your own old generation size. + + + + + @@ -160,6 +160,16 @@ Apart from these, the following properties are also available, and may be useful Block size (in bytes) used in Snappy compression, in the case when Snappy compression codec is used. + + + + + diff --git a/docs/contributing-to-spark.md b/docs/contributing-to-spark.md index 50feeb2d6c..ef1b3ad6da 100644 --- a/docs/contributing-to-spark.md +++ b/docs/contributing-to-spark.md @@ -3,24 +3,6 @@ layout: global title: Contributing to Spark --- -The Spark team welcomes contributions in the form of GitHub pull requests. Here are a few tips to get your contribution in: - -- Break your work into small, single-purpose patches if possible. It's much harder to merge in a large change with a lot of disjoint features. -- Submit the patch as a GitHub pull request. For a tutorial, see the GitHub guides on [forking a repo](https://help.github.com/articles/fork-a-repo) and [sending a pull request](https://help.github.com/articles/using-pull-requests). -- Follow the style of the existing codebase. Specifically, we use [standard Scala style guide](http://docs.scala-lang.org/style/), but with the following changes: - * Maximum line length of 100 characters. - * Always import packages using absolute paths (e.g. `scala.collection.Map` instead of `collection.Map`). - * No "infix" syntax for methods other than operators. For example, don't write `table containsKey myKey`; replace it with `table.containsKey(myKey)`. -- Make sure that your code passes the unit tests. You can run the tests with `sbt/sbt test` in the root directory of Spark. - But first, make sure that you have [configured a spark-env.sh](configuration.html) with at least - `SCALA_HOME`, as some of the tests try to spawn subprocesses using this. -- Add new unit tests for your code. We use [ScalaTest](http://www.scalatest.org/) for testing. Just add a new Suite in `core/src/test`, or methods to an existing Suite. 
-- If you'd like to report a bug but don't have time to fix it, you can still post it to our [issue tracker]({{site.SPARK_ISSUE_TRACKER_URL}}), or email the [mailing list](http://www.spark-project.org/mailing-lists.html). - -# Licensing of Contributions - -Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please -state that the contribution is your original work and that you license the work to the project under the project's open source -license. *Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other -means you agree to license the material under the project's open source license and warrant that you have the legal authority -to do so.* +The Spark team welcomes all forms of contributions, including bug reports, documentation or patches. +For the newest information on how to contribute to the project, please read the +[wiki page on contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark). diff --git a/docs/index.md b/docs/index.md index d3aacc629f..1814cb19c8 100644 --- a/docs/index.md +++ b/docs/index.md @@ -21,7 +21,7 @@ Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_VERSION}}. If you write applications in Scala, you will need to use this same version of Scala in your own program -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/). -# Testing the Build +# Running the Examples and Shell Spark comes with several sample programs in the `examples` directory. To run one of the samples, use `./run-example ` in the top-level Spark directory @@ -34,14 +34,16 @@ to connect to. This can be a [URL for a distributed cluster](scala-programming-g or `local` to run locally with one thread, or `local[N]` to run locally with N threads. You should start by using `local` for testing. -Finally, Spark can be used interactively through modified versions of the Scala shell (`./spark-shell`) or -Python interpreter (`./pyspark`). These are a great way to learn Spark. +Finally, you can run Spark interactively through modified versions of the Scala shell (`./spark-shell`) or +Python interpreter (`./pyspark`). These are a great way to learn the framework. -# Running on a Cluster +# Launching on a Cluster -Spark supports several options for deployment: +The Spark [cluster mode overview](cluster-overview.html) explains the key concepts in running on a cluster. +Spark can run both by itself, or over several existing cluster managers. 
It currently provides several +options for deployment: -* [Amazon EC2](ec2-scripts.html): our scripts let you launch a cluster in about 5 minutes +* [Amazon EC2](ec2-scripts.html): our EC2 scripts let you launch a cluster in about 5 minutes * [Standalone Deploy Mode](spark-standalone.html): simplest way to deploy Spark on a private cluster * [Apache Mesos](running-on-mesos.html) * [Hadoop YARN](running-on-yarn.html) @@ -91,19 +93,21 @@ In addition, if you wish to run Spark on [YARN](running-on-yarn.md), set **Deployment guides:** -* [Running Spark on Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes +* [Cluster Overview](cluster-overview.html): overview of concepts and components when running on a cluster +* [Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager -* [Running Spark on Mesos](running-on-mesos.html): deploy a private cluster using +* [Mesos](running-on-mesos.html): deploy a private cluster using [Apache Mesos](http://incubator.apache.org/mesos) -* [Running Spark on YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN) +* [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN) **Other documents:** * [Configuration](configuration.html): customize Spark via its configuration system * [Tuning Guide](tuning.html): best practices to optimize performance and memory use * [Hardware Provisioning](hardware-provisioning.html): recommendations for cluster hardware -* [Building Spark with Maven](building-with-maven.html): Build Spark using the Maven build tool -* [Contributing to Spark](contributing-to-spark.html) +* [Job Scheduling](job-scheduling.html): scheduling resources across and within Spark applications +* [Building Spark with Maven](building-with-maven.html): build Spark using the Maven system +* [Contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark) **External resources:** diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md new file mode 100644 index 0000000000..44182571d5 --- /dev/null +++ b/docs/job-scheduling.md @@ -0,0 +1,81 @@ +--- +layout: global +title: Job Scheduling +--- + +Spark has several facilities for scheduling resources between jobs. First, recall that, as described +in the [cluster mode overview](cluster-overview.html), each Spark application (instance of SparkContext) +runs an independent set of executor processes. The cluster managers that Spark runs on provide +facilities for [scheduling across applications](#scheduling-across-applications). Second, +_within_ each Spark application, multiple jobs may be running concurrently if they were submitted +from different threads. This is common if your application is serving requests over the network; for +example, the [Shark](http://shark.cs.berkeley.edu) server works this way. Spark includes a +[fair scheduler](#scheduling-within-an-application) to schedule between these jobs. + +# Scheduling Across Applications + +When running on a cluster, each Spark application gets an independent set of executor JVMs that only +run tasks and store data for that application. If multiple users need to share your cluster, there are +different options to manage allocation, depending on the cluster manager. + +The simplest option, available on all cluster managers, is _static partitioning_ of resources. 
With +this approach, each application is given a maximum amount of resources it can use, and holds onto them +for its whole duration. This is the only approach available in Spark's [standalone](spark-standalone.html) +and [YARN](running-on-yarn.html) modes, as well as the +[coarse-grained Mesos mode](running-on-mesos.html#mesos-run-modes). +Resource allocation can be configured as follows, based on the cluster type: + +* **Standalone mode:** By default, applications submitted to the standalone mode cluster will run in + FIFO (first-in-first-out) order, and each application will try to use all available nodes. You can limit + the number of nodes an application uses by setting the `spark.cores.max` system property in it. This + will allow multiple users/applications to run concurrently. For example, you might launch a long-running + server that uses 10 cores, and allow users to launch shells that use 20 cores each. + Finally, in addition to controlling cores, each application's `spark.executor.memory` setting controls + its memory use. +* **Mesos:** To use static partitioning on Mesos, set the `spark.mesos.coarse` system property to `true`, + and optionally set `spark.cores.max` to limit each application's resource share as in the standalone mode. + You should also set `spark.executor.memory` to control the executor memory. +* **YARN:** The `--num-workers` option to the Spark YARN client controls how many workers it will allocate + on the cluster, while `--worker-memory` and `--worker-cores` control the resources per worker. + +A second option available on Mesos is _dynamic sharing_ of CPU cores. In this mode, each Spark application +still has a fixed and independent memory allocation (set by `spark.executor.memory`), but when the +application is not running tasks on a machine, other applications may run tasks on those cores. This mode +is useful when you expect large numbers of not overly active applications, such as shell sessions from +separate users. However, it comes with a risk of less predictable latency, because it may take a while for +an application to gain back cores on one node when it has work to do. To use this mode, simply use a +`mesos://` URL without setting `spark.mesos.coarse` to true. + +Note that none of the modes currently provide memory sharing across applications. If you would like to share +data this way, we recommend running a single server application that can serve multiple requests by querying +the same RDDs. For example, the [Shark](http://shark.cs.berkeley.edu) JDBC server works this way for SQL +queries. In future releases, in-memory storage systems such as [Tachyon](http://tachyon-project.org) will +provide another approach to share RDDs. + + +# Scheduling Within an Application + +Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if +they were submitted from separate threads. By "job", in this section, we mean a Spark action (e.g. `save`, +`collect`) and any tasks that need to run to evaluate that action. Spark's scheduler is fully thread-safe +and supports this use case to enable applications that serve multiple requests (e.g. queries for +multiple users). + +By default, Spark's scheduler runs jobs in FIFO fashion. Each job is divided into "stages" (e.g. map and +reduce phases), and the first job gets priority on all available resources while its stages have tasks to +launch, then the second job gets priority, etc. 
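As a rough illustration of the static partitioning settings described above, a standalone-mode application might cap its own resource share as follows. This is a minimal sketch, not code from this patch: the master URL, Spark home, JAR name, and the values `10` and `2g` are placeholders.

{% highlight scala %}
import org.apache.spark.SparkContext

object CappedApp {
  def main(args: Array[String]) {
    // Both properties are read when the SparkContext is created,
    // so they must be set first.
    System.setProperty("spark.cores.max", "10")        // cap on total cores
    System.setProperty("spark.executor.memory", "2g")  // memory per executor

    // Placeholder master URL, Spark home, and application JAR.
    val sc = new SparkContext("spark://masterhost:7077", "Capped App",
      "/path/to/spark", List("target/capped-app.jar"))

    println(sc.parallelize(1 to 1000000).count())
    sc.stop()
  }
}
{% endhighlight %}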
If the jobs at the head of the queue don't need to use +the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are +large, then later jobs may be delayed significantly. + +Starting in Spark 0.8, it is also possible to configure fair sharing between jobs. Under fair sharing, +Spark assigns tasks between jobs in a "round robin" fashion, so that all jobs get a roughly equal share +of cluster resources. This means that short jobs submitted while a long job is running can start receiving +resources right away and still get good response times, without waiting for the long job to finish. This +mode is best for multi-user settings. + +To enable the fair scheduler, simply set the `spark.scheduler.mode` to `FAIR` before creating +a SparkContext: + + System.setProperty("spark.scheduler.mode", "FAIR") + +The fair scheduler also supports -- cgit v1.2.3 From 651a96adf7b53085bd810e153f8eabf52eed1994 Mon Sep 17 00:00:00 2001 From: Matei Zaharia Date: Sat, 7 Sep 2013 00:34:12 -0400 Subject: More fair scheduler docs and property names. Also changed uses of "job" terminology to "application" when they referred to an entire Spark program, to avoid confusion. --- conf/fairscheduler.xml.template | 18 ++-- .../main/scala/org/apache/spark/SparkContext.scala | 5 +- .../scheduler/cluster/SchedulableBuilder.scala | 6 +- .../org/apache/spark/ui/UIWorkloadGenerator.scala | 2 +- .../apache/spark/ui/jobs/JobProgressListener.scala | 2 +- .../scheduler/cluster/ClusterSchedulerSuite.scala | 6 +- .../scheduler/local/LocalSchedulerSuite.scala | 4 +- docs/ec2-scripts.md | 8 +- docs/job-scheduling.md | 101 +++++++++++++++++++-- docs/python-programming-guide.md | 12 +-- docs/quick-start.md | 70 +++++++------- docs/running-on-mesos.md | 19 ++-- docs/scala-programming-guide.md | 4 +- docs/spark-standalone.md | 26 +++--- 14 files changed, 185 insertions(+), 98 deletions(-) (limited to 'docs') diff --git a/conf/fairscheduler.xml.template b/conf/fairscheduler.xml.template index 04a6b418dc..acf59e2a35 100644 --- a/conf/fairscheduler.xml.template +++ b/conf/fairscheduler.xml.template @@ -1,15 +1,13 @@ - - 2 - 1 + FAIR - - - 3 - 2 + 1 + 2 + + FIFO - - - + 2 + 3 + diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala index 89318712a5..edf71c9db6 100644 --- a/core/src/main/scala/org/apache/spark/SparkContext.scala +++ b/core/src/main/scala/org/apache/spark/SparkContext.scala @@ -260,7 +260,7 @@ class SparkContext( private val localProperties = new DynamicVariable[Properties](null) def initLocalProperties() { - localProperties.value = new Properties() + localProperties.value = new Properties() } def setLocalProperty(key: String, value: String) { @@ -723,7 +723,8 @@ class SparkContext( val callSite = Utils.formatSparkCallSite logInfo("Starting job: " + callSite) val start = System.nanoTime - val result = dagScheduler.runJob(rdd, func, partitions, callSite, allowLocal, resultHandler, localProperties.value) + val result = dagScheduler.runJob(rdd, func, partitions, callSite, allowLocal, resultHandler, + localProperties.value) logInfo("Job finished: " + callSite + ", took " + (System.nanoTime - start) / 1e9 + " s") rdd.doCheckpoint() result diff --git a/core/src/main/scala/org/apache/spark/scheduler/cluster/SchedulableBuilder.scala b/core/src/main/scala/org/apache/spark/scheduler/cluster/SchedulableBuilder.scala index d04eeb6b98..f80823317b 100644 --- 
a/core/src/main/scala/org/apache/spark/scheduler/cluster/SchedulableBuilder.scala +++ b/core/src/main/scala/org/apache/spark/scheduler/cluster/SchedulableBuilder.scala @@ -51,8 +51,8 @@ private[spark] class FIFOSchedulableBuilder(val rootPool: Pool) private[spark] class FairSchedulableBuilder(val rootPool: Pool) extends SchedulableBuilder with Logging { - val schedulerAllocFile = System.getProperty("spark.fairscheduler.allocation.file") - val FAIR_SCHEDULER_PROPERTIES = "spark.scheduler.cluster.fair.pool" + val schedulerAllocFile = System.getProperty("spark.scheduler.allocation.file") + val FAIR_SCHEDULER_PROPERTIES = "spark.scheduler.pool" val DEFAULT_POOL_NAME = "default" val MINIMUM_SHARES_PROPERTY = "minShare" val SCHEDULING_MODE_PROPERTY = "schedulingMode" @@ -60,7 +60,7 @@ private[spark] class FairSchedulableBuilder(val rootPool: Pool) val POOL_NAME_PROPERTY = "@name" val POOLS_PROPERTY = "pool" val DEFAULT_SCHEDULING_MODE = SchedulingMode.FIFO - val DEFAULT_MINIMUM_SHARE = 2 + val DEFAULT_MINIMUM_SHARE = 0 val DEFAULT_WEIGHT = 1 override def buildPools() { diff --git a/core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala b/core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala index 2ae23cd523..3ec9760ed0 100644 --- a/core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala +++ b/core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala @@ -49,7 +49,7 @@ private[spark] object UIWorkloadGenerator { def setProperties(s: String) = { if(schedulingMode == SchedulingMode.FAIR) { - sc.setLocalProperty("spark.scheduler.cluster.fair.pool", s) + sc.setLocalProperty("spark.scheduler.pool", s) } sc.setLocalProperty(SparkContext.SPARK_JOB_DESCRIPTION, s) } diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala b/core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala index e2bcd98545..5d46f38a2a 100644 --- a/core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala +++ b/core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala @@ -95,7 +95,7 @@ private[spark] class JobProgressListener(val sc: SparkContext) extends SparkList activeStages += stage val poolName = Option(stageSubmitted.properties).map { - p => p.getProperty("spark.scheduler.cluster.fair.pool", DEFAULT_POOL_NAME) + p => p.getProperty("spark.scheduler.pool", DEFAULT_POOL_NAME) }.getOrElse(DEFAULT_POOL_NAME) stageToPool(stage) = poolName diff --git a/core/src/test/scala/org/apache/spark/scheduler/cluster/ClusterSchedulerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/cluster/ClusterSchedulerSuite.scala index 92ad9f09b2..2b0d90e748 100644 --- a/core/src/test/scala/org/apache/spark/scheduler/cluster/ClusterSchedulerSuite.scala +++ b/core/src/test/scala/org/apache/spark/scheduler/cluster/ClusterSchedulerSuite.scala @@ -166,7 +166,7 @@ class ClusterSchedulerSuite extends FunSuite with LocalSparkContext with Logging val taskSet = new TaskSet(tasks.toArray,0,0,0,null) val xmlPath = getClass.getClassLoader.getResource("fairscheduler.xml").getFile() - System.setProperty("spark.fairscheduler.allocation.file", xmlPath) + System.setProperty("spark.scheduler.allocation.file", xmlPath) val rootPool = new Pool("", SchedulingMode.FAIR, 0, 0) val schedulableBuilder = new FairSchedulableBuilder(rootPool) schedulableBuilder.buildPools() @@ -183,9 +183,9 @@ class ClusterSchedulerSuite extends FunSuite with LocalSparkContext with Logging assert(rootPool.getSchedulableByName("3").weight === 1) val properties1 = new Properties() - 
properties1.setProperty("spark.scheduler.cluster.fair.pool","1") + properties1.setProperty("spark.scheduler.pool","1") val properties2 = new Properties() - properties2.setProperty("spark.scheduler.cluster.fair.pool","2") + properties2.setProperty("spark.scheduler.pool","2") val taskSetManager10 = createDummyTaskSetManager(1, 0, 1, clusterScheduler, taskSet) val taskSetManager11 = createDummyTaskSetManager(1, 1, 1, clusterScheduler, taskSet) diff --git a/core/src/test/scala/org/apache/spark/scheduler/local/LocalSchedulerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/local/LocalSchedulerSuite.scala index ca9c590a7d..af76c843e8 100644 --- a/core/src/test/scala/org/apache/spark/scheduler/local/LocalSchedulerSuite.scala +++ b/core/src/test/scala/org/apache/spark/scheduler/local/LocalSchedulerSuite.scala @@ -73,7 +73,7 @@ class LocalSchedulerSuite extends FunSuite with LocalSparkContext { TaskThreadInfo.threadToStarted(threadIndex) = new CountDownLatch(1) new Thread { if (poolName != null) { - sc.setLocalProperty("spark.scheduler.cluster.fair.pool", poolName) + sc.setLocalProperty("spark.scheduler.pool", poolName) } override def run() { val ans = nums.map(number => { @@ -152,7 +152,7 @@ class LocalSchedulerSuite extends FunSuite with LocalSparkContext { val sem = new Semaphore(0) System.setProperty("spark.scheduler.mode", "FAIR") val xmlPath = getClass.getClassLoader.getResource("fairscheduler.xml").getFile() - System.setProperty("spark.fairscheduler.allocation.file", xmlPath) + System.setProperty("spark.scheduler.allocation.file", xmlPath) createThread(10,"1",sc,sem) TaskThreadInfo.threadToStarted(10).await() diff --git a/docs/ec2-scripts.md b/docs/ec2-scripts.md index da0c06e2a6..1e5575d657 100644 --- a/docs/ec2-scripts.md +++ b/docs/ec2-scripts.md @@ -80,7 +80,7 @@ another. permissions on your private key file, you can run `launch` with the `--resume` option to restart the setup process on an existing cluster. -# Running Jobs +# Running Applications - Go into the `ec2` directory in the release of Spark you downloaded. - Run `./spark-ec2 -k -i login ` to @@ -90,7 +90,7 @@ permissions on your private key file, you can run `launch` with the - To deploy code or data within your cluster, you can log in and use the provided script `~/spark-ec2/copy-dir`, which, given a directory path, RSYNCs it to the same location on all the slaves. -- If your job needs to access large datasets, the fastest way to do +- If your application needs to access large datasets, the fastest way to do that is to load them from Amazon S3 or an Amazon EBS device into an instance of the Hadoop Distributed File System (HDFS) on your nodes. The `spark-ec2` script already sets up a HDFS instance for you. It's @@ -103,8 +103,8 @@ permissions on your private key file, you can run `launch` with the (about 3 GB), but you can use the `--ebs-vol-size` option to `spark-ec2` to attach a persistent EBS volume to each node for storing the persistent HDFS. -- Finally, if you get errors while running your jobs, look at the slave's logs - for that job inside of the scheduler work directory (/root/spark/work). You can +- Finally, if you get errors while running your application, look at the slave's logs + for that application inside of the scheduler work directory (/root/spark/work). You can also view the status of the cluster using the web UI: `http://:8080`. 
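The property renames in this patch (`spark.scheduler.allocation.file`, `spark.scheduler.pool`) affect user code as well as the docs and tests above. A minimal sketch of updated usage, assuming a placeholder allocation file path and pool name:

{% highlight scala %}
import org.apache.spark.SparkContext

object FairPoolExample {
  def main(args: Array[String]) {
    System.setProperty("spark.scheduler.mode", "FAIR")
    // Formerly spark.fairscheduler.allocation.file
    System.setProperty("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")

    val sc = new SparkContext("local[2]", "Fair Pool Example")

    // Formerly spark.scheduler.cluster.fair.pool; "pool1" is a placeholder
    // pool name that would be defined in the allocation file.
    sc.setLocalProperty("spark.scheduler.pool", "pool1")
    println(sc.parallelize(1 to 1000).count())
    sc.stop()
  }
}
{% endhighlight %}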
# Configuration diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md index 44182571d5..11b733137d 100644 --- a/docs/job-scheduling.md +++ b/docs/job-scheduling.md @@ -3,14 +3,19 @@ layout: global title: Job Scheduling --- -Spark has several facilities for scheduling resources between jobs. First, recall that, as described +* This will become a table of contents (this text will be scraped). +{:toc} + +# Overview + +Spark has several facilities for scheduling resources between computations. First, recall that, as described in the [cluster mode overview](cluster-overview.html), each Spark application (instance of SparkContext) runs an independent set of executor processes. The cluster managers that Spark runs on provide facilities for [scheduling across applications](#scheduling-across-applications). Second, -_within_ each Spark application, multiple jobs may be running concurrently if they were submitted -from different threads. This is common if your application is serving requests over the network; for -example, the [Shark](http://shark.cs.berkeley.edu) server works this way. Spark includes a -[fair scheduler](#scheduling-within-an-application) to schedule between these jobs. +_within_ each Spark application, multiple "jobs" (Spark actions) may be running concurrently +if they were submitted by different threads. This is common if your application is serving requests +over the network; for example, the [Shark](http://shark.cs.berkeley.edu) server works this way. Spark +includes a [fair scheduler](#scheduling-within-an-application) to schedule resources within each SparkContext. # Scheduling Across Applications @@ -76,6 +81,88 @@ mode is best for multi-user settings. To enable the fair scheduler, simply set the `spark.scheduler.mode` to `FAIR` before creating a SparkContext: - System.setProperty("spark.scheduler.mode", "FAIR") +{% highlight scala %} +System.setProperty("spark.scheduler.mode", "FAIR") +{% endhighlight %} + +## Fair Scheduler Pools + +The fair scheduler also supports grouping jobs into _pools_, and setting different scheduling options +(e.g. weight) for each pool. This can be useful to create a "high-priority" pool for more important jobs, +for example, or to group the jobs of each user together and give _users_ equal shares regardless of how +many concurrent jobs they have instead of giving _jobs_ equal shares. This approach is modeled after the +[Hadoop Fair Scheduler](http://hadoop.apache.org/docs/stable/fair_scheduler.html). + +Without any intervention, newly submitted jobs go into a _default pool_, but jobs' pools can be set by +adding the `spark.scheduler.pool` "local property" to the SparkContext in the thread that's submitting them. +This is done as follows: + +{% highlight scala %} +// Assuming context is your SparkContext variable +context.setLocalProperty("spark.scheduler.pool", "pool1") +{% endhighlight %} + +After setting this local property, _all_ jobs submitted within this thread (by calls in this thread +to `RDD.save`, `count`, `collect`, etc) will use this pool name. The setting is per-thread to make +it easy to have a thread run multiple jobs on behalf of the same user. If you'd like to clear the +pool that a thread is associated with, simply call: + +{% highlight scala %} +context.setLocalProperty("spark.scheduler.pool", null) +{% endhighlight %} + +## Default Behavior of Pools + +By default, each pool gets an equal share of the cluster (also equal in share to each job in the default +pool), but inside each pool, jobs run in FIFO order. 
For example, if you create one pool per user, this +means that each user will get an equal share of the cluster, and that each user's queries will run in +order instead of later queries taking resources from that user's earlier ones. + +## Configuring Pool Properties + +Specific pools' properties can also be modified through a configuration file. Each pool supports three +properties: + +* `schedulingMode`: This can be FIFO or FAIR, to control whether jobs within the pool queue up behind + each other (the default) or share the pool's resources fairly. +* `weight`: This controls the pool's share of the cluster relative to other pools. By default, all pools + have a weight of 1. If you give a specific pool a weight of 2, for example, it will get 2x more + resources as other active pools. Setting a high weight such as 1000 also makes it possible to implement + _priority_ between pools---in essence, the weight-1000 pool will always get to launch tasks first + whenever it has jobs active. +* `minShare`: Apart from an overall weight, each pool can be given a _minimum shares_ (as a number of + CPU cores) that the administrator would like it to have. The fair scheduler always attempts to meet + all active pools' minimum shares before redistributing extra resources according to the weights. + The `minShare` property can therefore be another way to ensure that a pool can always get up to a + certain number of resources (e.g. 10 cores) quickly without giving it a high priority for the rest + of the cluster. By default, each pool's `minShare` is 0. + +The pool properties can be set by creating an XML file, similar to `conf/fairscheduler.xml.template`, +and setting the `spark.scheduler.allocation.file` property: + +{% highlight scala %} +System.setProperty("spark.scheduler.allocation.file", "/path/to/file") +{% endhighlight %} + +The format of the XML file is simply a `` element for each pool, with different elements +within it for the various settings. For example: + +{% highlight xml %} + + + + FAIR + 1 + 2 + + + FIFO + 2 + 3 + + +{% endhighlight %} -The fair scheduler also supports +A full example is also available in `conf/fairscheduler.xml.template`. Note that any pools not +configured in the XML file will simply get default values for all settings (scheduling mode FIFO, +weight 1, and minShare 0). diff --git a/docs/python-programming-guide.md b/docs/python-programming-guide.md index 8c33a953a4..5662e7d02a 100644 --- a/docs/python-programming-guide.md +++ b/docs/python-programming-guide.md @@ -53,20 +53,20 @@ In addition, PySpark fully supports interactive use---simply run `./pyspark` to # Installing and Configuring PySpark PySpark requires Python 2.6 or higher. -PySpark jobs are executed using a standard CPython interpreter in order to support Python modules that use C extensions. +PySpark applications are executed using a standard CPython interpreter in order to support Python modules that use C extensions. We have not tested PySpark with Python 3 or with alternative Python interpreters, such as [PyPy](http://pypy.org/) or [Jython](http://www.jython.org/). By default, PySpark requires `python` to be available on the system `PATH` and use it to run programs; an alternate Python executable may be specified by setting the `PYSPARK_PYTHON` environment variable in `conf/spark-env.sh` (or `.cmd` on Windows). All of PySpark's library dependencies, including [Py4J](http://py4j.sourceforge.net/), are bundled with PySpark and automatically imported. 
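Returning to the fair scheduler pools described earlier in this patch: the per-thread pool assignment is easiest to see with two threads submitting jobs into different pools. The sketch below assumes hypothetical pool names "production" and "adhoc" that would be listed in the allocation file (pools not listed there simply get the default settings):

{% highlight scala %}
import org.apache.spark.SparkContext

object MultiPoolDemo {
  def main(args: Array[String]) {
    System.setProperty("spark.scheduler.mode", "FAIR")
    // Placeholder path to a pool definition file like conf/fairscheduler.xml
    System.setProperty("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")

    val sc = new SparkContext("local[4]", "Multi Pool Demo")

    // Each thread tags the jobs it submits with its own pool.
    val longJob = new Thread() {
      override def run() {
        sc.setLocalProperty("spark.scheduler.pool", "production")
        println(sc.parallelize(1 to 10000000).map(_ * 2).count())
      }
    }
    val shortJob = new Thread() {
      override def run() {
        sc.setLocalProperty("spark.scheduler.pool", "adhoc")
        println(sc.parallelize(1 to 1000).count())
      }
    }

    longJob.start(); shortJob.start()
    longJob.join(); shortJob.join()
    sc.stop()
  }
}
{% endhighlight %}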
-Standalone PySpark jobs should be run using the `pyspark` script, which automatically configures the Java and Python environment using the settings in `conf/spark-env.sh` or `.cmd`. +Standalone PySpark applications should be run using the `pyspark` script, which automatically configures the Java and Python environment using the settings in `conf/spark-env.sh` or `.cmd`. The script automatically adds the `pyspark` package to the `PYTHONPATH`. # Interactive Use -The `pyspark` script launches a Python interpreter that is configured to run PySpark jobs. To use `pyspark` interactively, first build Spark, then launch it directly from the command line without any options: +The `pyspark` script launches a Python interpreter that is configured to run PySpark applications. To use `pyspark` interactively, first build Spark, then launch it directly from the command line without any options: {% highlight bash %} $ sbt/sbt assembly @@ -82,7 +82,7 @@ The Python shell can be used explore data interactively and is a simple way to l >>> help(pyspark) # Show all pyspark functions {% endhighlight %} -By default, the `pyspark` shell creates SparkContext that runs jobs locally on a single core. +By default, the `pyspark` shell creates SparkContext that runs applications locally on a single core. To connect to a non-local cluster, or use multiple cores, set the `MASTER` environment variable. For example, to use the `pyspark` shell with a [standalone Spark cluster](spark-standalone.html): @@ -119,13 +119,13 @@ IPython also works on a cluster or on multiple cores if you set the `MASTER` env # Standalone Programs PySpark can also be used from standalone Python scripts by creating a SparkContext in your script and running the script using `pyspark`. -The Quick Start guide includes a [complete example](quick-start.html#a-standalone-job-in-python) of a standalone Python job. +The Quick Start guide includes a [complete example](quick-start.html#a-standalone-app-in-python) of a standalone Python application. Code dependencies can be deployed by listing them in the `pyFiles` option in the SparkContext constructor: {% highlight python %} from pyspark import SparkContext -sc = SparkContext("local", "Job Name", pyFiles=['MyFile.py', 'lib.zip', 'app.egg']) +sc = SparkContext("local", "App Name", pyFiles=['MyFile.py', 'lib.zip', 'app.egg']) {% endhighlight %} Files listed here will be added to the `PYTHONPATH` and shipped to remote worker machines. diff --git a/docs/quick-start.md b/docs/quick-start.md index 70c3df8095..1b069ce982 100644 --- a/docs/quick-start.md +++ b/docs/quick-start.md @@ -6,7 +6,7 @@ title: Quick Start * This will become a table of contents (this text will be scraped). {:toc} -This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's interactive Scala shell (don't worry if you don't know Scala -- you will not need much for this), then show how to write standalone jobs in Scala, Java, and Python. +This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's interactive Scala shell (don't worry if you don't know Scala -- you will not need much for this), then show how to write standalone applications in Scala, Java, and Python. See the [programming guide](scala-programming-guide.html) for a more complete reference. To follow along with this guide, you only need to have successfully built Spark on one machine. 
Simply go into your Spark directory and run: @@ -36,7 +36,7 @@ scala> textFile.count() // Number of items in this RDD res0: Long = 74 scala> textFile.first() // First item in this RDD -res1: String = # Spark +res1: String = Welcome to the Spark documentation! {% endhighlight %} Now let's use a transformation. We will use the [`filter`](scala-programming-guide.html#transformations) transformation to return a new RDD with a subset of the items in the file. @@ -101,20 +101,20 @@ res9: Long = 15 It may seem silly to use Spark to explore and cache a 30-line text file. The interesting part is that these same functions can be used on very large data sets, even when they are striped across tens or hundreds of nodes. You can also do this interactively by connecting `spark-shell` to a cluster, as described in the [programming guide](scala-programming-guide.html#initializing-spark). -# A Standalone Job in Scala -Now say we wanted to write a standalone job using the Spark API. We will walk through a simple job in both Scala (with sbt) and Java (with maven). If you are using other build systems, consider using the Spark assembly JAR described in the developer guide. +# A Standalone App in Scala +Now say we wanted to write a standalone application using the Spark API. We will walk through a simple application in both Scala (with SBT), Java (with Maven), and Python. If you are using other build systems, consider using the Spark assembly JAR described in the developer guide. -We'll create a very simple Spark job in Scala. So simple, in fact, that it's named `SimpleJob.scala`: +We'll create a very simple Spark application in Scala. So simple, in fact, that it's named `SimpleApp.scala`: {% highlight scala %} -/*** SimpleJob.scala ***/ +/*** SimpleApp.scala ***/ import org.apache.spark.SparkContext import org.apache.spark.SparkContext._ -object SimpleJob { +object SimpleApp { def main(args: Array[String]) { val logFile = "$YOUR_SPARK_HOME/README.md" // Should be some file on your system - val sc = new SparkContext("local", "Simple Job", "YOUR_SPARK_HOME", + val sc = new SparkContext("local", "Simple App", "YOUR_SPARK_HOME", List("target/scala-{{site.SCALA_VERSION}}/simple-project_{{site.SCALA_VERSION}}-1.0.jar")) val logData = sc.textFile(logFile, 2).cache() val numAs = logData.filter(line => line.contains("a")).count() @@ -124,7 +124,7 @@ object SimpleJob { } {% endhighlight %} -This job simply counts the number of lines containing 'a' and the number containing 'b' in the Spark README. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext, we initialize a SparkContext as part of the job. We pass the SparkContext constructor four arguments, the type of scheduler we want to use (in this case, a local scheduler), a name for the job, the directory where Spark is installed, and a name for the jar file containing the job's sources. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes. +This program simply counts the number of lines containing 'a' and the number containing 'b' in the Spark README. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext, we initialize a SparkContext as part of the proogram. 
We pass the SparkContext constructor four arguments, the type of scheduler we want to use (in this case, a local scheduler), a name for the application, the directory where Spark is installed, and a name for the jar file containing the application's code. The final two arguments are needed in a distributed setting, where Spark is running across several nodes, so we include them for completeness. Spark will automatically ship the jar files you list to slave nodes. This file depends on the Spark API, so we'll also include an sbt configuration file, `simple.sbt` which explains that Spark is a dependency. This file also adds a repository that Spark depends on: @@ -146,7 +146,7 @@ If you also wish to read data from Hadoop's HDFS, you will also need to add a de libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "" {% endhighlight %} -Finally, for sbt to work correctly, we'll need to layout `SimpleJob.scala` and `simple.sbt` according to the typical directory structure. Once that is in place, we can create a JAR package containing the job's code, then use `sbt run` to execute our example job. +Finally, for sbt to work correctly, we'll need to layout `SimpleApp.scala` and `simple.sbt` according to the typical directory structure. Once that is in place, we can create a JAR package containing the application's code, then use `sbt run` to execute our program. {% highlight bash %} $ find . @@ -155,7 +155,7 @@ $ find . ./src ./src/main ./src/main/scala -./src/main/scala/SimpleJob.scala +./src/main/scala/SimpleApp.scala $ sbt package $ sbt run @@ -163,20 +163,20 @@ $ sbt run Lines with a: 46, Lines with b: 23 {% endhighlight %} -# A Standalone Job In Java -Now say we wanted to write a standalone job using the Java API. We will walk through doing this with Maven. If you are using other build systems, consider using the Spark assembly JAR described in the developer guide. +# A Standalone App in Java +Now say we wanted to write a standalone application using the Java API. We will walk through doing this with Maven. If you are using other build systems, consider using the Spark assembly JAR described in the developer guide. -We'll create a very simple Spark job, `SimpleJob.java`: +We'll create a very simple Spark application, `SimpleApp.java`: {% highlight java %} -/*** SimpleJob.java ***/ +/*** SimpleApp.java ***/ import org.apache.spark.api.java.*; import org.apache.spark.api.java.function.Function; -public class SimpleJob { +public class SimpleApp { public static void main(String[] args) { String logFile = "$YOUR_SPARK_HOME/README.md"; // Should be some file on your system - JavaSparkContext sc = new JavaSparkContext("local", "Simple Job", + JavaSparkContext sc = new JavaSparkContext("local", "Simple App", "$YOUR_SPARK_HOME", new String[]{"target/simple-project-1.0.jar"}); JavaRDD logData = sc.textFile(logFile).cache(); @@ -193,9 +193,9 @@ public class SimpleJob { } {% endhighlight %} -This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. As with the Scala example, we initialize a SparkContext, though we use the special `JavaSparkContext` class to get a Java-friendly one. We also create RDDs (represented by `JavaRDD`) and run transformations on them. Finally, we pass functions to Spark by creating classes that extend `spark.api.java.function.Function`. 
The [Java programming guide](java-programming-guide.html) describes these differences in more detail. +This program simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. As with the Scala example, we initialize a SparkContext, though we use the special `JavaSparkContext` class to get a Java-friendly one. We also create RDDs (represented by `JavaRDD`) and run transformations on them. Finally, we pass functions to Spark by creating classes that extend `spark.api.java.function.Function`. The [Java programming guide](java-programming-guide.html) describes these differences in more detail. -To build the job, we also write a Maven `pom.xml` file that lists Spark as a dependency. Note that Spark artifacts are tagged with a Scala version. +To build the program, we also write a Maven `pom.xml` file that lists Spark as a dependency. Note that Spark artifacts are tagged with a Scala version. {% highlight xml %} @@ -238,29 +238,29 @@ $ find . ./src ./src/main ./src/main/java -./src/main/java/SimpleJob.java +./src/main/java/SimpleApp.java {% endhighlight %} -Now, we can execute the job using Maven: +Now, we can execute the application using Maven: {% highlight bash %} $ mvn package -$ mvn exec:java -Dexec.mainClass="SimpleJob" +$ mvn exec:java -Dexec.mainClass="SimpleApp" ... Lines with a: 46, Lines with b: 23 {% endhighlight %} -# A Standalone Job In Python -Now we will show how to write a standalone job using the Python API (PySpark). +# A Standalone App in Python +Now we will show how to write a standalone application using the Python API (PySpark). -As an example, we'll create a simple Spark job, `SimpleJob.py`: +As an example, we'll create a simple Spark application, `SimpleApp.py`: {% highlight python %} -"""SimpleJob.py""" +"""SimpleApp.py""" from pyspark import SparkContext logFile = "$YOUR_SPARK_HOME/README.md" # Should be some file on your system -sc = SparkContext("local", "Simple job") +sc = SparkContext("local", "Simple App") logData = sc.textFile(logFile).cache() numAs = logData.filter(lambda s: 'a' in s).count() @@ -270,25 +270,25 @@ print "Lines with a: %i, lines with b: %i" % (numAs, numBs) {% endhighlight %} -This job simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. +This program simply counts the number of lines containing 'a' and the number containing 'b' in a system log file. Note that you'll need to replace $YOUR_SPARK_HOME with the location where Spark is installed. As with the Scala and Java examples, we use a SparkContext to create RDDs. We can pass Python functions to Spark, which are automatically serialized along with any variables that they reference. -For jobs that use custom classes or third-party libraries, we can add those code dependencies to SparkContext to ensure that they will be available on remote machines; this is described in more detail in the [Python programming guide](python-programming-guide.html). -`SimpleJob` is simple enough that we do not need to specify any code dependencies. +For applications that use custom classes or third-party libraries, we can add those code dependencies to SparkContext to ensure that they will be available on remote machines; this is described in more detail in the [Python programming guide](python-programming-guide.html). +`SimpleApp` is simple enough that we do not need to specify any code dependencies. 
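For comparison, a Scala or Java application with third-party dependencies lists the extra JARs when constructing its SparkContext so that Spark ships them to the worker nodes. A minimal sketch, with placeholder master URL and JAR paths:

{% highlight scala %}
import org.apache.spark.SparkContext

object AppWithDependencies {
  def main(args: Array[String]) {
    // The application JAR plus any third-party JARs it needs at runtime;
    // Spark copies these to the worker nodes.
    val jars = List("target/my-app.jar", "lib/some-third-party-lib.jar")

    val sc = new SparkContext("spark://masterhost:7077", "App With Dependencies",
      "/path/to/spark", jars)

    println(sc.textFile("/path/to/input.txt").count())
    sc.stop()
  }
}
{% endhighlight %}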
-We can run this job using the `pyspark` script: +We can run this application using the `pyspark` script: {% highlight python %} $ cd $SPARK_HOME -$ ./pyspark SimpleJob.py +$ ./pyspark SimpleApp.py ... Lines with a: 46, Lines with b: 23 {% endhighlight python %} -# Running Jobs on a Cluster +# Running on a Cluster -There are a few additional considerations when running jobs on a +There are a few additional considerations when running applicaitons on a [Spark](spark-standalone.html), [YARN](running-on-yarn.html), or [Mesos](running-on-mesos.html) cluster. @@ -306,7 +306,7 @@ your dependent jars one-by-one when creating a SparkContext. ### Setting Configuration Options Spark includes several configuration options which influence the behavior -of your job. These should be set as +of your application. These should be set as [JVM system properties](configuration.html#system-properties) in your program. The options will be captured and shipped to all slave nodes. diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md index b31f78e8bf..eee7a45891 100644 --- a/docs/running-on-mesos.md +++ b/docs/running-on-mesos.md @@ -17,10 +17,10 @@ Spark can run on private clusters managed by the [Apache Mesos](http://incubator * On all nodes, edit `/var/mesos/conf/mesos.conf` and add the line `master=HOST:5050`, where HOST is your master node. * Run `/sbin/mesos-start-cluster.sh` on your master to start Mesos. If all goes well, you should see Mesos's web UI on port 8080 of the master machine. * See Mesos's README file for more information on deploying it. -8. To run a Spark job against the cluster, when you create your `SparkContext`, pass the string `mesos://HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your JAR's code as the fourth (these will automatically get copied to the workers). For example: +8. To run a Spark application against the cluster, when you create your `SparkContext`, pass the string `mesos://HOST:5050` as the first parameter, where `HOST` is the machine running your Mesos master. In addition, pass the location of Spark on your nodes as the third parameter, and a list of JAR files containing your JAR's code as the fourth (these will automatically get copied to the workers). For example: {% highlight scala %} -new SparkContext("mesos://HOST:5050", "My Job Name", "/home/user/spark", List("my-job.jar")) +new SparkContext("mesos://HOST:5050", "My App Name", "/home/user/spark", List("my-app.jar")) {% endhighlight %} If you want to run Spark on Amazon EC2, you can use the Spark [EC2 launch scripts](ec2-scripts.html), which provide an easy way to launch a cluster with Mesos, Spark, and HDFS pre-configured. This will get you a cluster in about five minutes without any configuration on your part. @@ -28,24 +28,23 @@ If you want to run Spark on Amazon EC2, you can use the Spark [EC2 launch script # Mesos Run Modes Spark can run over Mesos in two modes: "fine-grained" and "coarse-grained". In fine-grained mode, which is the default, -each Spark task runs as a separate Mesos task. This allows multiple instances of Spark (and other applications) to share -machines at a very fine granularity, where each job gets more or fewer machines as it ramps up, but it comes with an -additional overhead in launching each task, which may be inappropriate for low-latency applications that aim for -sub-second Spark operations (e.g. 
interactive queries or serving web requests). The coarse-grained mode will instead +each Spark task runs as a separate Mesos task. This allows multiple instances of Spark (and other frameworks) to share +machines at a very fine granularity, where each application gets more or fewer machines as it ramps up, but it comes with an +additional overhead in launching each task, which may be inappropriate for low-latency applications (e.g. interactive queries or serving web requests). The coarse-grained mode will instead launch only *one* long-running Spark task on each Mesos machine, and dynamically schedule its own "mini-tasks" within it. The benefit is much lower startup overhead, but at the cost of reserving the Mesos resources for the complete duration -of the job. +of the application. To run in coarse-grained mode, set the `spark.mesos.coarse` system property to true *before* creating your SparkContext: {% highlight scala %} System.setProperty("spark.mesos.coarse", "true") -val sc = new SparkContext("mesos://HOST:5050", "Job Name", ...) +val sc = new SparkContext("mesos://HOST:5050", "App Name", ...) {% endhighlight %} In addition, for coarse-grained mode, you can control the maximum number of resources Spark will acquire. By default, -it will acquire *all* cores in the cluster (that get offered by Mesos), which only makes sense if you run just a single -job at a time. You can cap the maximum number of cores using `System.setProperty("spark.cores.max", "10")` (for example). +it will acquire *all* cores in the cluster (that get offered by Mesos), which only makes sense if you run just one +application at a time. You can cap the maximum number of cores using `System.setProperty("spark.cores.max", "10")` (for example). Again, this must be done *before* initializing a SparkContext. diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md index f7768e55fc..03647a2ad2 100644 --- a/docs/scala-programming-guide.md +++ b/docs/scala-programming-guide.md @@ -87,10 +87,10 @@ For running on YARN, Spark launches an instance of the standalone deploy cluster ### Deploying Code on a Cluster -If you want to run your job on a cluster, you will need to specify the two optional parameters to `SparkContext` to let it find your code: +If you want to run your application on a cluster, you will need to specify the two optional parameters to `SparkContext` to let it find your code: * `sparkHome`: The path at which Spark is installed on your worker machines (it should be the same on all of them). -* `jars`: A list of JAR files on the local machine containing your job's code and any dependencies, which Spark will deploy to all the worker nodes. You'll need to package your job into a set of JARs using your build system. For example, if you're using SBT, the [sbt-assembly](https://github.com/sbt/sbt-assembly) plugin is a good way to make a single JAR with your code and dependencies. +* `jars`: A list of JAR files on the local machine containing your application's code and any dependencies, which Spark will deploy to all the worker nodes. You'll need to package your application into a set of JARs using your build system. For example, if you're using SBT, the [sbt-assembly](https://github.com/sbt/sbt-assembly) plugin is a good way to make a single JAR with your code and dependencies. If you run `spark-shell` on a cluster, you can add JARs to it by specifying the `ADD_JARS` environment variable before you launch it. This variable should contain a comma-separated list of JARs. 
For example, `ADD_JARS=a.jar,b.jar ./spark-shell` will launch a shell with `a.jar` and `b.jar` on its classpath. In addition, any new classes you define in the shell will automatically be distributed. diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md index 69e1291580..81cdbefd0c 100644 --- a/docs/spark-standalone.md +++ b/docs/spark-standalone.md @@ -48,11 +48,11 @@ Finally, the following configuration options can be passed to the master and wor - + - + @@ -98,15 +98,15 @@ You can optionally configure the cluster further by setting environment variable - + - + - + @@ -133,9 +133,9 @@ You can optionally configure the cluster further by setting environment variable **Note:** The launch scripts do not currently support Windows. To run a Spark cluster on Windows, start the master and workers by hand. -# Connecting a Job to the Cluster +# Connecting an Application to the Cluster -To run a job on the Spark cluster, simply pass the `spark://IP:PORT` URL of the master as to the [`SparkContext` +To run an application on the Spark cluster, simply pass the `spark://IP:PORT` URL of the master as to the [`SparkContext` constructor](scala-programming-guide.html#initializing-spark). To run an interactive Spark shell against the cluster, run the following command: @@ -147,12 +147,14 @@ automatically set MASTER from the `SPARK_MASTER_IP` and `SPARK_MASTER_PORT` vari You can also pass an option `-c ` to control the number of cores that spark-shell uses on the cluster. -# Job Scheduling +# Resource Scheduling -The standalone cluster mode currently only supports a simple FIFO scheduler across jobs. -However, to allow multiple concurrent jobs, you can control the maximum number of resources each Spark job will acquire. -By default, it will acquire *all* the cores in the cluster, which only makes sense if you run just a single -job at a time. You can cap the number of cores using `System.setProperty("spark.cores.max", "10")` (for example). +The standalone cluster mode currently only supports a simple FIFO scheduler across applications. +However, to allow multiple concurrent users, you can control the maximum number of resources each +application will acquire. +By default, it will acquire *all* cores in the cluster, which only makes sense if you just run one +application at a time. You can cap the number of cores using +`System.setProperty("spark.cores.max", "10")` (for example). This value must be set *before* initializing your SparkContext. -- cgit v1.2.3 From f261d2a60fe9c0ec81c7a93a24fd79062c31f7ae Mon Sep 17 00:00:00 2001 From: Matei Zaharia Date: Sun, 8 Sep 2013 00:41:18 -0400 Subject: Added cluster overview doc, made logo higher-resolution, and added more details on monitoring --- docs/_layouts/global.html | 3 +- docs/cluster-overview.md | 65 +++++++++++++++++++++++++++++++++++++++++ docs/img/cluster-overview.png | Bin 0 -> 28816 bytes docs/img/cluster-overview.pptx | Bin 0 -> 49698 bytes docs/img/spark-logo-hd.png | Bin 0 -> 13512 bytes docs/index.md | 5 ---- docs/monitoring.md | 30 +++++++++++++------ 7 files changed, 88 insertions(+), 15 deletions(-) create mode 100644 docs/img/cluster-overview.png create mode 100644 docs/img/cluster-overview.pptx create mode 100644 docs/img/spark-logo-hd.png (limited to 'docs') diff --git a/docs/_layouts/global.html b/docs/_layouts/global.html index 5034111ecb..238ad26de0 100755 --- a/docs/_layouts/global.html +++ b/docs/_layouts/global.html @@ -51,7 +51,7 @@
Property NameDefaultMeaning
spark.mesos.coarsefalse - If set to "true", runs over Mesos clusters in - "coarse-grained" sharing mode, - where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per Spark task. - This gives lower-latency scheduling for short queries, but leaves resources in use for the whole - duration of the Spark job. -
spark.default.parallelism 8
spark.mesos.coarsefalse + If set to "true", runs over Mesos clusters in + "coarse-grained" sharing mode, + where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per Spark task. + This gives lower-latency scheduling for short queries, but leaves resources in use for the whole + duration of the Spark job. +
spark.ui.port 3030
spark.scheduler.modeFIFO + The scheduling mode between + jobs submitted to the same SparkContext. Can be set to FAIR + to use fair sharing instead of queueing jobs one after another. Useful for + multi-user services. +
spark.reducer.maxMbInFlight 48
-c CORES, --cores CORESTotal CPU cores to allow Spark jobs to use on the machine (default: all available); only on workerTotal CPU cores to allow Spark applicatons to use on the machine (default: all available); only on worker
-m MEM, --memory MEMTotal amount of memory to allow Spark jobs to use on the machine, in a format like 1000M or 2G (default: your machine's total RAM minus 1 GB); only on workerTotal amount of memory to allow Spark applicatons to use on the machine, in a format like 1000M or 2G (default: your machine's total RAM minus 1 GB); only on worker
-d DIR, --work-dir DIR
  <tr>
    <td>SPARK_WORKER_DIR</td>
-    <td>Directory to run jobs in, which will include both logs and scratch space (default: SPARK_HOME/work).</td>
+    <td>Directory to run applications in, which will include both logs and scratch space (default: SPARK_HOME/work).</td>
  </tr>
  <tr>
    <td>SPARK_WORKER_CORES</td>
-    <td>Total number of cores to allow Spark jobs to use on the machine (default: all available cores).</td>
+    <td>Total number of cores to allow Spark applications to use on the machine (default: all available cores).</td>
  </tr>
  <tr>
    <td>SPARK_WORKER_MEMORY</td>
-    <td>Total amount of memory to allow Spark jobs to use on the machine, e.g. 1000m, 2g (default: total memory minus 1 GB); note that each job's individual memory is configured using its spark.executor.memory property.</td>
+    <td>Total amount of memory to allow Spark applications to use on the machine, e.g. 1000m, 2g (default: total memory minus 1 GB); note that each application's individual memory is configured using its spark.executor.memory property.</td>
  </tr>
SPARK_WORKER_WEBUI_PORT

CDH Releases

- - +
<tr><th>Version</th><th>HADOOP_VERSION</th></tr>
+
@@ -34,7 +35,7 @@ the _exact_ Hadoop version you are running to avoid any compatibility errors.
-
+
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 1b069ce982..8f782db5b8 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -36,7 +36,7 @@ scala> textFile.count() // Number of items in this RDD
 res0: Long = 74
 
 scala> textFile.first() // First item in this RDD
-res1: String = Welcome to the Spark documentation!
+res1: String = # Apache Spark
 {% endhighlight %}
 
 Now let's use a transformation. We will use the [`filter`](scala-programming-guide.html#transformations) transformation to return a new RDD with a subset of the items in the file.
-- cgit v1.2.3
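A minimal sketch of the `filter` transformation mentioned at the end of the quick-start excerpt above, assuming the `textFile` RDD is built from Spark's README.md as in the quick start guide (the output values depend on the file):

{% highlight scala %}
scala> val textFile = sc.textFile("README.md")

scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))

scala> linesWithSpark.count() // number of matching lines; the exact value depends on the README
{% endhighlight %}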
<tr><th>Release</th><th>Version code</th></tr>
<tr><td>CDH 4.X.X (YARN mode)</td><td>2.0.0-cdh4.X.X</td></tr>
<tr><td>CDH 4.X.X</td><td>2.0.0-mr1-cdh4.X.X</td></tr>
<tr><td>CDH 3u6</td><td>0.20.2-cdh3u6</td></tr>

HDP Releases

- + @@ -44,7 +45,47 @@ the _exact_ Hadoop version you are running to avoid any compatibility errors.
<tr><th>Version</th><th>HADOOP_VERSION</th></tr>
<tr><th>Release</th><th>Version code</th></tr>
<tr><td>HDP 1.3</td><td>1.2.0</td></tr>
<tr><td>HDP 1.2</td><td>1.1.2</td></tr>
<tr><td>HDP 1.1</td><td>1.0.3</td></tr>
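A hypothetical `build.sbt` fragment showing how one of the version codes above would be used in an application build; the concrete version 2.0.0-mr1-cdh4.2.0 is an assumed example, and the general SBT and Maven forms appear in the section that follows:

{% highlight scala %}
// Hypothetical build.sbt fragment for an application talking to a CDH 4.2.0 (MRv1) cluster.
// The version string follows the "2.0.0-mr1-cdh4.X.X" pattern from the table above.
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.0.0-mr1-cdh4.2.0"

// CDH artifacts are served from Cloudera's Maven repository.
resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
{% endhighlight %}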
+# Linking Applications to the Hadoop Version
+
+In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that
+version of `hadoop-client` to any Spark applications you run, so they can also talk to the HDFS version
+on the cluster. If you are using CDH, you also need to add the Cloudera Maven repository.
+This looks as follows in SBT:
+
+{% highlight scala %}
+libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "<version>"
+
+// If using CDH, also add Cloudera repo
+resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
+{% endhighlight %}
+
+Or in Maven:
+
+{% highlight xml %}
+<project>
+  <dependencies>
+    ...
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-client</artifactId>
+      <version>[version]</version>
+    </dependency>
+  </dependencies>
+
+  <repositories>
+    ...
+    <repository>
+      <id>Cloudera repository</id>
+      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
+    </repository>
+  </repositories>
+</project>
+{% endhighlight %}
+
 # Where to Run Spark
+
 As described in the [Hardware Provisioning](hardware-provisioning.html#storage-systems) guide,
 Spark can run in a variety of deployment modes:
@@ -57,6 +98,7 @@ Spark can run in a variety of deployment modes:
 These options are identical for those using CDH and HDP.
 
 # Inheriting Cluster Configuration
+
 If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that should be included on Spark's classpath:
-- cgit v1.2.3

From af8ffdb73c28012c9f5cf232ca7d4b4c6763628d Mon Sep 17 00:00:00 2001
From: Matei Zaharia
Date: Sun, 8 Sep 2013 13:36:50 -0700
Subject: Review comments

---
 docs/cluster-overview.md | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 docs/job-scheduling.md   |  2 +-
 2 files changed, 48 insertions(+), 1 deletion(-)

(limited to 'docs')

diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index 143f93171f..cf6b48c05e 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -68,3 +68,50 @@ access this UI. The [monitoring guide](monitoring.html) also describes other mon
 Spark gives control over resource allocation both _across_ applications (at the level of the cluster manager) and _within_ applications (if multiple computations are happening on the same SparkContext). The [job scheduling overview](job-scheduling.html) describes this in more detail.
+
+# Glossary
+
+The following table summarizes terms you'll see used to refer to cluster concepts:
+
+<table>
+  <tr><th>Term</th><th>Meaning</th></tr>
+  <tr>
+    <td>Application</td>
+    <td>Any user program invoking Spark</td>
+  </tr>
+  <tr>
+    <td>Driver program</td>
+    <td>The process running the main() function of the application and creating the SparkContext</td>
+  </tr>
+  <tr>
+    <td>Cluster manager</td>
+    <td>An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN)</td>
+  </tr>
+  <tr>
+    <td>Worker node</td>
+    <td>Any node that can run application code in the cluster</td>
+  </tr>
+  <tr>
+    <td>Executor</td>
+    <td>A process launched for an application on a worker node, that runs tasks and keeps data in memory
+      or disk storage across them. Each application has its own executors.</td>
+  </tr>
+  <tr>
+    <td>Task</td>
+    <td>A unit of work that will be sent to one executor</td>
+  </tr>
+  <tr>
+    <td>Job</td>
+    <td>A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action
+      (e.g. save, collect); you'll see this term used in the driver's logs.</td>
+  </tr>
+  <tr>
+    <td>Stage</td>
+    <td>Each job gets divided into smaller sets of tasks called stages that depend on each other
+      (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.</td>
+  </tr>
+</table>
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 11b733137d..d304c5497b 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -25,7 +25,7 @@ different options to manage allocation, depending on the cluster manager.
 The simplest option, available on all cluster managers, is _static partitioning_ of resources. With this approach, each application is given a maximum amount of resources it can use, and holds onto them
-for its whole duration. This is the only approach available in Spark's [standalone](spark-standalone.html)
+for its whole duration. This is the approach used in Spark's [standalone](spark-standalone.html)
 and [YARN](running-on-yarn.html) modes, as well as the [coarse-grained Mesos mode](running-on-mesos.html#mesos-run-modes). Resource allocation can be configured as follows, based on the cluster type:
-- cgit v1.2.3

From 5a587fb98dd4a3ff675d941e1d4e617bd30b28fc Mon Sep 17 00:00:00 2001
From: Matei Zaharia
Date: Sun, 8 Sep 2013 13:51:57 -0700
Subject: Updated cluster diagram to show caches

---
 docs/img/cluster-overview.png  | Bin 28816 -> 28011 bytes
 docs/img/cluster-overview.pptx | Bin 49698 -> 51771 bytes
 2 files changed, 0 insertions(+), 0 deletions(-)

(limited to 'docs')

diff --git a/docs/img/cluster-overview.png b/docs/img/cluster-overview.png
index 2a1cf02fcf..368274068e 100644
Binary files a/docs/img/cluster-overview.png and b/docs/img/cluster-overview.png differ
diff --git a/docs/img/cluster-overview.pptx b/docs/img/cluster-overview.pptx
index 2a61db352d..af3c462cd9 100644
Binary files a/docs/img/cluster-overview.pptx and b/docs/img/cluster-overview.pptx differ
-- cgit v1.2.3

From b458854977c437e85fd89056e5d40383c8fa962e Mon Sep 17 00:00:00 2001
From: Matei Zaharia
Date: Sun, 8 Sep 2013 21:25:49 -0700
Subject: Fix some review comments

---
 docs/cluster-overview.md | 2 +-
 docs/quick-start.md      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

(limited to 'docs')

diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index cf6b48c05e..7025c23657 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -80,7 +80,7 @@ The following table summarizes terms you'll see used to refer to cluster concept
     <td>Application</td>
-    <td>Any user program invoking Spark</td>
+    <td>User program built on Spark. Consists of a driver program and executors on the cluster.</td>
Driver program