path: root/docs
Commit log (each entry: commit message; author; date; files changed, -lines/+lines)
* Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-11; 2 files, -5/+35)
  * Merge pull request #377 from andrewor14/master (Patrick Wendell, 2014-01-10; 1 file, -2/+21)

    External Sorting for Aggregator and CoGroupedRDDs (Revisited). (This pull request is re-opened from https://github.com/apache/incubator-spark/pull/303, which was closed because Jenkins/GitHub was misbehaving.)

    The target issue for this patch is the out-of-memory exceptions triggered by aggregate operations such as reduce, groupBy, join, and cogroup. The existing AppendOnlyMap used by these operations resides purely in memory and grows with the size of the input data until the amount of allocated memory is exceeded. Under large workloads, this problem is aggravated by the fact that OOM frequently occurs only after a very long (> 1 hour) map phase, in which case the entire job must be restarted.

    The solution is to spill the contents of this map to disk once a certain memory threshold is exceeded. This functionality is provided by ExternalAppendOnlyMap, which additionally sorts the buffer before writing it out to disk, and later merges these buffers back in sorted order. Under normal circumstances, in which OOM is not triggered, ExternalAppendOnlyMap is simply a wrapper around AppendOnlyMap and incurs little overhead. Only when memory usage is expected to exceed the given threshold does ExternalAppendOnlyMap spill to disk.
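The spill-and-merge scheme described above (buffer entries in an in-memory map, sort and spill to disk past a threshold, then merge the sorted runs on iteration) can be sketched in a few lines. This is an illustrative toy in Python, not Spark's actual ExternalAppendOnlyMap: the class name, the `merge_value` combiner, and the entry-count threshold (Spark uses an estimated byte size) are all stand-ins.

```python
import heapq
import pickle
import tempfile

class ExternalAppendOnlyMap:
    """Toy spillable append-only map (illustrative, not Spark's API).

    Values for a repeated key are combined with merge_value; when the
    in-memory map grows past max_entries it is sorted by key and spilled
    to a temporary file. Iteration merge-sorts the in-memory map with
    all spill files, combining values for keys seen in several runs.
    """

    def __init__(self, merge_value, max_entries=1000):
        self.merge_value = merge_value
        self.max_entries = max_entries
        self.current = {}      # in-memory buffer
        self.spills = []       # sorted runs on disk

    def insert(self, key, value):
        if key in self.current:
            self.current[key] = self.merge_value(self.current[key], value)
        else:
            self.current[key] = value
        if len(self.current) > self.max_entries:
            self._spill()

    def _spill(self):
        # Write the buffer out as one sorted run, then reset it.
        f = tempfile.TemporaryFile()
        for kv in sorted(self.current.items()):
            pickle.dump(kv, f)
        f.seek(0)
        self.spills.append(f)
        self.current = {}

    @staticmethod
    def _read(f):
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

    def __iter__(self):
        streams = [iter(sorted(self.current.items()))]
        streams += [self._read(f) for f in self.spills]
        # Merge the sorted runs; adjacent pairs with the same key are combined.
        pending = None
        for k, v in heapq.merge(*streams):
            if pending is None:
                pending = (k, v)
            elif pending[0] == k:
                pending = (k, self.merge_value(pending[1], v))
            else:
                yield pending
                pending = (k, v)
        if pending is not None:
            yield pending
```

With a small threshold the map spills repeatedly, yet iteration still produces fully combined results, which is the property the PR relies on for reduce/groupBy/cogroup.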
    * Update documentation for externalSorting (Andrew Or, 2014-01-10; 1 file, -3/+2)
    * Address Patrick's and Reynold's comments (Andrew Or, 2014-01-10; 1 file, -2/+22)

      Aside from trivial formatting changes, use nulls instead of Options for DiskMapIterator, and add documentation for spark.shuffle.externalSorting and spark.shuffle.memoryFraction. Also set spark.shuffle.memoryFraction to 0.3 and spark.storage.memoryFraction to 0.6.
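The properties named in this commit end up as ordinary Spark configuration settings; a sketch of how they fit together, using the values from the commit (the comments paraphrase the commit's description):

```
spark.shuffle.externalSorting   true   # spill map-side aggregation buffers to disk past the threshold
spark.shuffle.memoryFraction    0.3    # fraction of memory available to shuffle aggregation buffers
spark.storage.memoryFraction    0.6    # fraction of memory reserved for block storage / caching
```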
  * Merge pull request #371 from tgravescs/yarn_client_addjar_misc_fixes (Thomas Graves, 2014-01-10; 1 file, -2/+13)

    Yarn client addJar and misc fixes: fix the addJar functionality in yarn-client mode, add support for the other options supported in yarn-standalone mode, set the application type on YARN in Hadoop 2.x, add documentation, and change the heartbeat interval to use the same code as yarn-standalone so it doesn't take so long to get containers and exit.
    * yarn-client addJar fix and misc other (Thomas Graves, 2014-01-09; 1 file, -2/+13)
  * Merge pull request #378 from pwendell/consolidate_on (Patrick Wendell, 2014-01-09; 1 file, -1/+1)

    Enable shuffle consolidation by default; bump this to being enabled for 0.9.0.

    * Enable shuffle consolidation by default (Patrick Wendell, 2014-01-09; 1 file, -1/+1)
* Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-09; 15 files, -149/+433)

  Conflicts: docs/mllib-guide.md
  * Merge pull request #353 from pwendell/ipython-simplify (Patrick Wendell, 2014-01-09; 1 file, -2/+3)

    Simplify and fix pyspark script. This patch removes compatibility for IPython < 1.0 but fixes the launch script and makes it much simpler. Tested using the three commands in the PySpark documentation page:

    1. IPYTHON=1 ./pyspark
    2. IPYTHON_OPTS="notebook" ./pyspark
    3. IPYTHON_OPTS="notebook --pylab inline" ./pyspark

    There are two changes:
    - We rely on the PYTHONSTARTUP env var to start PySpark.
    - Removed the quotes around $IPYTHON_OPTS; having quotes gloms them together as a single argument passed to `exec`, which seemed to cause IPython to fail (it instead expects them as multiple arguments).

    * Simplify and fix pyspark script (Patrick Wendell, 2014-01-07; 1 file, -2/+3)
  * Merge pull request #293 from pwendell/standalone-driver (Patrick Wendell, 2014-01-09; 1 file, -5/+33)

    SPARK-998: Support Launching Driver Inside of Standalone Mode. [NOTE: I need to bring the tests up to date with new changes, so for now they will fail.]

    This patch provides support for launching driver programs inside of a standalone cluster manager. It also supports monitoring and re-launching of driver programs, which is useful for long-running, recoverable applications such as Spark Streaming jobs. For those jobs, this patch allows a deployment mode which is resilient to the failure of any worker node, failure of a master node (provided a multi-master setup), and even failures of the application itself, provided they are recoverable on a restart. Driver information, such as the status and logs from a driver, is displayed in the UI.

    There are a few small TODOs here, but the code is generally feature-complete:
    - Bring tests up to date and add test coverage.
    - Restarting on failure should be optional and maybe off by default.
    - See if we can re-use akka connections to facilitate clients behind a firewall.

    A sensible place to start for review would be the `DriverClient` class, which presents users the ability to launch their driver program. I've also added an example program (`DriverSubmissionTest`) that allows you to test this locally and play around with killing workers, etc. Most of the code is devoted to persisting driver state in the cluster manager, exposing it in the UI, and dealing correctly with various types of failures.

    Instructions to test locally:
    - `sbt/sbt assembly/assembly examples/assembly`
    - Start a local version of the standalone cluster manager, then launch the example driver:

    ```
    ./spark-class org.apache.spark.deploy.client.DriverClient \
      -j -Dspark.test.property=something \
      -e SPARK_TEST_KEY=SOMEVALUE \
      launch spark://10.99.1.14:7077 \
      ../path-to-examples-assembly-jar \
      org.apache.spark.examples.DriverSubmissionTest 1000 some extra options --some-option-here -X 13
    ```

    - Go in the UI and make sure it started correctly, look at the output, etc.
    - Kill workers, the driver program, masters, etc.
    * Merge remote-tracking branch 'apache-github/master' into standalone-driver (Patrick Wendell, 2014-01-08; 14 files, -81/+348)

      Conflicts:
        core/src/test/scala/org/apache/spark/deploy/JsonProtocolSuite.scala
        pom.xml
    * Fixes (Patrick Wendell, 2014-01-08; 1 file, -2/+3)
    * Some doc fixes (Patrick Wendell, 2014-01-06; 1 file, -3/+2)
    * Merge remote-tracking branch 'apache-github/master' into standalone-driver (Patrick Wendell, 2014-01-06; 23 files, -155/+228)

      Conflicts:
        core/src/main/scala/org/apache/spark/deploy/client/AppClient.scala
        core/src/main/scala/org/apache/spark/deploy/client/TestClient.scala
        core/src/main/scala/org/apache/spark/deploy/master/Master.scala
        core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
        core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
    * Documentation and adding supervise option (Patrick Wendell, 2013-12-29; 1 file, -5/+33)
  * Fixing config option "retained_stages" => "retainedStages" (Patrick Wendell, 2014-01-08; 1 file, -1/+1)

    This is a very esoteric option and it's out of sync with the style we use, so it seems fitting to fix it for 0.9.0.
  * Merge pull request #345 from colorant/yarn (Thomas Graves, 2014-01-08; 1 file, -0/+2)

    Support distributing extra files to workers for yarn-client mode, so that the user doesn't need to package every dependency into one assembly jar as the Spark app jar.

    * Export --file for YarnClient mode to support sending extra files to workers on the yarn cluster (Raymond Liu, 2014-01-07; 1 file, -0/+2)
  * Merge pull request #322 from falaki/MLLibDocumentationImprovement (Patrick Wendell, 2014-01-07; 1 file, -56/+274)

    SPARK-1009: Updated MLlib docs to show how to use it in Python. In addition, added detailed examples for regression, clustering and recommendation algorithms in a separate Scala section. Fixed a few minor issues with existing documentation.
    * Fixed merge conflict (Hossein Falaki, 2014-01-07; 19 files, -149/+233)
    * Added proper evaluation example for collaborative filtering and fixed typo (Hossein Falaki, 2014-01-06; 1 file, -4/+8)
    * Added table of contents and minor fixes (Hossein Falaki, 2014-01-03; 1 file, -8/+16)
    * Commented out the last part of the collaborative filtering examples that led to errors (Hossein Falaki, 2014-01-02; 1 file, -5/+6)
    * Added Scala and Python examples for mllib (Hossein Falaki, 2014-01-02; 1 file, -52/+261)
  * Address review comments (Matei Zaharia, 2014-01-07; 1 file, -2/+2)
  * Add way to limit default # of cores used by applications on standalone mode (Matei Zaharia, 2014-01-07; 4 files, -8/+42)

    Also documents the spark.deploy.spreadOut option.
  * Merge pull request #339 from ScrapCodes/conf-improvements (Patrick Wendell, 2014-01-07; 1 file, -0/+15)

    Conf improvements. There are two new features:
    1. Allow users to set arbitrary akka configurations via spark conf.
    2. Allow configuration to be printed in logs for diagnosis.
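A sketch of what the two features look like as configuration properties. The `spark.logConf` key matches the log-printing feature as it shipped in Spark; the akka key shown is only illustrative of the pass-through mechanism, not a verified setting name:

```
spark.logConf   true     # print the effective configuration in the logs at startup
akka.loglevel   DEBUG    # illustrative: akka-prefixed keys are handed through to Akka's config
```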
    * Formatting-related fixes suggested by Patrick (Prashant Sharma, 2014-01-07; 1 file, -1/+1)
    * Allow configuration to be printed in logs for diagnosis (Prashant Sharma, 2014-01-07; 1 file, -0/+7)
    * Allow users to set arbitrary akka configurations via spark conf (Prashant Sharma, 2014-01-07; 1 file, -0/+8)
  * Merge pull request #331 from holdenk/master (Reynold Xin, 2014-01-07; 9 files, -18/+18)

    Add a script to download sbt if not present on the system. As per the discussion on the dev mailing list, this script will use the system sbt if present, or otherwise attempt to install the sbt launcher. The fallback error message in the event it fails instructs the user to install sbt. While the URLs it fetches from aren't controlled by the spark project directly, they are stable and the current authoritative sources.

    * Code review feedback (Holden Karau, 2014-01-05; 9 files, -18/+18)
  * Clarify spark.cores.max (Andrew Ash, 2014-01-06; 1 file, -1/+2)

    It controls the count of cores across the cluster, not on a per-machine basis.
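For instance, on a cluster of ten 8-core workers, a setting like the following (the value is illustrative) caps the application at 16 cores total across the whole cluster, not 16 per machine:

```
spark.cores.max   16   # cluster-wide cap, not a per-worker limit
```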
  * Merge remote-tracking branch 'apache-github/master' into remove-binaries (Patrick Wendell, 2014-01-03; 12 files, -64/+57)

    Conflicts:
      core/src/test/scala/org/apache/spark/DriverSuite.scala
      docs/python-programming-guide.md
    * Merge pull request #317 from ScrapCodes/spark-915-segregate-scripts (Patrick Wendell, 2014-01-03; 11 files, -52/+52)

      Spark-915 segregate scripts
      * sbin/spark-class* -> bin/spark-class* (Prashant Sharma, 2014-01-03; 2 files, -3/+3)
      * A few leftover document changes (Prashant Sharma, 2014-01-02; 1 file, -1/+1)
      * pyspark -> bin/pyspark (Prashant Sharma, 2014-01-02; 3 files, -17/+17)
      * run-example -> bin/run-example (Prashant Sharma, 2014-01-02; 6 files, -12/+12)
      * spark-shell -> bin/spark-shell (Prashant Sharma, 2014-01-02; 7 files, -13/+13)
      * Merge branch 'scripts-reorg' of github.com:shane-huang/incubator-spark into spark-915-segregate-scripts (Prashant Sharma, 2014-01-02; 2 files, -9/+9)

        Conflicts:
          bin/spark-shell
          core/pom.xml
          core/src/main/scala/org/apache/spark/SparkContext.scala
          core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
          core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala
          core/src/test/scala/org/apache/spark/DriverSuite.scala
          python/run-tests
          sbin/compute-classpath.sh
          sbin/spark-class
          sbin/stop-slaves.sh
        * Merge branch 'reorgscripts' into scripts-reorg (shane-huang, 2013-09-27; 2 files, -9/+9)
        * add admin scripts to sbin (shane-huang, 2013-09-23; 1 file, -6/+6)

          Signed-off-by: shane-huang <shengsheng.huang@intel.com>

        * added spark-class and spark-executor to sbin (shane-huang, 2013-09-23; 2 files, -3/+3)

          Signed-off-by: shane-huang <shengsheng.huang@intel.com>
    * fix docs for yarn (Raymond Liu, 2014-01-03; 1 file, -3/+0)
    * Using name yarn-alpha/yarn instead of yarn-2.0/yarn-2.2 (Raymond Liu, 2014-01-03; 1 file, -4/+4)
    * Update maven build documentation (Raymond Liu, 2014-01-03; 2 files, -8/+4)
    * Fix yarn/README.md and update docs/running-on-yarn.md (Raymond Liu, 2014-01-03; 1 file, -1/+1)