Commit message | Author | Age | Files | Lines
* Removed unused code. (Mosharaf Chowdhury, 2013-10-17, 2 files, -14/+11)
| | |   Changes to match Spark coding style.
* BroadcastTest2 --> BroadcastTest (Mosharaf Chowdhury, 2013-10-16, 2 files, -62/+12)
|
* Fixes for the new BlockId naming convention. (Mosharaf Chowdhury, 2013-10-16, 2 files, -7/+14)
|
* Default blockSize is 4MB. (Mosharaf Chowdhury, 2013-10-16, 2 files, -1/+60)
| | | |   BroadcastTest2 example added for testing broadcasts.
* Removed unnecessary code, and added a comment on the memory-latency tradeoff. (Mosharaf Chowdhury, 2013-10-16, 1 file, -4/+6)
|
* Torrent-ish broadcast based on BlockManager. (Mosharaf Chowdhury, 2013-10-16, 3 files, -4/+251)
|
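The "torrent-ish" broadcast above stores the serialized broadcast value as fixed-size blocks (the adjacent commit sets a 4 MB default) so executors can fetch pieces from each other rather than all pulling from the driver. A minimal, hypothetical sketch of the chunking step (the names here are illustrative, not Spark's API):

```scala
// Hypothetical sketch of the block-chunking idea behind a torrent-style
// broadcast: cut the serialized payload into fixed-size blocks so peers
// can exchange pieces, then reassemble them in order.
object BroadcastChunks {
  val DefaultBlockSize: Int = 4 * 1024 * 1024 // 4 MB, per the commit above

  // Split a serialized payload into blockSize-sized chunks.
  def chunk(payload: Array[Byte], blockSize: Int = DefaultBlockSize): Array[Array[Byte]] =
    payload.grouped(blockSize).toArray

  // Reassemble chunks (possibly fetched from different peers) in order.
  def reassemble(chunks: Array[Array[Byte]]): Array[Byte] =
    chunks.flatten

  def main(args: Array[String]): Unit = {
    val data = Array.tabulate[Byte](10)(_.toByte)
    val pieces = chunk(data, blockSize = 4)
    println(pieces.map(_.length).mkString(","))    // prints 4,4,2
    println(reassemble(pieces).sameElements(data)) // prints true
  }
}
```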
* Merge pull request #65 from tgravescs/fixYarn (Matei Zaharia, 2013-10-16, 1 file, -2/+2)
|\
| |   Fix the yarn build after renaming StandAloneX to CoarseGrainedX from pull request 34.
| * Fix yarn build (tgravescs, 2013-10-16, 1 file, -2/+2)
|/
* Merge pull request #63 from pwendell/master (Matei Zaharia, 2013-10-15, 2 files, -4/+10)
|\
| |   Fixing spark streaming example and a bug in examples build:
| |   - Examples assembly included a log4j.properties which clobbered Spark's
| |   - Example had an error where some classes weren't serializable
| |   - Did some other clean-up in this example
| * Fixing spark streaming example and a bug in examples build. (Patrick Wendell, 2013-10-15, 2 files, -4/+10)
| |
* | Merge pull request #62 from harveyfeng/master (Matei Zaharia, 2013-10-15, 2 files, -2/+5)
|\ \
| |   Make TaskContext's stageId publicly accessible.
| * Proper formatting for SparkHadoopWriter class extensions. (Harvey Feng, 2013-10-15, 1 file, -1/+3)
| |
| * Fix line length > 100 chars in SparkHadoopWriter (Harvey Feng, 2013-10-15, 1 file, -1/+2)
| |
| * Make TaskContext's stageId publicly accessible. (Harvey Feng, 2013-10-15, 1 file, -1/+1)
| |
* | Merge pull request #8 from vchekan/checkpoint-ttl-restore (Matei Zaharia, 2013-10-15, 2 files, -0/+6)
|\ \
| |   Serialize and restore spark.cleaner.ttl to savepoint. In accordance with the conversation on the spark-dev mailing list, preserve the spark.cleaner.ttl parameter when serializing a checkpoint.
| * | Serialize and restore spark.cleaner.ttl to savepoint (Vadim Chekan, 2013-09-20, 2 files, -0/+6)
| | |
* | | Merge pull request #34 from kayousterhout/rename (Matei Zaharia, 2013-10-15, 6 files, -36/+42)
|\ \ \
| |   Renamed StandaloneX to CoarseGrainedX (as suggested by @rxin in https://github.com/apache/incubator-spark/pull/14). The previous names were confusing because the components weren't just used in Standalone mode. The scheduler used for Standalone mode is called SparkDeploySchedulerBackend, so referring to the base class as StandaloneSchedulerBackend was misleading.
| * | | Fixed build error after merging in master (Kay Ousterhout, 2013-10-15, 1 file, -1/+1)
| | | |
| * | | Merge remote branch 'upstream/master' into rename (Kay Ousterhout, 2013-10-15, 175 files, -1414/+5573)
| |\ \ \
| * | | Added back fully qualified class name (Kay Ousterhout, 2013-10-06, 1 file, -1/+1)
| | | |
| * | | Renamed StandaloneX to CoarseGrainedX. (Kay Ousterhout, 2013-10-04, 6 files, -35/+41)
| | | |   The previous names were confusing because the components weren't just used in Standalone mode -- in fact, the scheduler used for Standalone mode is called SparkDeploySchedulerBackend. So, the previous names were misleading.
* | | | Merge pull request #61 from kayousterhout/daemon_thread (Matei Zaharia, 2013-10-15, 7 files, -38/+29)
|\ \ \ \
| |   Unified daemon thread pools. As requested by @mateiz in an earlier pull request, this refactors various daemon thread pools to use a set of methods in utils.scala, and also changes the thread-pool-creation methods in utils.scala to use named thread pools for improved debugging.
| * | | Unified daemon thread pools (Kay Ousterhout, 2013-10-15, 7 files, -38/+29)
|/ / /
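The "named daemon thread pool" pattern described in that pull request can be sketched in plain Scala on top of `java.util.concurrent` (the real helpers live in Spark's `utils.scala`; the names below are illustrative):

```scala
// Sketch of a named daemon thread pool: daemon threads don't block JVM
// shutdown, and a recognizable name prefix makes thread dumps attributable
// to a specific pool, which is the debugging win the commit describes.
import java.util.concurrent.{ExecutorService, Executors, ThreadFactory}
import java.util.concurrent.atomic.AtomicInteger

object DaemonThreadPools {
  // A ThreadFactory producing daemon threads named "<prefix>-<n>".
  def namedDaemonFactory(prefix: String): ThreadFactory = new ThreadFactory {
    private val counter = new AtomicInteger(0)
    override def newThread(r: Runnable): Thread = {
      val t = new Thread(r, s"$prefix-${counter.incrementAndGet()}")
      t.setDaemon(true) // don't keep the JVM alive for pool threads
      t
    }
  }

  def newDaemonFixedThreadPool(nThreads: Int, prefix: String): ExecutorService =
    Executors.newFixedThreadPool(nThreads, namedDaemonFactory(prefix))
}
```

Centralizing pool creation in one helper like this is what lets every pool in the codebase get consistent naming and daemon behavior for free.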
* | | Merge pull request #59 from rxin/warning (Matei Zaharia, 2013-10-15, 1 file, -5/+5)
|\ \ \
| |   Bump up logging level to warning for failed tasks.
| * | | Bump up logging level to warning for failed tasks. (Reynold Xin, 2013-10-14, 1 file, -5/+5)
| | | |
* | | | Merge pull request #58 from hsaputra/update-pom-asf (Reynold Xin, 2013-10-15, 1 file, -1/+24)
|\ \ \ \
| |   Update pom.xml to use version 13 of the ASF parent pom, and add a mailingList element to pom.xml.
| * | | Update pom.xml to use version 13 of the ASF parent pom and add mailingLists element. (Henry Saputra, 2013-10-14, 1 file, -1/+24)
| | | |
* | | | Merge pull request #29 from rxin/kill (Patrick Wendell, 2013-10-14, 50 files, -515/+1528)
|\ \ \ \
| |   Job killing. Moving https://github.com/mesos/spark/pull/935 here.
| |   The high-level idea is to have an "interrupted" field in TaskContext, and a task should check that flag to determine if its execution should continue. For convenience, I provide an InterruptibleIterator which wraps around a normal iterator but checks for the interrupted flag. I also provide an InterruptibleRDD that wraps around an existing RDD.
| |   As part of this pull request, I added an AsyncRDDActions class that provides a number of RDD actions that return a FutureJob (extending scala.concurrent.Future). The FutureJob can be used to kill the job execution, or wait until the job finishes.
| |   This is NOT ready for merging yet. Remaining TODOs:
| |   1. Add unit tests
| |   2. Add job killing functionality for local scheduler (current job killing functionality only works in cluster scheduler)
| |   Update on Oct 10, 2013: This is ready! Related future work:
| |   - Figure out how to handle the job triggered by RangePartitioner (this one is tough; might become future work)
| |   - Java API
| |   - Python API
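The interrupted-flag mechanism this pull request describes can be sketched in a few lines of plain Scala. This is a simplified illustration of the idea, not Spark's actual TaskContext or InterruptibleIterator API:

```scala
// Illustrative sketch of cooperative job killing: a shared context carries
// a volatile "interrupted" flag, and an iterator wrapper checks it on every
// hasNext call, so a long-running task notices cancellation between records
// instead of running to completion.
class KillableContext {
  @volatile var interrupted: Boolean = false
}

class InterruptibleIterator[T](ctx: KillableContext, delegate: Iterator[T])
    extends Iterator[T] {
  override def hasNext: Boolean = {
    // Checking the flag here keeps the per-record overhead to one
    // volatile read while still bounding how long a kill takes to observe.
    if (ctx.interrupted) throw new InterruptedException("task killed")
    delegate.hasNext
  }
  override def next(): T = delegate.next()
}
```

A killer thread flips `ctx.interrupted = true`, and the task's next `hasNext` call raises, unwinding the task.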
| * | | Merge branch 'master' of github.com:apache/incubator-spark into kill (Reynold Xin, 2013-10-14, 53 files, -457/+652)
| |\ \ \
| |   Conflicts: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
* | | | Merge pull request #57 from aarondav/bid (Reynold Xin, 2013-10-14, 44 files, -385/+544)
|\ \ \ \
| |   Refactor BlockId into an actual type. Converts all of our BlockId strings into actual BlockId types. Here are some advantages of doing this now:
| |   + Type safety
| |   + Code clarity - it's now obvious what the key of a shuffle or rdd block is, for instance. Additionally, appearing in tuple/map type signatures is a big readability bonus. A Seq[(String, BlockStatus)] is not very clear. Further, we can now use more Scala features, like matching on BlockId types.
| |   + Explicit usage - we can now formally tell where various BlockIds are being used (without doing string searches); this makes updating current BlockIds a much clearer process, and compiler-supported. (I'm looking at you, shuffle file consolidation.)
| |   + It will only get harder to make this change as time goes on.
| |   Downside is, of course, that this is a very invasive change touching a lot of different files, which will inevitably lead to merge conflicts for many.
| * | | | Address Matei's comments (Aaron Davidson, 2013-10-14, 8 files, -34/+28)
| | | | |
| * | | | Change BlockId filename to name + rest of Patrick's comments (Aaron Davidson, 2013-10-13, 11 files, -36/+39)
| | | | |
| * | | | Add unit test and address rest of Reynold's comments (Aaron Davidson, 2013-10-12, 10 files, -20/+144)
| | | | |
| * | | | Refactor BlockId into an actual type (Aaron Davidson, 2013-10-12, 43 files, -385/+423)
| | | | |   This is an unfortunately invasive change which converts all of our BlockId strings into actual BlockId types, with the advantages listed in the pull request description. Since this touches a lot of files, it'd be best to either get this patch in quickly or throw it on the ground to avoid too many secondary merge conflicts.
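The "string key to real type" refactor argued for here follows a standard Scala pattern: a sealed hierarchy whose canonical string name is derived from the type, so code can pattern-match instead of parsing strings. A minimal sketch (simplified, not the exact Spark hierarchy):

```scala
// Sketch of replacing stringly-typed block keys with a sealed type:
// the old string key becomes a derived `name`, and consumers match on
// the type rather than doing prefix checks on strings.
sealed abstract class BlockId {
  def name: String // the old string key, now computed from typed fields
}
case class RDDBlockId(rddId: Int, splitIndex: Int) extends BlockId {
  def name = s"rdd_${rddId}_$splitIndex"
}
case class ShuffleBlockId(shuffleId: Int, mapId: Int, reduceId: Int) extends BlockId {
  def name = s"shuffle_${shuffleId}_${mapId}_$reduceId"
}

object BlockIdDemo {
  // Type-based matching replaces brittle string searches; the compiler
  // warns if a new BlockId subtype is not handled.
  def isShuffle(id: BlockId): Boolean = id match {
    case _: ShuffleBlockId => true
    case _                 => false
  }
}
```

This is exactly why a `Seq[(BlockId, BlockStatus)]` reads better than a `Seq[(String, BlockStatus)]`: the key's meaning is carried in the type signature.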
* | | | | Merge pull request #52 from harveyfeng/hadoop-closure (Reynold Xin, 2013-10-12, 2 files, -55/+26)
|\ \ \ \ \
| |   Add an optional closure parameter to HadoopRDD instantiation to use when creating local JobConfs. Having HadoopRDD accept this optional closure eliminates the need for the HadoopFileRDD added earlier. It makes the HadoopRDD more general, in that the caller can specify any JobConf initialization flow.
| * | | | Remove the new HadoopRDD constructor from SparkContext API, plus some minor style changes. (Harvey Feng, 2013-10-12, 2 files, -27/+3)
| | | | |
| * | | | Add an optional closure parameter to HadoopRDD instantiation to use when creating any local JobConfs. (Harvey Feng, 2013-10-10, 2 files, -53/+48)
| | | | |
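The "optional initialization closure" idea in PR #52 is a general pattern: a constructor accepts an optional function that mutates a freshly created per-node config, letting callers inject any setup flow. A sketch with hypothetical stand-ins for HadoopRDD and JobConf:

```scala
// Sketch of the optional-closure pattern: instead of a specialized
// subclass (the HadoopFileRDD role), the general class takes an optional
// config-initialization function. LocalConf and ConfiguredSource are
// hypothetical stand-ins, not Spark or Hadoop classes.
import scala.collection.mutable

class LocalConf {
  val settings = mutable.Map.empty[String, String]
  def set(k: String, v: String): Unit = settings(k) = v
}

class ConfiguredSource(initLocalConf: Option[LocalConf => Unit] = None) {
  // Each worker-side call builds a fresh conf, then applies the
  // caller-supplied closure, so any initialization flow is possible.
  def newLocalConf(): LocalConf = {
    val conf = new LocalConf
    initLocalConf.foreach(f => f(conf))
    conf
  }
}
```

The design choice mirrors the PR's rationale: one general constructor parameter subsumes what previously required a dedicated subclass.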
* | | | | Merge pull request #54 from aoiwelle/remove_unused_imports (Reynold Xin, 2013-10-11, 1 file, -2/+0)
|\ \ \ \ \
| |   Remove unnecessary mutable imports. It appears that the imports aren't necessary here.
| * | | | | Remove unnecessary mutable imports (Neal Wiggins, 2013-10-11, 1 file, -2/+0)
| | | | | |
* | | | | | Merge pull request #53 from witgo/master (Matei Zaharia, 2013-10-11, 1 file, -0/+4)
|\ \ \ \ \ \
| |   Add a zookeeper compile dependency to fix build in maven
| * | | | | | Add a zookeeper compile dependency to fix build in maven (LiGuoqiang, 2013-10-11, 1 file, -0/+4)
| | | | | | |
* | | | | | | Merge pull request #32 from mridulm/master (Matei Zaharia, 2013-10-11, 12 files, -29/+93)
|\ \ \ \ \ \ \
| |   Address review comments, move to incubator spark. Also includes a small fix to speculative execution. <edit> Continued from https://github.com/mesos/spark/pull/914 </edit>
| * | | | | | - Allow for finer control of cleaner (Mridul Muralidharan, 2013-10-06, 12 files, -29/+93)
| | | | | |   - Address review comments, move to incubator spark
| | | | | |   - Also includes a change to speculation - including preventing exceptions in rare cases.
| | | | * | Fixed PairRDDFunctionsSuite after removing InterruptibleRDD. (Reynold Xin, 2013-10-12, 1 file, -1/+1)
| | | | | |
| | | | * | Job cancellation: address Matei's code review feedback. (Reynold Xin, 2013-10-12, 17 files, -216/+248)
| | | | | |
| | | | * | Job cancellation: addressed code review feedback round 2 from Kay. (Reynold Xin, 2013-10-11, 3 files, -44/+47)
| | | | | |
| | | | * | Fixed DAGSchedulerSuite because of a logging message change. (Reynold Xin, 2013-10-11, 1 file, -1/+1)
| | | | | |
| | | | * | Job cancellation: addressed code review feedback from Kay. (Reynold Xin, 2013-10-11, 12 files, -80/+86)
| | | | | |
| | | | * | Making takeAsync and collectAsync deterministic. (Reynold Xin, 2013-10-11, 3 files, -19/+15)
| | | | | |
| | | | * | Properly handle interrupted exception in FutureAction. (Reynold Xin, 2013-10-11, 1 file, -7/+5)
| | | | | |