path: root/core
Commit message | Author | Age | Files | Lines
* Merge branch 'master' into graphx | Reynold Xin | 2014-01-13 | 122 files | -621/+3167
|\
| * Merge pull request #397 from pwendell/host-port | Reynold Xin | 2014-01-12 | 9 files | -42/+6
| |\
| | | Remove now un-needed hostPort option
| | |
| | | I noticed this was logging some scary error messages in various places.
| | | After I looked into it, this is no longer really used. I removed the
| | | option and re-wrote the one remaining use case (it was unnecessary
| | | there anyways).
| | * Removing mentions in tests | Patrick Wendell | 2014-01-12 | 5 files | -8/+1
| | |
| | * Remove now un-needed hostPort option | Patrick Wendell | 2014-01-12 | 4 files | -34/+5
| | |
| * | Merge pull request #399 from pwendell/consolidate-off | Patrick Wendell | 2014-01-12 | 1 file | -1/+1
| |\ \
| | | | Disable shuffle file consolidation by default
| | | |
| | | | After running various performance tests for the 0.9 release, this
| | | | still seems to have performance issues even on XFS. So let's keep
| | | | this off by default for 0.9, and users can experiment with it
| | | | depending on their disk configurations. (A config sketch follows
| | | | this entry.)
| | * | Disable shuffle file consolidation by default | Patrick Wendell | 2014-01-12 | 1 file | -1/+1
| | |/
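The consolidation behavior referenced above is controlled by a Spark configuration key. A minimal sketch of opting back in for experimentation, assuming the `spark.shuffle.consolidateFiles` key Spark used in this era (the app name is illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: shuffle file consolidation is off by default after this change;
// a user can opt in per-job to experiment on their own disk configuration.
val conf = new SparkConf()
  .setAppName("ConsolidationExperiment") // illustrative name
  .set("spark.shuffle.consolidateFiles", "true")
val sc = new SparkContext(conf)
```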
| * | Merge pull request #395 from hsaputra/remove_simpleredundantreturn_scala | Patrick Wendell | 2014-01-12 | 43 files | -134/+134
| |\ \
| | | | Remove simple redundant return statements for Scala methods/functions
| | | | (an example follows the inner commits below):
| | | | -) Only change simple return statements at the end of a method
| | | | -) Ignore the complex if-else checks
| | | | -) Ignore the ones inside synchronized blocks
| | | | -) Also make some vars into vals where possible, and remove () from
| | | |    simple getters
| | | | This hopefully makes the review simpler =)
| | | | Passes compilation and tests.
| | * | Address code review concerns and comments. | Henry Saputra | 2014-01-12 | 6 files | -13/+12
| | | |
| | * | Fix accidental comment modification. | Henry Saputra | 2014-01-12 | 1 file | -1/+1
| | | |
| | * | Merge branch 'master' into remove_simpleredundantreturn_scala | Henry Saputra | 2014-01-12 | 64 files | -312/+2869
| | |\|
| | * | Remove simple redundant return statement for Scala methods/functions: | Henry Saputra | 2014-01-12 | 43 files | -124/+125
| | | |
| | | | -) Only change simple return statements at the end of a method
| | | | -) Ignore the complex if-else checks
| | | | -) Ignore the ones inside synchronized
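To illustrate the pattern these commits remove, a hypothetical before/after (not code from the PR itself): in Scala, the last expression of a method body is already its result, so a trailing `return` is redundant.

```scala
// Before: the trailing `return` is redundant in Scala.
def maxCores(offered: Int, requested: Int): Int = {
  return math.min(offered, requested)
}

// After: the final expression is the method's return value.
def maxCoresCleaned(offered: Int, requested: Int): Int = {
  math.min(offered, requested)
}
```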
| * | | Merge remote-tracking branch 'apache/master' into error-handling | Tathagata Das | 2014-01-12 | 4 files | -6/+18
| |\ \ \
| | * \ \ Merge pull request #396 from pwendell/executor-env | Patrick Wendell | 2014-01-12 | 1 file | -1/+1
| | |\ \ \
| | | | | | Setting load defaults to true in executor
| | | | | |
| | | | | | This preserves the behavior in earlier releases. If properties are
| | | | | | set for the executors via `spark-env.sh` on the slaves, then they
| | | | | | should take precedence over Spark defaults. This is useful if system
| | | | | | administrators are setting properties for a standalone cluster, such
| | | | | | as shuffle locations. (A sketch follows this entry.)
| | | | | | /cc @andrewor14 who initially reported this issue.
| | | * | | Setting load defaults to true in executor | Patrick Wendell | 2014-01-12 | 1 file | -1/+1
| | | | |/
| | | |/|
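A minimal sketch of what the `loadDefaults` flag changes, assuming the SparkConf constructor of this era; the property value is illustrative:

```scala
import org.apache.spark.SparkConf

// As if spark-env.sh had exported this on a slave:
System.setProperty("spark.local.dir", "/mnt/shuffle")

// loadDefaults = true: spark.* system properties are picked up, so
// administrator-set values take precedence over hard-coded defaults.
// With loadDefaults = false the config would start out empty.
val conf = new SparkConf(true)
assert(conf.get("spark.local.dir") == "/mnt/shuffle")
```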
| | * | | Merge pull request #392 from rxin/listenerbus | Reynold Xin | 2014-01-12 | 3 files | -5/+17
| | |\ \ \
| | | |/ /
| | |/| |
| | | | | Stop SparkListenerBus daemon thread when DAGScheduler is stopped.
| | | | |
| | | | | Otherwise this leads to hundreds of SparkListenerBus daemon threads
| | | | | in our unit tests (and is also problematic if user applications
| | | | | launch multiple SparkContexts). (A sketch of the shutdown pattern
| | | | | follows this entry.)
| | | * | Stop SparkListenerBus daemon thread when DAGScheduler is stopped. | Reynold Xin | 2014-01-11 | 3 files | -5/+17
| | | | |
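A generic sketch of the shutdown pattern mentioned above, with illustrative names (this is not the actual SparkListenerBus code): a daemon thread draining a queue exits cleanly when a sentinel event is posted from `stop()`.

```scala
import java.util.concurrent.LinkedBlockingQueue

class SimpleListenerBus {
  private case object Stop
  private val queue = new LinkedBlockingQueue[Any]()

  private val thread = new Thread("simple-listener-bus") {
    override def run(): Unit = {
      var running = true
      while (running) {
        queue.take() match {
          case Stop  => running = false        // sentinel: exit the drain loop
          case event => println(s"handling $event")
        }
      }
    }
  }
  thread.setDaemon(true)
  thread.start()

  def post(event: Any): Unit = queue.put(event)

  // Called when the scheduler stops; without it, every test that creates a
  // context would leak one daemon thread.
  def stop(): Unit = { queue.put(Stop); thread.join() }
}
```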
| * | | | Merge remote-tracking branch 'apache/master' into error-handling | Tathagata Das | 2014-01-11 | 23 files | -141/+1277
| |\| | |
| | * | | Merge pull request #389 from rxin/clone-writables | Reynold Xin | 2014-01-11 | 3 files | -41/+71
| | |\ \ \
| | | | | | Minor update for clone writables and more documentation.
| | | * | | Renamed cloneKeyValues to cloneRecords; updated docs. | Reynold Xin | 2014-01-11 | 3 files | -44/+45
| | | | | |
| | | * | | Minor update for clone writables and more documentation. | Reynold Xin | 2014-01-11 | 3 files | -12/+41
| | | |/ /
| | * | | Merge pull request #388 from pwendell/master | Reynold Xin | 2014-01-11 | 1 file | -1/+1
| | |\ \ \
| | | | | | Fix UI bug introduced in #244.
| | | | | |
| | | | | | The 'duration' field was incorrectly renamed to 'task time' in the
| | | | | | table that lists stages.
| | | * | | Fix UI bug introduced in #244. | Patrick Wendell | 2014-01-11 | 1 file | -1/+1
| | | | | |
| | | | | | The 'duration' field was incorrectly renamed to 'task time' in the
| | | | | | table that lists stages.
| | * | | | Revert "Fix default TTL for metadata cleaner" | Patrick Wendell | 2014-01-11 | 1 file | -1/+1
| | | |/ /
| | |/| |
| | | | |
| | | | | This reverts commit 669ba4caa95014f4511f842206c3e506f1a41a7a.
| | * | | Merge pull request #359 from ScrapCodes/clone-writables | Reynold Xin | 2014-01-11 | 4 files | -49/+106
| | |\ \ \
| | | | | | We clone hadoop keys and values by default and reuse objects if asked to.
| | | | | |
| | | | | | We try to clone the most common types of Writables directly and fall
| | | | | | back to WritableUtils.clone otherwise. The intention is to optimize:
| | | | | | for example, for NullWritable there is no need to clone at all, and
| | | | | | for Long, Int and String, creating a new object with the value set
| | | | | | should be faster than copying the object. (A sketch follows the inner
| | | | | | commits below.)
| | | | | |
| | | | | | There is another way to do this PR where we ask, for both keys and
| | | | | | values, whether to clone them or not, but I could not think of a use
| | | | | | case for it except when either of them is actually a NullWritable,
| | | | | | which I have already worked around, so that seemed unnecessary.
| | | * | | Fixes corresponding to Reynold's feedback comments | Prashant Sharma | 2014-01-09 | 4 files | -32/+43
| | | | | |
| | | * | | We clone hadoop keys and values by default and reuse if specified. | Prashant Sharma | 2014-01-08 | 4 files | -41/+87
| | | | | |
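`WritableUtils.clone` is the Hadoop fallback the PR description names. A minimal sketch of the special-casing idea with a hypothetical helper (the PR's actual helper is not reproduced here):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{NullWritable, Text, Writable, WritableUtils}

// Hypothetical helper: NullWritable is a singleton, so there is nothing to
// copy; other Writables fall back to the generic WritableUtils.clone, which
// round-trips the object through serialization.
def cloneWritable[T <: Writable](value: T, conf: Configuration): T =
  value match {
    case _: NullWritable => value
    case _               => WritableUtils.clone(value, conf)
  }

val conf = new Configuration()
val copy = cloneWritable(new Text("record"), conf) // deep copy of the Text
```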
| | * | | | Merge pull request #386 from pwendell/typo-fix | Reynold Xin | 2014-01-10 | 1 file | -1/+1
| | |\ \ \ \
| | | | | | | Small typo fix
| | | * | | | Small typo fix | Patrick Wendell | 2014-01-10 | 1 file | -1/+1
| | | | |/ /
| | | |/| |
| | * | | | Merge pull request #381 from mateiz/default-ttl | Matei Zaharia | 2014-01-10 | 1 file | -1/+1
| | |\ \ \ \
| | | | | | | Fix default TTL for metadata cleaner
| | | | | | |
| | | | | | | It seems to have been set to 3500 in a previous commit for
| | | | | | | debugging, but it should be off by default.
| | | * | | | Fix default TTL for metadata cleaner | Matei Zaharia | 2014-01-10 | 1 file | -1/+1
| | | | | | |
| | | | | | | It seems to have been set to 3500 in a previous commit for
| | | | | | | debugging, but it should be off by default.
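A minimal sketch of the setting in question, assuming the `spark.cleaner.ttl` key (in seconds) that Spark used at the time; the value is illustrative:

```scala
import org.apache.spark.SparkConf

// The metadata cleaner is off by default; the stray debug value (3500)
// this commit removes had effectively enabled it for everyone. Long-lived
// apps can still opt in explicitly:
val conf = new SparkConf().set("spark.cleaner.ttl", "3600")
```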
| | * | | | | Merge pull request #382 from RongGu/master | Patrick Wendell | 2014-01-10 | 1 file | -1/+1
| | |\ \ \ \ \
| | | | | | | | Fix a typo in comment lines
| | | * | | | | Fix a typo in comment lines | RongGu | 2014-01-11 | 1 file | -1/+1
| | | |/ / / /
| | * | | | | Merge pull request #383 from tdas/driver-test | Patrick Wendell | 2014-01-10 | 2 files | -6/+13
| | |\ \ \ \ \
| | | | | | | | API for automatic driver recovery for streaming programs and other bug fixes
| | | | | | | |
| | | | | | | | 1. Added Scala and Java APIs for automatically loading a checkpoint
| | | | | | | |    if one exists in the provided checkpoint directory.
| | | | | | | |    Scala API: `StreamingContext.getOrCreate(<checkpoint dir>,
| | | | | | | |    <function to create new StreamingContext>)` returns a
| | | | | | | |    StreamingContext.
| | | | | | | |    Java API: `JavaStreamingContext.getOrCreate(<checkpoint dir>,
| | | | | | | |    <factory obj of type JavaStreamingContextFactory>)` returns a
| | | | | | | |    JavaStreamingContext.
| | | | | | | |    See RecoverableNetworkWordCount for an example of how to use it.
| | | | | | | | 2. Refactored the streaming.Checkpoint*** code to fix bugs and make
| | | | | | | |    DStream metadata checkpoint writing and reading more robust.
| | | | | | | |    Specifically, this fixes and improves the logic behind backing up
| | | | | | | |    and writing metadata checkpoint files, and ensures that
| | | | | | | |    spark.driver.* and spark.hostPort are cleared from SparkConf
| | | | | | | |    before being written to the checkpoint.
| | | | | | | | 3. Fixed a bug in the cleanup of checkpointed RDDs created by
| | | | | | | |    DStream. Specifically, this fix ensures that checkpointed RDD
| | | | | | | |    files are not prematurely cleaned up, ensuring reliable recovery.
| | | | | | | | 4. TimeStampedHashMap is upgraded to optionally update the timestamp
| | | | | | | |    on map.get(key). This allows clearing of data based on access
| | | | | | | |    time (i.e., clearing records last accessed before a threshold
| | | | | | | |    timestamp).
| | | | | | | | 5. Added caching of file modification times in FileInputDStream
| | | | | | | |    using the updated TimeStampedHashMap. Without the caching,
| | | | | | | |    enumerating the mod times to find new files can take seconds if
| | | | | | | |    there are thousands of files. This cache is automatically cleared.
| | | | | | | |
| | | | | | | | This PR is not entirely final, as I may make some minor additions:
| | | | | | | | a Java example, and adding StreamingContext.getOrCreate to a unit
| | | | | | | | test. Edit: Java example to be added later; unit test added.
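A minimal sketch of the Scala API from point 1, assuming the signature described above; `createContext` and the paths are illustrative, and this is not the RecoverableNetworkWordCount example itself:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///checkpoints/my-app" // illustrative path

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("RecoverableApp")
  val ssc = new StreamingContext(conf, Seconds(1))
  ssc.checkpoint(checkpointDir) // enable metadata checkpointing
  // ... build the DStream graph here ...
  ssc
}

// Recovers from the checkpoint if one exists; otherwise calls createContext().
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()
```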
| | * \ \ \ \ \ \ Merge pull request #377 from andrewor14/master | Patrick Wendell | 2014-01-10 | 15 files | -81/+1078
| | |\ \ \ \ \ \
| | | | | | | | | External Sorting for Aggregator and CoGroupedRDDs (Revisited)
| | | | | | | | |
| | | | | | | | | (This pull request is re-opened from
| | | | | | | | | https://github.com/apache/incubator-spark/pull/303, which was
| | | | | | | | | closed because Jenkins / GitHub was misbehaving.)
| | | | | | | | |
| | | | | | | | | The target issue for this patch is the out-of-memory exceptions
| | | | | | | | | triggered by aggregate operations such as reduce, groupBy, join,
| | | | | | | | | and cogroup. The existing AppendOnlyMap used by these operations
| | | | | | | | | resides purely in memory and grows with the size of the input data
| | | | | | | | | until the amount of allocated memory is exceeded. Under large
| | | | | | | | | workloads, this problem is aggravated by the fact that OOM
| | | | | | | | | frequently occurs only after a very long (> 1 hour) map phase, in
| | | | | | | | | which case the entire job must be restarted.
| | | | | | | | |
| | | | | | | | | The solution is to spill the contents of this map to disk once a
| | | | | | | | | certain memory threshold is exceeded. This functionality is
| | | | | | | | | provided by ExternalAppendOnlyMap, which additionally sorts this
| | | | | | | | | buffer before writing it out to disk, and later merges these
| | | | | | | | | buffers back in sorted order. Under normal circumstances in which
| | | | | | | | | OOM is not triggered, ExternalAppendOnlyMap is simply a wrapper
| | | | | | | | | around AppendOnlyMap and incurs little overhead. Only when the
| | | | | | | | | memory usage is expected to exceed the given threshold does
| | | | | | | | | ExternalAppendOnlyMap spill to disk.
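A toy sketch of the spill-and-merge idea described above, with illustrative names; the real ExternalAppendOnlyMap writes its sorted runs to disk and merge-streams them with a priority queue, while this sketch keeps the runs in memory to stay short:

```scala
import scala.collection.mutable

// Toy illustration, not the PR's code: fill an in-memory map, "spill" it as
// a key-sorted run once it crosses a threshold, merge the runs at the end.
class ToySpillableMap[V](threshold: Int, mergeValue: (V, V) => V) {
  private var current = mutable.Map.empty[String, V]
  private val sortedRuns = mutable.ArrayBuffer.empty[Seq[(String, V)]]

  def insert(key: String, value: V): Unit = {
    current(key) = current.get(key).map(mergeValue(_, value)).getOrElse(value)
    if (current.size >= threshold) spill()
  }

  private def spill(): Unit = {
    sortedRuns += current.toSeq.sortBy(_._1) // one sorted run per spill
    current = mutable.Map.empty[String, V]
  }

  // A real merge streams the sorted runs, emitting each key once; grouping
  // is the simplest way to show the same result here.
  def result(): Map[String, V] = {
    val all = sortedRuns.flatten ++ current.toSeq
    all.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).reduce(mergeValue) }
  }
}

// Word count that never holds more than `threshold` distinct keys in memory.
val counts = new ToySpillableMap[Int](1000, _ + _)
Seq("a", "b", "a").foreach(counts.insert(_, 1))
println(counts.result()) // Map(a -> 2, b -> 1)
```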
| | | * | | | | | Address Patrick's and Reynold's comments | Andrew Or | 2014-01-10 | 4 files | -45/+51
| | | | | | | | |
| | | | | | | | | Aside from trivial formatting changes, use nulls instead of
| | | | | | | | | Options for DiskMapIterator, and add documentation for
| | | | | | | | | spark.shuffle.externalSorting and spark.shuffle.memoryFraction.
| | | | | | | | | Also, set spark.shuffle.memoryFraction to 0.3 and
| | | | | | | | | spark.storage.memoryFraction to 0.6.
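As a sketch, the settings this commit documents, at the defaults it describes (keys as named above):

```scala
import org.apache.spark.SparkConf

// External sorting on; 0.3 of the heap for shuffle maps; 0.6 for the
// block-store cache. These match the values described in the commit.
val conf = new SparkConf()
  .set("spark.shuffle.externalSorting", "true")
  .set("spark.shuffle.memoryFraction", "0.3")
  .set("spark.storage.memoryFraction", "0.6")
```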
| | | * | | | | | Defensively allocate memory from global pool | Andrew Or | 2014-01-09 | 5 files | -47/+80
| | | | | | | | |
| | | | | | | | | This is an alternative to the existing approach, which evenly
| | | | | | | | | distributes the collective shuffle memory among all running
| | | | | | | | | tasks. In the new approach, each thread requests a chunk of
| | | | | | | | | memory whenever its map is about to multiplicatively grow. If
| | | | | | | | | there is sufficient memory in the global pool, the thread
| | | | | | | | | allocates it and grows its map. Otherwise, it spills.
| | | | | | | | |
| | | | | | | | | A danger with the previous approach is that a new task may
| | | | | | | | | quickly fill up its map before old tasks finish spilling,
| | | | | | | | | potentially causing an OOM. This approach prevents that scenario,
| | | | | | | | | as it favors existing tasks over new tasks; any thread that may
| | | | | | | | | step over the boundary of other threads defensively backs off and
| | | | | | | | | starts spilling.
| | | | | | | | |
| | | | | | | | | Testing through spark-perf reveals: (1) When no spills have
| | | | | | | | | occurred, the performance of external sorting using this memory
| | | | | | | | | management approach is essentially the same as without external
| | | | | | | | | sorting. (2) When one or more spills have occurred, the
| | | | | | | | | performance of external sorting is a small multiple (3x) worse.
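A generic sketch of the defensive allocation described above, with illustrative names (not the PR's actual code): a task asks the global pool before growing its map, and spills when refused instead of stepping over memory other tasks already hold.

```scala
class ToyMemoryPool(val capacity: Long) {
  private var used = 0L

  // Grant the request only if it fits in what remains of the global pool.
  def tryAcquire(bytes: Long): Boolean = synchronized {
    if (used + bytes <= capacity) { used += bytes; true } else false
  }

  def release(bytes: Long): Unit = synchronized {
    used = math.max(0L, used - bytes)
  }
}

val pool = new ToyMemoryPool(64L * 1024 * 1024)
val want = 8L * 1024 * 1024 // the map is about to double; ask first
if (pool.tryAcquire(want)) {
  // grow the in-memory map
} else {
  // back off: spill current contents to disk, then keep inserting
}
```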
| | | * | | | | | Merge github.com:apache/incubator-spark | Andrew Or | 2014-01-09 | 91 files | -458/+1948
| | | |\ \ \ \ \ \
| | | | | | | | | | Conflicts:
| | | | | | | | | |   core/src/main/scala/org/apache/spark/SparkEnv.scala
| | | | | | | | | |   streaming/src/test/java/org/apache/spark/streaming/JavaAPISuite.java
| | | * | | | | | | Get SparkConf from SparkEnv, rather than creating new ones | Andrew Or | 2014-01-07 | 3 files | -6/+6
| | | | | | | | | |
| | | * | | | | | | Use AtomicInteger for numRunningTasks | Andrew Or | 2014-01-04 | 1 file | -12/+7
| | | | | | | | | |
| | | * | | | | | | Address Mark's comments | Andrew Or | 2014-01-04 | 3 files | -18/+13
| | | | | | | | | |
| | | * | | | | | | Assign spill threshold as a fraction of maximum memory | Andrew Or | 2014-01-04 | 5 files | -33/+81
| | | | | | | | | |
| | | | | | | | | | Further, divide this threshold by the number of tasks running
| | | | | | | | | | concurrently. Note that this does not guard against the
| | | | | | | | | | following scenario: a new task quickly fills up its share of
| | | | | | | | | | the memory before old tasks finish spilling their contents, in
| | | | | | | | | | which case the total memory used by such maps may exceed what
| | | | | | | | | | was specified. Currently, spark.shuffle.safetyFraction mitigates
| | | | | | | | | | the effect of this.
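A worked sketch of the per-task threshold described above; only spark.shuffle.safetyFraction is named by the commit, and the other names and values here are illustrative:

```scala
// Per-task spill threshold = maxMemory * memoryFraction * safetyFraction
//                            / number of concurrently running tasks
val maxMemory      = 4L * 1024 * 1024 * 1024 // 4 GiB of executor memory
val memoryFraction = 0.3                     // fraction reserved for shuffle maps
val safetyFraction = 0.8                     // headroom for the race noted above
val runningTasks   = 8

val perTaskThreshold =
  (maxMemory * memoryFraction * safetyFraction / runningTasks).toLong
// 4 GiB * 0.3 * 0.8 / 8 tasks ≈ 123 MiB per task before spilling
```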
| | | * | | | | | | Remove unnecessary ClassTags | Andrew Or | 2014-01-03 | 2 files | -7/+4
| | | | | | | | | |
| | | * | | | | | | Refactor using SparkConf | Andrew Or | 2014-01-03 | 4 files | -19/+21
| | | | | | | | | |
| | | * | | | | | | | Merge remote-tracking branch 'spark/master' | Andrew Or | 2014-01-02 | 118 files | -1176/+2100
| | | |\ \ \ \ \ \ \ \
| | | | | | | | | | | | Conflicts:
| | | | | | | | | | | |   core/src/main/scala/org/apache/spark/rdd/CoGroupedRDD.scala
| | | * | | | | | | | TempBlockId takes UUID and is explicitly non-serializable | Aaron Davidson | 2014-01-02 | 2 files | -5/+6
| | | | | | | | | | |
| | | * | | | | | | | Simplify ExternalAppendOnlyMap on the assumption that the mergeCombiners function is specified | Andrew Or | 2014-01-01 | 3 files | -135/+53
| | | | | | | | | | |
| | | * | | | | | | | Merge branch 'master' of github.com:andrewor14/incubator-spark | Andrew Or | 2013-12-31 | 4 files | -9/+9
| | | |\ \ \ \ \ \ \ \
| | | | * | | | | | | | Rename IntermediateBlockId to TempBlockId | Aaron Davidson | 2013-12-31 | 4 files | -9/+9
| | | | | | | | | | | |
| | | * | | | | | | | | Address Patrick's and Reynold's comments | Andrew Or | 2013-12-31 | 1 file | -49/+71
| | | |/ / / / / / / /
| | | * | | | | | | | | Merge branch 'master' of github.com:andrewor14/incubator-spark | Andrew Or | 2013-12-31 | 3 files | -97/+71
| | | |\ \ \ \ \ \ \ \