path: root/core
Commit message | Author | Age | Files | Lines
* [SPARK-4446] [SPARK CORE]Leolh2014-11-191-1/+1
| | | | | | | | | | | | | MetadataCleaner schedules its task with a wrong parameter for the delay time. Author: Leolh <leosandylh@gmail.com> Closes #3306 from Leolh/master and squashes the following commits: 4a21f4e [Leolh] Update MetadataCleaner.scala (cherry picked from commit e216ffaead983274428052caa992b20760b2c5e0) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4480] Avoid many small spills in external data structuresAndrew Or2014-11-192-12/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | **Summary.** Currently, we may spill many small files in `ExternalAppendOnlyMap` and `ExternalSorter`. The underlying root cause of this is summarized in [SPARK-4452](https://issues.apache.org/jira/browse/SPARK-4452). This PR does not address this root cause, but simply provides the guarantee that we never spill the in-memory data structure if its size is less than a configurable threshold of 5MB. This config is not documented because we don't want users to set it themselves, and it is not hard-coded because we need to change it in tests. **Symptom.** Each spill is orders of magnitude smaller than 1MB, and there are many spills. In environments where the ulimit is set, this frequently causes "too many open file" exceptions observed in [SPARK-3633](https://issues.apache.org/jira/browse/SPARK-3633). ``` 14/11/13 19:20:43 INFO collection.ExternalSorter: Thread 60 spilling in-memory batch of 4792 B to disk (292769 spills so far) 14/11/13 19:20:43 INFO collection.ExternalSorter: Thread 60 spilling in-memory batch of 4760 B to disk (292770 spills so far) 14/11/13 19:20:43 INFO collection.ExternalSorter: Thread 60 spilling in-memory batch of 4520 B to disk (292771 spills so far) 14/11/13 19:20:43 INFO collection.ExternalSorter: Thread 60 spilling in-memory batch of 4560 B to disk (292772 spills so far) 14/11/13 19:20:43 INFO collection.ExternalSorter: Thread 60 spilling in-memory batch of 4792 B to disk (292773 spills so far) 14/11/13 19:20:43 INFO collection.ExternalSorter: Thread 60 spilling in-memory batch of 4784 B to disk (292774 spills so far) ``` **Reproduction.** I ran the following on a small 4-node cluster with 512MB executors. Note that the back-to-back shuffle here is necessary for reasons described in [SPARK-4522](https://issues.apache.org/jira/browse/SPARK-4452). The second shuffle is a `reduceByKey` because it performs a map-side combine. ``` sc.parallelize(1 to 100000000, 100) .map { i => (i, i) } .groupByKey() .reduceByKey(_ ++ _) .count() ``` Before the change, I notice that each thread may spill up to 1000 times, and the size of each spill is on the order of 10KB. After the change, each thread spills only up to 20 times in the worst case, and the size of each spill is on the order of 1MB. Author: Andrew Or <andrew@databricks.com> Closes #3353 from andrewor14/avoid-small-spills and squashes the following commits: 49f380f [Andrew Or] Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/spark into avoid-small-spills 27d6966 [Andrew Or] Merge branch 'master' of github.com:apache/spark into avoid-small-spills f4736e3 [Andrew Or] Fix tests a919776 [Andrew Or] Avoid many small spills (cherry picked from commit 0eb4a7fb0fa1fa56677488cbd74eb39e65317621) Signed-off-by: Andrew Or <andrew@databricks.com>
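A minimal Scala sketch of the guarantee described above; the names are illustrative, not the actual `ExternalSorter`/`ExternalAppendOnlyMap` internals. The point is simply that spilling is never considered until the in-memory collection has grown past a configurable floor (5MB by default).

```scala
// Hypothetical sketch of a minimum-spill-size guard; names are illustrative.
object SpillGuard {
  // Configurable floor below which the in-memory collection is never spilled.
  val initialMemoryThreshold: Long = 5 * 1024 * 1024 // 5MB

  /** Decide whether to spill, given the collection's current size in bytes
    * and the amount of memory it has been granted so far. */
  def shouldSpill(currentMemory: Long, myMemoryThreshold: Long): Boolean = {
    // Only spill once we are past the floor, so we never write many tiny files.
    currentMemory >= initialMemoryThreshold && currentMemory >= myMemoryThreshold
  }
}
```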
* [Spark-4484] Treat maxResultSize as unlimited when set to 0; improve error ↵Nishkam Ravi2014-11-193-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | message The check for maxResultSize > 0 is missing, results in failures. Also, error message needs to be improved so the developers know that there is a new parameter to be configured Author: Nishkam Ravi <nravi@cloudera.com> Author: nravi <nravi@c1704.halxg.cloudera.com> Author: nishkamravi2 <nishkamravi@gmail.com> Closes #3360 from nishkamravi2/master_nravi and squashes the following commits: 5c9a4cb [nishkamravi2] Update TaskSetManagerSuite.scala 535295a [nishkamravi2] Update TaskSetManager.scala 3e1b616 [Nishkam Ravi] Modify test for maxResultSize 9f6583e [Nishkam Ravi] Changes to maxResultSize code (improve error message and add condition to check if maxResultSize > 0) 5f8f9ed [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark into master_nravi 636a9ff [nishkamravi2] Update YarnAllocator.scala 8f76c8b [Nishkam Ravi] Doc change for yarn memory overhead 35daa64 [Nishkam Ravi] Slight change in the doc for yarn memory overhead 5ac2ec1 [Nishkam Ravi] Remove out dac1047 [Nishkam Ravi] Additional documentation for yarn memory overhead issue 42c2c3d [Nishkam Ravi] Additional changes for yarn memory overhead issue 362da5e [Nishkam Ravi] Additional changes for yarn memory overhead c726bd9 [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark into master_nravi f00fa31 [Nishkam Ravi] Improving logging for AM memoryOverhead 1cf2d1e [nishkamravi2] Update YarnAllocator.scala ebcde10 [Nishkam Ravi] Modify default YARN memory_overhead-- from an additive constant to a multiplier (redone to resolve merge conflicts) 2e69f11 [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark into master_nravi efd688a [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark 2b630f9 [nravi] Accept memory input as "30g", "512M" instead of an int value, to be consistent with rest of Spark 3bf8fad [nravi] Merge branch 'master' of https://github.com/apache/spark 5423a03 [nravi] Merge branch 'master' of https://github.com/apache/spark eb663ca [nravi] Merge branch 'master' of https://github.com/apache/spark df2aeb1 [nravi] Improved fix for ConcurrentModificationIssue (Spark-1097, Hadoop-10456) 6b840f0 [nravi] Undo the fix for SPARK-1758 (the problem is fixed) 5108700 [nravi] Fix in Spark for the Concurrent thread modification issue (SPARK-1097, HADOOP-10456) 681b36f [nravi] Fix for SPARK-1758: failing test org.apache.spark.JavaAPISuite.wholeTextFiles (cherry picked from commit 73fedf5a6e662b640dfe29936753721988bff6ea) Signed-off-by: Andrew Or <andrew@databricks.com>
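A hedged sketch of the intended check, with illustrative names: a `maxResultSize` of 0 is treated as unlimited, and the error message names the configuration key so developers know what to tune.

```scala
object ResultSizeCheck {
  /** Returns true only when a positive limit is configured and exceeded. */
  def exceedsMaxResultSize(totalResultSize: Long, maxResultSize: Long): Boolean =
    maxResultSize > 0 && totalResultSize > maxResultSize

  def abortMessage(totalResultSize: Long, maxResultSize: Long): String =
    s"Total size of serialized results ($totalResultSize bytes) is bigger than " +
      s"spark.driver.maxResultSize ($maxResultSize bytes); increase it or set it to 0 for unlimited."
}
```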
* [SPARK-4478] Keep totalRegisteredExecutors up-to-dateAkshat Aranya2014-11-191-0/+2
| | | | | | | | | | | | | | | | This rebases PR 3368. This commit fixes totalRegisteredExecutors update [SPARK-4478], so that we can correctly keep track of number of registered executors. Author: Akshat Aranya <aaranya@quantcast.com> Closes #3373 from coolfrood/topic/SPARK-4478 and squashes the following commits: 8a4d1e4 [Akshat Aranya] Added comment 150ae93 [Akshat Aranya] [SPARK-4478] Keep totalRegisteredExecutors up-to-date (cherry picked from commit 9ccc53c72c5bcffcc121291710754e1e2d659341) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4495] Fix memory leak in JobProgressListenerJosh Rosen2014-11-192-43/+170
| | | | | | | | | | | | | | | | | | This commit fixes a memory leak in JobProgressListener that I introduced in SPARK-2321 and adds a testing framework to ensure that it’s very difficult to inadvertently introduce new memory leaks. This solution might be overkill, but the main idea is to partition JobProgressListener's state into three buckets: collections that should be empty once Spark is idle, collections that must obey some hard size limit, and collections that have a soft size limit (they can grow arbitrarily large when Spark is active but must shrink to fit within some bound after Spark becomes idle). Based on this, we can write fairly generic tests that run workloads that submit more than `spark.ui.retainedStages` stages and `spark.ui.retainedJobs` jobs then check that these various collections' sizes obey their contracts. Author: Josh Rosen <joshrosen@databricks.com> Closes #3372 from JoshRosen/SPARK-4495 and squashes the following commits: c73fab5 [Josh Rosen] "data structures" -> collections be72e81 [Josh Rosen] [SPARK-4495] Fix memory leaks in JobProgressListener (cherry picked from commit 04d462f648aba7b18fc293b7189b86af70e421bc) Signed-off-by: Josh Rosen <joshrosen@databricks.com>
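A hedged sketch of the three-bucket contract described above (names are illustrative, not the actual test framework): collections that must be empty once Spark is idle, collections under a hard size limit, and collections under a soft limit that only applies after Spark becomes idle again.

```scala
object ListenerContracts {
  def assertEmptyWhenIdle(collections: Map[String, Iterable[_]]): Unit =
    collections.foreach { case (name, c) =>
      assert(c.isEmpty, s"$name should be empty once Spark is idle")
    }

  // Used both for hard limits (checked while running) and soft limits (checked once idle).
  def assertWithinLimit(collections: Map[String, Iterable[_]], limit: Int): Unit =
    collections.foreach { case (name, c) =>
      assert(c.size <= limit, s"$name holds ${c.size} entries, limit is $limit")
    }
}
```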
* [SPARK-4470] Validate number of threads in local modeKenichi Maehashi2014-11-191-0/+3
| | | | | | | | | | | | | | | | | When running Spark locally, if number of threads is specified as 0 (e.g., `spark-submit --master local[0] ...`), the job got stuck and does not run at all. I think it's better to validate the parameter. Fix for [SPARK-4470](https://issues.apache.org/jira/browse/SPARK-4470). Author: Kenichi Maehashi <webmaster@kenichimaehashi.com> Closes #3337 from kmaehashi/spark-4470 and squashes the following commits: 3ad76f3 [Kenichi Maehashi] fix code style 7716734 [Kenichi Maehashi] SPARK-4470: Validate number of threads in local mode (cherry picked from commit eacc788346ccae232bd530dd880f801475a49734) Signed-off-by: Andrew Or <andrew@databricks.com>
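A minimal sketch of the validation, assuming a hypothetical helper rather than SparkContext's actual master-string parsing: `local[0]` is rejected up front instead of letting the job hang.

```scala
object LocalMasterCheck {
  private val LocalN = """local\[([0-9]+)\]""".r

  /** Number of threads implied by a local master string; rejects non-positive values. */
  def numLocalThreads(master: String): Int = master match {
    case "local" => 1
    case LocalN(n) =>
      val threads = n.toInt
      require(threads > 0, s"Asked to run locally with $threads threads; need at least 1")
      threads
    case other => throw new IllegalArgumentException(s"Unrecognized local master: $other")
  }
}
```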
* [SPARK-4467] fix elements read count for ExternalSorterTianshuo Deng2014-11-193-14/+12
| | | | | | | | | | | | | | | | The elementsRead variable should be reset to 0 after each spill. Author: Tianshuo Deng <tdeng@twitter.com> Closes #3302 from tsdeng/fix_external_sorter_record_count and squashes the following commits: 7b56ca0 [Tianshuo Deng] fix method signature 782c7de [Tianshuo Deng] make elementsRead private, fix comment bb7ff28 [Tianshuo Deng] update elemetsRead through addElementsRead method 74ca246 [Tianshuo Deng] fix elements read count (cherry picked from commit d75579d09912cfb1eeac0589d625ea0452701fa0) Signed-off-by: Andrew Or <andrew@databricks.com>
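A hedged sketch of the fix's shape (illustrative, not the actual Spillable code): element reads go through a single `addElementsRead` method, and the counter is reset whenever a spill happens so later heuristics see per-spill counts.

```scala
abstract class SpillTracking {
  private var _elementsRead = 0L

  protected def addElementsRead(): Unit = { _elementsRead += 1 }
  protected def elementsRead: Long = _elementsRead

  /** Write the in-memory collection out; subclasses do the actual I/O. */
  protected def doSpill(): Unit

  protected final def spill(): Unit = {
    doSpill()
    _elementsRead = 0 // reset after each spill
  }
}
```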
* [Spark-4432]close InStream after the block is accessedMingfei2014-11-181-0/+2
| | | | | | | | | | The InStream is not closed after data is read from Tachyon, which leaves the blocks in Tachyon locked after they are accessed. Author: Mingfei <mingfei.shi@intel.com> Closes #3290 from shimingfei/lockFix and squashes the following commits: fffe345 [Mingfei] close InStream after the block is accessed
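A hedged sketch of the pattern (illustrative names): read the block's bytes and close the Tachyon InStream in a finally block so the lock on the block is always released.

```scala
import java.io.{EOFException, InputStream}

object TachyonRead {
  def readFully(in: InputStream, size: Int): Array[Byte] = {
    val bytes = new Array[Byte](size)
    try {
      var offset = 0
      while (offset < size) {
        val read = in.read(bytes, offset, size - offset)
        if (read < 0) throw new EOFException("Unexpected end of stream")
        offset += read
      }
      bytes
    } finally {
      in.close() // releases the lock Tachyon holds on the block
    }
  }
}
```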
* [SPARK-4441] Close Tachyon client when TachyonBlockManager is shutdownMingfei2014-11-181-0/+1
| | | | | | | | | | | | | Currently the Tachyon client is not closed when TachyonBlockManager is shut down, which causes some resources in Tachyon not to be reclaimed. Author: Mingfei <mingfei.shi@intel.com> Closes #3299 from shimingfei/closeClient and squashes the following commits: 0913fbd [Mingfei] close Tachyon client when TachyonBlockManager is shutdown (cherry picked from commit 67e9876b3e457b151c123fdb5ac2d8e8371e6acf) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* [SPARK-4433] fix a race condition in zipWithIndexXiangrui Meng2014-11-182-14/+22
| | | | | | | | | | | | | | | | | | | | | | | Spark hangs with the following code: ~~~ sc.parallelize(1 to 10).zipWithIndex.repartition(10).count() ~~~ This is because ZippedWithIndexRDD triggers a job in getPartitions and it causes a deadlock in DAGScheduler.getPreferredLocs (synced). The fix is to compute `startIndices` during construction. This should be applied to branch-1.0, branch-1.1, and branch-1.2. pwendell Author: Xiangrui Meng <meng@databricks.com> Closes #3291 from mengxr/SPARK-4433 and squashes the following commits: c284d9f [Xiangrui Meng] fix a racing condition in zipWithIndex (cherry picked from commit bb46046154a438df4db30a0e1fd557bd3399ee7b) Signed-off-by: Xiangrui Meng <meng@databricks.com>
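A hedged sketch of the idea, not the actual ZippedWithIndexRDD code: the per-partition start offsets are computed eagerly with one job at construction time, so getPartitions never has to launch a job (which is what deadlocked DAGScheduler.getPreferredLocs).

```scala
import org.apache.spark.rdd.RDD

class EagerZipWithIndex[T](prev: RDD[T]) extends Serializable {
  // Runs one job up front to count each partition; nothing is triggered later
  // from getPartitions, so no job is launched while the scheduler lock is held.
  private val startIndices: Array[Long] = {
    val counts = prev.mapPartitions(it => Iterator(it.size.toLong), preservesPartitioning = true).collect()
    counts.scanLeft(0L)(_ + _).init // offset of the first element in each partition
  }

  def zipped: RDD[(T, Long)] = {
    val indices = startIndices // local copy so the closure does not capture `this`
    prev.mapPartitionsWithIndex { (split, iter) =>
      iter.zipWithIndex.map { case (t, i) => (t, indices(split) + i) }
    }
  }
}
```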
* [SPARK-3721] [PySpark] broadcast objects larger than 2GDavies Liu2014-11-181-8/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch will bring support for broadcasting objects larger than 2G. pickle, zlib, FrameSerializer and Array[Byte] all can not support objects larger than 2G, so this patch introduce LargeObjectSerializer to serialize broadcast objects, the object will be serialized and compressed into small chunks, it also change the type of Broadcast[Array[Byte]]] into Broadcast[Array[Array[Byte]]]]. Testing for support broadcast objects larger than 2G is slow and memory hungry, so this is tested manually, could be added into SparkPerf. Author: Davies Liu <davies@databricks.com> Author: Davies Liu <davies.liu@gmail.com> Closes #2659 from davies/huge and squashes the following commits: 7b57a14 [Davies Liu] add more tests for broadcast 28acff9 [Davies Liu] Merge branch 'master' of github.com:apache/spark into huge a2f6a02 [Davies Liu] bug fix 4820613 [Davies Liu] Merge branch 'master' of github.com:apache/spark into huge 5875c73 [Davies Liu] address comments 10a349b [Davies Liu] address comments 0c33016 [Davies Liu] Merge branch 'master' of github.com:apache/spark into huge 6182c8f [Davies Liu] Merge branch 'master' into huge d94b68f [Davies Liu] Merge branch 'master' of github.com:apache/spark into huge 2514848 [Davies Liu] address comments fda395b [Davies Liu] Merge branch 'master' of github.com:apache/spark into huge 1c2d928 [Davies Liu] fix scala style 091b107 [Davies Liu] broadcast objects larger than 2G (cherry picked from commit 4a377aff2d36b64a65b54192a987aba44b8f78e0) Signed-off-by: Josh Rosen <joshrosen@databricks.com>
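A hedged sketch of the chunking idea on the JVM side (illustrative, not PySpark's actual LargeObjectSerializer): since a single Array[Byte] cannot exceed 2GB, the serialized payload is drained from a stream into an Array[Array[Byte]] of bounded chunks, which is what gets broadcast.

```scala
import java.io.InputStream
import java.util.Arrays
import scala.collection.mutable.ArrayBuffer

object LargeObjectChunker {
  val chunkSize: Int = 64 * 1024 * 1024 // 64MB per chunk, an illustrative choice

  /** Drain a stream into chunks no larger than chunkSize bytes each. */
  def chunk(in: InputStream): Array[Array[Byte]] = {
    val chunks = ArrayBuffer.empty[Array[Byte]]
    val buf = new Array[Byte](chunkSize)
    var read = in.read(buf)
    while (read != -1) {
      chunks += Arrays.copyOf(buf, read)
      read = in.read(buf)
    }
    chunks.toArray
  }
}
```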
* [SPARK-4463] Add (de)select all button for add'l metrics.Kay Ousterhout2014-11-182-7/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | This commit removes the behavior where when a user clicks "Show additional metrics" on the stage page, all of the additional metrics are automatically selected; now, collapsing and expanding the additional metrics has no effect on which options are selected. Instead, there's a "(De)select All" box at the top; checking this box checks all additional metrics (and similarly, unchecking it unchecks all additional metrics). This commit is intended to be backported to 1.2, so that the additional metrics behavior is not confusing to users. Now when a user clicks the "Show additional metrics" menu, this is what it looks like: ![image](https://cloud.githubusercontent.com/assets/1108612/5094347/1541ead6-6f15-11e4-8e8c-25a65ddbdfb2.png) Author: Kay Ousterhout <kayousterhout@gmail.com> Closes #3331 from kayousterhout/SPARK-4463 and squashes the following commits: 9e17cea [Kay Ousterhout] Added italics b731230 [Kay Ousterhout] [SPARK-4463] Add (de)select all button for add'l metrics. (cherry picked from commit 010bc86e40a0e54b6850b75abd6105e70eb1af10) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4017] show progress bar in consoleDavies Liu2014-11-185-1/+136
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The progress bar will look like this: ![1___spark_job__85_250_finished__4_are_running___java_](https://cloud.githubusercontent.com/assets/40902/4854813/a02f44ac-6099-11e4-9060-7c73a73151d6.png) In the right corner, the numbers are: finished tasks, running tasks, total tasks. After the stage has finished, it will disappear. The progress bar is only showed if logging level is WARN or higher (but progress in title is still showed), it can be turned off by spark.driver.showConsoleProgress. Author: Davies Liu <davies@databricks.com> Closes #3029 from davies/progress and squashes the following commits: 95336d5 [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress fc49ac8 [Davies Liu] address commentse 2e90f75 [Davies Liu] show multiple stages in same time 0081bcc [Davies Liu] address comments 38c42f1 [Davies Liu] fix tests ab87958 [Davies Liu] disable progress bar during tests 30ac852 [Davies Liu] re-implement progress bar b3f34e5 [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress 6fd30ff [Davies Liu] show progress bar if no task finished in 500ms e4e7344 [Davies Liu] refactor e1f524d [Davies Liu] revert unnecessary change a60477c [Davies Liu] Merge branch 'master' of github.com:apache/spark into progress 5cae3f2 [Davies Liu] fix style ea49fe0 [Davies Liu] address comments bc53d99 [Davies Liu] refactor e6bb189 [Davies Liu] fix logging in sparkshell 7e7d4e7 [Davies Liu] address commments 5df26bb [Davies Liu] fix style 9e42208 [Davies Liu] show progress bar in console and title (cherry picked from commit e34f38ff1a0dfbb0ffa4bd11071e03b1a58de998) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
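A hedged sketch of how such a console progress line can be rendered (illustrative, not the actual ConsoleProgressBar): the same terminal line is rewritten in place with a carriage return, showing finished, running, and total tasks.

```scala
object ProgressLine {
  /** Render something like "[Stage 1:=====>          (85 + 4) / 250]". */
  def render(stageId: Int, finished: Int, running: Int, total: Int, width: Int = 60): String = {
    val header = s"[Stage $stageId:"
    val tailer = s"($finished + $running) / $total]"
    val barWidth = math.max(0, width - header.length - tailer.length)
    val done = if (total > 0) barWidth * finished / total else 0
    header + "=" * done + ">" + " " * math.max(0, barWidth - done - 1) + tailer
  }

  def show(line: String): Unit = {
    // '\r' moves the cursor to the start of the line so the bar updates in place.
    System.err.print("\r" + line)
  }
}
```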
* [SPARK-4404] remove sys.exit() in shutdown hookDavies Liu2014-11-181-1/+1
| | | | | | | | | | | | | | | | | If SparkSubmit dies first, the bootstrapper will be blocked by the shutdown hook, and sys.exit() in a shutdown hook can cause a deadlock. cc andrewor14 Author: Davies Liu <davies@databricks.com> Closes #3289 from davies/fix_bootstraper and squashes the following commits: ea5cdd1 [Davies Liu] Merge branch 'master' of github.com:apache/spark into fix_bootstraper e04b690 [Davies Liu] remove sys.exit in hook 4d11366 [Davies Liu] remove shutdown hook if subprocess die fist (cherry picked from commit 80f31778820586a93d73fa15279a204611cc3c60) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4075][SPARK-4434] Fix the URI validation logic for Application Jar name.Kousuke Saruta2014-11-182-3/+28
| | | | | | | | | | | | | | | | | | | | This PR adds a regression test for SPARK-4434. Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp> Closes #3326 from sarutak/add-triple-slash-testcase and squashes the following commits: 82bc9cc [Kousuke Saruta] Fixed wrong grammar in comment 9149027 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into add-triple-slash-testcase c1c80ca [Kousuke Saruta] Fixed style 4f30210 [Kousuke Saruta] Modified comments 9e09da2 [Kousuke Saruta] Fixed URI validation for jar file d4b99ef [Kousuke Saruta] [SPARK-4075] [Deploy] Jar url validation is not enough for Jar file ac79906 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into add-triple-slash-testcase 6d4f47e [Kousuke Saruta] Added a test case as a regression check for SPARK-4434 (cherry picked from commit bfebfd8b28eeb7e75292333f7885aa0830fcb5fe) Signed-off-by: Andrew Or <andrew@databricks.com>
* SPARK-4466: Provide support for publishing Scala 2.11 artifacts to MavenPatrick Wendell2014-11-171-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | The maven release plug-in does not have support for publishing two separate sets of artifacts for a single release. Because of the way that Scala 2.11 support in Spark works, we have to write some customized code to do this. The good news is that the Maven release API is just a thin wrapper on doing git commits and pushing artifacts to the HTTP API of Apache's Sonatype server and this might overall make our deployment easier to understand. This was already used for the 1.2 snapshot, so I think it is working well. One other nice thing is this could be pretty easily extended to publish nightly snapshots. Author: Patrick Wendell <pwendell@gmail.com> Closes #3332 from pwendell/releases and squashes the following commits: 2fedaed [Patrick Wendell] Automate the opening and closing of Sonatype repos e2a24bb [Patrick Wendell] Fixing issue where we overrode non-spark version numbers 9df3a50 [Patrick Wendell] Adding TODO 1cc1749 [Patrick Wendell] Don't build the thriftserver for 2.11 933201a [Patrick Wendell] Make tagging of release commit eager d0388a6 [Patrick Wendell] Support Scala 2.11 build 4f4dc62 [Patrick Wendell] Change to 2.11 should not be included when committing new patch bf742e1 [Patrick Wendell] Minor fixes ffa1df2 [Patrick Wendell] Adding a Scala 2.11 package to test it 9ac4381 [Patrick Wendell] Addressing TODO b3105ff [Patrick Wendell] Removing commented out code d906803 [Patrick Wendell] Small fix 3f4d985 [Patrick Wendell] More work fcd54c2 [Patrick Wendell] Consolidating use of keys df2af30 [Patrick Wendell] Changes to release stuff (cherry picked from commit c6e0c2ab1c29c184a9302d23ad75e4ccd8060242) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* [SPARK-4180] [Core] Prevent creation of multiple active SparkContextsJosh Rosen2014-11-174-24/+207
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch adds error-detection logic to throw an exception when attempting to create multiple active SparkContexts in the same JVM, since this is currently unsupported and has been known to cause confusing behavior (see SPARK-2243 for more details). **The solution implemented here is only a partial fix.** A complete fix would have the following properties: 1. Only one SparkContext may ever be under construction at any given time. 2. Once a SparkContext has been successfully constructed, any subsequent construction attempts should fail until the active SparkContext is stopped. 3. If the SparkContext constructor throws an exception, then all resources created in the constructor should be cleaned up (SPARK-4194). 4. If a user attempts to create a SparkContext but the creation fails, then the user should be able to create new SparkContexts. This PR only provides 2) and 4); we should be able to provide all of these properties, but the correct fix will involve larger changes to SparkContext's construction / initialization, so we'll target it for a different Spark release. ### The correct solution: I think that the correct way to do this would be to move the construction of SparkContext's dependencies into a static method in the SparkContext companion object. Specifically, we could make the default SparkContext constructor `private` and change it to accept a `SparkContextDependencies` object that contains all of SparkContext's dependencies (e.g. DAGScheduler, ContextCleaner, etc.). Secondary constructors could call a method on the SparkContext companion object to create the `SparkContextDependencies` and pass the result to the primary SparkContext constructor. For example: ```scala class SparkContext private (deps: SparkContextDependencies) { def this(conf: SparkConf) { this(SparkContext.getDeps(conf)) } } object SparkContext( private[spark] def getDeps(conf: SparkConf): SparkContextDependencies = synchronized { if (anotherSparkContextIsActive) { throw Exception(...) } var dagScheduler: DAGScheduler = null try { dagScheduler = new DAGScheduler(...) [...] } catch { case e: Exception => Option(dagScheduler).foreach(_.stop()) [...] } SparkContextDependencies(dagScheduler, ....) } } ``` This gives us mutual exclusion and ensures that any resources created during the failed SparkContext initialization are properly cleaned up. This indirection is necessary to maintain binary compatibility. In retrospect, it would have been nice if SparkContext had no private constructors and could only be created through builder / factory methods on its companion object, since this buys us lots of flexibility and makes dependency injection easier. ### Alternative solutions: As an alternative solution, we could refactor SparkContext's primary constructor to perform all object creation in a giant `try-finally` block. Unfortunately, this will require us to turn a bunch of `vals` into `vars` so that they can be assigned from the `try` block. If we still want `vals`, we could wrap each `val` in its own `try` block (since the try block can return a value), but this will lead to extremely messy code and won't guard against the introduction of future code which doesn't properly handle failures. The more complex approach outlined above gives us some nice dependency injection benefits, so I think that might be preferable to a `var`-ification. 
### This PR's solution: - At the start of the constructor, check whether some other SparkContext is active; if so, throw an exception. - If another SparkContext might be under construction (or has thrown an exception during construction), allow the new SparkContext to begin construction but log a warning (since resources might have been leaked from a failed creation attempt). - At the end of the SparkContext constructor, check whether some other SparkContext constructor has raced and successfully created an active context. If so, throw an exception. This guarantees that no two SparkContexts will ever be active and exposed to users (since we check at the very end of the constructor). If two threads race to construct SparkContexts, then one of them will win and another will throw an exception. This exception can be turned into a warning by setting `spark.driver.allowMultipleContexts = true`. The exception is disabled in unit tests, since there are some suites (such as Hive) that may require more significant refactoring to clean up their SparkContexts. I've made a few changes to other suites' test fixtures to properly clean up SparkContexts so that the unit test logs contain fewer warnings. Author: Josh Rosen <joshrosen@databricks.com> Closes #3121 from JoshRosen/SPARK-4180 and squashes the following commits: 23c7123 [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-4180 d38251b [Josh Rosen] Address latest round of feedback. c0987d3 [Josh Rosen] Accept boolean instead of SparkConf in methods. 85a424a [Josh Rosen] Incorporate more review feedback. 372d0d3 [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-4180 f5bb78c [Josh Rosen] Update mvn build, too. d809cb4 [Josh Rosen] Improve handling of failed SparkContext creation attempts. 79a7e6f [Josh Rosen] Fix commented out test a1cba65 [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-4180 7ba6db8 [Josh Rosen] Add utility to set system properties in tests. 4629d5c [Josh Rosen] Set spark.driver.allowMultipleContexts=true in tests. ed17e14 [Josh Rosen] Address review feedback; expose hack workaround for existing unit tests. 1c66070 [Josh Rosen] Merge remote-tracking branch 'origin/master' into SPARK-4180 06c5c54 [Josh Rosen] Add / improve SparkContext cleanup in streaming BasicOperationsSuite d0437eb [Josh Rosen] StreamingContext.stop() should stop SparkContext even if StreamingContext has not been started yet. c4d35a2 [Josh Rosen] Log long form of creation site to aid debugging. 918e878 [Josh Rosen] Document "one SparkContext per JVM" limitation. afaa7e3 [Josh Rosen] [SPARK-4180] Prevent creations of multiple active SparkContexts. (cherry picked from commit 0f3ceb56c78e7260725a09fba0e10aa193cbda4b) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* Revert "[SPARK-4075] [Deploy] Jar url validation is not enough for Jar file"Andrew Or2014-11-172-16/+1
| | | | This reverts commit 098f83c7ccd7dad9f9228596da69fe5f55711a52.
* SPARK-4445, Don't display storage level in toDebugString unless RDD is ↵Prashant Sharma2014-11-171-1/+1
| | | | | | | | | | | | | persisted. Author: Prashant Sharma <prashant.s@imaginea.com> Closes #3310 from ScrapCodes/SPARK-4445/rddDebugStringFix and squashes the following commits: 4e57c52 [Prashant Sharma] SPARK-4445, Don't display storage level in toDebugString unless RDD is persisted (cherry picked from commit 5c92d47ad2e3414f2ae089cb47f3c6daccba8d90) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* Preparing development version 1.2.1-SNAPSHOTUbuntu2014-11-172-2/+2
|
* Preparing Spark release v1.2.0-snapshot1Ubuntu2014-11-172-2/+2
|
* Revert "Preparing Spark release v1.2.0-snapshot0"Patrick Wendell2014-11-162-2/+2
| | | | This reverts commit bc09875799aa373f4320d38b02618173ffa4c96f.
* Revert "Preparing development version 1.2.1-SNAPSHOT"Patrick Wendell2014-11-162-3/+3
| | | | This reverts commit 6c6fd218c83a049c874b8a0ea737333c1899c94a.
* Preparing development version 1.2.1-SNAPSHOTUbuntu2014-11-172-3/+3
|
* Preparing Spark release v1.2.0-snapshot0Ubuntu2014-11-172-2/+2
|
* [SPARK-4393] Fix memory leak in ConnectionManager ACK timeout TimerTasks; ↵Josh Rosen2014-11-161-12/+35
| | | | | | | | | | | | | | | | | | | | | | | use HashedWheelTimer This patch is intended to fix a subtle memory leak in ConnectionManager's ACK timeout TimerTasks: in the old code, each TimerTask held a reference to the message being sent and a cancelled TimerTask won't necessarily be garbage-collected until it's scheduled to run, so this caused huge buildups of messages that weren't garbage collected until their timeouts expired, leading to OOMs. This patch addresses this problem by capturing only the message ID in the TimerTask instead of the whole message, and by keeping a WeakReference to the promise in the TimerTask. I've also modified this code to use Netty's HashedWheelTimer, whose performance characteristics should be better for this use-case. Thanks to cristianopris for narrowing down this issue! Author: Josh Rosen <joshrosen@databricks.com> Closes #3259 from JoshRosen/connection-manager-timeout-bugfix and squashes the following commits: afcc8d6 [Josh Rosen] Address rxin's review feedback. 2a2e92d [Josh Rosen] Keep only WeakReference to promise in TimerTask; 0f0913b [Josh Rosen] Spelling fix: timout => timeout 3200c33 [Josh Rosen] Use Netty HashedWheelTimer f847dd4 [Josh Rosen] Don't capture entire message in ACK timeout task. (cherry picked from commit 7850e0c707affd5eafd570fb43716753396cf479) Signed-off-by: Reynold Xin <rxin@databricks.com>
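A hedged sketch of the fix's shape (illustrative, not the actual ConnectionManager code): the timeout task captures only the message id and a WeakReference to the promise, never the message itself, so a cancelled task cannot pin large buffers in memory until the wheel finally fires it.

```scala
import java.lang.ref.WeakReference
import java.util.concurrent.TimeUnit
import io.netty.util.{HashedWheelTimer, Timeout, TimerTask}
import scala.concurrent.Promise

object AckTimeouts {
  private val timer = new HashedWheelTimer()

  def scheduleAckTimeout(messageId: Int, promise: Promise[Unit], timeoutMs: Long): Timeout = {
    val promiseRef = new WeakReference(promise)
    timer.newTimeout(new TimerTask {
      override def run(timeout: Timeout): Unit = {
        // Only fail the promise if it is still reachable and not yet completed.
        Option(promiseRef.get()).foreach { p =>
          p.tryFailure(new java.io.IOException(s"Ack for message $messageId timed out"))
        }
      }
    }, timeoutMs, TimeUnit.MILLISECONDS)
  }
}
```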
* [SPARK-2321] Several progress API improvements / refactoringsJosh Rosen2014-11-146-169/+266
| | | | | | | | | | | | | | | | | | | | | | | This PR refactors / extends the status API introduced in #2696. - Change StatusAPI from a mixin trait to a class. Before, the new status API methods were directly accessible through SparkContext, whereas now they're accessed through a `sc.statusAPI` field. As long as we were going to add these methods directly to SparkContext, the mixin trait seemed like a good idea, but this might be simpler to reason about and may avoid pitfalls that I've run into while attempting to refactor other parts of SparkContext to use mixins (see #3071, for example). - Change the name from SparkStatusAPI to SparkStatusTracker. - Make `getJobIdsForGroup(null)` return ids for jobs that aren't associated with any job group. - Add `getActiveStageIds()` and `getActiveJobIds()` methods that return the ids of whatever's currently active in this SparkContext. This should simplify davies's progress bar code. Author: Josh Rosen <joshrosen@databricks.com> Closes #3197 from JoshRosen/progress-api-improvements and squashes the following commits: 30b0afa [Josh Rosen] Rename SparkStatusAPI to SparkStatusTracker. d1b08d8 [Josh Rosen] Add missing newlines 2cc7353 [Josh Rosen] Add missing file. d5eab1f [Josh Rosen] Add getActive[Stage|Job]Ids() methods. a227984 [Josh Rosen] getJobIdsForGroup(null) should return jobs for default group c47e294 [Josh Rosen] Remove StatusAPI mixin trait. (cherry picked from commit 40eb8b6ef3a67e36d0d9492c044981a1da76351d) Signed-off-by: Reynold Xin <rxin@databricks.com>
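A short usage sketch of the renamed tracker, based on the methods listed above and assuming it is exposed as a `statusTracker` field on SparkContext to match the new class name:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object StatusTrackerDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("status-demo").setMaster("local[2]"))
    val tracker = sc.statusTracker

    val activeJobs = tracker.getActiveJobIds()
    val activeStages = tracker.getActiveStageIds()
    val ungrouped = tracker.getJobIdsForGroup(null) // jobs not associated with any job group
    println(s"active jobs=${activeJobs.length}, stages=${activeStages.length}, ungrouped jobs=${ungrouped.length}")
    sc.stop()
  }
}
```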
* [SPARK-4260] Httpbroadcast should set connection timeout.Kousuke Saruta2014-11-141-0/+2
| | | | | | | | | | | | | HttpBroadcast sets a read timeout but doesn't set a connection timeout. Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp> Closes #3122 from sarutak/httpbroadcast-timeout and squashes the following commits: c7f3a56 [Kousuke Saruta] Added Connection timeout for Http Connection to HttpBroadcast.scala (cherry picked from commit 60969b0336930449a826821a48f83f65337e8856) Signed-off-by: Reynold Xin <rxin@databricks.com>
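A hedged sketch of the change's shape (illustrative names): open the broadcast URL with both a connect timeout and a read timeout, so a dead server is detected instead of blocking indefinitely while establishing the connection.

```scala
import java.net.{URL, URLConnection}

object TimedHttpFetch {
  val httpTimeoutMs: Int = 60 * 1000

  def openConnection(url: String): URLConnection = {
    val uc = new URL(url).openConnection()
    uc.setConnectTimeout(httpTimeoutMs) // previously only the read timeout was set
    uc.setReadTimeout(httpTimeoutMs)
    uc
  }
}
```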
* [SPARK-4363][Doc] Update the Broadcast examplezsxwing2014-11-141-1/+1
| | | | | | | | | | | Author: zsxwing <zsxwing@gmail.com> Closes #3226 from zsxwing/SPARK-4363 and squashes the following commits: 8109914 [zsxwing] Update the Broadcast example (cherry picked from commit 861223ee5bea8e434a9ebb0d53f436ce23809f9c) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4379][Core] Change Exception to SparkException in checkpointzsxwing2014-11-141-1/+1
| | | | | | | | | | | | | It's better to change to SparkException. However, it's a breaking change since it will change the exception type. Author: zsxwing <zsxwing@gmail.com> Closes #3241 from zsxwing/SPARK-4379 and squashes the following commits: 409f3af [zsxwing] Change Exception to SparkException in checkpoint (cherry picked from commit dba14058230194122a715c219e35ab8eaa786321) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-4415] [PySpark] JVM should exit after Python exitDavies Liu2014-11-141-5/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | When JVM is started in a Python process, it should exit once the stdin is closed. test: add spark.driver.memory in conf/spark-defaults.conf ``` daviesdm:~/work/spark$ cat conf/spark-defaults.conf spark.driver.memory 8g daviesdm:~/work/spark$ bin/pyspark >>> quit daviesdm:~/work/spark$ jps 4931 Jps 286 daviesdm:~/work/spark$ python wc.py 943738 0.719928026199 daviesdm:~/work/spark$ jps 286 4990 Jps ``` Author: Davies Liu <davies@databricks.com> Closes #3274 from davies/exit and squashes the following commits: df0e524 [Davies Liu] address comments ce8599c [Davies Liu] address comments 050651f [Davies Liu] JVM should exit after Python exit (cherry picked from commit 7fe08b43c78bf9e8515f671e72aa03a83ea782f8) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4404]SparkSubmitDriverBootstrapper should stop after its SparkSubmit ↵WangTao2014-11-141-0/+10
| | | | | | | | | | | | | | | | | | | | | | sub-process ends. https://issues.apache.org/jira/browse/SPARK-4404 When we have spark.driver.extra* or spark.driver.memory in SPARK_SUBMIT_PROPERTIES_FILE, spark-class will use SparkSubmitDriverBootstrapper to launch the driver. If we get the process id of SparkSubmitDriverBootstrapper and kill it while it is running, we expect its SparkSubmit sub-process to stop as well. Author: WangTao <barneystinson@aliyun.com> Author: WangTaoTheTonic <barneystinson@aliyun.com> Closes #3266 from WangTaoTheTonic/killsubmit and squashes the following commits: e03eba5 [WangTaoTheTonic] add comments 57b5ca1 [WangTao] SparkSubmitDriverBootstrapper should stop after its SparkSubmit sub-process ends (cherry picked from commit 303a4e4d23e5cd93b541480cf88d5badb9cf9622) Signed-off-by: Andrew Or <andrew@databricks.com>
* SPARK-4214. With dynamic allocation, avoid outstanding requests for more...Sandy Ryza2014-11-142-9/+94
| | | | | | | | | | | | | | | | | | | ... executors than pending tasks need. WIP. Still need to add and fix tests. Author: Sandy Ryza <sandy@cloudera.com> Closes #3204 from sryza/sandy-spark-4214 and squashes the following commits: 35cf0e0 [Sandy Ryza] Add comment 13b53df [Sandy Ryza] Review feedback 067465f [Sandy Ryza] Whitespace fix 6ae080c [Sandy Ryza] Add tests and get num pending tasks from ExecutorAllocationListener 531e2b6 [Sandy Ryza] SPARK-4214. With dynamic allocation, avoid outstanding requests for more executors than pending tasks need. (cherry picked from commit ad42b283246b93654c5fd731cd618fee74d8c4da) Signed-off-by: Andrew Or <andrew@databricks.com>
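A hedged sketch of the capping logic (illustrative, not ExecutorAllocationManager's actual code): never keep more executors requested than are needed to run all pending and running tasks at the configured tasks-per-executor parallelism.

```scala
object AllocationCap {
  def maxExecutorsNeeded(pendingTasks: Int, runningTasks: Int, tasksPerExecutor: Int): Int = {
    require(tasksPerExecutor > 0)
    // Round up: even one leftover task still needs a whole executor.
    (pendingTasks + runningTasks + tasksPerExecutor - 1) / tasksPerExecutor
  }

  def boundedTarget(desired: Int, pendingTasks: Int, runningTasks: Int, tasksPerExecutor: Int): Int =
    math.min(desired, maxExecutorsNeeded(pendingTasks, runningTasks, tasksPerExecutor))
}
```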
* Update failed assert text to match code in SizeEstimatorSuiteJeff Hammerbacher2014-11-141-1/+1
| | | | | | | | | | | Author: Jeff Hammerbacher <jeff.hammerbacher@gmail.com> Closes #3242 from hammer/patch-1 and squashes the following commits: f88d635 [Jeff Hammerbacher] Update failed assert text to match code in SizeEstimatorSuite (cherry picked from commit c258db9ed4104b6eefe9f55f3e3959a3c46c2900) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4313][WebUI][Yarn] Fix link issue of the executor thread dump page in ↵zsxwing2014-11-143-3/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | yarn-cluster mode In yarn-cluster mode, the Web UI is running behind a yarn proxy server. Some features(or bugs?) of yarn proxy server will break the links for thread dump. 1. Yarn proxy server will do http redirect internally, so if opening `http://example.com:8088/cluster/app/application_1415344371838_0012/executors`, it will fetch `http://example.com:8088/cluster/app/application_1415344371838_0012/executors/` and return the content but won't change the link in the browser. Then when a user clicks `Thread Dump`, it will jump to `http://example.com:8088/proxy/application_1415344371838_0012/threadDump/?executorId=2`. This is a wrong link. The correct link should be `http://example.com:8088/proxy/application_1415344371838_0012/executors/threadDump/?executorId=2`. Adding "/" to the tab links will fix it. 2. Yarn proxy server has a bug about the URL encode/decode. When a user accesses `http://example.com:8088/proxy/application_1415344371838_0006/executors/threadDump/?executorId=%3Cdriver%3E`, the yarn proxy server will require `http://example.com:36429/executors/threadDump/?executorId=%25253Cdriver%25253E`. But Spark web server expects `http://example.com:36429/executors/threadDump/?executorId=%3Cdriver%3E`. Related to [YARN-2844](https://issues.apache.org/jira/browse/YARN-2844). For now, it's a tricky approach to bypass the yarn bug. ![threaddump](https://cloud.githubusercontent.com/assets/1000778/4972567/d1ccba64-68ad-11e4-983e-257530cef35a.png) Author: zsxwing <zsxwing@gmail.com> Closes #3183 from zsxwing/SPARK-4313 and squashes the following commits: 3379ca8 [zsxwing] Encode the executor id in the thread dump link and update the comment abfa063 [zsxwing] Fix link issue of the executor thread dump page in yarn-cluster mode (cherry picked from commit 156cf3333dcd93304eb5240f5a6466a3a0311957) Signed-off-by: Andrew Or <andrew@databricks.com>
* [Spark Core] SPARK-4380 Edit spilling log from MB to BHong Shen2014-11-141-2/+3
| | | | | | | | | | | | | | | | | https://issues.apache.org/jira/browse/SPARK-4380 Author: Hong Shen <hongshen@tencent.com> Closes #3243 from shenh062326/spark_change and squashes the following commits: 4653378 [Hong Shen] Edit spilling log from MB to B 21ee960 [Hong Shen] Edit spilling log from MB to B e9145e8 [Hong Shen] Edit spilling log from MB to B da761c2 [Hong Shen] Edit spilling log from MB to B 946351c [Hong Shen] Edit spilling log from MB to B (cherry picked from commit 0c56a039a9c5b871422f0fc55ff4394bc077fb34) Signed-off-by: Andrew Or <andrew@databricks.com>
* Revert "[SPARK-2703][Core]Make Tachyon related unit tests execute without ↵Patrick Wendell2014-11-142-16/+2
| | | | | | deploying a Tachyon system locally." This reverts commit c127ff8c87fc4f3aa6f09697928832dc6d37cc0f.
* [SPARK-4310][WebUI] Sort 'Submitted' column in Stage page by timezsxwing2014-11-131-1/+3
| | | | | | | | | | | Author: zsxwing <zsxwing@gmail.com> Closes #3179 from zsxwing/SPARK-4310 and squashes the following commits: b0d29f5 [zsxwing] Sort 'Submitted' column in Stage page by time (cherry picked from commit 825709a0b8f9b4bfb2718ecca8efc32be96c5a57) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4370] [Core] Limit number of Netty cores based on executor sizeAaron Davidson2014-11-1213-30/+60
| | | | | | | | | | | | Author: Aaron Davidson <aaron@databricks.com> Closes #3155 from aarondav/conf and squashes the following commits: 7045e77 [Aaron Davidson] Add mesos comment 4770f6e [Aaron Davidson] [SPARK-4370] [Core] Limit number of Netty cores based on executor size (cherry picked from commit b9e1c2eb9b6f7fb609718ef20048a8da452d881b) Signed-off-by: Reynold Xin <rxin@databricks.com>
* [SPARK-2672] support compressed file in wholeTextFileDavies Liu2014-11-123-13/+103
| | | | | | | | | | | | | | | | The wholeFile() method cannot read compressed files, but it should, just like textFile(). Author: Davies Liu <davies@databricks.com> Closes #3005 from davies/whole and squashes the following commits: a43fcfb [Davies Liu] remove semicolon c83571a [Davies Liu] remove = if return type is Unit 83c844f [Davies Liu] Merge branch 'master' of github.com:apache/spark into whole 22e8b3e [Davies Liu] support compressed file in wholeTextFile (cherry picked from commit d7d54a44e3ada0e50febe64e9b037dc2c8f6ff61) Signed-off-by: Josh Rosen <joshrosen@databricks.com>
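A usage sketch: after this change, wholeTextFiles can consume compressed inputs the same way textFile does. The path below is a placeholder.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WholeTextFilesDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wholeTextFiles-demo").setMaster("local[2]"))
    // Each element is (fileName, fileContent); with this change, compressed inputs
    // such as gzipped files are decoded the same way textFile would decode them.
    val files = sc.wholeTextFiles("/tmp/logs/*.gz") // placeholder path
    files.map { case (name, content) => s"$name: ${content.length} chars" }
      .collect()
      .foreach(println)
    sc.stop()
  }
}
```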
* [Test] Better exception message from SparkSubmitSuiteAndrew Or2014-11-121-8/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Before: ``` Exception in thread "main" java.lang.Exception: Could not load user defined classes inside of executors at org.apache.spark.deploy.JarCreationTest$.main(SparkSubmitSuite.scala:471) at org.apache.spark.deploy.JarCreationTest.main(SparkSubmitSuite.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ``` After: ``` Exception in thread "main" java.lang.Exception: Could not load user class from jar: java.lang.UnsupportedClassVersionError: SparkSubmitClassA : Unsupported major.minor version 51.0 java.lang.ClassLoader.defineClass1(Native Method) java.lang.ClassLoader.defineClass(ClassLoader.java:643) ... at org.apache.spark.deploy.JarCreationTest$.main(SparkSubmitSuite.scala:472) at org.apache.spark.deploy.JarCreationTest.main(SparkSubmitSuite.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ``` Author: Andrew Or <andrew@databricks.com> Closes #3212 from andrewor14/submit-suite-message and squashes the following commits: 7779248 [Andrew Or] Format exception 8fe6719 [Andrew Or] Better exception message from failed test (cherry picked from commit 6e3c5a296c90a551be5e6c7292a66f2e65338240) Signed-off-by: Andrew Or <andrew@databricks.com>
* Support cross building for Scala 2.11Prashant Sharma2014-11-113-19/+42
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Let's give this another go using a version of Hive that shades its JLine dependency. Author: Prashant Sharma <prashant.s@imaginea.com> Author: Patrick Wendell <pwendell@gmail.com> Closes #3159 from pwendell/scala-2.11-prashant and squashes the following commits: e93aa3e [Patrick Wendell] Restoring -Phive-thriftserver profile and cleaning up build script. f65d17d [Patrick Wendell] Fixing build issue due to merge conflict a8c41eb [Patrick Wendell] Reverting dev/run-tests back to master state. 7a6eb18 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into scala-2.11-prashant 583aa07 [Prashant Sharma] REVERT ME: removed hive thirftserver 3680e58 [Prashant Sharma] Revert "REVERT ME: Temporarily removing some Cli tests." 935fb47 [Prashant Sharma] Revert "Fixed by disabling a few tests temporarily." 925e90f [Prashant Sharma] Fixed by disabling a few tests temporarily. 2fffed3 [Prashant Sharma] Exclude groovy from sbt build, and also provide a way for such instances in future. 8bd4e40 [Prashant Sharma] Switched to gmaven plus, it fixes random failures observer with its predecessor gmaven. 5272ce5 [Prashant Sharma] SPARK_SCALA_VERSION related bugs. 2121071 [Patrick Wendell] Migrating version detection to PySpark b1ed44d [Patrick Wendell] REVERT ME: Temporarily removing some Cli tests. 1743a73 [Patrick Wendell] Removing decimal test that doesn't work with Scala 2.11 f5cad4e [Patrick Wendell] Add Scala 2.11 docs 210d7e1 [Patrick Wendell] Revert "Testing new Hive version with shaded jline" 48518ce [Patrick Wendell] Remove association of Hive and Thriftserver profiles. e9d0a06 [Patrick Wendell] Revert "Enable thritfserver for Scala 2.10 only" 67ec364 [Patrick Wendell] Guard building of thriftserver around Scala 2.10 check 8502c23 [Patrick Wendell] Enable thritfserver for Scala 2.10 only e22b104 [Patrick Wendell] Small fix in pom file ec402ab [Patrick Wendell] Various fixes 0be5a9d [Patrick Wendell] Testing new Hive version with shaded jline 4eaec65 [Prashant Sharma] Changed scripts to ignore target. 5167bea [Prashant Sharma] small correction a4fcac6 [Prashant Sharma] Run against scala 2.11 on jenkins. 80285f4 [Prashant Sharma] MAven equivalent of setting spark.executor.extraClasspath during tests. 034b369 [Prashant Sharma] Setting test jars on executor classpath during tests from sbt. d4874cb [Prashant Sharma] Fixed Python Runner suite. null check should be first case in scala 2.11. 6f50f13 [Prashant Sharma] Fixed build after rebasing with master. We should use ${scala.binary.version} instead of just 2.10 e56ca9d [Prashant Sharma] Print an error if build for 2.10 and 2.11 is spotted. 937c0b8 [Prashant Sharma] SCALA_VERSION -> SPARK_SCALA_VERSION cb059b0 [Prashant Sharma] Code review 0476e5e [Prashant Sharma] Scala 2.11 support with repl and all build changes. (cherry picked from commit daaca14c16dc2c1abc98f15ab8c6f7c14761b627) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* SPARK-2269 Refactor mesos scheduler resourceOffers and add unit testTimothy Chen2014-11-112-79/+152
| | | | | | | | | | | | | | Author: Timothy Chen <tnachen@gmail.com> Closes #1487 from tnachen/resource_offer_refactor and squashes the following commits: 4ea5dec [Timothy Chen] Rebase from master and address comments 9ccab09 [Timothy Chen] Address review comments e6494dc [Timothy Chen] Refactor class loading 8207428 [Timothy Chen] Refactor mesos scheduler resourceOffers and add unit test (cherry picked from commit a878660d2d7bb7ad9b5818a674e1e7c651077e78) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-4307] Initialize FileDescriptor lazily in FileRegion.Reynold Xin2014-11-115-7/+15
| | | | | | | | | | | | | | | | | Netty's DefaultFileRegion requires a FileDescriptor in its constructor, which means we need to have an open file handle. In super large workloads, this could lead to too many open files due to the way these file descriptors are cleaned. This pull request creates a new LazyFileRegion that initializes the FileDescriptor when we are sending data for the first time. Author: Reynold Xin <rxin@databricks.com> Author: Reynold Xin <rxin@apache.org> Closes #3172 from rxin/lazyFD and squashes the following commits: 0bdcdc6 [Reynold Xin] Added reference to Netty's DefaultFileRegion d4564ae [Reynold Xin] Added SparkConf to the ctor argument of IndexShuffleBlockManager. 6ed369e [Reynold Xin] Code review feedback. 04cddc8 [Reynold Xin] [SPARK-4307] Initialize FileDescriptor lazily in FileRegion. (cherry picked from commit ef29a9a9aa85468869eb67ca67b66c65f508d0ee) Signed-off-by: Aaron Davidson <aaron@databricks.com>
* [SPARK-4169] [Core] Accommodate non-English Locales in unit testsNiklas Wilcke2014-11-102-12/+15
| | | | | | | | | | | | | | | | | | For me the core tests failed because there are two locale dependent parts in the code. Look at the Jira ticket for details. Why is it necessary to check the exception message in isBindCollision in https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/Utils.scala#L1686 ? Author: Niklas Wilcke <1wilcke@informatik.uni-hamburg.de> Closes #3036 from numbnut/core-test-fix and squashes the following commits: 1fb0d04 [Niklas Wilcke] Fixing locale dependend code and tests (cherry picked from commit ed8bf1eac548577c4bbad7ce3f7f301a2f52ef17) Signed-off-by: Andrew Or <andrew@databricks.com>
* [SPARK-2703][Core]Make Tachyon related unit tests execute without deploying ↵RongGu2014-11-092-2/+16
| | | | | | | | | | | | | | | a Tachyon system locally. Make Tachyon related unit tests execute without deploying a Tachyon system locally. Author: RongGu <gurongwalker@gmail.com> Closes #3030 from RongGu/SPARK-2703 and squashes the following commits: ad08827 [RongGu] Make Tachyon related unit tests execute without deploying a Tachyon system locally (cherry picked from commit bd86cb1738800a0aa4c88b9afdba2f97ac6cbf25) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* SPARK-3179. Add task OutputMetrics.Sandy Ryza2014-11-0916-31/+346
| | | | | | | | | | | | | | | | | | Author: Sandy Ryza <sandy@cloudera.com> This patch had conflicts when merged, resolved by Committer: Kay Ousterhout <kayousterhout@gmail.com> Closes #2968 from sryza/sandy-spark-3179 and squashes the following commits: dce4784 [Sandy Ryza] More review feedback 8d350d1 [Sandy Ryza] Fix test against Hadoop 2.5+ e7c74d0 [Sandy Ryza] More review feedback 6cff9c4 [Sandy Ryza] Review feedback fb2dde0 [Sandy Ryza] SPARK-3179 (cherry picked from commit 3c2cff4b9464f8d7535564fcd194631a8e5bb0a5) Signed-off-by: Kay Ousterhout <kayousterhout@gmail.com>
* SPARK-1209 [CORE] (Take 2) SparkHadoop{MapRed,MapReduce}Util should not use ↵Sean Owen2014-11-095-5/+22
| | | | | | | | | | | | | | | | | | | | | | | package org.apache.hadoop andrewor14 Another try at SPARK-1209, to address https://github.com/apache/spark/pull/2814#issuecomment-61197619 I successfully tested with `mvn -Dhadoop.version=1.0.4 -DskipTests clean package; mvn -Dhadoop.version=1.0.4 test` I assume that is what failed Jenkins last time. I also tried `-Dhadoop.version1.2.1` and `-Phadoop-2.4 -Pyarn -Phive` for more coverage. So this is why the class was put in `org.apache.hadoop` to begin with, I assume. One option is to leave this as-is for now and move it only when Hadoop 1.0.x support goes away. This is the other option, which adds a call to force the constructor to be public at run-time. It's probably less surprising than putting Spark code in `org.apache.hadoop`, but, does involve reflection. A `SecurityManager` might forbid this, but it would forbid a lot of stuff Spark does. This would also only affect Hadoop 1.0.x it seems. Author: Sean Owen <sowen@cloudera.com> Closes #3048 from srowen/SPARK-1209 and squashes the following commits: 0d48f4b [Sean Owen] For Hadoop 1.0.x, make certain constructors public, which were public in later versions 466e179 [Sean Owen] Disable MIMA warnings resulting from moving the class -- this was also part of the PairRDDFunctions type hierarchy though? eb61820 [Sean Owen] Move SparkHadoopMapRedUtil / SparkHadoopMapReduceUtil from org.apache.hadoop to org.apache.spark (cherry picked from commit f8e5732307dcb1482d9bcf1162a1090ef9a7b913) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
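A hedged sketch of the run-time reflection mentioned above (illustrative, not the actual SparkHadoopMapRedUtil code): the constructor is looked up reflectively and made accessible, which is a no-op on Hadoop versions where it is already public.

```scala
object ReflectiveConstruction {
  /** Instantiate clazz via a constructor that may be package-private on Hadoop 1.0.x. */
  def newInstance[T](clazz: Class[T], paramTypes: Array[Class[_]], args: Array[AnyRef]): T = {
    val ctor = clazz.getDeclaredConstructor(paramTypes: _*)
    ctor.setAccessible(true) // no-op when the constructor is already public
    ctor.newInstance(args: _*)
  }
}
```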
* SPARK-1344 [DOCS] Scala API docs for top methodsSean Owen2014-11-092-12/+12
| | | | | | | | | | | | | Use "k" in javadoc of top and takeOrdered to avoid confusion with type K in pair RDDs. I think this resolves the discussion in SPARK-1344. Author: Sean Owen <sowen@cloudera.com> Closes #3168 from srowen/SPARK-1344 and squashes the following commits: 6963fcc [Sean Owen] Use "k" in javadoc of top and takeOrdered to avoid confusion with type K in pair RDDs (cherry picked from commit d1362659ef5d62db2c9ff0d2a24639abcef4e118) Signed-off-by: Patrick Wendell <pwendell@gmail.com>
* [SPARK-4225][SQL] Resorts to SparkContext.version to inspect Spark versionCheng Lian2014-11-071-17/+7
| | | | | | | | | | | | | | | This PR resorts to `SparkContext.version` rather than META-INF/MANIFEST.MF in the assembly jar to inspect Spark version. Currently, when built with Maven, the MANIFEST.MF file in the assembly jar is incorrectly replaced by Guava 15.0 MANIFEST.MF, probably because of the assembly/shading tricks. Another related PR is #3103, which tries to fix the MANIFEST issue. Author: Cheng Lian <lian@databricks.com> Closes #3105 from liancheng/spark-4225 and squashes the following commits: d9585e1 [Cheng Lian] Resorts to SparkContext.version to inspect Spark version (cherry picked from commit 86e9eaa3f0ec23cb38bce67585adb2d5f484f4ee) Signed-off-by: Michael Armbrust <michael@databricks.com>