Commit log. Each entry lists the commit message, author, date, number of files changed, and lines removed/added (-/+):
- SPARK-894 - Not all WebUI fields delivered VIA JSON (David McCauley, 2013-09-11; 1 file, -1/+3)
- Merge pull request #915 from ooyala/master: "Get rid of / improve ugly NPE when Utils.deleteRecursively() fails" (Matei Zaharia, 2013-09-09; 1 file, -1/+9)
- Style fix: put body of if within curly braces (Evan Chan, 2013-09-09; 1 file, -1/+3)
- Print out more friendly error if listFiles() fails (Evan Chan, 2013-09-09; 1 file, -1/+7).
  listFiles() could return null if the I/O fails, and this currently results in an ugly NPE which is hard to diagnose.
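The NPE described above comes from `java.io.File.listFiles()` returning null when the listing itself fails. Spark's actual fix is in Scala; purely as an illustration, here is the same defensive pattern sketched in Python, where the listing step fails with a descriptive error instead of an opaque crash (the function name and error message are invented for this sketch):

```python
import os

def delete_recursively(path):
    """Delete a directory tree, raising a descriptive error instead of
    failing with an opaque exception when the directory is unreadable."""
    try:
        entries = os.listdir(path)  # analogous to File.listFiles() on the JVM
    except OSError as e:
        raise IOError("Failed to list files for dir: %s (%s)" % (path, e))
    for name in entries:
        child = os.path.join(path, name)
        if os.path.isdir(child) and not os.path.islink(child):
            delete_recursively(child)
        else:
            os.remove(child)
    os.rmdir(path)
```

The point of the pattern is that the failure surfaces at the listing step with the offending path in the message, rather than later as a null dereference.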
- Merge pull request #907 from stephenh/document_coalesce_shuffle: "Add better docs for coalesce" (Matei Zaharia, 2013-09-09; 2 files, -4/+27)
- Use a set since shuffle could change order (Stephen Haberman, 2013-09-09; 1 file, -1/+1)
- Reword 'evenly distributed' to 'distributed with a hash partitioner' (Stephen Haberman, 2013-09-09; 1 file, -2/+2)
- Add better docs for coalesce (Stephen Haberman, 2013-09-08; 2 files, -4/+27).
  Include the useful tip that if shuffle=true, coalesce can actually increase the number of partitions. This makes coalesce more like a generic `RDD.repartition` operation. (Ideally this `RDD.repartition` could automatically choose either a coalesce or a shuffle if numPartitions was either less than or greater than, respectively, the current number of partitions.)
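As the reworded docs note, with shuffle=true the elements are distributed with a hash partitioner, which is why coalesce can then increase as well as decrease the partition count. A toy Python sketch of that redistribution, using plain lists in place of RDD partitions (the helper name is made up; this is not Spark's implementation):

```python
def repartition_by_hash(elements, num_partitions):
    """Distribute elements across num_partitions buckets with a hash
    partitioner, the way coalesce(n, shuffle=True) redistributes an RDD.
    Unlike a no-shuffle coalesce, this can *increase* the partition count."""
    partitions = [[] for _ in range(num_partitions)]
    for x in elements:
        partitions[hash(x) % num_partitions].append(x)
    return partitions
```

Because the bucket count is chosen freely, going from, say, 2 input partitions to 8 output partitions is just as valid as going from 8 to 2.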
- Merge pull request #890 from mridulm/master: "Fix hash bug" (Matei Zaharia, 2013-09-08; 3 files, -2/+17)
- Address review comments - rename toHash to nonNegativeHash (Mridul Muralidharan, 2013-09-04; 3 files, -3/+3)
- Fix hash bug - caused failure after 35k stages, sigh (Mridul Muralidharan, 2013-09-04; 3 files, -2/+17)
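The underlying bug class is classic: a JVM `hashCode` can be negative, so using it directly as a modular index eventually yields a negative offset. One common way to build a `nonNegativeHash`, sketched here in Python; the bit-mask approach is an assumption for illustration, not necessarily the exact Spark fix:

```python
def non_negative_hash(obj):
    """Map an arbitrary hash value into the non-negative 32-bit range.
    Masking with 0x7FFFFFFF clears the sign bit, so even hashes that are
    negative (as Python's hash() and JVM hashCode both can be) index
    safely into an array or bucket table."""
    return hash(obj) & 0x7FFFFFFF
```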
- Merge pull request #909 from mateiz/exec-id-fix: "Fix an instance where full standalone mode executor IDs were passed to StandaloneSchedulerBackend" (Reynold Xin, 2013-09-08; 2 files, -7/+7)
- Fix an instance where full standalone mode executor IDs were passed to StandaloneSchedulerBackend instead of the smaller IDs used within Spark (that lack the application name) (Matei Zaharia, 2013-09-08; 2 files, -7/+7).
  This was reported by ClearStory in https://github.com/clearstorydata/spark/pull/9. Also fixed some messages that said slave instead of executor.
- Merge pull request #905 from mateiz/docs2: "Job scheduling and cluster mode docs" (Matei Zaharia, 2013-09-08; 8 files, -18/+19)
- Fix unit test failure due to changed default (Matei Zaharia, 2013-09-08; 1 file, -1/+1)
- More fair scheduler docs and property names (Matei Zaharia, 2013-09-08; 6 files, -12/+13).
  Also changed uses of "job" terminology to "application" when they referred to an entire Spark program, to avoid confusion.
- Work in progress (Matei Zaharia, 2013-09-08; 4 files, -5/+5):
  add job scheduling docs; rename some fair scheduler properties; organize intro page better; link to Apache wiki for "contributing to Spark".
- Merge pull request #906 from pwendell/ganglia-sink: "Clean-up of Metrics Code/Docs and Add Ganglia Sink" (Patrick Wendell, 2013-09-08; 9 files, -28/+114)
- Adding sc name in metrics source (Patrick Wendell, 2013-09-08; 5 files, -9/+14)
- Adding more docs and some code cleanup (Patrick Wendell, 2013-09-08; 3 files, -19/+18)
- Ganglia sink (Patrick Wendell, 2013-09-08; 1 file, -0/+82)
- Merge pull request #898 from ilikerps/660: "SPARK-660: Add StorageLevel support in Python" (Matei Zaharia, 2013-09-08; 1 file, -3/+3)
- Export StorageLevel and refactor (Aaron Davidson, 2013-09-07; 1 file, -3/+3)
- Remove reflection, hard-code StorageLevels (Aaron Davidson, 2013-09-07; 1 file, -11/+0).
  The sc.StorageLevel -> StorageLevel pathway is a bit janky, but otherwise the shell would have to call a private method of SparkContext. Having StorageLevel available in sc also doesn't seem like the end of the world. There may be a better solution, though. As for creating the StorageLevel object itself, this seems to be the best way in Python 2 for creating singleton, enum-like objects: http://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python
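Following the singleton, enum-like pattern from the Stack Overflow link above, here is a minimal sketch of what hard-coded StorageLevels could look like in that style. The constructor fields and level names are illustrative, not PySpark's exact definitions:

```python
class StorageLevel(object):
    """Enum-like holder of storage level constants, built in the
    class-attributes style described in the linked answer: plain
    instances attached to the class act as singletons."""
    def __init__(self, use_disk, use_memory, deserialized, replication=1):
        self.use_disk = use_disk
        self.use_memory = use_memory
        self.deserialized = deserialized
        self.replication = replication

# Hard-coded singleton instances, one per level (no JVM reflection needed).
StorageLevel.DISK_ONLY = StorageLevel(True, False, False)
StorageLevel.MEMORY_ONLY = StorageLevel(False, True, True)
StorageLevel.MEMORY_AND_DISK = StorageLevel(True, True, True)
StorageLevel.MEMORY_AND_DISK_2 = StorageLevel(True, True, True, 2)
```

Code can then compare levels by identity (`level is StorageLevel.MEMORY_ONLY`), which is the practical benefit of the singleton construction.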
- Memoize StorageLevels read from JVM (Aaron Davidson, 2013-09-06; 1 file, -1/+1)
- SPARK-660: Add StorageLevel support in Python (Aaron Davidson, 2013-09-05; 1 file, -0/+11).
  It uses reflection... I am not proud of that fact, but it at least ensures compatibility (sans refactoring of the StorageLevel stuff).
- Fixed the bug that ResultTask was not properly deserializing outputId (Reynold Xin, 2013-09-07; 1 file, -2/+2)
- Hot fix to resolve the compilation error caused by SPARK-821 (Reynold Xin, 2013-09-06; 1 file, -1/+1)
- Merge pull request #895 from ilikerps/821: "SPARK-821: Don't cache results when action run locally on driver" (Patrick Wendell, 2013-09-05; 7 files, -7/+102)
- Reynold's second round of comments (Aaron Davidson, 2013-09-05; 2 files, -17/+19)
- Add unit test and address comments (Aaron Davidson, 2013-09-05; 5 files, -6/+98)
- SPARK-821: Don't cache results when action run locally on driver (Aaron Davidson, 2013-09-05; 4 files, -4/+5).
  Caching the results of local actions (e.g., rdd.first()) causes the driver to store entire partitions in its own memory, which may be highly constrained. This patch simply makes the CacheManager avoid caching the result of all locally-run computations.
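The policy in the commit body above can be sketched in a few lines; Spark's real CacheManager is Scala and considerably more involved, so the function below is purely an illustrative toy:

```python
def get_or_compute(cache, key, compute, run_locally):
    """Return a partition's data, caching it only when the computation
    ran on an executor. Locally-run actions (e.g. rdd.first()) skip the
    cache so the driver never stores whole partitions in its own memory."""
    if key in cache:
        return cache[key]
    result = compute()
    if not run_locally:
        cache[key] = result
    return result
```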
- Merge pull request #891 from xiajunluan/SPARK-864: "[SPARK-864] DAGScheduler Exception if we delete Worker and StandaloneExecutorBackend then add Worker" (Matei Zaharia, 2013-09-05; 1 file, -1/+8)
- Fix bug SPARK-864 (Andrew xia, 2013-09-05; 1 file, -1/+8)
- Merge pull request #893 from ilikerps/master: "SPARK-884: Add unit test to validate Spark JSON output" (Patrick Wendell, 2013-09-04; 1 file, -0/+92)
- Fix line over 100 chars (Aaron Davidson, 2013-09-04; 1 file, -2/+2)
- Address Patrick's comments (Aaron Davidson, 2013-09-04; 1 file, -8/+15)
- SPARK-884: Add unit test to validate Spark JSON output (Aaron Davidson, 2013-09-04; 1 file, -0/+85).
  This unit test simply validates that the outputs of the JsonProtocol methods are syntactically valid JSON.
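A syntactic-validity check of this kind is just a round-trip through a JSON parser. Sketched in Python (the actual unit test is Scala, and the sample event string in the usage below is invented):

```python
import json

def is_valid_json(text):
    """Return True if text parses as JSON: the same syntactic check the
    unit test applies to each serialized output, without asserting
    anything about the content."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False
```

For example, `is_valid_json('{"Event": "JobStart", "Job ID": 0}')` passes, while a string with a dangling value fails.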
- Minor spacing fix (Patrick Wendell, 2013-09-03; 1 file, -2/+4)
- Merge pull request #878 from tgravescs/yarnUILink: "Link the Spark UI up to the Yarn UI" (Patrick Wendell, 2013-09-03; 9 files, -26/+36)
- Update based on review comments. Change function to prependBaseUri and fix formatting (Y.CORP.YAHOO.COM\tgraves, 2013-09-03; 4 files, -24/+23)
- Review comment changes and update to org.apache packaging (Y.CORP.YAHOO.COM\tgraves, 2013-09-03; 6 files, -32/+20)
- Merge remote-tracking branch 'mesos/master' into yarnUILink (Y.CORP.YAHOO.COM\tgraves, 2013-09-03; 332 files, -1488/+2661).
  Conflicts: core/src/main/scala/org/apache/spark/ui/UIUtils.scala, core/src/main/scala/org/apache/spark/ui/jobs/PoolTable.scala, core/src/main/scala/org/apache/spark/ui/jobs/StageTable.scala, docs/running-on-yarn.md
- fix up minor things (Y.CORP.YAHOO.COM\tgraves, 2013-08-30; 2 files, -5/+6)
- Link the Spark UI to the Yarn UI (Y.CORP.YAHOO.COM\tgraves, 2013-08-30; 6 files, -27/+50)
- Merge pull request #889 from alig/master: "Return the port the WebUI is bound to (useful if port 0 was used)" (Matei Zaharia, 2013-09-03; 3 files, -5/+22)
- Merge branch 'master' of https://github.com/alig/spark (Ali Ghodsi, 2013-09-03; 2 files, -5/+5).
  Conflicts: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
- Sort order of imports to match project guidelines (Ali Ghodsi, 2013-09-02; 1 file, -5/+5)
- Reynold's comment fixed (Ali Ghodsi, 2013-09-02; 1 file, -1/+1)
- Using configured akka timeouts (Ali Ghodsi, 2013-09-03; 1 file, -3/+5)