Commit message | Author | Age | Files | Lines | |
---|---|---|---|---|---|
* | Maven build now also works with YARN | Jey Kottalam | 2013-08-16 | 1 | -70/+0 |
| | |||||
* | Don't mark hadoop-client as 'provided' | Jey Kottalam | 2013-08-16 | 1 | -1/+0 |
| | |||||
* | Maven build now works with CDH hadoop-2.0.0-mr1 | Jey Kottalam | 2013-08-16 | 1 | -52/+0 |
| | |||||
* | Initial changes to make Maven build agnostic of hadoop version | Jey Kottalam | 2013-08-16 | 1 | -58/+5 |
| | |||||
* | Rename HadoopWriter to SparkHadoopWriter since it's outside of our package | Jey Kottalam | 2013-08-15 | 2 | -6/+6 |
| | |||||
* | Fix newTaskAttemptID to work under YARN | Jey Kottalam | 2013-08-15 | 1 | -1/+19 |
| | |||||
* | re-enable YARN support | Jey Kottalam | 2013-08-15 | 1 | -1/+13 |
| | |||||
* | SparkEnv isn't available this early, and not needed anyway | Jey Kottalam | 2013-08-15 | 2 | -25/+0 |
| | |||||
* | make SparkHadoopUtil a member of SparkEnv | Jey Kottalam | 2013-08-15 | 8 | -26/+31 |
| | |||||
* | rename HadoopMapRedUtil => SparkHadoopMapRedUtil, HadoopMapReduceUtil => SparkHadoopMapReduceUtil | Jey Kottalam | 2013-08-15 | 5 | -6/+7 |
* | add comment | Jey Kottalam | 2013-08-15 | 1 | -4/+4 |
| | |||||
* | dynamically detect hadoop version | Jey Kottalam | 2013-08-15 | 2 | -8/+48 |
| | |||||
* | remove core/src/hadoop{1,2} dirs | Jey Kottalam | 2013-08-15 | 6 | -104/+0 |
| | |||||
* | move yarn to its own directory | Jey Kottalam | 2013-08-15 | 10 | -1864/+0 |
| | |||||
* | More minor UI changes including code review feedback. | Reynold Xin | 2013-08-15 | 6 | -16/+39 |
| | |||||
* | Various UI improvements. | Reynold Xin | 2013-08-14 | 12 | -88/+83 |
| | |||||
* | Renamed setCurrentJobDescription to setJobDescription. | Reynold Xin | 2013-08-14 | 1 | -1/+1 |
| | |||||
* | A few small scheduler / job description changes. | Reynold Xin | 2013-08-14 | 4 | -70/+74 |
| | | | | | | | | 1. Renamed SparkContext.addLocalProperty to setLocalProperty. And allow this function to unset a property. 2. Renamed SparkContext.setDescription to setCurrentJobDescription. 3. Throw an exception if the fair scheduler allocation file is invalid. | ||||
* | Merge pull request #822 from pwendell/ui-features | Matei Zaharia | 2013-08-14 | 6 | -27/+54 |
|\ | | | | | Adding GC Stats to TaskMetrics (and three small fixes) | ||||
| * | Style cleanup based on Matei feedback | Patrick Wendell | 2013-08-14 | 3 | -5/+4 |
| | | |||||
| * | Small style clean-up | Patrick Wendell | 2013-08-13 | 2 | -2/+2 |
| | | |||||
| * | Correcting terminology in RDD page | Patrick Wendell | 2013-08-13 | 1 | -1/+1 |
| | | |||||
| * | Correct sorting order for stages | Patrick Wendell | 2013-08-13 | 2 | -10/+6 |
| | | |||||
| * | Capturing GC details in TaskMetrics | Patrick Wendell | 2013-08-13 | 4 | -10/+37 |
| | | |||||
| * | Bug fix for display of shuffle read/write metrics. | Patrick Wendell | 2013-08-13 | 1 | -6/+11 |
| | | | | | | | | | | This fixes an error where empty cells are missing if a given task has no shuffle read/write. | ||||
* | | Fixed 2 bugs in executor UI. | Kay Ousterhout | 2013-08-13 | 1 | -12/+10 |
| | | | | | | | | | | | | 1) UI crashed if the executor UI was loaded before any tasks started. 2) The total tasks was incorrectly reported due to using string (rather than int) arithmetic. | ||||
* | | Merge pull request #821 from pwendell/print-launch-command | Matei Zaharia | 2013-08-13 | 1 | -1/+1 |
|\ \ | | | | | | | Print run command to stderr rather than stdout | ||||
| * | | Print run command to stderr rather than stdout | Patrick Wendell | 2013-08-13 | 1 | -1/+1 |
| | | | |||||
* | | | Reuse the set of failed states rather than creating a new object each time | Kay Ousterhout | 2013-08-13 | 1 | -1/+3 |
| | | | |||||
* | | | Properly account for killed tasks. | Kay Ousterhout | 2013-08-13 | 1 | -1/+1 |
| |/ |/| | | | | | | | | | | The TaskState class's isFinished() method didn't return true for KILLED tasks, which meant some resources were never reclaimed for tasks that are killed. This also made it inconsistent with the isFinished() method used by CoarseMesosSchedulerBackend. | ||||
* | | Slight change to pr-784 | Patrick Wendell | 2013-08-13 | 5 | -9/+10 |
| | | |||||
* | | Merge pull request #784 from jerryshao/dev-metrics-servlet | Patrick Wendell | 2013-08-13 | 14 | -35/+157 |
|\ \ | | | | | | | Add MetricsServlet for Spark metrics system | ||||
| * | | MetricsServlet code refactor according to comments | jerryshao | 2013-08-12 | 11 | -43/+35 |
| | | | |||||
| * | | Add MetricsServlet for Spark metrics system | jerryshao | 2013-08-12 | 12 | -28/+158 |
| | | | |||||
* | | | Merge pull request #807 from JoshRosen/guava-optional | Matei Zaharia | 2013-08-12 | 5 | -17/+87 |
|\ \ \ | |/ / |/| | | Change scala.Option to Guava Optional in Java APIs | ||||
| * | | Fix import organization. | Josh Rosen | 2013-08-12 | 1 | -2/+1 |
| | | | |||||
| * | | Change scala.Option to Guava Optional in Java APIs. | Josh Rosen | 2013-08-11 | 5 | -17/+88 |
| | | | |||||
* | | | Merge pull request #808 from pwendell/ui_compressed_bytes | Reynold Xin | 2013-08-11 | 1 | -1/+2 |
|\ \ \ | | | | | | | | | Report compressed bytes read when calculating TaskMetrics | ||||
| * | | | Report compressed bytes read when calculating TaskMetrics | Patrick Wendell | 2013-08-11 | 1 | -1/+2 |
| | | | | |||||
* | | | | Merge pull request #805 from woggle/hadoop-rdd-jobconf | Matei Zaharia | 2013-08-11 | 1 | -2/+2 |
|\ \ \ \ | |_|/ / |/| | | | Use new Configuration() instead of slower new JobConf() in SerializableWritable | ||||
| * | | | Use new Configuration() instead of new JobConf() for ObjectWritable. | Charles Reiss | 2013-08-10 | 1 | -2/+2 |
| | | | | | | | | | | | | | | | | | | | | | | | JobConf's constructor loads default config files in some versions of Hadoop, which is quite slow, and we only need the Configuration object to pass the correct ClassLoader. | ||||
* | | | | Merge pull request #795 from mridulm/master | Matei Zaharia | 2013-08-10 | 2 | -7/+42 |
|\ \ \ \ | | | | | | | | | | | Fix bug reported in PR 791 : a race condition in ConnectionManager and Connection | ||||
| * | | | | Change line size | Mridul Muralidharan | 2013-08-08 | 1 | -5/+9 |
| | | | | | |||||
| * | | | | Attempt to fix bug reported in PR 791: a race condition in ConnectionManager and Connection | Mridul Muralidharan | 2013-08-08 | 2 | -7/+38 |
* | | | | | Merge remote-tracking branch 'origin/pr/792' | Matei Zaharia | 2013-08-10 | 10 | -259/+279 |
|\ \ \ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Conflicts: core/src/main/scala/spark/ui/jobs/IndexPage.scala core/src/main/scala/spark/ui/jobs/StagePage.scala | ||||
| * | | | | | Shortened names, as per Matei's suggestion | Kay Ousterhout | 2013-08-10 | 2 | -14/+13 |
| | | | | | | |||||
| * | | | | | Only print event queue full error message once | Kay Ousterhout | 2013-08-09 | 1 | -1/+3 |
| | | | | | | |||||
| * | | | | | Style fix: removing unnecessary return type | Kay Ousterhout | 2013-08-09 | 1 | -6/+6 |
| | | | | | | |||||
| * | | | | | Style fixes based on code review | Kay Ousterhout | 2013-08-09 | 3 | -132/+110 |
| | | | | | | |||||
| * | | | | | Refactored SparkListener to process all events asynchronously. | Kay Ousterhout | 2013-08-09 | 9 | -165/+196 |
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit fixes issues where SparkListeners that take a while to process events slow the DAGScheduler. This commit also fixes a bug in the UI where if a user goes to a web page of a stage that does not exist, they can create a memory leak (granted, this is not an issue at small scale -- probably only an issue if someone actively tried to DoS the UI). |
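The last commit in the log above describes refactoring SparkListener so that all events are processed asynchronously, preventing slow listeners from stalling the DAGScheduler. The general pattern can be sketched with a bounded queue drained by a dedicated dispatch thread; this is a hedged illustration in plain Java, not Spark's actual implementation (the `EventBus` name, the `String` event type, and the capacity of 1000 are all invented for the sketch). It also mirrors the "only print event queue full error message once" fix from the same pull request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Hypothetical sketch: listeners run on a separate thread, so a slow
// listener cannot block the thread that posts events (the scheduler).
class EventBus {
    private final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
    private final AtomicBoolean queueFullWarned = new AtomicBoolean(false);
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private final Thread dispatcher;
    private volatile boolean stopped = false;

    EventBus() {
        dispatcher = new Thread(() -> {
            try {
                // Keep draining until stop() has been called AND the queue is empty.
                while (!stopped || !queue.isEmpty()) {
                    String event = queue.poll(10, TimeUnit.MILLISECONDS);
                    if (event == null) continue;
                    for (Consumer<String> l : listeners) l.accept(event);
                }
            } catch (InterruptedException ignored) {}
        });
        dispatcher.setDaemon(true);
    }

    void addListener(Consumer<String> l) { listeners.add(l); }
    void start() { dispatcher.start(); }

    // Non-blocking post: drop the event (and warn only once) if the queue is full.
    void post(String event) {
        if (!queue.offer(event) && queueFullWarned.compareAndSet(false, true)) {
            System.err.println("Dropping event because event queue is full");
        }
    }

    void stop() throws InterruptedException { stopped = true; dispatcher.join(); }
}

public class Main {
    public static void main(String[] args) throws InterruptedException {
        EventBus bus = new EventBus();
        List<String> seen = new ArrayList<>();
        bus.addListener(seen::add);
        bus.start();
        bus.post("taskStart");
        bus.post("taskEnd");
        bus.stop();
        System.out.println(seen); // prints [taskStart, taskEnd]
    }
}
```

Posting is non-blocking by design: under sustained overload the bus drops events rather than back-pressuring the scheduler, which is why the queue-full warning exists at all.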
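The "Properly account for killed tasks" commit above turns on a finished-state predicate that omitted KILLED, leaking resources for killed tasks; the neighboring commit reuses one shared set of failed states instead of allocating a new object per call. Both ideas can be sketched together with a hypothetical Java enum (Spark's actual TaskState is Scala and its state names may differ):

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical reconstruction of the bug class: a task lifecycle enum whose
// "finished" predicate must include every terminal state, or resources held
// by tasks ending in the omitted state (here KILLED) are never reclaimed.
enum TaskState {
    LAUNCHING, RUNNING, FINISHED, FAILED, KILLED, LOST;

    // One shared immutable set, created once, rather than a fresh
    // collection on every isFinished() call.
    private static final Set<TaskState> TERMINAL_STATES =
        EnumSet.of(FINISHED, FAILED, KILLED, LOST);

    boolean isFinished() {
        return TERMINAL_STATES.contains(this);
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(TaskState.KILLED.isFinished());  // true after the fix
        System.out.println(TaskState.RUNNING.isFinished()); // false
    }
}
```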
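The executor-UI fix above notes that "the total tasks was incorrectly reported due to using string (rather than int) arithmetic." That pitfall is easy to hit whenever counts arrive as strings; here is a hedged Java illustration (the variable names are invented, not taken from the Spark UI code):

```java
public class Main {
    public static void main(String[] args) {
        // Counts as they might arrive from a request parameter or a string-valued map.
        String activeTasks = "3";
        String failedTasks = "1";
        String completedTasks = "12";

        // Buggy pattern: '+' on strings concatenates instead of adding.
        String wrongTotal = activeTasks + failedTasks + completedTasks;
        System.out.println(wrongTotal); // prints 3112, not a count

        // Fix: parse to int before doing arithmetic.
        int total = Integer.parseInt(activeTasks)
                  + Integer.parseInt(failedTasks)
                  + Integer.parseInt(completedTasks);
        System.out.println(total); // prints 16
    }
}
```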