path: root/core
Commit message  [Author, Date, Files, Lines]
* fixed maven build for scala 2.10  [Prashant Sharma, 2013-09-26, 1 file, -17/+14]
|
* Akka 2.2 migration  [Prashant Sharma, 2013-09-22, 13 files, -54/+78]
|
* version changed 2.9.3 -> 2.10 in shell script.  [Prashant Sharma, 2013-09-15, 1 file, -1/+1]
|
* Merge branch 'master' of git://github.com/mesos/spark into scala-2.10  [Prashant Sharma, 2013-09-15, 35 files, -82/+329]
|\
| |     Conflicts:
| |       core/src/main/scala/org/apache/spark/SparkContext.scala
| |       project/SparkBuild.scala
| * Changed localProperties to use ThreadLocal (not DynamicVariable).  [Kay Ousterhout, 2013-09-11, 1 file, -9/+9]
| |     The fact that DynamicVariable uses an InheritableThreadLocal can cause
| |     problems where the properties end up being shared across threads in
| |     certain circumstances.
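The DynamicVariable issue described in that commit comes down to a JVM detail: DynamicVariable is backed by an InheritableThreadLocal, which copies the parent thread's value into every thread created afterwards, whereas a plain ThreadLocal starts out empty in new threads. A minimal, self-contained Scala sketch of that difference (illustrative names only, not the Spark patch):

    // Sketch only: shows why an InheritableThreadLocal-backed value leaks into
    // child threads while a plain ThreadLocal does not.
    object ThreadLocalDemo extends App {
      val inheritable = new InheritableThreadLocal[String]
      val plain       = new ThreadLocal[String]

      inheritable.set("parent-properties")
      plain.set("parent-properties")

      val child = new Thread(new Runnable {
        def run(): Unit = {
          println("inheritable in child: " + inheritable.get)  // "parent-properties"
          println("plain in child: " + plain.get)              // null
        }
      })
      child.start()
      child.join()
    }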
| * Merge pull request #919 from mateiz/jets3t  [Patrick Wendell, 2013-09-11, 1 file, -0/+5]
| |\      Add explicit jets3t dependency, which is excluded in hadoop-client
| | * Add explicit jets3t dependency, which is excluded in hadoop-client  [Matei Zaharia, 2013-09-10, 1 file, -0/+5]
| | |
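For context on the jets3t change above: once hadoop-client no longer pulls jets3t in transitively, a build that reads from s3:// or s3n:// paths has to declare it explicitly. A hedged sbt-style sketch of such a declaration (the versions shown are placeholders, not necessarily the ones used by Spark's build):

    // Illustrative sbt dependency declarations; versions are placeholders.
    libraryDependencies ++= Seq(
      "org.apache.hadoop"   % "hadoop-client" % "1.0.4",
      // hadoop-client excludes jets3t, so add it back explicitly for S3 access.
      "net.java.dev.jets3t" % "jets3t"        % "0.7.1"
    )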
| * | Merge pull request #922 from pwendell/port-change  [Patrick Wendell, 2013-09-11, 2 files, -2/+2]
| |\ \      Change default port number from 3030 to 4030.
| | * | Change port from 3030 to 4040  [Patrick Wendell, 2013-09-11, 2 files, -2/+2]
| | |/
| * / SPARK-894 - Not all WebUI fields delivered VIA JSON  [David McCauley, 2013-09-11, 1 file, -1/+3]
| |/
| * Merge pull request #915 from ooyala/master  [Matei Zaharia, 2013-09-09, 1 file, -1/+9]
| |\      Get rid of / improve ugly NPE when Utils.deleteRecursively() fails
| | * Style fix: put body of if within curly braces  [Evan Chan, 2013-09-09, 1 file, -1/+3]
| | |
| | * Print out more friendly error if listFiles() fails  [Evan Chan, 2013-09-09, 1 file, -1/+7]
| | |     listFiles() could return null if the I/O fails, and this currently
| | |     results in an ugly NPE which is hard to diagnose.
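The NPE arises because java.io.File.listFiles() signals failure by returning null rather than throwing. A small Scala sketch of the kind of guard that commit describes (the helper name is illustrative, not necessarily the code that was merged):

    import java.io.{File, IOException}

    // listFiles() returns null when the path is not a directory or the listing
    // fails, so turn that into a descriptive exception instead of a later NPE.
    def listFilesSafely(dir: File): Seq[File] = {
      val files = dir.listFiles()
      if (files == null) {
        throw new IOException("Failed to list files for dir: " + dir)
      }
      files.toSeq
    }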
| * | Merge pull request #907 from stephenh/document_coalesce_shuffle  [Matei Zaharia, 2013-09-09, 2 files, -4/+27]
| |\ \      Add better docs for coalesce.
| | * | Use a set since shuffle could change order.  [Stephen Haberman, 2013-09-09, 1 file, -1/+1]
| | | |
| | * | Reword 'evenly distributed' to 'distributed with a hash partitioner'.  [Stephen Haberman, 2013-09-09, 1 file, -2/+2]
| | | |
| | * | Add better docs for coalesce.  [Stephen Haberman, 2013-09-08, 2 files, -4/+27]
| | | |     Include the useful tip that if shuffle=true, coalesce can actually
| | | |     increase the number of partitions. This makes coalesce more like a
| | | |     generic `RDD.repartition` operation. (Ideally this `RDD.repartition`
| | | |     could automatically choose either a coalesce or a shuffle if
| | | |     numPartitions was either less than or greater than, respectively,
| | | |     the current number of partitions.)
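A short usage sketch of the behaviour that commit documents, assuming a SparkContext named sc (as in the shell); the input path and partition counts are made up:

    val rdd = sc.textFile("hdfs:///some/input", 100)

    // Without a shuffle, coalesce can only reduce the number of partitions.
    val fewer = rdd.coalesce(10)

    // With shuffle = true it acts like a generic repartition and can also
    // increase the partition count, at the cost of a full shuffle.
    val more = rdd.coalesce(1000, shuffle = true)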
| * | | Add metrics-ganglia to core pom file  [Y.CORP.YAHOO.COM\tgraves, 2013-09-09, 1 file, -0/+4]
| | | |
| * | | Merge pull request #890 from mridulm/master  [Matei Zaharia, 2013-09-08, 3 files, -2/+17]
| |\ \ \      Fix hash bug
| | * | | Address review comments - rename toHash to nonNegativeHash  [Mridul Muralidharan, 2013-09-04, 3 files, -3/+3]
| | | | |
| | * | | Fix hash bug - caused failure after 35k stages, sigh  [Mridul Muralidharan, 2013-09-04, 3 files, -2/+17]
| | | | |
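The hash bug and the rename to nonNegativeHash both touch the same pitfall: hashCode can be negative, and % keeps the sign of its left operand, so a raw hash used as an index or file-name suffix eventually breaks. A hedged Scala sketch of helpers in that spirit (not the actual patch):

    // Map an arbitrary hash code into a non-negative value. Int.MinValue has no
    // positive counterpart, so math.abs alone can still return a negative number.
    def nonNegativeHash(obj: AnyRef): Int = {
      val h = if (obj == null) 0 else obj.hashCode
      if (h != Int.MinValue) math.abs(h) else 0
    }

    // Modulo that always lands in [0, mod), even for negative x.
    def nonNegativeMod(x: Int, mod: Int): Int = {
      val rawMod = x % mod
      rawMod + (if (rawMod < 0) mod else 0)
    }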
| * | | | Merge pull request #909 from mateiz/exec-id-fix  [Reynold Xin, 2013-09-08, 2 files, -7/+7]
| |\ \ \ \      Fix an instance where full standalone mode executor IDs were passed to
| | * | | | Fix an instance where full standalone mode executor IDs were passed to  [Matei Zaharia, 2013-09-08, 2 files, -7/+7]
| | | |/ /
| | |/| |     StandaloneSchedulerBackend instead of the smaller IDs used within
| | | | |     Spark (that lack the application name). This was reported by
| | | | |     ClearStory in https://github.com/clearstorydata/spark/pull/9. Also
| | | | |     fixed some messages that said slave instead of executor.
| * | | | Merge pull request #905 from mateiz/docs2  [Matei Zaharia, 2013-09-08, 8 files, -18/+19]
| |\ \ \ \      Job scheduling and cluster mode docs
| | * | | | Fix unit test failure due to changed default  [Matei Zaharia, 2013-09-08, 1 file, -1/+1]
| | | | | |
| | * | | | More fair scheduler docs and property names.  [Matei Zaharia, 2013-09-08, 6 files, -12/+13]
| | | | | |     Also changed uses of "job" terminology to "application" when they
| | | | | |     referred to an entire Spark program, to avoid confusion.
| | * | | | Work in progress:  [Matei Zaharia, 2013-09-08, 4 files, -5/+5]
| | | | | |     - Add job scheduling docs
| | | | | |     - Rename some fair scheduler properties
| | | | | |     - Organize intro page better
| | | | | |     - Link to Apache wiki for "contributing to Spark"
| * | | | | Merge pull request #906 from pwendell/ganglia-sink  [Patrick Wendell, 2013-09-08, 9 files, -28/+114]
| |\ \ \ \ \
| | |_|/ / /
| |/| | | |     Clean-up of Metrics Code/Docs and Add Ganglia Sink
| | * | | | Adding sc name in metrics source  [Patrick Wendell, 2013-09-08, 5 files, -9/+14]
| | | | | |
| | * | | | Adding more docs and some code cleanup  [Patrick Wendell, 2013-09-08, 3 files, -19/+18]
| | | | | |
| | * | | | Ganglia sink  [Patrick Wendell, 2013-09-08, 1 file, -0/+82]
| | | |/ /
| | |/| |
| * | | | Merge pull request #898 from ilikerps/660  [Matei Zaharia, 2013-09-08, 1 file, -3/+3]
| |\ \ \ \
| | |_|/ /
| |/| | |     SPARK-660: Add StorageLevel support in Python
| | * | | Export StorageLevel and refactor  [Aaron Davidson, 2013-09-07, 1 file, -3/+3]
| | | | |
| | * | | Remove reflection, hard-code StorageLevels  [Aaron Davidson, 2013-09-07, 1 file, -11/+0]
| | | | |     The sc.StorageLevel -> StorageLevel pathway is a bit janky, but
| | | | |     otherwise the shell would have to call a private method of
| | | | |     SparkContext. Having StorageLevel available in sc also doesn't seem
| | | | |     like the end of the world. There may be a better solution, though.
| | | | |     As for creating the StorageLevel object itself, this seems to be
| | | | |     the best way in Python 2 for creating singleton, enum-like objects:
| | | | |     http://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python
| | * | | Memoize StorageLevels read from JVM  [Aaron Davidson, 2013-09-06, 1 file, -1/+1]
| | | | |
| | * | | SPARK-660: Add StorageLevel support in Python  [Aaron Davidson, 2013-09-05, 1 file, -0/+11]
| | | | |     It uses reflection... I am not proud of that fact, but it at least
| | | | |     ensures compatibility (sans refactoring of the StorageLevel stuff).
| * | | | Fixed the bug that ResultTask was not properly deserializing outputId.  [Reynold Xin, 2013-09-07, 1 file, -2/+2]
| | |_|/
| |/| |
| * | | Hot fix to resolve the compilation error caused by SPARK-821.  [Reynold Xin, 2013-09-06, 1 file, -1/+1]
| | | |
| * | | Merge pull request #895 from ilikerps/821  [Patrick Wendell, 2013-09-05, 7 files, -7/+102]
| |\ \ \      SPARK-821: Don't cache results when action run locally on driver
| | * | | Reynold's second round of comments  [Aaron Davidson, 2013-09-05, 2 files, -17/+19]
| | | | |
| | * | | Add unit test and address comments  [Aaron Davidson, 2013-09-05, 5 files, -6/+98]
| | | | |
| | * | | SPARK-821: Don't cache results when action run locally on driver  [Aaron Davidson, 2013-09-05, 4 files, -4/+5]
| | |/ /
| | | |     Caching the results of local actions (e.g., rdd.first()) causes the
| | | |     driver to store entire partitions in its own memory, which may be
| | | |     highly constrained. This patch simply makes the CacheManager avoid
| | | |     caching the result of all locally-run computations.
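A usage-level sketch of the scenario that commit describes, assuming a SparkContext named sc (the input path is made up): an action like first() only needs one element and could run the computation locally on the driver, so caching its input partition there would pin a whole partition in driver memory.

    import org.apache.spark.storage.StorageLevel

    val big = sc.textFile("hdfs:///very/large/dataset").persist(StorageLevel.MEMORY_ONLY)

    // first() is a small, locally-runnable action; after SPARK-821 the
    // CacheManager no longer caches partitions computed this way on the driver.
    val sample = big.first()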
| * | | Merge pull request #891 from xiajunluan/SPARK-864  [Matei Zaharia, 2013-09-05, 1 file, -1/+8]
| |\ \ \      [SPARK-864] DAGScheduler Exception if we delete Worker and StandaloneExecutorBackend then add Worker
| | * | | Fix bug SPARK-864  [Andrew xia, 2013-09-05, 1 file, -1/+8]
| | | | |
* | | | | Merged with master  [Prashant Sharma, 2013-09-06, 456 files, -8683/+15777]
|\| | | |
| * | | | Merge pull request #893 from ilikerps/master  [Patrick Wendell, 2013-09-04, 1 file, -0/+92]
| |\ \ \ \
| | | |/ /
| | |/| /
| | |_|/
| |/| |     SPARK-884: Add unit test to validate Spark JSON output
| | * | Fix line over 100 chars  [Aaron Davidson, 2013-09-04, 1 file, -2/+2]
| | | |
| | * | Address Patrick's comments  [Aaron Davidson, 2013-09-04, 1 file, -8/+15]
| | | |
| | * | SPARK-884: Add unit test to validate Spark JSON output  [Aaron Davidson, 2013-09-04, 1 file, -0/+85]
| | |/
| | |     This unit test simply validates that the outputs of the JsonProtocol
| | |     methods are syntactically valid JSON.
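A minimal Scala sketch of what a "syntactically valid JSON" check can look like, using the scala.util.parsing.json parser that ships with Scala 2.10; this is illustrative only, not the actual JsonProtocolSuite:

    import scala.util.parsing.json.JSON

    // A string counts as syntactically valid JSON iff the parser accepts it.
    def isValidJson(s: String): Boolean = JSON.parseFull(s).isDefined

    assert(isValidJson("""{"id": "app-123", "cores": 4}"""))
    assert(!isValidJson("""{"id": "app-123", "cores": }"""))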
| * | Minor spacing fix  [Patrick Wendell, 2013-09-03, 1 file, -2/+4]
| | |