path: root/src
Commit message | Author | Age | Files | Lines
* Fixed some whitespace (Matei Zaharia, 2010-10-16; 3 files, -14/+14)
|
* Added support for generic Hadoop InputFormats and refactored textFile to use this. Closes #12. (Matei Zaharia, 2010-10-16; 2 files, -28/+111)
|
* Renamed HdfsFile to HadoopFile (Matei Zaharia, 2010-10-16; 2 files, -8/+9)
|
* Simplified UnionRDD slightly and added a SparkContext.union method for efficiently union-ing a large number of RDDs (Matei Zaharia, 2010-10-16; 2 files, -28/+22)
|
* Removed setSparkHome method on SparkContext in favor of having an optional constructor parameter, so that the scheduler is guaranteed that a Spark home has been set when it first builds its executor arg. (Matei Zaharia, 2010-10-16; 2 files, -16/+7)
|
* Added the ability to specify a list of JAR files when creating a SparkContext and have the master node serve those to workers. (Matei Zaharia, 2010-10-16; 6 files, -116/+244)
|
* Keep track of tasks in each job so that they can be removed when the job exits (Matei Zaharia, 2010-10-16; 1 file, -6/+12)
|
* Further clarified some code (Matei Zaharia, 2010-10-16; 2 files, -10/+22)
|
* Fixed some log messages (Matei Zaharia, 2010-10-16; 1 file, -2/+2)
|
* Bug fixes and improvements for MesosScheduler and SimpleJob (Matei Zaharia, 2010-10-16; 3 files, -25/+46)
|
* Moved Spark home detection to SparkContext and added a setSparkHome method for setting it programmatically. (Matei Zaharia, 2010-10-16; 2 files, -51/+81)
|
* Bug fix in passing env vars to executors (Matei Zaharia, 2010-10-16; 1 file, -1/+1)
|
* Added code so that Spark jobs can be launched from outside the Spark directory by setting SPARK_HOME and locating the executor relative to that. Entries on SPARK_CLASSPATH and SPARK_LIBRARY_PATH are also passed along to worker nodes. (Matei Zaharia, 2010-10-15; 1 file, -2/+29)
|
* Moved ClassServer out of repl package and renamed it to HttpServer. (Matei Zaharia, 2010-10-15; 2 files, -12/+12)
|
* Abort jobs if a task fails more than a limited number of times (Matei Zaharia, 2010-10-15; 3 files, -23/+44)
|
* A couple of improvements to ReplSuite: use collect instead of toArray; disable the "running on Mesos" test when MESOS_HOME is not set (Matei Zaharia, 2010-10-15; 1 file, -26/+30)
|
* Made locality scheduling constant-time and added support for changing CPU and memory requested per task. (Matei Zaharia, 2010-10-15; 1 file, -24/+79)
|
* Moved Job and SimpleJob to new files (Matei Zaharia, 2010-10-07; 3 files, -183/+206)
|
* Merge branch 'master' into matei-scheduling (Matei Zaharia, 2010-10-07; 4 files, -11/+23)
|\
| * Added a getId method to split to force classes to specify a unique ID for each split. This replaces the previous method of calling split.toString, which would produce different results for the same split each time it is deserialized (because the default implementation returns the Java object's address). (Matei Zaharia, 2010-10-07; 4 files, -11/+23)
| |
* | Merge branch 'master' into matei-scheduling (Matei Zaharia, 2010-10-07; 4 files, -10/+21)
|\|
| * got rid of unnecessary line (Justin Ma, 2010-10-07; 1 file, -1/+0)
| |
| * Merge branch 'master' into jtma-accumulator (Justin Ma, 2010-10-07; 13 files, -124/+372)
| |\
| | * Added toString() methods to UnionSplit, SeededSplit and CartesianSplit to ensure that the proper keys will be generated when they are cached. (Justin Ma, 2010-10-07; 1 file, -2/+11)
| | |
| * | changes to accumulator to add objects in-place. (Justin Ma, 2010-09-25; 4 files, -8/+11)
| | |
* | | Merge branch 'master' into matei-scheduling (Matei Zaharia, 2010-10-05; 3 files, -3/+64)
|\ \ \
| | |/
| |/|
| * | Added splitWords function in Utils (Matei Zaharia, 2010-10-04; 1 file, -1/+26)
| | |
| * | Added reduceByKey operation for RDDs containing pairs [tag: alpha-0.1] (Matei Zaharia, 2010-10-03; 2 files, -2/+38)
| | |
* | | Merge branch 'master' into matei-scheduling (Matei Zaharia, 2010-10-03; 2 files, -0/+2)
|\| |
| * | Fixed a rather bad bug in HDFS files that has been in for a while: caching was not working because Split objects did not have a consistent toString value (root, 2010-10-03; 2 files, -0/+2)
| | |
* | | Renamed ParallelOperation to Job (Matei Zaharia, 2010-10-03; 1 file, -42/+42)
|/ /
* | Merge branch 'matei-logging' (Matei Zaharia, 2010-09-29; 11 files, -100/+169)
|\ \
| * | Made task-finished log messages slightly nicer (Matei Zaharia, 2010-09-29; 1 file, -6/+8)
| | |
| * | A couple of minor fixes: don't include trailing $'s in class names of Scala objects; report errors using logError instead of printStackTrace (Matei Zaharia, 2010-09-29; 2 files, -9/+16)
| | |
| * | Changed printlns to log statements and fixed a bug in run that was causing it to fail on a Mesos cluster (Matei Zaharia, 2010-09-28; 10 files, -93/+109)
| | |
| * | Added Logging trait (Matei Zaharia, 2010-09-28; 1 file, -0/+44)
| | |
* | | Increase default locality wait to 3s. Fixes #20. (Matei Zaharia, 2010-09-29; 1 file, -1/+1)
|/ /
* | Merge branch 'http-repl-class-serving' (Matei Zaharia, 2010-09-28; 4 files, -24/+131)
|\ \
| * | More work on HTTP class loading (Matei Zaharia, 2010-09-28; 3 files, -24/+57)
| | |
| * | Modified the interpreter to serve classes to the executors using a Jetty HTTP server instead of a shared (NFS) file system. (Matei Zaharia, 2010-09-28; 1 file, -0/+74)
| | |
* | | fixed typo in printing which task is already finished (Justin Ma, 2010-09-28; 1 file, -1/+1)
| |/
|/|
* | Let's use future instead of actors (Justin Ma, 2010-09-13; 2 files, -38/+24)
| |
* | Added fork()/join() operations for SparkContext, as well as corresponding changes to MesosScheduler to support multiple ParallelOperations. (Justin Ma, 2010-09-12; 2 files, -49/+91)
| |
* | round robin scheduling of tasks has been added (Justin Ma, 2010-09-07; 3 files, -13/+25)
| |
* | now adding the Split object. (Justin Ma, 2010-09-01; 1 file, -0/+3)
| |
* | Got rid of 'Split' type parameter in RDD; added SampledRDD, SplitRDD and CartesianRDD; made Split a class rather than a type parameter; added numCores() to Scheduler to help set default level of parallelism (Justin Ma, 2010-08-31; 7 files, -59/+104)
| |
* | now we have sampling with replacement (at least on a per-split basis) (Justin Ma, 2010-08-18; 1 file, -3/+17)
| |
* | HdfsFile.scala: added a try/catch block to exit gracefully for corrupted gzip files. MesosScheduler.scala: formatted the slaveOffer() output to include the serialized task size. RDD.scala: added support for aggregating RDDs on a per-split basis (aggregateSplit()) as well as for sampling without replacement (sample()). (Justin Ma, 2010-08-18; 3 files, -2/+35)
|/
* Modified Scala interpreter to have it avoid computing string versions of all results when :silent is enabled, so that it is easier to work with large arrays in Spark. (The string version of an array of numbers might not fit in memory even though the array itself does.) (Matei Zaharia, 2010-08-15; 1 file, -1/+3)
|
* Bug fix from Justin (Matei Zaharia, 2010-08-13; 1 file, -1/+1)
|
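The reduceByKey operation added above (tagged alpha-0.1) aggregates all values that share a key with a user-supplied combining function. As an illustrative sketch only, assuming nothing beyond the Scala standard library, its per-key semantics can be modeled on plain collections; this is a hypothetical model, not Spark's actual RDD implementation:

```scala
// Hypothetical sketch of reduceByKey's semantics on plain Scala
// collections (Spark's real version operates on distributed RDDs).
object ReduceByKeySketch {
  // Group pairs by key, then fold each key's values with f.
  def reduceByKey[K, V](pairs: Seq[(K, V)], f: (V, V) => V): Map[K, V] =
    pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).reduce(f) }

  def main(args: Array[String]): Unit = {
    val counts = reduceByKey(Seq(("a", 1), ("b", 1), ("a", 2)),
                             (x: Int, y: Int) => x + y)
    println(counts)
  }
}
```

For word counting, for example, one would map each word to `(word, 1)` and combine with addition, yielding one `(word, total)` pair per distinct word.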