Commit message
Conflicts:
core/pom.xml
core/src/main/scala/spark/MapOutputTracker.scala
core/src/main/scala/spark/RDD.scala
core/src/main/scala/spark/RDDCheckpointData.scala
core/src/main/scala/spark/SparkContext.scala
core/src/main/scala/spark/Utils.scala
core/src/main/scala/spark/api/python/PythonRDD.scala
core/src/main/scala/spark/deploy/client/Client.scala
core/src/main/scala/spark/deploy/master/MasterWebUI.scala
core/src/main/scala/spark/deploy/worker/Worker.scala
core/src/main/scala/spark/deploy/worker/WorkerWebUI.scala
core/src/main/scala/spark/rdd/BlockRDD.scala
core/src/main/scala/spark/rdd/ZippedRDD.scala
core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
core/src/main/scala/spark/storage/BlockManager.scala
core/src/main/scala/spark/storage/BlockManagerMaster.scala
core/src/main/scala/spark/storage/BlockManagerMasterActor.scala
core/src/main/scala/spark/storage/BlockManagerUI.scala
core/src/main/scala/spark/util/AkkaUtils.scala
core/src/test/scala/spark/SizeEstimatorSuite.scala
pom.xml
project/SparkBuild.scala
repl/src/main/scala/spark/repl/SparkILoop.scala
repl/src/test/scala/spark/repl/ReplSuite.scala
streaming/src/main/scala/spark/streaming/StreamingContext.scala
streaming/src/main/scala/spark/streaming/api/java/JavaStreamingContext.scala
streaming/src/main/scala/spark/streaming/dstream/KafkaInputDStream.scala
streaming/src/main/scala/spark/streaming/util/MasterFailureTest.scala
Removing incorrect test statement
Creating these seems to take a while and clutters the output with Akka
stuff, so it would be nice to share them.
- Split SPARK_JAVA_OPTS into multiple command-line arguments if it
contains spaces; this splitting follows quoting rules in bash
- Add the Scala JARs to the classpath if they're not in the CLASSPATH
variable because the ExecutorRunner is launched with "scala" (this can
happen when using local-cluster URLs in spark-shell)
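The bash-style splitting described above can be sketched roughly as follows (a minimal illustration; the function name `splitCommandString` and the exact quoting behavior shown here are assumptions, not a reproduction of Spark's actual implementation, which also handles escapes and other shell rules):

```scala
// Split a command string on whitespace, honoring bash-style single
// and double quotes. A sketch only: escapes and nested quoting are
// not handled.
def splitCommandString(s: String): Seq[String] = {
  val buf = scala.collection.mutable.ArrayBuffer.empty[String]
  val cur = new StringBuilder
  var i = 0
  var inWord = false
  while (i < s.length) {
    s(i) match {
      case q @ ('\'' | '"') =>
        inWord = true
        val end = s.indexOf(q, i + 1)              // matching close quote
        cur ++= s.substring(i + 1, if (end < 0) s.length else end)
        i = if (end < 0) s.length else end + 1
      case c if c.isWhitespace =>
        if (inWord) { buf += cur.toString; cur.clear(); inWord = false }
        i += 1
      case c =>
        inWord = true; cur += c; i += 1
    }
  }
  if (inWord) buf += cur.toString
  buf.toSeq
}
```

For example, `-Dmsg="hello world"` stays a single argument while unquoted spaces separate arguments.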
The old version reused the zero-value object across elements within each task, so the object was overwritten whenever a mutable type was used, and mutable types are expected to be common in fold.
Conflicts:
core/src/test/scala/spark/ShuffleSuite.scala
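The fix described above can be sketched by cloning the zero value per partition via a serialization round trip, so no two partitions share one mutable accumulator (an illustration under assumed names; Spark uses its configured serializer rather than raw Java serialization):

```scala
import java.io._
import scala.collection.mutable.ArrayBuffer

// Clone a value by serializing and deserializing it, so each
// partition folds into its own private copy of the zero value.
def cloneValue[T](v: T): T = {
  val bytes = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(bytes)
  out.writeObject(v.asInstanceOf[AnyRef])
  out.close()
  val in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
  in.readObject().asInstanceOf[T]
}

// Simulate fold over partitions: each partition gets a fresh copy of
// the mutable zero value instead of mutating a single shared object.
def foldPartitions(partitions: Seq[Seq[Int]], zero: ArrayBuffer[Int])
                  (op: (ArrayBuffer[Int], Int) => ArrayBuffer[Int]): Seq[ArrayBuffer[Int]] =
  partitions.map(p => p.foldLeft(cloneValue(zero))(op))
```

With a shared zero value, the second partition's result would also contain the first partition's elements; with per-partition clones it does not.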
Shuffle fixes and cleanup
Add JobLogger to Spark (on new Spark code)
Do JobLogger's work
Bug fix: Zero-length partitions result in NaN for overall mean & variance
PartitioningSuite.scala
handle empty partitions without incorrectly returning NaN
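The shape of the fix can be illustrated with a StatCounter-style merge: a zero-count partition must contribute nothing, rather than dragging the combined mean through 0.0 / 0 = NaN (a simplified sketch, not Spark's actual StatCounter):

```scala
// Running count/mean/M2 statistics in Welford form. Merging an empty
// counter is a no-op instead of producing NaN for the combined mean.
case class Stats(count: Long, mean: Double, m2: Double) {
  def merge(other: Stats): Stats =
    if (other.count == 0) this          // empty partition: no-op
    else if (count == 0) other
    else {
      val n = count + other.count
      val delta = other.mean - mean
      Stats(n,
        mean + delta * other.count / n,
        m2 + other.m2 + delta * delta * count * other.count / n)
    }
  def variance: Double = if (count == 0) Double.NaN else m2 / count
}

def statsOf(xs: Seq[Double]): Stats =
  xs.foldLeft(Stats(0, 0.0, 0.0)) { (s, x) =>
    val n = s.count + 1
    val delta = x - s.mean
    val mean2 = s.mean + delta / n
    Stats(n, mean2, s.m2 + delta * (x - mean2))
  }
```

Without the zero-count guard, merging an empty partition would compute a delta-weighted update with n = 0 in the denominator and poison the overall result.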
Conflicts:
core/src/main/scala/spark/scheduler/cluster/TaskSetManager.scala
2. Move localTaskSetManager to a new file
Add top K method to RDD using a bounded priority queue
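The bounded-priority-queue idea can be sketched as a min-heap capped at k elements: the smallest retained element sits at the head and is evicted when a larger one arrives (an illustrative sketch using the standard library's `PriorityQueue`, not Spark's actual `BoundedPriorityQueue` class):

```scala
import scala.collection.mutable.PriorityQueue

// Keep only the k largest elements seen so far: a min-heap of size
// at most k, whose head is the smallest retained element.
class BoundedPriorityQueue[A](k: Int)(implicit ord: Ordering[A]) {
  private val heap = PriorityQueue.empty[A](ord.reverse) // min-heap
  def +=(a: A): this.type = {
    if (heap.size < k) heap += a
    else if (ord.gt(a, heap.head)) { heap.dequeue(); heap += a }
    this
  }
  def toSortedSeq: Seq[A] = heap.toSeq.sorted(ord.reverse) // descending
}

// "top k" over a collection: fold every element through the queue.
def top(xs: Seq[Int], k: Int): Seq[Int] =
  xs.foldLeft(new BoundedPriorityQueue[Int](k))(_ += _).toSortedSeq
```

Each element costs O(log k), so top K over N elements is O(N log k) with O(k) memory per partition, and per-partition queues can be merged the same way.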
[SPARK-753] Fix ClusterSchedulerSuite unit test failure
Implemented a removeRdd method in BlockManager, and used it to implement RDD.unpersist. Previously, unpersist needed to send B Akka messages, where B = the number of blocks; now it only needs to send W Akka messages, where W = the number of workers.
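The saving comes from grouping: rather than one message per block, the RDD's block ids are grouped by the worker that holds them, so one removal message per worker suffices (a schematic sketch; the names below are illustrative, not Spark's actual block-id or actor-message types):

```scala
// A block id tagged with the RDD it belongs to.
case class BlockId(rddId: Int, split: Int)

// Given block -> worker locations, compute one batched removal
// message per worker for the given RDD: W messages instead of B.
def messagesToSend(locations: Map[BlockId, String], rddId: Int): Map[String, Seq[BlockId]] =
  locations.toSeq
    .collect { case (b, worker) if b.rddId == rddId => (worker, b) }
    .groupBy(_._1)
    .map { case (w, pairs) => w -> pairs.map(_._2) }
```

The resulting map has one entry per worker holding any block of the RDD; each entry becomes a single message carrying all of that worker's block ids.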
Conflicts:
core/src/test/scala/spark/ShuffleSuite.scala
Also unify splitLocalRemoteBlocks for netty/nio and add a test case
Also convert the local-cluster test case to check for non-zero block sizes
a. Fix the port number by reading it from the bound channel
b. Fix the shutdown sequence to make sure we actually block on the channel
c. Fix the unit test to use two JVMs.
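Fix (a), reading the real port back from the bound channel instead of trusting the requested one, can be sketched with a plain server socket: bind to port 0 and ask the socket which port the OS actually assigned (an illustration using `java.net`, not the Netty API the commit touches):

```scala
import java.net.ServerSocket

// Bind to an ephemeral port (0) and recover the actual port number
// from the bound socket, instead of assuming a requested port.
def bindAndGetPort(): (ServerSocket, Int) = {
  val server = new ServerSocket(0)       // 0 = let the OS pick a free port
  (server, server.getLocalPort)          // the port actually bound
}
```

This is also why two processes binding "the same" port 0 never collide, which is the property the two-JVM unit test in (c) relies on.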
[SPARK-663] Implement Fair Scheduler in Spark Cluster Scheduler
Conflicts:
core/src/main/scala/spark/SparkContext.scala
core/src/main/scala/spark/scheduler/cluster/ClusterScheduler.scala
core/src/main/scala/spark/scheduler/cluster/TaskSetManager.scala
serializable.
local-cluster pass. Previously they were failing because Netty was
trying to bind to the same port for all processes.
Pair programmed with @shivaram.