Commit messages
As part of this, changed our Scala 2.9.2 Kafka library to be available
as a local Maven repository, following the example at
http://blog.dub.podval.org/2010/01/maven-in-project-repository.html
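
For reference, a minimal sketch of the in-project-repository technique from
that post, as it could look in an sbt build definition. The repository path
and the Kafka coordinates below are illustrative assumptions, not the actual
values from this commit:

    import sbt._
    import Keys._

    // Sketch: resolve artifacts from a Maven-layout repository committed
    // inside the project tree (hypothetical path lib/maven-repo).
    val inProjectRepoSettings = Seq(
      resolvers += "In-Project Maven Repo" at
        file("lib/maven-repo").toURI.toString,
      // Hypothetical coordinates for the locally published Kafka build.
      libraryDependencies += "org.apache.kafka" % "kafka" % "0.7.2-spark"
    )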
Conflicts:
    core/src/main/scala/spark/rdd/CoGroupedRDD.scala
    core/src/main/scala/spark/rdd/FilteredRDD.scala
    docs/_layouts/global.html
    docs/index.md
    run
JSON support added to WebUI
Added the io.spray JSON library
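
For context, a minimal sketch of what rendering UI state as JSON with
spray-json looks like; the field names below are illustrative, not the
actual WebUI schema:

    import spray.json._
    import DefaultJsonProtocol._

    // Serialize a simple status map to a JSON string; spray-json ships
    // formats for standard collections out of the box.
    val status = Map("appName" -> "demo", "state" -> "RUNNING")
    val rendered: String = status.toJson.prettyPrint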
fix SequenceFileRDDFunctions to pick the right type conversion across Hadoop
versions
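
The usual shape of such a fix is to probe the classpath at runtime and branch
on the Hadoop generation found. A sketch of that technique only; the probed
class is one way to tell Hadoop 1.x from 2.x and is not necessarily what this
commit does:

    // Branch on whether a Hadoop-2-only class is present at runtime.
    val isHadoop2 =
      try {
        Class.forName("org.apache.hadoop.io.SequenceFile$Writer$Option")
        true
      } catch {
        case _: ClassNotFoundException => false
      }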
Adding a Twitter InputDStream with an example
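
A hypothetical usage sketch against the spark.streaming API of that era; the
twitterStream signature, the credential handling, and the getText tweet
accessor are all assumptions rather than the commit's actual example:

    import spark.streaming.{Seconds, StreamingContext}

    // Placeholder credentials; a real example would read these elsewhere.
    val (user, password) = ("twitterUser", "twitterPass")

    // Count hashtags from the live tweet stream in 2-second batches.
    val ssc = new StreamingContext("local[2]", "TwitterDemo", Seconds(2))
    val tweets = ssc.twitterStream(user, password) // hypothetical signature
    val tags = tweets.flatMap(_.getText.split(" ").filter(_.startsWith("#")))
    tags.map(t => (t, 1)).reduceByKey(_ + _).print()
    ssc.start()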
Kryo2 update against Spark master
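
For context, Kryo in Spark of that era was enabled through system properties
plus a user-supplied registrator (class names per the old tuning docs;
MyRegistrator is a hypothetical example):

    import com.esotericsoftware.kryo.Kryo
    import spark.KryoRegistrator

    // Hypothetical registrator that tells Kryo about application classes.
    class MyRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo) {
        kryo.register(classOf[Array[Double]])
      }
    }

    // Set before creating the SparkContext.
    System.setProperty("spark.serializer", "spark.KryoSerializer")
    System.setProperty("spark.kryo.registrator", "MyRegistrator")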
Conflicts:
    core/src/main/scala/spark/MapOutputTracker.scala
    core/src/main/scala/spark/PairRDDFunctions.scala
    core/src/main/scala/spark/ParallelCollection.scala
    core/src/main/scala/spark/RDD.scala
    core/src/main/scala/spark/rdd/BlockRDD.scala
    core/src/main/scala/spark/rdd/CartesianRDD.scala
    core/src/main/scala/spark/rdd/CoGroupedRDD.scala
    core/src/main/scala/spark/rdd/CoalescedRDD.scala
    core/src/main/scala/spark/rdd/FilteredRDD.scala
    core/src/main/scala/spark/rdd/FlatMappedRDD.scala
    core/src/main/scala/spark/rdd/GlommedRDD.scala
    core/src/main/scala/spark/rdd/HadoopRDD.scala
    core/src/main/scala/spark/rdd/MapPartitionsRDD.scala
    core/src/main/scala/spark/rdd/MapPartitionsWithSplitRDD.scala
    core/src/main/scala/spark/rdd/MappedRDD.scala
    core/src/main/scala/spark/rdd/PipedRDD.scala
    core/src/main/scala/spark/rdd/SampledRDD.scala
    core/src/main/scala/spark/rdd/ShuffledRDD.scala
    core/src/main/scala/spark/rdd/UnionRDD.scala
    core/src/main/scala/spark/storage/BlockManager.scala
    core/src/main/scala/spark/storage/BlockManagerId.scala
    core/src/main/scala/spark/storage/BlockManagerMaster.scala
    core/src/main/scala/spark/storage/StorageLevel.scala
    core/src/main/scala/spark/util/MetadataCleaner.scala
    core/src/main/scala/spark/util/TimeStampedHashMap.scala
    core/src/test/scala/spark/storage/BlockManagerSuite.scala
    run
Conflicts:
    core/src/main/scala/spark/BlockStoreShuffleFetcher.scala
    core/src/main/scala/spark/KryoSerializer.scala
    core/src/main/scala/spark/MapOutputTracker.scala
    core/src/main/scala/spark/RDD.scala
    core/src/main/scala/spark/SparkContext.scala
    core/src/main/scala/spark/executor/Executor.scala
    core/src/main/scala/spark/network/Connection.scala
    core/src/main/scala/spark/network/ConnectionManagerTest.scala
    core/src/main/scala/spark/rdd/BlockRDD.scala
    core/src/main/scala/spark/rdd/NewHadoopRDD.scala
    core/src/main/scala/spark/scheduler/ShuffleMapTask.scala
    core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
    core/src/main/scala/spark/storage/BlockManager.scala
    core/src/main/scala/spark/storage/BlockMessage.scala
    core/src/main/scala/spark/storage/BlockStore.scala
    core/src/main/scala/spark/storage/StorageLevel.scala
    core/src/main/scala/spark/util/AkkaUtils.scala
    project/SparkBuild.scala
    run
By default, I'm leaving this commented out. This is because
there is a bug in the PGP signing plugin which causes it to activate
even during a publish-local. So we'll just uncomment it when we decide
to publish.
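
Concretely, the sort of line being left commented out; the plugin
coordinates are illustrative of that era's GPG plugin, not taken from the
commit:

    // project/plugins.sbt (sketch): signing stays disabled so publish-local
    // is not intercepted; uncomment only when actually publishing.
    // addSbtPlugin("com.jsuereth" % "xsbt-gpg-plugin" % "0.6")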
the build file so that mesos-0.9.0-incubating.jar (which contains the
same class files, but has a slightly different name) will be pulled
down from Maven Central instead.
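
The corresponding dependency would look roughly like this; org.apache.mesos
is the standard groupId, but treat the exact line as an assumption:

    // Pull Mesos from Maven Central rather than shipping a jar in the repo.
    libraryDependencies += "org.apache.mesos" % "mesos" % "0.9.0-incubating"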
This reverts commit 42e0a68082327c78dbd0fd313145124d9b8a9d98.
assembly option for streaming project.
Conflicts:
    project/SparkBuild.scala
Cleans up #158 / 509b721.
Heavily inspired by Hadoop cluster scripts ;-)
This resolves an issue where running Spark from
the assembly jar would cause a "No configuration
setting found for key 'akka.version'" exception.
This solution is from the Akka Team Blog:
http://letitcrash.com/post/21025950392/
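
The remedy that post describes is to concatenate every reference.conf when
building the fat jar so Akka's own akka.version entry survives. A sketch in
later sbt-assembly syntax, shown only to illustrate the idea:

    // build.sbt (sketch): merge all reference.conf files by concatenation.
    assemblyMergeStrategy in assembly := {
      case "reference.conf" => MergeStrategy.concat
      case _                => MergeStrategy.first
    }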