| Commit message | Author | Age | Files | Lines |
|---|---|---|---|---|
| fixed maven build for scala 2.10 | Prashant Sharma | 2013-09-26 | 1 | -3/+3 |
| Move some classes to more appropriate packages: RDD, *RDDFunctions -> org.apache.spark.rdd; Utils, ClosureCleaner, SizeEstimator -> org.apache.spark.util; JavaSerializer, KryoSerializer -> org.apache.spark.serializer | Matei Zaharia | 2013-09-01 | 1 | -19/+19 |
| Fix some URLs | Matei Zaharia | 2013-09-01 | 1 | -1/+1 |
| Initial work to rename package to org.apache.spark | Matei Zaharia | 2013-09-01 | 2 | -96/+96 |
| Merge remote-tracking branch 'jey/hadoop-agnostic' (Conflicts: core/src/main/scala/spark/PairRDDFunctions.scala) | Matei Zaharia | 2013-08-20 | 1 | -117/+10 |
| Remove redundant dependencies from POMs | Jey Kottalam | 2013-08-18 | 1 | -5/+0 |
| Maven build now also works with YARN | Jey Kottalam | 2013-08-16 | 1 | -46/+0 |
| Maven build now works with CDH hadoop-2.0.0-mr1 | Jey Kottalam | 2013-08-16 | 1 | -33/+0 |
| Initial changes to make Maven build agnostic of hadoop version | Jey Kottalam | 2013-08-16 | 1 | -38/+15 |
| Made PairRDDFunctions taking only Tuple2, but made the rest of the shuffle code path working with general Product2. | Reynold Xin | 2013-08-19 | 1 | -1/+1 |
| Allow subclasses of Product2 in all key-value related classes (ShuffleDependency, PairRDDFunctions, etc). | Reynold Xin | 2013-08-18 | 1 | -2/+4 |
| Change scala.Option to Guava Optional in Java APIs. | Josh Rosen | 2013-08-11 | 1 | -15/+19 |
| Added missing scalatest dependency | Mark Hamstra | 2013-07-26 | 1 | -0/+8 |
| Fix Maven build errors after previous commits | Matei Zaharia | 2013-07-24 | 1 | -14/+119 |
| Remove annotation code that broke build. | Josh Rosen | 2013-07-22 | 1 | -5/+0 |
| Add JavaAPICompletenessChecker. This is used to find methods in the Scala API that need to be ported to the Java API. To use it: `./run spark.tools.JavaAPICompletenessChecker` (Conflicts: project/SparkBuild.scala, run, run2.cmd) | Josh Rosen | 2013-07-22 | 2 | -0/+422 |