path: root/project
Commit message | Author | Date | Files | Lines (-/+)
* Merge branch 'master' of git://github.com/mesos/spark into scala-2.10 | Prashant Sharma | 2013-09-15 | 2 | -6/+31
|\
| |  Conflicts:
| |    core/src/main/scala/org/apache/spark/SparkContext.scala
| |    project/SparkBuild.scala
| * Merge pull request #919 from mateiz/jets3t | Patrick Wendell | 2013-09-11 | 1 | -0/+1
| |\
| | |  Add explicit jets3t dependency, which is excluded in hadoop-client
| | * Add explicit jets3t dependency, which is excluded in hadoop-client | Matei Zaharia | 2013-09-10 | 1 | -0/+1
| | |
| * | Fix HDFS access bug with assembly build. | Patrick Wendell | 2013-09-10 | 1 | -0/+1
| |/
| |  Due to this change in HDFS: https://issues.apache.org/jira/browse/HADOOP-7549
| |  there is a bug when using the new assembly builds. The symptom is that any HDFS
| |  access results in an exception saying "No filesystem for scheme 'hdfs'". This
| |  adds a merge strategy in the assembly build which fixes the problem.
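The commit body above only names the fix in outline. The following is a minimal, hypothetical sketch of that kind of sbt-assembly setting (assuming the 0.9.x-era plugin API; the project name and the exact strategy are assumptions, not Spark's actual SparkBuild.scala): after HADOOP-7549, FileSystem implementations are discovered through ServiceLoader registration files under META-INF/services/, and the default "keep the first file" merge drops all but one of them, so the hdfs:// entry can vanish from the fat jar. Merging those files line by line preserves every registered scheme.

    import sbt._
    import Keys._
    import sbtassembly.Plugin._
    import AssemblyKeys._

    object ServiceFileMergeSketch extends Build {
      lazy val core = Project("core", file("core"))
        .settings(assemblySettings: _*)
        .settings(
          mergeStrategy in assembly := {
            // Merge ServiceLoader registration files line by line so every
            // FileSystem implementation (including hdfs) stays registered.
            case PathList("META-INF", "services", _*) => MergeStrategy.filterDistinctLines
            case _                                    => MergeStrategy.first
          }
        )
    }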
| * Merge pull request #906 from pwendell/ganglia-sink | Patrick Wendell | 2013-09-08 | 1 | -0/+1
| |\
| | |  Clean-up of Metrics Code/Docs and Add Ganglia Sink
| | * Ganglia sink | Patrick Wendell | 2013-09-08 | 1 | -0/+1
| | |
| * | Merge pull request #908 from pwendell/master | Matei Zaharia | 2013-09-08 | 1 | -1/+7
| |\ \
| | | |  Fix target JVM version in scala build
| | * | Fix target JVM version in scala build | Patrick Wendell | 2013-09-08 | 1 | -1/+7
| | |/
| * | Merge pull request #904 from pwendell/master | Patrick Wendell | 2013-09-07 | 1 | -1/+18
| |\|
| | |  Adding Apache license to two files
| | * Adding Apache license to two files | Patrick Wendell | 2013-09-07 | 1 | -1/+18
| | |
| * | Minor YARN build cleanups | Jey Kottalam | 2013-09-06 | 1 | -2/+2
| |/
* | Fixed repl suite | Prashant Sharma | 2013-09-15 | 1 | -5/+5
| |
* | Merged with master | Prashant Sharma | 2013-09-06 | 3 | -69/+142
|\|
| * Add Apache parent POM | Matei Zaharia | 2013-09-02 | 1 | -0/+5
| |
| * Fix some URLs | Matei Zaharia | 2013-09-01 | 1 | -2/+2
| |
| * Initial work to rename package to org.apache.spark | Matei Zaharia | 2013-09-01 | 1 | -4/+8
| |
| * Update Maven build to create assemblies expected by new scripts | Matei Zaharia | 2013-08-29 | 1 | -2/+2
| |  This includes the following changes:
| |  - The "assembly" package now builds in Maven by default, and creates an assembly
| |    containing both hadoop-client and Spark, unlike the old BigTop distribution
| |    assembly that skipped hadoop-client
| |  - There is now a bigtop-dist package to build the old BigTop assembly
| |  - The repl-bin package is no longer built by default since the scripts don't rely
| |    on it; instead it can be enabled with -Prepl-bin
| |  - Py4J is now included in the assembly/lib folder as a local Maven repo, so that
| |    the Maven package can link to it
| |  - run-example now adds the original Spark classpath as well because the Maven
| |    examples assembly lists spark-core and such as provided
| |  - The various Maven projects add a spark-yarn dependency correctly
| * Provide more memory for tests | Matei Zaharia | 2013-08-29 | 1 | -1/+1
| |
| * Change build and run instructions to use assemblies | Matei Zaharia | 2013-08-29 | 3 | -22/+34
| |  This commit makes Spark invocation saner by using an assembly JAR to find all of
| |  Spark's dependencies instead of adding all the JARs in lib_managed. It also
| |  packages the examples into an assembly and uses that as SPARK_EXAMPLES_JAR.
| |  Finally, it replaces the old "run" script with two better-named scripts:
| |  "run-examples" for examples, and "spark-class" for Spark internal classes
| |  (e.g. REPL, master, etc). This is also designed to minimize the confusion people
| |  have in trying to use "run" to run their own classes; it's not meant to do that,
| |  but now at least if they look at it, they can modify run-examples to do a decent
| |  job for them. As part of this, Bagel's examples are also now properly moved to
| |  the examples package instead of bagel.
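On the sbt side, the gist of this change is that a project such as examples gets its own sbt-assembly configuration, so the run scripts can point at a single self-contained jar. The sketch below is only an illustration of that wiring (assuming the sbt-assembly 0.9.x plugin; the project layout and jar name are assumptions, not taken from the commit).

    import sbt._
    import Keys._
    import sbtassembly.Plugin._
    import AssemblyKeys._

    object ExamplesAssemblySketch extends Build {
      // An examples subproject that produces one fat jar via `sbt assembly`,
      // the kind of artifact a run script could export as SPARK_EXAMPLES_JAR.
      lazy val examples = Project("examples", file("examples"))
        .settings(assemblySettings: _*)
        .settings(
          jarName in assembly := "spark-examples-assembly.jar"
        )
    }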
| * Revert "Merge pull request #841 from rxin/json"Reynold Xin2013-08-261-0/+1
| | | | | | | | | | This reverts commit 1fb1b0992838c8cdd57eec45793e67a0490f1a52, reversing changes made to c69c48947d5102c81a9425cb380d861c3903685c.
| * Upgrade SBT IDE project generators | Jey Kottalam | 2013-08-23 | 1 | -2/+2
| |
| * Fix SBT generation of IDE project files | Jey Kottalam | 2013-08-23 | 1 | -5/+12
| |
| * Re-add removed dependency on 'commons-daemon' | Jey Kottalam | 2013-08-22 | 1 | -0/+1
| |  Fixes SBT build under Hadoop 0.23.9 and 2.0.4
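For reference, re-adding such a dependency in SBT is a one-line setting. The snippet below is only an illustration; the version number is an assumption, not the one used by the commit.

    // build.sbt sketch; the commons-daemon version shown is illustrative only.
    libraryDependencies += "commons-daemon" % "commons-daemon" % "1.0.10"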
| * Merge pull request #855 from jey/update-build-docs | Matei Zaharia | 2013-08-22 | 1 | -4/+3
| |\
| | |  Update build docs
| | * Remove references to unsupported Hadoop versions | Jey Kottalam | 2013-08-21 | 1 | -4/+3
| | |
| * | Merge pull request #854 from markhamstra/pomUpdate | Matei Zaharia | 2013-08-22 | 1 | -4/+1
| |\ \
| | |/
| |/|   Synced sbt and maven builds to use the same dependencies, etc.
| | * Synced sbt and maven builds | Mark Hamstra | 2013-08-21 | 1 | -4/+1
| | |
| * | Downgraded default build hadoop version to 1.0.4. | Reynold Xin | 2013-08-21 | 1 | -1/+1
| |/
| * Merge remote-tracking branch 'jey/hadoop-agnostic' | Matei Zaharia | 2013-08-20 | 1 | -41/+35
| |\
| | |  Conflicts:
| | |    core/src/main/scala/spark/PairRDDFunctions.scala
| | * Update SBT build to use simpler fix for Hadoop 0.23.9 | Jey Kottalam | 2013-08-19 | 1 | -11/+2
| | |
| | * Rename YARN build flag to SPARK_WITH_YARN | Jey Kottalam | 2013-08-16 | 1 | -5/+7
| | |
| | * Fix SBT build under Hadoop 0.23.x | Jey Kottalam | 2013-08-16 | 1 | -0/+11
| | |
| | * Fix repl/assembly when YARN enabled | Jey Kottalam | 2013-08-16 | 1 | -3/+4
| | |
| | * Allow make-distribution.sh to specify Hadoop version used | Jey Kottalam | 2013-08-16 | 1 | -6/+22
| | |
| | * Update default version of Hadoop to 1.2.1 | Jey Kottalam | 2013-08-15 | 1 | -1/+1
| | |
| | * yarn support | Jey Kottalam | 2013-08-15 | 1 | -6/+6
| | |
| | * yarn sbt | Jey Kottalam | 2013-08-15 | 1 | -13/+15
| | |
| | * dynamically detect hadoop version | Jey Kottalam | 2013-08-15 | 1 | -32/+3
| | |
| * | Use the JSON formatter from Scala library and removed dependency on lift-json. | Reynold Xin | 2013-08-15 | 1 | -1/+0
| |/
| |  It made the JSON creation slightly more complicated, but reduces one external
| |  dependency. The Scala library also properly escapes "/" (which lift-json doesn't).
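As a self-contained illustration of the trade-off described above (not Spark's actual code; the field names are invented), building JSON by hand with the JSON support in the Scala 2.9/2.10 standard library looks like this, and its default formatter escapes the "/" characters in strings:

    import scala.util.parsing.json.{JSONArray, JSONObject}

    object JsonSketch {
      def main(args: Array[String]): Unit = {
        // Values are assembled into a Map explicitly, which is the extra work
        // the commit mentions compared to lift-json's DSL.
        val json = JSONObject(Map(
          "id"      -> "app-20130815-0001",
          "logUrl"  -> "hdfs://namenode:8020/logs/app-20130815-0001",
          "cores"   -> 4,
          "workers" -> JSONArray(List("worker-1", "worker-2"))
        ))
        // The default formatter renders the "/" characters in the URL as \/.
        println(json)
      }
    }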
| * Update to Mesos 0.12.1 | Matei Zaharia | 2013-08-13 | 1 | -1/+1
| |
| * Add MetricsServlet for Spark metrics system | jerryshao | 2013-08-12 | 1 | -0/+1
| |
| * Merge pull request #800 from dlyubimov/HBASE_VERSION | Matei Zaharia | 2013-08-09 | 1 | -1/+4
| |\
| | |  Pull HBASE_VERSION in the head of sbt build
| | * fewer words | Dmitriy Lyubimov | 2013-08-09 | 1 | -1/+1
| | |
| | * Pull HBASE_VERSION in the head of sbt build | Dmitriy Lyubimov | 2013-08-09 | 1 | -1/+4
| | |
| * | Merge pull request #786 from shivaram/mllib-java | Matei Zaharia | 2013-08-09 | 1 | -1/+1
| |\ \
| | |/
| |/|   Java fixes, tests and examples for ALS, KMeans
| | * Java examples, tests for KMeans and ALS | Shivaram Venkataraman | 2013-08-06 | 1 | -1/+1
| | |  - Changes ALS to accept RDD[Rating] instead of (Int, Int, Double) making it
| | |    easier to call from Java
| | |  - Renames class methods from `train` to `run` to enable static methods to be
| | |    called from Java.
| | |  - Add unit tests which check if both static / class methods can be called.
| | |  - Also add examples which port the main() function in ALS, KMeans to the
| | |    examples project.
| | |  Couple of minor changes to existing code:
| | |  - Add a toJavaRDD method in RDD to convert scala RDD to java RDD easily
| | |  - Workaround a bug where using double[] from Java leads to class cast
| | |    exception in KMeans init
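A hypothetical sketch of the calling pattern those bullets describe (using post-rename org.apache.spark package names and illustrative parameter values, not the MLlib source at this exact revision): ALS consumes an RDD[Rating], and the instance-level `run` sits alongside the static-style `train`.

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    object AlsCallSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext("local", "AlsCallSketch")
        // Ratings are modelled as a case class rather than a bare (Int, Int, Double).
        val ratings = sc.parallelize(Seq(
          Rating(1, 1, 5.0), Rating(1, 2, 1.0), Rating(2, 1, 4.0)
        ))

        // Static-style entry point, convenient from Scala.
        val viaTrain = ALS.train(ratings, 10, 5)

        // Instance method `run`, the shape that is also callable from Java.
        val viaRun = new ALS().setRank(10).setIterations(5).run(ratings)

        sc.stop()
      }
    }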
| * | Update to Chill 0.3.1 | Matei Zaharia | 2013-08-08 | 1 | -2/+2
| |/
| * Revert Mesos version to 0.9 since the 0.12 artifact has target Java 7 | Matei Zaharia | 2013-08-01 | 1 | -1/+1
| |
| * Merge pull request #753 from shivaram/glm-refactor | Matei Zaharia | 2013-07-31 | 1 | -2/+4
| |\
| | |  Build changes for ML lib
| | * Add mllib, bagel to repl dependencies | Shivaram Venkataraman | 2013-07-30 | 1 | -3/+3
| | |  Also don't build an assembly jar for them