path: root/pom.xml
Commit message | Author | Date | Files | Lines (-/+)
* Merge branch 'master' into akka-bug-fix | Prashant Sharma | 2013-12-11 | 1 | -9/+52
|\      Conflicts:
| |         core/pom.xml
| |         core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
| |         pom.xml
| |         project/SparkBuild.scala
| |         streaming/pom.xml
| |         yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
| * Fix pom.xml for maven build | Raymond Liu | 2013-12-03 | 1 | -9/+52
| |
* | Style fixes and addressed review comments at #221 | Prashant Sharma | 2013-12-10 | 1 | -9/+8
| |
* | Incorporated Patrick's feedback comment on #211 and made Maven build/dep-resolution at least a bit faster | Prashant Sharma | 2013-12-07 | 1 | -51/+5
| |
* | Merge branch 'master' into scala-2.10-wip | Prashant Sharma | 2013-11-25 | 1 | -0/+5
|\|     Conflicts:
| |         core/src/main/scala/org/apache/spark/rdd/RDD.scala
| |         project/SparkBuild.scala
| * Fix Maven build for metrics-graphite | LiGuoqiang | 2013-11-25 | 1 | -0/+5
| |
* | Merge branch 'master' into scala-2.10 | Raymond Liu | 2013-11-14 | 1 | -0/+6
|\|
| * Allow spark on yarn to be run from HDFS: allows the spark.jar, app.jar, and log4j.properties to be put into HDFS | tgravescs | 2013-11-04 | 1 | -0/+6
| |
* | Merge branch 'master' into scala-2.10 | Raymond Liu | 2013-11-13 | 1 | -45/+81
|\|
| * Fix Maven build to use MQTT repository | Matei Zaharia | 2013-10-23 | 1 | -0/+11
| |
| * Exclusion rules for Maven build files. | Reynold Xin | 2013-10-19 | 1 | -44/+30
| |
| * Update pom.xml to use version 13 of the ASF parent pom and add mailingLists element. | Henry Saputra | 2013-10-14 | 1 | -1/+24
| |
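For readers unfamiliar with the change above, adopting the ASF parent POM plus a mailingLists element generally looks like the sketch below. The parent coordinates (org.apache:apache) are the standard ASF parent, with version 13 as named in the commit; the mailing-list name and address are placeholders, not necessarily what the commit added.

    <!-- ASF parent POM, version 13 as mentioned in the commit above -->
    <parent>
      <groupId>org.apache</groupId>
      <artifactId>apache</artifactId>
      <version>13</version>
    </parent>

    <!-- mailingLists element; the entry shown here is a placeholder -->
    <mailingLists>
      <mailingList>
        <name>Dev Mailing List</name>
        <post>dev@spark.incubator.apache.org</post>
      </mailingList>
    </mailingLists>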
| * Merge pull request #19 from aarondav/master-zk | Matei Zaharia | 2013-10-10 | 1 | -0/+11
| |\    Standalone Scheduler fault tolerance using ZooKeeper
| | |
| | |   This patch implements full distributed fault tolerance for standalone scheduler
| | |   Masters. There is only one master Leader at a time, which is actively serving
| | |   scheduling requests. If this Leader crashes, another master will eventually be
| | |   elected, reconstruct the state from the first Master, and continue serving
| | |   scheduling requests. Leader election is performed using the ZooKeeper leader
| | |   election pattern. We try to minimize the use of ZooKeeper and the assumptions
| | |   about ZooKeeper's behavior, so there is a layer of retries and session monitoring
| | |   on top of the ZooKeeper client.
| | |
| | |   Master failover follows directly from the single-node Master recovery via the
| | |   file system (patch d5a96fe), save that the Master state is stored in ZooKeeper
| | |   instead.
| | |
| | |   Configuration: By default, no recovery mechanism is enabled
| | |   (spark.deploy.recoveryMode = NONE). Setting spark.deploy.recoveryMode to ZOOKEEPER
| | |   and spark.deploy.zookeeper.url to an appropriate ZooKeeper URL enables ZooKeeper
| | |   recovery mode. Setting spark.deploy.recoveryMode to FILESYSTEM and
| | |   spark.deploy.recoveryDirectory to a directory accessible by the Master keeps the
| | |   behavior from d5a96fe.
| | |
| | |   Additionally, places where a Master could be specified by a spark:// url can now
| | |   take comma-delimited lists to specify backup masters. Note that this is only used
| | |   for registration of NEW Workers and application Clients. Once a Worker or Client
| | |   has registered with the Master Leader, it is "in the system" and will never need
| | |   to register again.
| | * Standalone Scheduler fault tolerance using ZooKeeper | Aaron Davidson | 2013-09-26 | 1 | -0/+11
| | |     This patch implements full distributed fault tolerance for standalone scheduler
| | |     Masters. There is only one master Leader at a time, which is actively serving
| | |     scheduling requests. If this Leader crashes, another master will eventually be
| | |     elected, reconstruct the state from the first Master, and continue serving
| | |     scheduling requests. Leader election is performed using the ZooKeeper leader
| | |     election pattern. We try to minimize the use of ZooKeeper and the assumptions
| | |     about ZooKeeper's behavior, so there is a layer of retries and session monitoring
| | |     on top of the ZooKeeper client.
| | |
| | |     Master failover follows directly from the single-node Master recovery via the
| | |     file system (patch 194ba4b8), save that the Master state is stored in ZooKeeper
| | |     instead.
| | |
| | |     Configuration: By default, no recovery mechanism is enabled
| | |     (spark.deploy.recoveryMode = NONE). Setting spark.deploy.recoveryMode to ZOOKEEPER
| | |     and spark.deploy.zookeeper.url to an appropriate ZooKeeper URL enables ZooKeeper
| | |     recovery mode. Setting spark.deploy.recoveryMode to FILESYSTEM and
| | |     spark.deploy.recoveryDirectory to a directory accessible by the Master keeps the
| | |     behavior from 194ba4b8.
| | |
| | |     Additionally, places where a Master could be specified by a spark:// url can now
| | |     take comma-delimited lists to specify backup masters. Note that this is only used
| | |     for registration of NEW Workers and application Clients. Once a Worker or Client
| | |     has registered with the Master Leader, it is "in the system" and will never need
| | |     to register again.
| | |
| | |     Forthcoming: documentation and tests (only ad hoc testing has been performed so
| | |     far). I do not intend for this commit to be merged until tests are added, but
| | |     this patch should still be mostly reviewable until then.
* | | Merge branch 'scala-2.10' of github.com:ScrapCodes/spark into scala-2.10 | Prashant Sharma | 2013-10-10 | 1 | -3/+3
|\ \ \      Conflicts:
| | | |         core/src/main/scala/org/apache/spark/scheduler/cluster/ClusterTaskSetManager.scala
| | | |         project/SparkBuild.scala
| * | | Merge branch 'master' into wip-merge-master | Prashant Sharma | 2013-10-08 | 1 | -1/+2
| |\| |     Conflicts:
| | | |         bagel/pom.xml
| | | |         core/pom.xml
| | | |         core/src/test/scala/org/apache/spark/ui/UISuite.scala
| | | |         examples/pom.xml
| | | |         mllib/pom.xml
| | | |         pom.xml
| | | |         project/SparkBuild.scala
| | | |         repl/pom.xml
| | | |         streaming/pom.xml
| | | |         tools/pom.xml
| | | |
| | | |     In Scala 2.10, a shorter representation is used for naming artifacts, so the
| | | |     artifacts were changed to use the shorter Scala version, which was made a
| | | |     property in the pom.
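A minimal sketch of the pattern described in the entry above: the Scala version becomes a pom property, and module artifacts carry the shorter 2.10 suffix. Property names and version numbers here are illustrative rather than copied from Spark's pom at the time.

    <properties>
      <!-- Full Scala version, used for the compiler and the scala-library dependency -->
      <scala.version>2.10.3</scala.version>
    </properties>

    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>

    <!-- Module artifacts are named with the shorter binary Scala version, e.g. -->
    <artifactId>spark-core_2.10</artifactId>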
| | * | Merging build changes in from 0.8 | Patrick Wendell | 2013-10-05 | 1 | -3/+4
| | | |
| * | | Merge branch 'master' into scala-2.10 | Prashant Sharma | 2013-10-01 | 1 | -2/+1
| |\| |     Conflicts:
| | | |         core/src/main/scala/org/apache/spark/ui/jobs/JobProgressUI.scala
| | | |         docs/_config.yml
| | | |         project/SparkBuild.scala
| | | |         repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala
| | * | Removed scala -optimize flag. | Reynold Xin | 2013-09-26 | 1 | -1/+0
| | |/
| | * Update build version in master | Patrick Wendell | 2013-09-24 | 1 | -1/+1
| | |
* | | scala 2.10 requires Java 1.6; using Scala 2.10.3; resolved maven-scala-plugin warning | Martin Weindel | 2013-10-05 | 1 | -3/+9
|/ /
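As a hedged illustration of the Java 1.6 requirement mentioned above: pinning the Java level in a Maven build is commonly done through the compiler plugin configuration, roughly as below. This is a generic sketch, not necessarily the exact change made in this commit.

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <!-- Scala 2.10 requires at least Java 1.6 -->
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>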
* | Sync with master and some build fixes | Prashant Sharma | 2013-09-26 | 1 | -1/+2
|\|
| * Bumping Mesos version to 0.13.0 | Patrick Wendell | 2013-09-15 | 1 | -1/+1
| |
* | fixed maven build for scala 2.10 | Prashant Sharma | 2013-09-26 | 1 | -24/+18
| |
* | version changed 2.9.3 -> 2.10 in shell script. | Prashant Sharma | 2013-09-15 | 1 | -8/+0
| |
* | Merge branch 'master' of git://github.com/mesos/spark into scala-2.10 | Prashant Sharma | 2013-09-15 | 1 | -127/+107
|\|     Conflicts:
| |         core/src/main/scala/org/apache/spark/SparkContext.scala
| |         project/SparkBuild.scala
| * Use different Hadoop version for YARN artifacts. | Patrick Wendell | 2013-09-13 | 1 | -5/+6
| |     This uses a separate Hadoop version for the YARN artifact. This means that when
| |     people link against spark-yarn, things will resolve correctly.
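A common way to keep the YARN artifacts on their own Hadoop version, as described above, is a pair of version properties consumed by different dependencies. The property names and version numbers below are illustrative (hadoop.version and 1.0.4 do appear elsewhere in this log), not a verbatim copy of Spark's pom.

    <properties>
      <!-- Hadoop version used by the regular hadoop-client dependency -->
      <hadoop.version>1.0.4</hadoop.version>
      <!-- Separate Hadoop/YARN version used only by the spark-yarn artifacts -->
      <yarn.version>2.0.5-alpha</yarn.version>
    </properties>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-api</artifactId>
      <version>${yarn.version}</version>
    </dependency>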
| * Add git scm url for publishing | Patrick Wendell | 2013-09-12 | 1 | -0/+1
| |
| * Add explicit jets3t dependency, which is excluded in hadoop-client | Matei Zaharia | 2013-09-10 | 1 | -0/+5
| |
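A sketch of what declaring jets3t explicitly can look like, given that hadoop-client excludes it; the version shown is a placeholder rather than the one actually added.

    <!-- hadoop-client no longer pulls in jets3t, so S3 support needs it declared
         explicitly; the version below is a placeholder -->
    <dependency>
      <groupId>net.java.dev.jets3t</groupId>
      <artifactId>jets3t</artifactId>
      <version>0.7.1</version>
    </dependency>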
| * Merge pull request #911 from pwendell/ganglia-sink | Matei Zaharia | 2013-09-09 | 1 | -0/+5
| |\    Adding Maven dependency for Ganglia
| | * Adding Maven dependency | Patrick Wendell | 2013-09-09 | 1 | -0/+5
| | |
| * | Fix YARN assembly generation under Maven | Jey Kottalam | 2013-09-06 | 1 | -125/+93
| |/
* | Merged with master | Prashant Sharma | 2013-09-06 | 1 | -90/+254
|\|
| * Add Apache parent POM | Matei Zaharia | 2013-09-02 | 1 | -0/+5
| |
| * Fix some URLs | Matei Zaharia | 2013-09-01 | 1 | -2/+2
| |
| * Initial work to rename package to org.apache.spark | Matei Zaharia | 2013-09-01 | 1 | -7/+7
| |
| * Update Maven build to create assemblies expected by new scripts | Matei Zaharia | 2013-08-29 | 1 | -13/+3
| |     This includes the following changes:
| |     - The "assembly" package now builds in Maven by default, and creates an assembly
| |       containing both hadoop-client and Spark, unlike the old BigTop distribution
| |       assembly that skipped hadoop-client
| |     - There is now a bigtop-dist package to build the old BigTop assembly
| |     - The repl-bin package is no longer built by default since the scripts don't rely
| |       on it; instead it can be enabled with -Prepl-bin (see the sketch after this entry)
| |     - Py4J is now included in the assembly/lib folder as a local Maven repo, so that
| |       the Maven package can link to it
| |     - run-example now adds the original Spark classpath as well because the Maven
| |       examples assembly lists spark-core and such as provided
| |     - The various Maven projects add a spark-yarn dependency correctly
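To illustrate the repl-bin point in the list above: gating a module behind a Maven profile, so that it only builds when the profile is activated (e.g. mvn -Prepl-bin package), typically looks like the sketch below. The exact layout in Spark's pom may differ.

    <profiles>
      <!-- The repl-bin module is only built when this profile is activated -->
      <profile>
        <id>repl-bin</id>
        <modules>
          <module>repl-bin</module>
        </modules>
      </profile>
    </profiles>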
| * Provide more memory for tests | Matei Zaharia | 2013-08-29 | 1 | -1/+1
| |
| * Revert "Merge pull request #841 from rxin/json" | Reynold Xin | 2013-08-26 | 1 | -0/+5
| |     This reverts commit 1fb1b0992838c8cdd57eec45793e67a0490f1a52, reversing changes
| |     made to c69c48947d5102c81a9425cb380d861c3903685c.
| * Merge pull request #855 from jey/update-build-docs | Matei Zaharia | 2013-08-22 | 1 | -6/+6
| |\    Update build docs
| | * Use "hadoop.version" property when specifying Hadoop YARN version too | Jey Kottalam | 2013-08-21 | 1 | -6/+6
| | |
| | * Downgraded default build hadoop version to 1.0.4. | Reynold Xin | 2013-08-21 | 1 | -1/+1
| | |
| * | Synced sbt and maven builds | Mark Hamstra | 2013-08-21 | 1 | -5/+11
| |/
| * Merge remote-tracking branch 'jey/hadoop-agnostic' | Matei Zaharia | 2013-08-20 | 1 | -66/+158
| |\    Conflicts:
| | |       core/src/main/scala/spark/PairRDDFunctions.scala
| | * Fix Maven build with Hadoop 0.23.9 | Jey Kottalam | 2013-08-18 | 1 | -11/+0
| | |
| | * Maven build now also works with YARN | Jey Kottalam | 2013-08-16 | 1 | -0/+128
| | |
| | * Maven build now works with CDH hadoop-2.0.0-mr1 | Jey Kottalam | 2013-08-16 | 1 | -35/+20
| | |
| | * Initial changes to make Maven build agnostic of hadoop version | Jey Kottalam | 2013-08-16 | 1 | -21/+11
| | |
| | * Update default version of Hadoop to 1.2.1 | Jey Kottalam | 2013-08-15 | 1 | -1/+1
| | |
| * | Use the JSON formatter from the Scala library and remove the dependency on lift-json. | Reynold Xin | 2013-08-15 | 1 | -5/+0
| |/    It made JSON creation slightly more complicated, but it removes one external
| |     dependency. The Scala library also properly escapes "/" (which lift-json doesn't).
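For illustration of what the five lines removed from pom.xml by the entry above would have contained: a lift-json dependency block generally looks like the sketch below. The coordinates and version are a hedged guess, shown only to indicate the kind of block that was dropped, not the exact one Spark used.

    <!-- External JSON library removed in favor of the JSON formatter in the Scala
         standard library; coordinates and version here are illustrative -->
    <dependency>
      <groupId>net.liftweb</groupId>
      <artifactId>lift-json_2.9.2</artifactId>
      <version>2.5</version>
    </dependency>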