Commit message | Author | Age | Files | Lines
Conflicts:
core/src/main/scala/org/apache/spark/scheduler/cluster/ClusterTaskSetManager.scala
project/SparkBuild.scala
Conflicts:
bagel/pom.xml
core/pom.xml
core/src/test/scala/org/apache/spark/ui/UISuite.scala
examples/pom.xml
mllib/pom.xml
pom.xml
project/SparkBuild.scala
repl/pom.xml
streaming/pom.xml
tools/pom.xml
In Scala 2.10, a shorter representation is used for naming artifacts,
so the artifacts were changed to use the shorter Scala version, which is now a property in the pom.
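A pom fragment illustrating the idea; the property name below is hypothetical and the actual pom may use a different one:

```xml
<!-- hypothetical property holding the short (binary) Scala version -->
<properties>
  <scala.binary.version>2.10</scala.binary.version>
</properties>

<!-- artifact ids then reference the property instead of hard-coding 2.10.x -->
<artifactId>spark-core_${scala.binary.version}</artifactId>
```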
Resolving package conflicts with hadoop 0.23.9
Hadoop 0.23.9 has a package conflict with EasyMock's dependencies.
(cherry picked from commit 023e3fdf008b3194a36985a07923df9aaf64e520)
Signed-off-by: Reynold Xin <rxin@apache.org>
Conflicts:
core/src/test/scala/org/apache/spark/DistributedSuite.scala
project/SparkBuild.scala
Conflicts:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressUI.scala
docs/_config.yml
project/SparkBuild.scala
repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala
Add mapPartitionsWithIndex
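The new operator passes each partition's index along with an iterator over that partition's elements. A minimal sketch of how it might be used, assuming an existing SparkContext `sc` (the exact signature in this era of Spark may differ):

```scala
// Tag every element with the index of the partition it lives in.
val rdd = sc.parallelize(1 to 8, 4)   // 4 partitions
val tagged = rdd.mapPartitionsWithIndex { (idx, iter) =>
  iter.map(x => (idx, x))             // partition 0 contributes (0, _), partition 1 (1, _), ...
}
```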
Using Scala 2.10.3; resolved a maven-scala-plugin warning.
Conflicts:
core/src/main/scala/org/apache/spark/SparkContext.scala
project/SparkBuild.scala
Add explicit jets3t dependency, which is excluded in hadoop-client
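In sbt form, re-adding the excluded dependency might look like this (the version shown is an assumption, not taken from the actual build):

```scala
// jets3t is excluded by hadoop-client, so declare it explicitly for S3 access
libraryDependencies += "net.java.dev.jets3t" % "jets3t" % "0.7.1"
```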
Due to this change in HDFS:
https://issues.apache.org/jira/browse/HADOOP-7549
there is a bug when using the new assembly builds. The symptom is that any HDFS access
results in an exception saying "No filesystem for scheme 'hdfs'". This adds a merge
strategy in the assembly build which fixes the problem.
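HADOOP-7549 moved FileSystem registration into META-INF/services files, and a naive assembly merge keeps only one of them, dropping the hdfs entry. A hedged sketch of the fix in sbt-assembly's old settings syntax (the details in Spark's actual SparkBuild.scala may differ):

```scala
// Concatenate service-loader registration files instead of picking one,
// so the hdfs:// FileSystem entry survives the fat-jar merge.
mergeStrategy in assembly <<= (mergeStrategy in assembly) { old =>
  {
    case "META-INF/services/org.apache.hadoop.fs.FileSystem" => MergeStrategy.concat
    case x => old(x)
  }
}
```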
Clean-up of Metrics Code/Docs and Add Ganglia Sink
Fix target JVM version in scala build
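A fix of this kind typically pins the bytecode target explicitly in SparkBuild.scala rather than relying on scalac's default; the JVM version below is an assumption:

```scala
// Emit bytecode for a fixed JVM target rather than whatever scalac defaults to
scalacOptions += "-target:jvm-1.5"
javacOptions ++= Seq("-source", "1.5", "-target", "1.5")
```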
Adding Apache license to two files
This includes the following changes:
- The "assembly" package now builds in Maven by default, and creates an
assembly containing both hadoop-client and Spark, unlike the old
BigTop distribution assembly that skipped hadoop-client
- There is now a bigtop-dist package to build the old BigTop assembly
- The repl-bin package is no longer built by default since the scripts
  don't rely on it; instead it can be enabled with -Prepl-bin
- Py4J is now included in the assembly/lib folder as a local Maven repo,
so that the Maven package can link to it
- run-example now adds the original Spark classpath as well because the
Maven examples assembly lists spark-core and such as provided
- The various Maven projects add a spark-yarn dependency correctly
This commit makes Spark invocation saner by using an assembly JAR to
find all of Spark's dependencies instead of adding all the JARs in
lib_managed. It also packages the examples into an assembly and uses
that as SPARK_EXAMPLES_JAR. Finally, it replaces the old "run" script
with two better-named scripts: "run-example" for examples, and
"spark-class" for Spark internal classes (e.g. REPL, master, etc.). This
is also designed to minimize the confusion people have in trying to use
"run" to run their own classes; it's not meant to do that, but now at
least if they look at it, they can modify run-example to do a decent
job for them.
As part of this, Bagel's examples are also now properly moved to the
examples package instead of bagel.
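Illustrative invocations of the renamed scripts; the class names and the `local` master argument are assumptions for this era of Spark:

```shell
# launch an internal Spark class, e.g. the standalone master
./spark-class org.apache.spark.deploy.master.Master

# run a bundled example against a local master
./run-example org.apache.spark.examples.SparkPi local
```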
This reverts commit 1fb1b0992838c8cdd57eec45793e67a0490f1a52, reversing
changes made to c69c48947d5102c81a9425cb380d861c3903685c.
Fixes SBT build under Hadoop 0.23.9 and 2.0.4
Update build docs
Synced sbt and maven builds to use the same dependencies, etc.
Conflicts:
core/src/main/scala/spark/PairRDDFunctions.scala