| Commit message | Author | Age | Files | Lines |
Conflicts:
core/src/main/scala/org/apache/spark/scheduler/cluster/ClusterTaskSetManager.scala
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala
DRY out the POMs with scala.binary.version
...instead of hard-coding 2.10 repeatedly.
As long as it's not a `<project>`-level `<artifactId>`, I think that we are okay parameterizing these.
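The parameterization described above can be sketched as a Maven property. This is an illustrative fragment, not the actual diff; the module name shown is just an example:

```xml
<!-- Define the Scala binary version once as a property... -->
<properties>
  <scala.binary.version>2.10</scala.binary.version>
</properties>

<!-- ...then reference it instead of hard-coding 2.10 in each dependency -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_${scala.binary.version}</artifactId>
  <version>${project.version}</version>
</dependency>
```

As the message notes, this works for dependency references but not for a module's own `<project>`-level `<artifactId>`, where Maven does not reliably interpolate properties.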
Fix the --name option for Spark on Yarn
Looks like the --name option accidentally got broken in one of the merges. The Client hangs if the --name option is used right now.
Conflicts:
core/pom.xml
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
pom.xml
project/SparkBuild.scala
streaming/pom.xml
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
build/dep-resolution at least a bit faster.
Conflicts:
core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
core/src/main/scala/org/apache/spark/rdd/MapPartitionsRDD.scala
core/src/main/scala/org/apache/spark/rdd/MapPartitionsWithContextRDD.scala
core/src/main/scala/org/apache/spark/rdd/RDD.scala
python/pyspark/rdd.py
Conflicts:
core/src/main/scala/org/apache/spark/util/collection/PrimitiveVector.scala
streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
Conflicts:
bagel/pom.xml
core/pom.xml
core/src/test/scala/org/apache/spark/ui/UISuite.scala
examples/pom.xml
mllib/pom.xml
pom.xml
project/SparkBuild.scala
repl/pom.xml
streaming/pom.xml
tools/pom.xml
In Scala 2.10, a shorter representation is used for naming artifacts,
so this changes the artifacts to use the shorter Scala version and makes it a property in the POM.
Conflicts:
core/src/test/scala/org/apache/spark/DistributedSuite.scala
project/SparkBuild.scala
Conflicts:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressUI.scala
docs/_config.yml
project/SparkBuild.scala
repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala
Conflicts:
yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
With this scheduler, the user application is launched locally,
while the executors are launched by YARN on remote nodes.
This enables spark-shell to run on YARN.
Conflicts:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
yarn/src/main/scala/org/apache/spark/deploy/yarn/WorkerRunnable.scala
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
Passed sbt/sbt compile and test.
Also remove unused imports as I found them along the way.
Remove return statements when returning a value in the Scala code.
Passes compile and tests.
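The `return` cleanup follows Scala's expression-oriented style: the last expression of a method body is its value, so an explicit `return` is redundant (and inside closures it can even behave surprisingly, via `NonLocalReturnControl`). A minimal sketch with a hypothetical method, not taken from the Spark diff:

```scala
// Hypothetical example: the `if` expression is the method body's last
// expression, so its value is returned without writing `return`.
def describe(n: Int): String =
  if (n == 0) "empty" else s"$n items"
```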
-) Remove some unused imports as I found them
-) Remove ";" in import statements
-) Remove () at the end of method calls like size that have no side effect
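The `()` point in the last bullet is the standard Scala convention: invoke side-effect-free, accessor-like methods without parentheses, and keep parentheses on calls that perform side effects. A small sketch with illustrative names, not from the diff:

```scala
val xs = List(1, 2, 3)
val n  = xs.size   // side-effect-free accessor: no parentheses
println(n)         // println performs I/O, so its parentheses stay
```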
Conflicts:
core/src/main/scala/org/apache/spark/scheduler/ClusterScheduler.scala
log4j.properties to be put into HDFS.
The code in LocalScheduler/LocalTaskSetManager was nearly identical
to the code in ClusterScheduler/ClusterTaskSetManager. The redundancy
made updating the schedulers unnecessarily painful and error-prone.
This commit combines the two into a single TaskScheduler/TaskSetManager.
be explicit about inclusion of spark.jar and app.jar
Closes #11
Conflicts:
docs/running-on-yarn.md
yarn/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
Clean up the classpaths
Clean up the staging directory on exit
This uses a separate Hadoop version for the YARN artifact. This means when people link against
spark-yarn, things will resolve correctly.
Conflicts:
core/src/main/scala/org/apache/spark/ui/UIUtils.scala
core/src/main/scala/org/apache/spark/ui/jobs/PoolTable.scala
core/src/main/scala/org/apache/spark/ui/jobs/StageTable.scala
docs/running-on-yarn.md