Add "org.apache." prefix to packages in spark-class
Lacking this, the if/case statements never trigger on Spark 0.8.0+.
After unit tests, clear port properties unconditionally
In MapOutputTrackerSuite, the "remote fetch" test sets spark.driver.port
and spark.hostPort, assuming that they will be cleared by
LocalSparkContext. However, the test never sets sc, so it remains null,
causing LocalSparkContext to skip clearing these properties. Subsequent
tests therefore fail with java.net.BindException: "Address already in
use".
This commit makes LocalSparkContext clear the properties even if sc is
null.
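A minimal stand-alone sketch of the fix, in plain Java for brevity (the class, field, and method names here are hypothetical; the real change lives in the Scala LocalSparkContext test helper):

```java
// Sketch: teardown that clears port properties unconditionally, instead of
// only inside the sc != null branch as before.
public class LocalContextTeardown {
    static Object sc = null; // stand-in for the SparkContext reference

    static void resetProperties() {
        if (sc != null) {
            // sc.stop() would go here in the real helper
            sc = null;
        }
        // Previously these lines ran only when sc was non-null, so a test
        // that set the properties without assigning sc leaked them and
        // later tests hit "Address already in use".
        System.clearProperty("spark.driver.port");
        System.clearProperty("spark.hostPort");
    }

    public static void main(String[] args) {
        System.setProperty("spark.driver.port", "7077");
        resetProperties(); // sc is still null, but the property is cleared
        System.out.println(System.getProperty("spark.driver.port"));
    }
}
```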
This change requires adding an extra failure mode: tasks can complete
successfully, but the result gets lost or flushed from the block manager
before it's been fetched.
Fix issue with spark_ec2 seeing empty security groups
Under unknown but occasional circumstances, reservation.groups is empty
even though each of reservation.instances has groups. When this happens,
the spark_ec2 get_existing_clusters() method fails to find any instances.
The fix is to use the instances' groups as the source of truth.
Note that this is a revival of PR #827, now that the issue
has been reproduced.
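The source-of-truth switch amounts to collecting group names from the instances rather than from the reservation. A stand-alone sketch (in Java, with hypothetical record types standing in for the boto objects the real Python script uses):

```java
import java.util.*;

public class GroupLookup {
    // Hypothetical stand-ins for boto's reservation/instance objects.
    record Instance(List<String> groups) {}
    record Reservation(List<String> groups, List<Instance> instances) {}

    // Collect security groups from the instances themselves; the
    // reservation-level list is occasionally empty.
    static Set<String> groupsOf(Reservation r) {
        Set<String> names = new TreeSet<>();
        for (Instance i : r.instances()) names.addAll(i.groups());
        return names;
    }

    public static void main(String[] args) {
        Reservation r = new Reservation(
                List.of(), // empty at the reservation level
                List.of(new Instance(List.of("my-cluster-master")),
                        new Instance(List.of("my-cluster-slaves"))));
        System.out.println(groupsOf(r));
    }
}
```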
Fix typo in Maven build docs
Bumping Mesos version to 0.13.0
Explain yarn.version in Maven build docs
Use different Hadoop version for YARN artifacts.
This uses a separate Hadoop version for the YARN artifacts, so that
projects linking against spark-yarn resolve dependencies correctly.
Changed localProperties to use ThreadLocal (not DynamicVariable).
The fact that DynamicVariable uses an InheritableThreadLocal
can cause problems where the properties end up being shared
across threads in certain circumstances.
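The difference shows up in a small stand-alone example (plain Java rather than the Scala in question; the JDK classes behave the same): a value stored in an InheritableThreadLocal is copied into threads created afterwards, while a plain ThreadLocal stays confined to the thread that set it.

```java
public class ThreadLocalDemo {
    static final InheritableThreadLocal<String> inherited = new InheritableThreadLocal<>();
    static final ThreadLocal<String> plain = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        inherited.set("parent-props");
        plain.set("parent-props");

        final String[] seen = new String[2];
        Thread child = new Thread(() -> {
            seen[0] = inherited.get(); // copied into the child at creation
            seen[1] = plain.get();     // null: isolated per thread
        });
        child.start();
        child.join();
        System.out.println(seen[0] + " / " + seen[1]);
    }
}
```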
Updated Spark on Mesos documentation.
Add explicit jets3t dependency, which is excluded in hadoop-client
Change default port number from 3030 to 4030.
SPARK-894 - Not all WebUI fields delivered via JSON
fix run-example script
Fix HDFS access bug with assembly build.
Due to this change in HDFS:
https://issues.apache.org/jira/browse/HADOOP-7549
there is a bug when using the new assembly builds. The symptom is that any HDFS access
results in an exception saying "No filesystem for scheme 'hdfs'". This adds a merge
strategy in the assembly build which fixes the problem.
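For context: HADOOP-7549 switched FileSystem discovery to the JDK ServiceLoader mechanism, which reads META-INF/services files, and the default assembly merge keeps only one such file per path, which can drop the hdfs registration. A sketch of the kind of sbt-assembly setting involved (the exact plugin API varies by version, and the precise strategy the commit chose is an assumption here):

```scala
// Concatenate ServiceLoader registration files instead of keeping
// only the first one found, so all FileSystem entries survive.
mergeStrategy in assembly <<= (mergeStrategy in assembly) { old =>
  {
    case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat
    case x => old(x)
  }
}
```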
Document libgfortran dependency for MLBase
Get rid of / improve ugly NPE when Utils.deleteRecursively() fails
listFiles() could return null if the I/O fails, and this currently results in an ugly NPE which is hard to diagnose.
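File.listFiles() is documented to return null (not an empty array) when the path is not a directory or an I/O error occurs, so a recursive delete must guard that result. A minimal sketch of the null handling (the real change also surfaces a clearer error message; method names here are hypothetical):

```java
import java.io.File;

public class DeleteRecursively {
    // Guard the documented null return of listFiles() instead of
    // iterating it blindly and throwing an opaque NPE.
    static boolean deleteRecursively(File file) {
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) deleteRecursively(child);
        }
        return file.delete();
    }

    public static void main(String[] args) {
        // Nonexistent path: listFiles() returns null, so we fall through
        // to delete() (which returns false) instead of crashing.
        System.out.println(deleteRecursively(new File("no-such-dir-12345")));
    }
}
```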
Fix copy issue in https://github.com/mesos/spark/pull/899
Add better docs for coalesce.
Include the useful tip that if shuffle=true, coalesce can actually
increase the number of partitions.
This makes coalesce more like a generic `RDD.repartition` operation.
(Ideally this `RDD.repartition` could automatically choose either a coalesce or
a shuffle if numPartitions was either less than or greater than, respectively,
the current number of partitions.)
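The decision rule described above reduces to a one-line comparison; a hypothetical sketch (real Spark code would call coalesce on the RDD directly rather than going through a helper like this):

```java
public class RepartitionChoice {
    // Coalesce without a shuffle can only reduce the partition count,
    // so growing the count requires shuffle = true.
    static String planRepartition(int current, int target) {
        return target > current
                ? "coalesce(shuffle = true)"
                : "coalesce(shuffle = false)";
    }

    public static void main(String[] args) {
        System.out.println(planRepartition(4, 16));  // growing: must shuffle
        System.out.println(planRepartition(16, 4));  // shrinking: no shuffle
    }
}
```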
Add metrics-ganglia to core pom file