Adding Scala version of PageRank example
Fix bug reported in PR 791: a race condition in ConnectionManager and Connection
Conflicts:
core/src/main/scala/spark/ui/jobs/IndexPage.scala
core/src/main/scala/spark/ui/jobs/StagePage.scala
This commit fixes issues where SparkListeners that take a while to
process events slow down the DAGScheduler.
It also fixes a bug in the UI where, if a user goes to the web page
of a stage that does not exist, they can create a memory leak
(granted, this is not an issue at small scale -- probably only
an issue if someone actively tried to DoS the UI).
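The standard way to keep slow listeners from stalling the scheduler is to decouple them with an event queue drained by a dedicated thread. The sketch below illustrates that pattern only; `AsyncListenerBus` and its methods are hypothetical names, not the actual DAGScheduler code.

```python
import queue
import threading

class AsyncListenerBus:
    """Delivers events to listeners on a background thread, so a slow
    listener cannot block the thread that posts events (illustrative
    sketch of the technique, not Spark's implementation)."""

    def __init__(self, capacity=10000):
        self._queue = queue.Queue(maxsize=capacity)
        self._listeners = []
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def add_listener(self, listener):
        self._listeners.append(listener)

    def post(self, event):
        # The posting thread enqueues and returns; it never waits on
        # listeners, so event processing cost stays off the hot path.
        self._queue.put(event)

    def _run(self):
        while True:
            event = self._queue.get()
            if event is None:  # shutdown sentinel
                return
            for listener in self._listeners:
                listener(event)

    def stop(self):
        # Enqueue the sentinel, then wait for the drain thread to finish.
        self._queue.put(None)
        self._thread.join()
```

The bounded queue also caps memory use: if listeners fall too far behind, `post` blocks (or events could be dropped) instead of accumulating without limit.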
One bug caused the UI to crash if you tried to look at a job's status
before any of the tasks had finished.
The second bug was a concurrency issue where two different threads
(the scheduling thread and a UI thread) could read and update
the data structures in JobProgressListener concurrently.
The third bug misused an Option, also causing the UI to crash
under certain conditions.
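The second and third bugs combine into one common fix pattern: route every read and write of the shared state through a single lock, and make lookups of not-yet-populated keys return a default rather than fail. The sketch below shows the pattern with hypothetical names (`JobProgressTracker` is not the real class):

```python
import threading

class JobProgressTracker:
    """Sketch of the fix pattern: one lock guards the shared progress
    map, so the scheduling thread and a UI thread never see a
    half-updated structure (hypothetical stand-in for the real code)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._completed_tasks = {}  # stage id -> finished-task count

    def task_finished(self, stage_id):
        # Called by the scheduling thread.
        with self._lock:
            self._completed_tasks[stage_id] = \
                self._completed_tasks.get(stage_id, 0) + 1

    def progress(self, stage_id):
        # Called by a UI thread. Using .get() with a default mirrors the
        # Option fix: a stage with no finished tasks yet reads as 0
        # instead of crashing the page.
        with self._lock:
            return self._completed_tasks.get(stage_id, 0)
```

Coarse locking like this is correct first; if the lock becomes a bottleneck, the usual next step is a concurrent map or snapshot copies for readers.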
Expose HDFS file system stats via Executor metrics
Pull HBASE_VERSION into the head of the sbt build
Java fixes, tests and examples for ALS, KMeans
The Scala constructor works with native Java types. Modify the examples
to match this.
Also work around a bug where a double[][] class cast fails
- Changes ALS to accept RDD[Rating] instead of (Int, Int, Double), making it
  easier to call from Java
- Renames class methods from `train` to `run` to enable static methods to be
  called from Java
- Adds unit tests which check that both static and class methods can be called
- Also adds examples which port the main() function in ALS and KMeans to the
  examples project
A couple of minor changes to existing code:
- Adds a toJavaRDD method in RDD to convert a Scala RDD to a Java RDD easily
- Works around a bug where using double[] from Java leads to a class cast
  exception in KMeans init
Remove extra synchronization in ResultTask
Log the launch command for Spark daemons
For debugging and analysis purposes, it's nice to have the exact command
used to launch Spark contained within the logs. This adds the necessary
hooks to make that possible.
For standalone mode, add worker local env setting of SPARK_JAVA_OPTS as ...
Signed-off-by: shane-huang <shengsheng.huang@intel.com>
default and let application env override default options if applicable
Signed-off-by: shane-huang <shengsheng.huang@intel.com>
Update to Chill 0.3.1
Bootstrap re-design
Conflicts:
core/src/main/scala/spark/ui/UIUtils.scala
core/src/main/scala/spark/ui/jobs/IndexPage.scala
core/src/main/scala/spark/ui/storage/RDDPage.scala
Fixed issue in UI that decreased scheduler throughput by 5x or more
Removal of items from ArrayBuffers in the UI code was slow and
significantly impacted scheduler throughput. This commit
improves scheduler throughput by 5x.
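The slowness described above is the classic cost profile of a dynamic array: removing k items one at a time costs O(n) per removal (each remove scans and shifts the tail), while a single filtering pass rebuilds the structure in O(n) total. A minimal sketch of the two approaches, with hypothetical function names:

```python
def prune_one_by_one(buf, stale):
    """Slow pattern: each remove() is a linear scan plus a shift of
    everything after the removed element -- O(n * k) overall."""
    for item in list(stale):
        buf.remove(item)
    return buf

def prune_with_filter(buf, stale):
    """Fast pattern: one O(n) pass that keeps only live items, the
    kind of rewrite this commit's fix relies on."""
    stale = set(stale)  # O(1) membership tests
    return [item for item in buf if item not in stale]
```

Both return the same result; the difference only shows up at scale, which is why the 5x throughput impact surfaced in the scheduler rather than in small tests.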
Update the Python logistic regression example
Batch input records for more efficient NumPy computations
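The idea behind this change is that one vectorized NumPy call over a batch of records is far cheaper than one Python-level call per record. A minimal, stdlib-only sketch of the batching half (the `batches` helper is illustrative, not the example's actual code):

```python
def batches(records, batch_size):
    """Group an iterable of records into fixed-size lists, so downstream
    code can run one vectorized computation per batch instead of one
    interpreted call per record."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch
```

Each yielded list would then be converted to a single NumPy array, moving the per-record loop out of Python and into compiled code.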
SPARK-826: fold(), reduce(), collect() always attempt to use Java serialization