Commit message | Author | Date | Files | Lines
---|---|---|---|---
Merge branch 'mesos' | haitao.yao | 2013-02-20 | 4 | -2/+22
Merge pull request #484 from andyk/master (Fixes a broken link in documentation to issue tracker) | Matei Zaharia | 2013-02-19 | 2 | -1/+2
Fixes link to issue tracker in documentation page "Contributing to Spark". | Andy Konwinski | 2013-02-19 | 2 | -1/+2
Merge pull request #483 from rxin/splitpruningrdd2 (Added a method to create PartitionPruningRDD) | Matei Zaharia | 2013-02-19 | 1 | -0/+12
Added a method to create PartitionPruningRDD. | Reynold Xin | 2013-02-19 | 1 | -0/+12
Merge pull request #477 from shivaram/ganglia-port-change (Ganglia port change) | Matei Zaharia | 2013-02-18 | 1 | -1/+8
Print cluster url after setup completes | Shivaram Venkataraman | 2013-02-18 | 1 | -0/+5
Print ganglia url after setup | Shivaram Venkataraman | 2013-02-18 | 1 | -0/+2
Use port 5080 for httpd/ganglia | Shivaram Venkataraman | 2013-02-18 | 1 | -1/+1
Merge branch 'mesos' | haitao.yao | 2013-02-19 | 88 | -749/+836
Rename "jobs" to "applications" in the standalone cluster | Matei Zaharia | 2013-02-17 | 34 | -295/+299
Renamed "splits" to "partitions" | Matei Zaharia | 2013-02-17 | 48 | -390/+405
Clean up EC2 script options a bit | Matei Zaharia | 2013-02-17 | 1 | -9/+12
Change EC2 script to use 0.6 AMIs by default, for now | Matei Zaharia | 2013-02-17 | 1 | -5/+5
Merge pull request #421 from shivaram/spark-ec2-change (Switch spark_ec2.py to use the new spark-ec2 scripts) | Matei Zaharia | 2013-02-17 | 2 | -15/+59
Turn on ganglia by default | Shivaram Venkataraman | 2013-01-31 | 1 | -1/+1
Add an option to use the old scripts | Shivaram Venkataraman | 2013-01-28 | 1 | -13/+30
Add option to start ganglia. Also enable Hadoop ports even if cluster type is not mesos | Shivaram Venkataraman | 2013-01-27 | 1 | -8/+15
Fix swap variable name | Shivaram Venkataraman | 2013-01-27 | 1 | -1/+1
Update spark_ec2.py to use new spark-ec2 scripts | Shivaram Venkataraman | 2013-01-26 | 2 | -12/+32
Merge pull request #471 from stephenh/parallelrdd (Move ParallelCollection into spark.rdd package) | Matei Zaharia | 2013-02-16 | 3 | -34/+29
Move ParallelCollection into spark.rdd package. | Stephen Haberman | 2013-02-16 | 3 | -34/+29
Merge pull request #470 from stephenh/morek (Make CoGroupedRDDs explicitly have the same key type) | Matei Zaharia | 2013-02-16 | 6 | -10/+10
Make CoGroupedRDDs explicitly have the same key type. | Stephen Haberman | 2013-02-16 | 6 | -10/+10
Merge pull request #469 from stephenh/samepartitionercombine (If combineByKey is using the same partitioner, skip the shuffle) | Matei Zaharia | 2013-02-16 | 2 | -1/+26
Add assertion about dependencies. | Stephen Haberman | 2013-02-16 | 2 | -4/+14
Avoid a shuffle if combineByKey is passed the same partitioner. | Stephen Haberman | 2013-02-16 | 2 | -1/+16
Merge pull request #467 from squito/executor_job_id (include jobid in Executor commandline args) | Matei Zaharia | 2013-02-15 | 2 | -3/+4
use appid instead of frameworkid; simplify stupid condition | Imran Rashid | 2013-02-13 | 1 | -2/+2
include jobid in Executor commandline args | Imran Rashid | 2013-02-13 | 2 | -3/+4
support customized java options for master, worker, executor, repl shell | haitao.yao | 2013-02-16 | 1 | -0/+20
Merge branch 'mesos' | haitao.yao | 2013-02-16 | 60 | -207/+629
Merge pull request #466 from pwendell/java-stream-transform (STREAMING-50: Support transform workaround in JavaPairDStream) | Tathagata Das | 2013-02-14 | 2 | -2/+77
STREAMING-50: Support transform workaround in JavaPairDStream. This ports a useful workaround (the `transform` function) to JavaPairDStream; it is necessary for things like sorting, which are not supported yet in the core streaming API. | Patrick Wendell | 2013-02-12 | 2 | -2/+77
Merge pull request #461 from JoshRosen/fix/issue-tracker-link (Update issue tracker link in contributing guide) | Matei Zaharia | 2013-02-13 | 1 | -1/+1
Update issue tracker link in contributing guide. | Josh Rosen | 2013-02-10 | 1 | -1/+1
Merge pull request #464 from pwendell/java-type-fix (SPARK-694: All references to [K, V] in JavaDStreamLike should be changed to [K2, V2]) | Matei Zaharia | 2013-02-11 | 3 | -9/+168
Using tuple swap() | Patrick Wendell | 2013-02-11 | 1 | -2/+2
small fix | Patrick Wendell | 2013-02-11 | 1 | -2/+2
Fix for MapPartitions | Patrick Wendell | 2013-02-11 | 2 | -17/+54
Fix for flatmap | Patrick Wendell | 2013-02-11 | 2 | -2/+44
Indentation fix | Patrick Wendell | 2013-02-11 | 1 | -10/+10
Initial cut at replacing K, V in Java files | Patrick Wendell | 2013-02-11 | 3 | -2/+82
Merge pull request #465 from pwendell/java-sort-fix (SPARK-696: sortByKey should use 'ascending' parameter) | Matei Zaharia | 2013-02-11 | 1 | -1/+1
SPARK-696: sortByKey should use 'ascending' parameter | Patrick Wendell | 2013-02-11 | 1 | -1/+1
Formatting fixes | Matei Zaharia | 2013-02-11 | 1 | -13/+9
Fixed an exponential recursion that could happen with doCheckpoint due to lack of memoization | Matei Zaharia | 2013-02-11 | 2 | -12/+37
Some bug and formatting fixes to FT (merge conflicts: core/src/main/scala/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala, core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala) | Matei Zaharia | 2013-02-10 | 5 | -16/+21
Detect hard crashes of workers using a heartbeat mechanism. Also fixes some issues in the rest of the code with detecting workers this way. (Merge conflicts: core/src/main/scala/spark/deploy/master/Master.scala, core/src/main/scala/spark/deploy/worker/Worker.scala, core/src/main/scala/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala, core/src/main/scala/spark/scheduler/cluster/StandaloneClusterMessage.scala, core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala) | root | 2013-02-10 | 8 | -7/+62
Use a separate memory setting for standalone cluster daemons (merge conflict: docs/_config.yml) | Matei Zaharia | 2013-02-10 | 3 | -1/+29