Commit message (Author, Date, Files changed, Lines changed)
* Merge branch 'mesos' (haitao.yao, 2013-02-20, 4 files, -2/+22)
|\
| * Merge pull request #484 from andyk/master (Matei Zaharia, 2013-02-19, 2 files, -1/+2)
| |\
| | |   Fixes a broken link in documentation to issue tracker
| | * Fixes link to issue tracker in documentation page "Contributing to Spark". (Andy Konwinski, 2013-02-19, 2 files, -1/+2)
| | |
| * | Merge pull request #483 from rxin/splitpruningrdd2 (Matei Zaharia, 2013-02-19, 1 file, -0/+12)
| |\ \
| | |/
| |/|   Added a method to create PartitionPruningRDD.
| | * Added a method to create PartitionPruningRDD. (Reynold Xin, 2013-02-19, 1 file, -0/+12)
| |/
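
The #483 change above adds a factory method for pruning an RDD down to a subset of its partitions by index, without scanning any data. A minimal sketch of how that helper is typically called, assuming the spark.rdd.PartitionPruningRDD.create signature and the pre-org.apache package names from this era of the codebase:

    // Sketch only: typical use of the PartitionPruningRDD.create helper added in #483.
    // Package names (spark.SparkContext, spark.rdd.PartitionPruningRDD) are assumptions
    // based on the early-2013 layout; adjust for your checkout.
    import spark.SparkContext
    import spark.rdd.PartitionPruningRDD

    object PruneExample {
      def main(args: Array[String]) {
        val sc = new SparkContext("local", "prune-example")
        val data = sc.parallelize(1 to 100, 10)       // 10 partitions

        // Keep only the first half of the partitions. The filter runs over partition
        // indices on the driver, so no job is launched and no data is scanned.
        val pruned = PartitionPruningRDD.create(data, partitionIndex => partitionIndex < 5)

        println(pruned.partitions.length)             // 5
        sc.stop()
      }
    }
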
| * Merge pull request #477 from shivaram/ganglia-port-change (Matei Zaharia, 2013-02-18, 1 file, -1/+8)
| |\
| | |   Ganglia port change
| | * Print cluster url after setup completes (Shivaram Venkataraman, 2013-02-18, 1 file, -0/+5)
| | |
| | * Print ganglia url after setup (Shivaram Venkataraman, 2013-02-18, 1 file, -0/+2)
| | |
| | * Use port 5080 for httpd/ganglia (Shivaram Venkataraman, 2013-02-18, 1 file, -1/+1)
| |/
* | Merge branch 'mesos' (haitao.yao, 2013-02-19, 88 files, -749/+836)
|\|
| * Rename "jobs" to "applications" in the standalone cluster (Matei Zaharia, 2013-02-17, 34 files, -295/+299)
| |
| * Renamed "splits" to "partitions" (Matei Zaharia, 2013-02-17, 48 files, -390/+405)
| |
| * Clean up EC2 script options a bit (Matei Zaharia, 2013-02-17, 1 file, -9/+12)
| |
| * Change EC2 script to use 0.6 AMIs by default, for now (Matei Zaharia, 2013-02-17, 1 file, -5/+5)
| |
| * Merge pull request #421 from shivaram/spark-ec2-change (Matei Zaharia, 2013-02-17, 2 files, -15/+59)
| |\
| | |   Switch spark_ec2.py to use the new spark-ec2 scripts.
| | * Turn on ganglia by default (Shivaram Venkataraman, 2013-01-31, 1 file, -1/+1)
| | |
| | * Add an option to use the old scripts (Shivaram Venkataraman, 2013-01-28, 1 file, -13/+30)
| | |
| | * Add option to start ganglia. Also enable Hadoop ports even if cluster type is not mesos (Shivaram Venkataraman, 2013-01-27, 1 file, -8/+15)
| | |
| | * Fix swap variable name (Shivaram Venkataraman, 2013-01-27, 1 file, -1/+1)
| | |
| | * Update spark_ec2.py to use new spark-ec2 scripts (Shivaram Venkataraman, 2013-01-26, 2 files, -12/+32)
| | |
| * | Merge pull request #471 from stephenh/parallelrdd (Matei Zaharia, 2013-02-16, 3 files, -34/+29)
| |\ \
| | | |   Move ParallelCollection into spark.rdd package.
| | * | Move ParallelCollection into spark.rdd package. (Stephen Haberman, 2013-02-16, 3 files, -34/+29)
| | | |
| * | | Merge pull request #470 from stephenh/morek (Matei Zaharia, 2013-02-16, 6 files, -10/+10)
| |\ \ \
| | | | |   Make CoGroupedRDDs explicitly have the same key type.
| | * | | Make CoGroupedRDDs explicitly have the same key type. (Stephen Haberman, 2013-02-16, 6 files, -10/+10)
| | |/ /
| * | | Merge pull request #469 from stephenh/samepartitionercombine (Matei Zaharia, 2013-02-16, 2 files, -1/+26)
| |\ \ \
| | |/ /
| |/| |   If combineByKey is using the same partitioner, skip the shuffle.
| | * | Add assertion about dependencies. (Stephen Haberman, 2013-02-16, 2 files, -4/+14)
| | | |
| | * | Avoid a shuffle if combineByKey is passed the same partitioner. (Stephen Haberman, 2013-02-16, 2 files, -1/+16)
| |/ /
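
The #469 change above skips the shuffle when combineByKey is handed the same partitioner that already laid out its input. A minimal sketch of the pattern being optimized, again assuming the pre-org.apache package names (spark.SparkContext, spark.HashPartitioner):

    // Sketch only: when the upstream RDD is already partitioned by the partitioner that
    // combineByKey receives, the aggregation can run within each partition and no second
    // shuffle is needed.
    import spark.SparkContext
    import spark.SparkContext._        // pair-RDD functions via implicit conversion
    import spark.HashPartitioner

    object CombineByKeyExample {
      def main(args: Array[String]) {
        val sc = new SparkContext("local", "combine-example")
        val partitioner = new HashPartitioner(4)

        val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("b", 4)))
          .partitionBy(partitioner)                    // one shuffle here

        // Same partitioner as the data's current layout, so this step can aggregate
        // locally instead of shuffling a second time.
        val sums = pairs.combineByKey[Int](
          (v: Int) => v,                               // createCombiner
          (acc: Int, v: Int) => acc + v,               // mergeValue
          (a: Int, b: Int) => a + b,                   // mergeCombiners
          partitioner)

        sums.collect().foreach(println)                // (a,4) and (b,6), in some order
        sc.stop()
      }
    }

With a different partitioner (or none) on the input, the same call would shuffle again; reusing the partitioner is what lets the new code path avoid it.
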
| * | Merge pull request #467 from squito/executor_job_id (Matei Zaharia, 2013-02-15, 2 files, -3/+4)
| |\ \
| | | |   include jobid in Executor commandline args
| | * | use appid instead of frameworkid; simplify stupid condition (Imran Rashid, 2013-02-13, 1 file, -2/+2)
| | | |
| | * | include jobid in Executor commandline args (Imran Rashid, 2013-02-13, 2 files, -3/+4)
| | | |
* | | | support customized java options for master, worker, executor, repl shell (haitao.yao, 2013-02-16, 1 file, -0/+20)
| | | |
* | | | Merge branch 'mesos' (haitao.yao, 2013-02-16, 60 files, -207/+629)
|\| | |
| * | | Merge pull request #466 from pwendell/java-stream-transform (Tathagata Das, 2013-02-14, 2 files, -2/+77)
| |\ \ \
| | | | |   STREAMING-50: Support transform workaround in JavaPairDStream
| | * | | STREAMING-50: Support transform workaround in JavaPairDStream (Patrick Wendell, 2013-02-12, 2 files, -2/+77)
| | |/ /
| | | |   This ports a useful workaround (the `transform` function) to JavaPairDStream. It is
| | | |   necessary to do things like sorting which are not supported yet in the core streaming API.
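
The STREAMING-50 commit gives Java users the escape hatch Scala users already have: transform() hands each batch's RDD to an arbitrary function, so RDD-only operations such as sortByKey can still be applied. A minimal Scala sketch of that pattern (the socket source, host, and port are invented for illustration; package names follow the early-2013 spark.streaming layout):

    // Sketch only: sorting each batch via transform(), since the streaming API itself
    // has no sortByKey.
    import spark.SparkContext._                        // brings sortByKey into scope on pair RDDs
    import spark.streaming.{Seconds, StreamingContext}

    object TransformSortExample {
      def main(args: Array[String]) {
        val ssc = new StreamingContext("local[2]", "transform-sort", Seconds(1))

        // Hypothetical source: lines of "word count" arriving on a socket.
        val counts = ssc.socketTextStream("localhost", 9999)
          .map(_.split(" "))
          .map(fields => (fields(0), fields(1).toInt))

        // Drop down to the RDD level for each batch and sort by key, descending.
        val sortedPerBatch = counts.transform(rdd => rdd.sortByKey(false))

        sortedPerBatch.print()
        ssc.start()
      }
    }
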
| * | | Merge pull request #461 from JoshRosen/fix/issue-tracker-link (Matei Zaharia, 2013-02-13, 1 file, -1/+1)
| |\ \ \
| | |/ /
| |/| |   Update issue tracker link in contributing guide
| | * | Update issue tracker link in contributing guide. (Josh Rosen, 2013-02-10, 1 file, -1/+1)
| | | |
| * | | Merge pull request #464 from pwendell/java-type-fix (Matei Zaharia, 2013-02-11, 3 files, -9/+168)
| |\ \ \
| | | | |   SPARK-694: All references to [K, V] in JavaDStreamLike should be changed to [K2, V2]
| | * | | Using tuple swap() (Patrick Wendell, 2013-02-11, 1 file, -2/+2)
| | | | |
| | * | | small fix (Patrick Wendell, 2013-02-11, 1 file, -2/+2)
| | | | |
| | * | | Fix for MapPartitions (Patrick Wendell, 2013-02-11, 2 files, -17/+54)
| | | | |
| | * | | Fix for flatmap (Patrick Wendell, 2013-02-11, 2 files, -2/+44)
| | | | |
| | * | | Indentation fix (Patrick Wendell, 2013-02-11, 1 file, -10/+10)
| | | | |
| | * | | Initial cut at replacing K, V in Java files (Patrick Wendell, 2013-02-11, 3 files, -2/+82)
| | |/ /
| * | | Merge pull request #465 from pwendell/java-sort-fix (Matei Zaharia, 2013-02-11, 1 file, -1/+1)
| |\ \ \
| | | | |   SPARK-696: sortByKey should use 'ascending' parameter
| | * | | SPARK-696: sortByKey should use 'ascending' parameter (Patrick Wendell, 2013-02-11, 1 file, -1/+1)
| | |/ /
| * | | Formatting fixes (Matei Zaharia, 2013-02-11, 1 file, -13/+9)
| | | |
| * | | Fixed an exponential recursion that could happen with doCheckpoint due to lack of memoization (Matei Zaharia, 2013-02-11, 2 files, -12/+37)
| | | |
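
The doCheckpoint fix above is a standard DAG-traversal point: without remembering which RDDs have already been visited, a lineage with shared ancestors gets re-walked once per path, which grows exponentially. A small generic sketch of the difference (illustrative only, not the actual Spark code):

    // Sketch only: a naive recursive walk over a DAG with shared ancestors blows up;
    // memoizing visited nodes means each node is processed once.
    import scala.collection.mutable

    object DagWalk {
      case class Node(id: Int, deps: Seq[Node])

      // Naive: a chain of diamonds makes this revisit shared nodes exponentially often.
      def visitNaive(n: Node): Int =
        1 + n.deps.map(visitNaive).sum

      // Memoized: each node is processed once, which is the idea behind the fix.
      def visitMemoized(n: Node, seen: mutable.Set[Int] = mutable.Set.empty): Int =
        if (!seen.add(n.id)) 0
        else 1 + n.deps.map(visitMemoized(_, seen)).sum

      def main(args: Array[String]) {
        // Node i depends twice on node i-1, forming a chain of 20 diamonds.
        val base = Node(0, Nil)
        val top = (1 to 20).foldLeft(base)((prev, i) => Node(i, Seq(prev, prev)))
        println(visitMemoized(top))   // 21: each node visited once
        // visitNaive(top) would perform roughly two million visits of those same 21 nodes.
      }
    }
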
| * | | Some bug and formatting fixes to FT (Matei Zaharia, 2013-02-10, 5 files, -16/+21)
| | | |   Conflicts:
| | | |     core/src/main/scala/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
| | | |     core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
| * | | Detect hard crashes of workers using a heartbeat mechanism. (root, 2013-02-10, 8 files, -7/+62)
| | | |   Also fixes some issues in the rest of the code with detecting workers this way.
| | | |   Conflicts:
| | | |     core/src/main/scala/spark/deploy/master/Master.scala
| | | |     core/src/main/scala/spark/deploy/worker/Worker.scala
| | | |     core/src/main/scala/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
| | | |     core/src/main/scala/spark/scheduler/cluster/StandaloneClusterMessage.scala
| | | |     core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
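
The heartbeat commit above has the standalone Master track when each worker last reported in and treat anything silent past a timeout as crashed. A generic sketch of that bookkeeping, purely illustrative and not the actual Master/Worker code (the class and method names here are invented):

    // Sketch only: heartbeat-based liveness tracking. Workers report in periodically;
    // a periodic sweep marks anything silent for longer than the timeout as dead.
    import scala.collection.mutable

    class HeartbeatMonitor(timeoutMs: Long) {
      private val lastHeartbeat = mutable.Map.empty[String, Long]

      // Called whenever a heartbeat message arrives from a worker.
      def recordHeartbeat(workerId: String, now: Long = System.currentTimeMillis()) {
        lastHeartbeat(workerId) = now
      }

      // Called on a timer; returns the workers that have gone silent so the caller
      // can remove them and reschedule their executors.
      def sweep(now: Long = System.currentTimeMillis()): Seq[String] = {
        val dead = lastHeartbeat.collect {
          case (id, lastSeen) if now - lastSeen > timeoutMs => id
        }.toSeq
        dead.foreach(lastHeartbeat.remove)
        dead
      }
    }
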
| * | | Use a separate memory setting for standalone cluster daemons (Matei Zaharia, 2013-02-10, 3 files, -1/+29)
| | | |   Conflicts:
| | | |     docs/_config.yml