Commit history (each entry: message, author, date, files changed, lines -/+)
* Improve docs for GraphOps (Ankur Dave, 2014-01-10; 1 file, -53/+25)
|
* Remove duplicate method in GraphLoader and improve docs (Ankur Dave, 2014-01-10; 1 file, -50/+13)
|
* Improve docs for EdgeRDD, EdgeTriplet, and GraphLab (Ankur Dave, 2014-01-10; 3 files, -35/+24)
|
* Remove commented-out perf files (Ankur Dave, 2014-01-10; 2 files, -151/+0)
|
* Remove some commented code (Ankur Dave, 2014-01-10; 2 files, -5/+0)
|
* Finish cleaning up Graph docs (Ankur Dave, 2014-01-10; 1 file, -98/+82)
|
* Start cleaning up Scaladocs in Graph and EdgeRDD (Ankur Dave, 2014-01-10; 2 files, -35/+27)
|
* Generate GraphX docs (Ankur Dave, 2014-01-10; 1 file, -1/+1)
|
* Add back Bagel links to docs, but mark them superseded (Ankur Dave, 2014-01-10; 5 files, -14/+21)
|
* Remove EdgeTriplet.{src,dst}Stale, which were unused (Ankur Dave, 2014-01-10; 1 file, -3/+0)
|
* Remove commented code from Analytics (Ankur Dave, 2014-01-10; 1 file, -430/+0)
|
* Update graphx/pom.xml to mirror mllib/pom.xml (Ankur Dave, 2014-01-10; 1 file, -69/+7)
|
* Merge pull request #1 from jegonzal/graphx (Ankur Dave, 2014-01-10; 12 files, -159/+134)
|\      ProgrammingGuide
| * WIP. Updating figures and cleaning up initial skeleton for GraphX Programming guide. (Joseph E. Gonzalez, 2014-01-10; 12 files, -159/+134)
| |
* | Undo 8b6b8ac87f6ffb92b3395344bf2696d5c7fb3798 (Ankur Dave, 2014-01-10; 1 file, -7/+3)
| |     Getting unpersist right in GraphLab is tricky.
* | graph -> graphx in log4j.properties (Ankur Dave, 2014-01-10; 1 file, -1/+1)
| |
* | Avoid recomputation by caching all multiply-used RDDs (Ankur Dave, 2014-01-10; 11 files, -53/+67)
| |
* | Unpersist previous iterations in GraphLab (Ankur Dave, 2014-01-10; 1 file, -6/+10)
| |
* | Add Graph.unpersistVertices() (Ankur Dave, 2014-01-09; 3 files, -8/+18)
| |
* | Unpersist previous iterations in Pregel (Ankur Dave, 2014-01-09; 6 files, -7/+41)
| |
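The three commits above (caching multiply-used RDDs, Graph.unpersistVertices(), and unpersisting previous iterations in Pregel and GraphLab) implement a common Spark memory-management pattern: cache the data needed for the next iteration, force materialization, then release the previous iteration's cached RDDs. The following is a hedged sketch of that loop shape using GraphX API names of this era (mapReduceTriplets, innerJoin, outerJoinVertices, unpersistVertices); it is not the literal diff, and sendMsg, mergeMsg, and vprog are placeholders for the caller-supplied Pregel functions:

```scala
// Sketch only: assumes an existing graph: Graph[VD, ED] and the Pregel
// user functions sendMsg, mergeMsg, and vprog. Not runnable standalone.
var g = graph.cache()
var messages = g.mapReduceTriplets(sendMsg, mergeMsg).cache()
var activeMessages = messages.count()
var i = 0
while (activeMessages > 0 && i < maxIterations) {
  val newVerts = g.vertices.innerJoin(messages)(vprog).cache()
  val prevG = g
  g = g.outerJoinVertices(newVerts) { (vid, old, updated) =>
    updated.getOrElse(old)
  }.cache()
  val oldMessages = messages
  messages = g.mapReduceTriplets(sendMsg, mergeMsg).cache()
  activeMessages = messages.count()          // materialize before releasing
  oldMessages.unpersist(blocking = false)    // previous messages now unreferenced
  prevG.unpersistVertices(blocking = false)  // the new Graph.unpersistVertices()
  i += 1
}
```

The count() before the unpersist calls matters: releasing the previous iteration's cache before the new RDDs are materialized would force recomputation of the lineage that was just dropped.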
* | graph -> graphx in bin/compute-classpath.sh (Ankur Dave, 2014-01-09; 1 file, -2/+2)
| |
* | Add implicit algorithm methods for Graph; remove standalone PageRank (Ankur Dave, 2014-01-09; 10 files, -85/+99)
| |
* | graph -> graphx (Ankur Dave, 2014-01-09; 50 files, -111/+111)
| |
* | Svdpp -> SVDPlusPlus (Ankur Dave, 2014-01-09; 2 files, -11/+11)
| |
* | Pid -> PartitionID (Ankur Dave, 2014-01-09; 8 files, -35/+36)
| |
* | Vid -> VertexID (Ankur Dave, 2014-01-09; 31 files, -221/+234)
| |
* | Unwrap Graph.mapEdges signature (Ankur Dave, 2014-01-09; 1 file, -3/+1)
| |
* | Revert changes to examples/.../PageRankUtils.scala (Ankur Dave, 2014-01-09; 1 file, -3/+3)
| |     Reverts to 04d83fc37f9eef89c20331c85291a0a169f75e6d:examples/src/main/scala/org/apache/spark/examples/bagel/PageRankUtils.scala.
* | Make GraphImpl serializable to work around capture (Ankur Dave, 2014-01-09; 1 file, -1/+1)
|/
* Start fixing formatting of graphx-programming-guide (Ankur Dave, 2014-01-09; 1 file, -7/+6)
|
* Add docs/graphx-programming-guide.md from 7210257ba3038d5e22d4b60fe9c3113dc45c3dff:README.md (Ankur Dave, 2014-01-09; 1 file, -0/+197)
|
* Removed Kryo dependency and graphx-shell (Ankur Dave, 2014-01-09; 7 files, -131/+8)
|
* Remove GraphX README (Ankur Dave, 2014-01-08; 1 file, -131/+53)
|
* Fix AbstractMethodError by inlining zip{Edge,Vertex}Partitions (Ankur Dave, 2014-01-08; 3 files, -49/+35)
|     The zip{Edge,Vertex}Partitions methods created doubly-nested closures and passed them to zipPartitions. For some reason this caused an AbstractMethodError when zipPartitions tried to invoke the closure. This commit works around the problem by inlining these methods wherever they are called, eliminating the doubly-nested closure.
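The closure structure described in the commit message above can be sketched in plain Scala. This is an illustrative stand-in, not GraphX's actual code: zipPartitionsLike and zipEdgePartitionsLike are hypothetical names mimicking the shape of RDD.zipPartitions and the removed helpers:

```scala
object ClosureInliningSketch {
  // Stand-in for RDD.zipPartitions: applies a user closure to two partitions.
  def zipPartitionsLike[A, B, C](as: Iterator[A], bs: Iterator[B])
                                (f: (Iterator[A], Iterator[B]) => Iterator[C]): Iterator[C] =
    f(as, bs)

  // Before: a helper like zipEdgePartitions wraps the caller's closure f in a
  // second anonymous function, so the closure handed to zipPartitionsLike is
  // doubly nested (an inner closure capturing the outer one, f).
  def zipEdgePartitionsLike[C](as: Iterator[Int], bs: Iterator[Int])
                              (f: (Iterator[Int], Iterator[Int]) => Iterator[C]): Iterator[C] =
    zipPartitionsLike(as, bs) { (a, b) => f(a, b) }

  def main(args: Array[String]): Unit = {
    // After: the call site is inlined and passes a single closure directly.
    val inlined = zipPartitionsLike(Iterator(1, 2), Iterator(3, 4)) { (a, b) => a ++ b }
    println(inlined.toList) // List(1, 2, 3, 4)
  }
}
```

Inlining removes the intermediate anonymous function, so the closure Spark serializes and invokes has one fewer level of nesting, which is what sidesteps the AbstractMethodError.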
* Take SparkConf in constructor of Serializer subclasses (Ankur Dave, 2014-01-08; 2 files, -19/+26)
|
* Manifest -> Tag in variable names (Ankur Dave, 2014-01-08; 3 files, -15/+15)
|
* ClassManifest -> ClassTag (Ankur Dave, 2014-01-08; 19 files, -111/+129)
|
* Fix mis-merge in 44fd30d3fbcf830deecbe8ea3e8ea165e74e6edd (Ankur Dave, 2014-01-08; 1 file, -0/+5)
|
* Merge remote-tracking branch 'spark-upstream/master' into HEAD (Ankur Dave, 2014-01-08; 496 files, -8303/+17474)
|\      Conflicts:
| |         README.md
| |         core/src/main/scala/org/apache/spark/util/collection/OpenHashMap.scala
| |         core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala
| |         core/src/main/scala/org/apache/spark/util/collection/PrimitiveKeyOpenHashMap.scala
| |         pom.xml
| |         project/SparkBuild.scala
| |         repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala
| * Merge pull request #360 from witgo/master (Reynold Xin, 2014-01-08; 1 file, -1/+1)
| |\        Fix "show version: command not found" in make-distribution.sh
| | * Fix "show version: command not found" in make-distribution.sh (liguoqiang, 2014-01-09; 1 file, -1/+1)
| | |
| * | Merge pull request #357 from hsaputra/set_boolean_paramname (Reynold Xin, 2014-01-08; 2 files, -3/+4)
| |\ \      Set boolean param name for call to SparkHadoopMapReduceUtil.newTaskAttemptID to make it clear which param is being set.
| | * | Resolve PR review: line over 100 chars (Henry Saputra, 2014-01-08; 1 file, -1/+2)
| | | |
| | * | Set boolean param name in two files' calls to SparkHadoopMapReduceUtil.newTaskAttemptID to make it clear which param is being set. (Henry Saputra, 2014-01-07; 2 files, -3/+3)
| | | |
| * | | Merge pull request #358 from pwendell/add-cdh (Patrick Wendell, 2014-01-08; 1 file, -0/+5)
| |\ \ \        Add CDH Repository to Maven Build. At some point this was removed from the Maven build, so this adds it back. It's needed for the Hadoop2 tests we run on Jenkins, and it's also included in the SBT build.
| | * | | Add CDH Repository to Maven Build (Patrick Wendell, 2014-01-08; 1 file, -0/+5)
| | | | |
| * | | | Merge pull request #356 from hsaputra/remove_deprecated_cleanup_method (Reynold Xin, 2014-01-08; 2 files, -6/+0)
| |\ \ \ \
| | |_|_|/
| |/| | |      Remove calls to mapred's deprecated OutputCommitter.cleanupJob. Since Hadoop 1.0.4, the mapred OutputCommitter.commitJob does the cleanup itself via a call to OutputCommitter.cleanupJob. Also remove SparkHadoopWriter.cleanup, since it is used only by PairRDDFunctions. In fact, the implementation of mapred OutputCommitter.commitJob looks like this: public void commitJob(JobContext jobContext) throws IOException { cleanupJob(jobContext); }
| | * | | Remove calls to mapred's deprecated OutputCommitter.cleanupJob, because since Hadoop 1.0.4 the mapred OutputCommitter.commitJob does the cleanup itself. (Henry Saputra, 2014-01-07; 2 files, -6/+0)
| | | | |      In fact, the implementation of mapred OutputCommitter.commitJob looks like this: public void commitJob(JobContext jobContext) throws IOException { cleanupJob(jobContext); } (The jobContext argument is of type org.apache.hadoop.mapred.JobContext.)
| * | | | Merge pull request #345 from colorant/yarn (Thomas Graves, 2014-01-08; 4 files, -3/+7)
| |\ \ \ \
| | |_|/ /
| |/| | |      Support distributing extra files to workers in yarn-client mode, so that the user doesn't need to package every dependency into one assembly jar as the Spark app jar.
| | * | | Export --file for YarnClient mode to support sending extra files to workers on the yarn cluster (Raymond Liu, 2014-01-07; 2 files, -1/+5)