path: root/examples
Commit message  (Author, Date; files changed, lines -/+)
* Merge branch 'master' into streaming  (Matei Zaharia, 2013-01-20; 1 file, -39/+20)
|\
| |     Conflicts:
| |         core/src/main/scala/spark/api/python/PythonRDD.scala
| * Minor formatting fixes  (Matei Zaharia, 2013-01-20; 1 file, -2/+2)
| * Use only one update function and pass in transpose of ratings matrix where appropriate  (Nick Pentreath, 2013-01-17; 1 file, -29/+3)
| * Fixed index error missing first argument  (Nick Pentreath, 2013-01-17; 1 file, -1/+1)
| * Adding default command line args to SparkALS  (Nick Pentreath, 2013-01-17; 1 file, -10/+17)
* | Merge branch 'mesos-streaming' into streaming  (Tathagata Das, 2013-01-20; 4 files, -1/+175)
|\ \
| | |   Conflicts:
| | |       core/src/main/scala/spark/api/java/JavaRDDLike.scala
| | |       core/src/main/scala/spark/api/java/JavaSparkContext.scala
| | |       core/src/test/scala/spark/JavaAPISuite.java
| * | NetworkWordCount example  (Patrick Wendell, 2013-01-17; 2 files, -1/+63)
| * | Adding queueStream and some slight refactoring  (Patrick Wendell, 2013-01-17; 1 file, -0/+62)
| * | Small doc fix  (Patrick Wendell, 2013-01-17; 1 file, -1/+1)
| * | Two changes:  (Patrick Wendell, 2013-01-14; 1 file, -2/+2)
| | |     - Updating countByX() types based on bug fix
| | |     - Porting new documentation to Java
| * | Flume example and bug fix  (Patrick Wendell, 2013-01-14; 1 file, -0/+50)
* | | Merge branch 'master' into streaming  (Tathagata Das, 2013-01-15; 2 files, -1/+12)
|\ \ \
| |/ /
|/| /
| |/    Conflicts:
| |         core/src/main/scala/spark/rdd/CoGroupedRDD.scala
| |         core/src/main/scala/spark/rdd/FilteredRDD.scala
| |         docs/_layouts/global.html
| |         docs/index.md
| |         run
| * Update examples/src/main/scala/spark/examples/LocalLR.scala  (Eric Zhang, 2013-01-13; 1 file, -1/+1)
| |     fix spelling mistake
| * Rename environment variable for hadoop profiles to hadoopVersion  (Shivaram Venkataraman, 2013-01-12; 1 file, -2/+2)
| * Activate hadoop2 profile in pom.xml with -Dhadoop=2  (Shivaram Venkataraman, 2013-01-10; 1 file, -0/+6)
| * Activate hadoop1 if property hadoop is missing. hadoop2 can be activated now  (Shivaram Venkataraman, 2013-01-08; 1 file, -1/+3)
| |     by using -Dhadoop -Phadoop2.
| * Activate hadoop1 profile by default for maven builds  (Shivaram Venkataraman, 2013-01-07; 1 file, -0/+3)
* | Removed stream id from the constructor of NetworkReceiver to make it easier for PluggableNetworkInputDStream.  (Tathagata Das, 2013-01-13; 1 file, -7/+8)
* | Making the Twitter example distributed.  (Patrick Wendell, 2013-01-07; 2 files, -37/+62)
| |     This adds a distributed (receiver-based) implementation of the Twitter
| |     dstream. It also changes the example to perform a distributed sort
| |     rather than collecting the dataset at one node.
* | Moved Twitter example to where the other examples are.  (Tathagata Das, 2013-01-07; 2 files, -0/+105)
* | Renamed examples and added documentation.  (Tathagata Das, 2013-01-07; 10 files, -274/+97)
* | Moved Spark Streaming examples to examples sub-project.  (Tathagata Das, 2013-01-06; 12 files, -0/+615)
|/
* Mark hadoop dependencies provided in all library artifacts  (Thomas Dudziak, 2012-12-10; 1 file, -0/+3)
* Use the same output directories that SBT had in subprojects  (Matei Zaharia, 2012-12-10; 1 file, -1/+3)
|     This will make it easier to make the "run" script work with a Maven build.
* Updated versions in the pom.xml files to match current master  (Thomas Dudziak, 2012-11-27; 1 file, -1/+1)
* Addressed code review comments  (Thomas Dudziak, 2012-11-27; 1 file, -0/+1)
* Added maven and debian build files  (Thomas Dudziak, 2012-11-20; 1 file, -0/+100)
* Fix K-means example a little  (root, 2012-11-10; 1 file, -16/+11)
* Some doc and usability improvements:  (Matei Zaharia, 2012-10-12; 2 files, -2/+2)
|     - Added a StorageLevels class for easy access to StorageLevel constants in Java
|     - Added doc comments on Function classes in Java
|     - Updated Accumulator and HadoopWriter docs slightly
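
The StorageLevels addition above is easiest to see from user code. Below is a minimal, hypothetical sketch of persisting an RDD from Java; the package path spark.api.java, the class name StorageLevels, and the constant MEMORY_ONLY are assumptions inferred from the commit message and the era's API layout, not confirmed by this log:

    // Hypothetical sketch; package, class, and constant names are assumptions.
    import spark.api.java.JavaRDD;
    import spark.api.java.JavaSparkContext;
    import spark.api.java.StorageLevels;

    public class PersistExample {
        public static void main(String[] args) {
            JavaSparkContext sc = new JavaSparkContext("local", "PersistExample");
            JavaRDD<String> lines = sc.textFile("README.md");
            // A named constant instead of constructing a StorageLevel by hand
            // with several boolean flags.
            lines.persist(StorageLevels.MEMORY_ONLY);
            System.out.println("lines: " + lines.count());
            System.exit(0);
        }
    }
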
* Conflict fixed  (Mosharaf Chowdhury, 2012-10-02; 10 files, -10/+10)
|\
| * More updates to documentation  (Matei Zaharia, 2012-09-25; 10 files, -10/+10)
* | Bug fix. Fixed log messages. Updated BroadcastTest example to have iterations.  (Mosharaf Chowdhury, 2012-08-30; 1 file, -3/+7)
|/
* Cache points in SparkLR example.  (Josh Rosen, 2012-08-26; 1 file, -2/+2)
* Renamed apply() to call() in Java API and allowed it to throw Exceptions  (Matei Zaharia, 2012-08-12; 4 files, -21/+25)
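
The apply()-to-call() rename above changes what user code implements: the Function classes now define call(), declared to throw Exception, so checked exceptions need no wrapping. A minimal, hypothetical sketch, assuming the era's spark.api.java package layout (the package path and exact generics are assumptions from the commit message):

    // Hypothetical sketch; package and class names are assumptions.
    import spark.api.java.JavaRDD;
    import spark.api.java.JavaSparkContext;
    import spark.api.java.function.Function;

    public class CallExample {
        public static void main(String[] args) throws Exception {
            JavaSparkContext sc = new JavaSparkContext("local", "CallExample");
            JavaRDD<String> lines = sc.textFile("README.md");
            // call() replaces apply() and may throw checked exceptions.
            JavaRDD<Integer> lengths = lines.map(new Function<String, Integer>() {
                public Integer call(String s) throws Exception {
                    return s.length();
                }
            });
            System.out.println("records mapped: " + lengths.count());
            System.exit(0);
        }
    }
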
* move Vector class into core and spark.util package  (Imran Rashid, 2012-07-28; 7 files, -88/+6)
* Remove StringOps.split() from Java WordCount.  (Josh Rosen, 2012-07-25; 1 file, -5/+2)
* Minor cleanup and optimizations in Java API.  (Josh Rosen, 2012-07-24; 2 files, -10/+13)
|     - Add override keywords.
|     - Cache RDDs and counts in TC example.
|     - Clean up JavaRDDLike's abstract methods.
* Improve Java API examples  (Josh Rosen, 2012-07-22; 5 files, -198/+143)
|     - Replace JavaLR example with JavaHdfsLR example.
|     - Use anonymous classes in JavaWordCount; add options.
|     - Remove @Override annotations.
* Add Java API  (Josh Rosen, 2012-07-18; 5 files, -0/+355)
|     Add distinct() method to RDD. Fix bug in DoubleRDDFunctions.
* Add System.exit(0) at the end of all the example programs.  (Matei Zaharia, 2012-06-05; 12 files, -0/+19)
* Format the code as coding style agreed by Matei/TD/Haoyuan  (haoyuan, 2012-02-09; 1 file, -1/+1)
* Some fixes to the examples (mostly to use functional API)  (Matei Zaharia, 2012-01-31; 4 files, -76/+72)
* Merge pull request #103 from edisontung/master  (Matei Zaharia, 2012-01-13; 2 files, -56/+142)
|\
| |     Made improvements to takeSample. Also changed SparkLocalKMeans to SparkKMeans.
| * Revert de01b6deaaee1b43321e0aac330f4a98c0ea61c6^..HEAD  (Edison Tung, 2011-12-01; 1 file, -73/+0)
| * Renamed SparkLocalKMeans to SparkKMeans  (Edison Tung, 2011-12-01; 1 file, -56/+62)
| * Added KMeans examples  (Edison Tung, 2011-11-21; 2 files, -0/+153)
| |     LocalKMeans runs locally with a randomly generated dataset.
| |     SparkLocalKMeans takes an input file and runs KMeans on it.
* | Merge commit 'ad4ebff42c1b738746b2b9ecfbb041b6d06e3e16'  (Matei Zaharia, 2011-12-14; 1 file, -0/+18)
|\ \
| * | Report errors in tasks to the driver via a Mesos status update  (Ankur Dave, 2011-11-14; 1 file, -0/+18)
| |/
| |     When a task throws an exception, the Spark executor previously just
| |     logged it to a local file on the slave and exited. This commit causes
| |     Spark to also report the exception back to the driver using a Mesos
| |     status update, so the user doesn't have to look through a log file on
| |     the slave. Here's what the reporting currently looks like:
| |
| |         # ./run spark.examples.ExceptionHandlingTest master@203.0.113.1:5050
| |         [...]
| |         11/10/26 21:04:13 INFO spark.SimpleJob: Lost TID 1 (task 0:1)
| |         11/10/26 21:04:13 INFO spark.SimpleJob: Loss was due to java.lang.Exception: Testing exception handling
| |         [...]
| |         11/10/26 21:04:16 INFO spark.SparkContext: Job finished in 5.988547328 s
* / Fixed LocalFileLR to deal with a change in Scala IO sources  (Matei Zaharia, 2011-12-01; 1 file, -1/+1)
|/
|     (you can no longer iterate over a Source multiple times).
* K-means example  (Matei Zaharia, 2011-11-01; 2 files, -3/+86)