Commit message | Author | Age | Files | Lines | |
---|---|---|---|---|---|
* | Merge branch 'master' into removesemicolonscala | Henry Saputra | 2013-11-19 | 2 | -11/+14 |
|\ | |||||
| * | Enable the Broadcast examples to work in a cluster setting | Aaron Davidson | 2013-11-18 | 2 | -11/+14 |
| | | | | | | | | | | Since they rely on println to display results, we need to first collect those results to the driver to have them actually display locally. | ||||
* | | Remove the semicolons at the end of Scala code to make it more pure Scala code. | Henry Saputra | 2013-11-19 | 4 | -5/+5 |
|/ | | | | | | Also remove unused imports as I found them along the way. Remove return statements when returning a value in the Scala code. Compiles and passes tests. | ||||
* | fix sparkhdfs lr test | tgravescs | 2013-10-29 | 1 | -1/+2 |
| | |||||
* | Makes Spark SIMR ready. | Ali Ghodsi | 2013-10-24 | 1 | -1/+1 |
| | |||||
* | Merge pull request #64 from prabeesh/master | Matei Zaharia | 2013-10-23 | 1 | -0/+107 |
|\ | | MQTT Adapter for Spark Streaming. MQTT is a machine-to-machine (M2M)/Internet of Things connectivity protocol, designed as an extremely lightweight publish/subscribe messaging transport; you can read more about it at http://mqtt.org/. Message Queue Telemetry Transport (MQTT) is an open message protocol for M2M communications. It enables the transfer of telemetry-style data, in the form of messages, from devices such as sensors and actuators to mobile phones, embedded systems on vehicles, or laptops and full-scale computers. The protocol was invented by Andy Stanford-Clark of IBM and Arlen Nipper of Cirrus Link Solutions. It enables a publish/subscribe messaging model in an extremely lightweight way, which makes it useful for connections with remote locations where code size and network bandwidth are constrained. MQTT is one of the most widely used protocols for the 'Internet of Things', and it is gaining traction as more and more devices are connected to the internet and produce data. Researchers and companies predict some 25 billion devices will be connected to the internet by 2015. Plugins/support for MQTT are available in popular message queues such as RabbitMQ and ActiveMQ. Support for MQTT in Spark will help people with Internet of Things (IoT) projects use Spark Streaming for their real-time data processing needs (from sensors and other embedded devices, etc.). | ||||
| * | Update MQTTWordCount.scala | Prabeesh K | 2013-10-22 | 1 | -6/+1 |
| | | |||||
| * | Update MQTTWordCount.scala | Prabeesh K | 2013-10-22 | 1 | -3/+4 |
| | | |||||
| * | Update MQTTWordCount.scala | Prabeesh K | 2013-10-18 | 1 | -15/+14 |
| | | |||||
| * | added mqtt adapter wordcount example | prabeesh | 2013-10-16 | 1 | -0/+112 |
| | | |||||
* | | Merge pull request #56 from jerryshao/kafka-0.8-dev | Matei Zaharia | 2013-10-21 | 1 | -13/+15 |
|\ \ | | Upgrade Kafka 0.7.2 to Kafka 0.8.0-beta1 for Spark Streaming. Conflicts: streaming/pom.xml | ||||
| * | | Upgrade Kafka 0.7.2 to Kafka 0.8.0-beta1 for Spark Streaming | jerryshao | 2013-10-12 | 1 | -13/+15 |
| | | | |||||
* | | | BroadcastTest2 --> BroadcastTest | Mosharaf Chowdhury | 2013-10-16 | 2 | -62/+12 |
| | | | |||||
* | | | Default blockSize is 4MB. | Mosharaf Chowdhury | 2013-10-16 | 1 | -0/+59 |
| | | | | | | | | | | | | BroadcastTest2 example added for testing broadcasts. | ||||
* | | | Fixing spark streaming example and a bug in examples build. | Patrick Wendell | 2013-10-15 | 1 | -4/+9 |
|/ / | | | | | | | | | | | - Examples assembly included a log4j.properties which clobbered Spark's - Example had an error where some classes weren't serializable - Did some other clean-up in this example | ||||
* / | Remove unnecessary mutable imports | Neal Wiggins | 2013-10-11 | 1 | -2/+0 |
|/ | |||||
* | Add missing license headers found with RAT | Matei Zaharia | 2013-09-02 | 1 | -0/+17 |
| | |||||
* | Move some classes to more appropriate packages: | Matei Zaharia | 2013-09-01 | 5 | -13/+11 |
| | | | | | | * RDD, *RDDFunctions -> org.apache.spark.rdd * Utils, ClosureCleaner, SizeEstimator -> org.apache.spark.util * JavaSerializer, KryoSerializer -> org.apache.spark.serializer | ||||
* | Initial work to rename package to org.apache.spark | Matei Zaharia | 2013-09-01 | 39 | -129/+129 |
| | |||||
* | Fix finding of assembly JAR, as well as some pointers to ./run | Matei Zaharia | 2013-08-29 | 8 | -13/+13 |
| | |||||
* | Change build and run instructions to use assemblies | Matei Zaharia | 2013-08-29 | 3 | -0/+447 |
| | | | | | | | | | | | | | | | | This commit makes Spark invocation saner by using an assembly JAR to find all of Spark's dependencies instead of adding all the JARs in lib_managed. It also packages the examples into an assembly and uses that as SPARK_EXAMPLES_JAR. Finally, it replaces the old "run" script with two better-named scripts: "run-examples" for examples, and "spark-class" for Spark internal classes (e.g. REPL, master, etc). This is also designed to minimize the confusion people have in trying to use "run" to run their own classes; it's not meant to do that, but now at least if they look at it, they can modify run-examples to do a decent job for them. As part of this, Bagel's examples are also now properly moved to the examples package instead of bagel. | ||||
* | make SparkHadoopUtil a member of SparkEnv | Jey Kottalam | 2013-08-15 | 1 | -2/+1 |
| | |||||
* | Optimize Scala PageRank to use reduceByKey | Matei Zaharia | 2013-08-10 | 1 | -8/+4 |
| | |||||
* | Style changes as per Matei's comments | Nick Pentreath | 2013-08-08 | 1 | -9/+8 |
| | |||||
* | Adding Scala version of PageRank example | Nick Pentreath | 2013-08-07 | 1 | -0/+51 |
| | |||||
* | Add Apache license headers and LICENSE and NOTICE files | Matei Zaharia | 2013-07-16 | 35 | -1/+596 |
| | |||||
* | Merge pull request #577 from skumargithub/master | Matei Zaharia | 2013-06-29 | 1 | -0/+50 |
|\ | | | | | Example of cumulative counting using updateStateByKey | ||||
| * | Removed unused code, clarified intent of the program, set batch size to 1 second | unknown | 2013-05-06 | 1 | -5/+3 |
| | | |||||
| * | Modified as per TD's suggestions | unknown | 2013-04-30 | 1 | -17/+6 |
| | | |||||
| * | Example of cumulative counting using updateStateByKey | unknown | 2013-04-22 | 1 | -0/+63 |
| | | |||||
* | | Merge remote-tracking branch 'mrpotes/master' | Matei Zaharia | 2013-06-29 | 3 | -15/+12 |
|\ \ | |||||
| * | | Fix usage and parameter extraction | James Phillpotts | 2013-06-25 | 3 | -12/+9 |
| | | | |||||
| * | | Include a default OAuth implementation, and update examples and JavaStreamingContext | James Phillpotts | 2013-06-25 | 3 | -3/+3 |
| | | |||||
* | | | Merge branch 'master' into streaming | Tathagata Das | 2013-06-24 | 31 | -155/+441 |
|\| | | | | | | | | | | | | | | Conflicts: .gitignore | ||||
| * | | Merge remote-tracking branch 'milliondreams/casdemo' | Matei Zaharia | 2013-06-18 | 1 | -0/+196 |
| |\ \ | | | | | | | | | | | | | | | | | Conflicts: project/SparkBuild.scala | ||||
| | * | | Fixing the style as per feedback | Rohit Rai | 2013-06-13 | 1 | -35/+37 |
| | | | | |||||
| | * | | Example to write the output to cassandra | Rohit Rai | 2013-06-03 | 1 | -5/+43 |
| | | | | |||||
| | * | | A better way to read a column value if you are sure the column exists in every row. | Rohit Rai | 2013-06-03 | 1 | -2/+4 |
| | | | |||||
| | * | | Removing infix call | Rohit Rai | 2013-06-02 | 1 | -3/+3 |
| | | | | |||||
| | * | | Adding example to make Spark RDD from Cassandra | Rohit Rai | 2013-06-01 | 1 | -0/+154 |
| | | | | |||||
| * | | | Add hBase example | Ethan Jewett | 2013-05-09 | 1 | -0/+35 |
| | | | | |||||
| * | | | Revert "Merge pull request #596 from esjewett/master" because the | Reynold Xin | 2013-05-09 | 1 | -35/+0 |
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | dependency on hbase introduces netty-3.2.2 which conflicts with netty-3.5.3 already in Spark. This caused multiple test failures. This reverts commit 0f1b7a06e1f6782711170234f105f1b277e3b04c, reversing changes made to aacca1b8a85bd073ce185a06d6470b070761b2f4. | ||||
| * | | | Switch to using SparkContext method to create RDD | Ethan Jewett | 2013-05-07 | 1 | -2/+2 |
| | | | | |||||
| * | | | Fix indents and mention other configuration options | Ethan Jewett | 2013-05-04 | 1 | -2/+5 |
| | | | | |||||
| * | | | Remove unnecessary column family config | Ethan Jewett | 2013-05-04 | 1 | -4/+2 |
| | | | | |||||
| * | | | HBase example | Ethan Jewett | 2013-05-04 | 1 | -0/+34 |
| |/ / | |||||
| * | | Attempt at fixing merge conflict | Mridul Muralidharan | 2013-04-24 | 5 | -77/+77 |
| |\| | |||||
| | * | Uniform whitespace across scala examples | Andrew Ash | 2013-04-09 | 4 | -76/+76 |
| | | | |||||
| | * | Corrected order of CountMinSketchMonoid arguments | Erik van oosten | 2013-04-02 | 1 | -1/+1 |
| | | | |||||
| * | | Fix review comments, add a new API to SparkHadoopUtil to create an appropriate Configuration. Modify an example to show how to use SplitInfo | Mridul Muralidharan | 2013-04-22 | 1 | -2/+8 |
|/ | |||||