path: root/streaming
Commit message · Author · Age · Files · Lines
* Merge remote-tracking branch 'jey/bump-development-version-to-0.8.0' (Matei Zaharia, 2013-04-08; 1 file, -1/+1)
|\
|   Conflicts:
|     docs/_config.yml
|     project/SparkBuild.scala
| * Bump development version to 0.8.0 (Jey Kottalam, 2013-03-28; 1 file, -1/+1)
| |
* | Move streaming test initialization into 'before' blocks (Jey Kottalam, 2013-03-28; 2 files, -4/+8)
|/
* method first in trait IterableLike is deprecated: use `head' instead (Holden Karau, 2013-03-24; 1 file, -1/+1)
|
* Add a log4j compile dependency to fix build in IntelliJ (Mikhail Bautin, 2013-03-15; 1 file, -1/+1)
|   Also rename parent project to spark-parent (otherwise it shows up as "parent" in IntelliJ, which is very confusing).
* Forgot equals. (Stephen Haberman, 2013-03-12; 1 file, -1/+1)
|
* More quickly call close in HadoopRDD. (Stephen Haberman, 2013-03-11; 1 file, -35/+9)
|   This also refactors out the common "gotNext" iterator pattern into a shared utility class.
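The "gotNext" pattern mentioned in the commit above centralizes the hasNext/next bookkeeping that every element-at-a-time iterator otherwise duplicates. A minimal standalone sketch follows; the names are illustrative, loosely modeled on what became Spark's NextIterator utility, not a copy of it.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Subclasses implement getNext() and set `finished` when the underlying
// source is exhausted; the shared base class handles the state machine.
public abstract class GotNextIterator<T> implements Iterator<T> {
    private boolean gotNext = false;
    private T nextValue;
    protected boolean finished = false;

    // Produce the next value, or set finished = true when there is none.
    protected abstract T getNext();

    @Override
    public boolean hasNext() {
        if (!finished && !gotNext) {
            nextValue = getNext();
            gotNext = true;
        }
        return !finished;
    }

    @Override
    public T next() {
        if (!hasNext()) throw new NoSuchElementException("End of stream");
        gotNext = false;
        return nextValue;
    }

    public static void main(String[] args) {
        // A trivial source that yields 1, 2, 3 and then finishes.
        Iterator<Integer> it = new GotNextIterator<Integer>() {
            private int i = 0;
            protected Integer getNext() {
                i++;
                if (i > 3) { finished = true; return null; }
                return i;
            }
        };
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) sb.append(it.next()).append(" ");
        System.out.println(sb.toString().trim());  // 1 2 3
    }
}
```

Centralizing the lookahead also gives one natural place to call close() on the underlying resource as soon as the source reports exhaustion, which is the "more quickly call close" part of the commit.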
* Instead of failing to bind to a fixed, already-in-use port, let the OS choose an available port for TestServer. (Mark Hamstra, 2013-03-01; 1 file, -8/+10)
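The standard way to let the OS choose, as the commit above does for TestServer, is to bind to port 0 and then ask the socket which port it actually got. A small self-contained illustration:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Binding to port 0 asks the OS for any free ephemeral port, which avoids
// the "address already in use" failures a hard-coded test port can hit
// when tests run concurrently or a previous run left the port occupied.
public class EphemeralPort {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // 0 = OS picks
            int port = server.getLocalPort(); // the port actually assigned
            System.out.println(port > 0 ? "bound to a free port" : "bind failed");
        }
    }
}
```

The test then passes the discovered port to whatever client code needs it, instead of both sides agreeing on a constant.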
* bump version to 0.7.1-SNAPSHOT in the subproject poms to keep the maven build building. (Mark Hamstra, 2013-02-28; 1 file, -1/+1)
* Merge pull request #503 from pwendell/bug-fix (Matei Zaharia, 2013-02-25; 1 file, -1/+1)
|\
|   createNewSparkContext should use sparkHome/jars/environment.
| * createNewSparkContext should use sparkHome/jars/environment. (Patrick Wendell, 2013-02-25; 1 file, -1/+1)
| |   This fixes a bug introduced by Matei's recent change.
* | Pass a code JAR to SparkContext in our examples. Fixes SPARK-594. (Matei Zaharia, 2013-02-25; 1 file, -0/+17)
|/
* Changed Flume test to use the same port as other tests, so that it can be controlled centrally. (Tathagata Das, 2013-02-25; 1 file, -2/+2)
* Fixed something that was reported as a compile error in ScalaDoc. (Matei Zaharia, 2013-02-25; 2 files, -3/+3)
|   For some reason, ScalaDoc complained about no such constructor for StreamingContext; it doesn't seem like an actual Scala error, but it prevented sbt publish from working because docs weren't built.
* Allow passing sparkHome and JARs to StreamingContext constructor (Matei Zaharia, 2013-02-25; 4 files, -10/+68)
|   Also warns if spark.cleaner.ttl is not set in the version where you pass your own SparkContext.
* Fixed class paths and dependencies based on Matei's comments. (Tathagata Das, 2013-02-24; 1 file, -11/+0)
|
* Added back the initial spark job before starting streaming receivers (Tathagata Das, 2013-02-24; 1 file, -1/+1)
|
* Fixed missing dependencies in streaming/pom.xml (Tathagata Das, 2013-02-24; 1 file, -4/+14)
|
* Updated streaming programming guide with Java API info, and comments from Patrick. (Tathagata Das, 2013-02-23; 2 files, -3/+2)
* Fixed differences in APIs of StreamingContext and JavaStreamingContext. (Tathagata Das, 2013-02-23; 4 files, -25/+231)
|   Changed rawNetworkStream to rawSocketStream, and added twitter, actor, and zeroMQ streams to JavaStreamingContext. Also added them to JavaAPISuite.
* Merge branch 'mesos-streaming' into streaming (Tathagata Das, 2013-02-22; 3 files, -2/+66)
|\
| * Merge pull request #480 from MLnick/streaming-eg-algebird (Tathagata Das, 2013-02-22; 1 file, -0/+10)
| |\
| |   [Streaming] Examples using Twitter's Algebird library
| | * Merge remote-tracking branch 'upstream/streaming' into streaming-eg-algebird (Nick Pentreath, 2013-02-21; 11 files, -43/+306)
| | |\
| | * | Dependencies and refactoring for streaming HLL example, and using context.twitterStream method (Nick Pentreath, 2013-02-19; 1 file, -0/+10)
| | | |
| * | | Merge pull request #479 from ScrapCodes/zeromq-streaming (Tathagata Das, 2013-02-22; 2 files, -2/+56)
| |\
| |   Zeromq streaming
| | * | fixes corresponding to review feedback at pull request #479 (Prashant Sharma, 2013-02-20; 1 file, -1/+1)
| | | |
| | * | example for demonstrating ZeroMQ stream (Prashant Sharma, 2013-02-19; 1 file, -8/+7)
| | | |
| | * | ZeroMQ stream as receiver (Prashant Sharma, 2013-02-19; 2 files, -0/+55)
| | |/
* | / Fixed condition in InputDStream isTimeValid. (Tathagata Das, 2013-02-21; 1 file, -1/+1)
|/ /
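For context on the isTimeValid fix above: a DStream only accepts batch times that fall after the stream's start time and land exactly on a slide boundary. A hedged sketch of that kind of check follows; the names and exact condition are illustrative, not Spark's InputDStream code.

```java
// A batch time is treated as valid only if it is strictly after the
// stream's zero time and aligned to the slide duration. All values are
// milliseconds; the method name and condition are illustrative only.
public class TimeValidity {
    static boolean isTimeValid(long timeMs, long zeroTimeMs, long slideMs) {
        return timeMs > zeroTimeMs && (timeMs - zeroTimeMs) % slideMs == 0;
    }

    public static void main(String[] args) {
        System.out.println(isTimeValid(3000L, 0L, 1000L)); // true: on a slide boundary
        System.out.println(isTimeValid(3500L, 0L, 1000L)); // false: between boundaries
        System.out.println(isTimeValid(0L, 0L, 1000L));    // false: not after zero time
    }
}
```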
* | Merge branch 'mesos-streaming' into streaming (Tathagata Das, 2013-02-20; 6 files, -49/+102)
|\ \
|   Conflicts:
|     streaming/src/test/java/spark/streaming/JavaAPISuite.java
| * | Small changes that were missing in merge (Patrick Wendell, 2013-02-19; 2 files, -0/+2)
| | |
| * | Use RDD type for `slice` operator in Java. (Patrick Wendell, 2013-02-19; 1 file, -2/+2)
| |   This commit uses the RDD type in `slice`, making it available to both normal and pair RDDs in Java. It also updates the signature for `slice` to match changes in the Scala API.
| * | Use RDD type for `transform` operator in Java. (Patrick Wendell, 2013-02-19; 2 files, -7/+122)
| |   This is an improved implementation of the `transform` operator in Java. The main difference is that this allows all four possible types of transform functions:
| |     1. JavaRDD -> JavaRDD
| |     2. JavaRDD -> JavaPairRDD
| |     3. JavaPairRDD -> JavaPairRDD
| |     4. JavaPairRDD -> JavaRDD
| |   whereas previously only (1) and (3) were possible.
| |   Conflicts:
| |     streaming/src/test/java/spark/streaming/JavaAPISuite.java
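The four shapes the commit body lists can be shown with a toy stand-in: a transform that hands the whole batch to an arbitrary function, so the function is free to change between "normal" and "pair" element types. `Pair` and `transform()` below are hypothetical illustrations, not Spark's Java API.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class TransformShapes {
    // A "pair" element, standing in for a (key, value) record.
    static class Pair<K, V> {
        final K key; final V value;
        Pair(K key, V value) { this.key = key; this.value = value; }
    }

    // Toy transform: apply an arbitrary batch-to-batch function, so the
    // output element type U is unconstrained by the input type T.
    static <T, U> List<U> transform(List<T> batch, Function<List<T>, List<U>> f) {
        return f.apply(batch);
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("b", "a", "b");
        // 1. normal -> normal
        List<String> upper = transform(words,
            b -> b.stream().map(String::toUpperCase).collect(Collectors.toList()));
        // 2. normal -> pair
        List<Pair<String, Integer>> ones = transform(words,
            b -> b.stream().map(w -> new Pair<>(w, 1)).collect(Collectors.toList()));
        // 3. pair -> pair: swap key and value
        List<Pair<Integer, String>> swapped = transform(ones,
            b -> b.stream().map(p -> new Pair<>(p.value, p.key)).collect(Collectors.toList()));
        // 4. pair -> normal: keep just the keys
        List<String> keys = transform(ones,
            b -> b.stream().map(p -> p.key).collect(Collectors.toList()));
        System.out.println(upper);          // [B, A, B]
        System.out.println(keys);           // [b, a, b]
        System.out.println(swapped.size()); // 3
    }
}
```

Supporting all four shapes is what requires the RDD type in the Java signature: with only the concrete JavaRDD/JavaPairRDD wrappers, shapes (2) and (4) cannot be expressed by one overload.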
| * | Use RDD type for `foreach` operator in Java. (Patrick Wendell, 2013-02-19; 6 files, -11/+21)
| |/
* | Merge branch 'mesos-master' into streaming (Tathagata Das, 2013-02-20; 9 files, -24/+234)
|\
|   Conflicts:
|     core/src/main/scala/spark/rdd/CheckpointRDD.scala
|     streaming/src/main/scala/spark/streaming/dstream/ReducedWindowedDStream.scala
| * Rename "jobs" to "applications" in the standalone cluster (Matei Zaharia, 2013-02-17; 3 files, -9/+9)
| |
| * Make CoGroupedRDDs explicitly have the same key type. (Stephen Haberman, 2013-02-16; 3 files, -3/+3)
| |
| * STREAMING-50: Support transform workaround in JavaPairDStream (Patrick Wendell, 2013-02-12; 2 files, -2/+77)
| |   This ports a useful workaround (the `transform` function) to JavaPairDStream. It is necessary to do things like sorting, which are not supported yet in the core streaming API.
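The workaround above relies on the same idea: when a streaming API lacks an operator such as sort, a transform-style hook that exposes the underlying batch lets callers do it with ordinary collection code. A standalone sketch with plain JDK types (not Spark's JavaPairDStream):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// A batch of (key, value) records; a transform hook would hand us this
// list directly, so sorting by key needs no dedicated streaming operator.
public class SortViaTransform {
    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> batch = new ArrayList<>(Arrays.asList(
            new SimpleEntry<>("b", 2),
            new SimpleEntry<>("c", 3),
            new SimpleEntry<>("a", 1)));
        // Ordinary collection code replaces the missing sort primitive.
        batch.sort(Map.Entry.comparingByKey());
        System.out.println(batch);  // [a=1, b=2, c=3]
    }
}
```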
| * Using tuple swap() (Patrick Wendell, 2013-02-11; 1 file, -2/+2)
| |
| * small fix (Patrick Wendell, 2013-02-11; 1 file, -2/+2)
| |
| * Fix for MapPartitions (Patrick Wendell, 2013-02-11; 2 files, -17/+54)
| |
| * Fix for flatmap (Patrick Wendell, 2013-02-11; 2 files, -2/+44)
| |
| * Indentation fix (Patrick Wendell, 2013-02-11; 1 file, -10/+10)
| |
| * Initial cut at replacing K, V in Java files (Patrick Wendell, 2013-02-11; 2 files, -2/+58)
| |
* | Merge branch 'streaming' into ScrapCodes-streaming-actor (Tathagata Das, 2013-02-19; 46 files, -1056/+2026)
|\ \
|   Conflicts:
|     docs/plugin-custom-receiver.md
|     streaming/src/main/scala/spark/streaming/StreamingContext.scala
|     streaming/src/main/scala/spark/streaming/dstream/KafkaInputDStream.scala
|     streaming/src/main/scala/spark/streaming/dstream/PluggableInputDStream.scala
|     streaming/src/main/scala/spark/streaming/receivers/ActorReceiver.scala
|     streaming/src/test/scala/spark/streaming/InputStreamsSuite.scala
| * | Changed networkStream to socketStream and pluggableNetworkStream to become networkStream as a way to create streams from arbitrary network receiver. (Tathagata Das, 2013-02-18; 4 files, -21/+22)
| * | Merge branch 'streaming' into ScrapCode-streaming (Tathagata Das, 2013-02-18; 45 files, -1027/+2015)
| |\
| |   Conflicts:
| |     streaming/src/main/scala/spark/streaming/dstream/KafkaInputDStream.scala
| |     streaming/src/main/scala/spark/streaming/dstream/NetworkInputDStream.scala
| | * | Added checkpointing and fault-tolerance semantics to the programming guide. (Tathagata Das, 2013-02-18; 6 files, -6/+11)
| | |   Fixed the default checkpoint interval to be a multiple of the slide duration. Fixed visibility of some classes and objects to clean up docs.
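Making the default checkpoint interval a multiple of the slide duration, as the commit above does, amounts to rounding a desired interval up to the next slide boundary. A hedged sketch of that rounding rule (illustrative arithmetic, not Spark's exact code):

```java
// Round a desired checkpoint interval up to the nearest multiple of the
// slide duration, so checkpoints always land on batch boundaries.
public class CheckpointInterval {
    static long defaultInterval(long slideMs, long desiredMs) {
        long multiples = Math.max(1L, (desiredMs + slideMs - 1) / slideMs); // ceiling division
        return multiples * slideMs;
    }

    public static void main(String[] args) {
        System.out.println(defaultInterval(2000L, 10000L)); // 10000: already a multiple
        System.out.println(defaultInterval(3000L, 10000L)); // 12000: rounded up
    }
}
```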
| | * | Many changes to ensure better 2nd recovery if a 2nd failure happens while recovering from the 1st failure. (Tathagata Das, 2013-02-17; 18 files, -97/+208)
| | |   - Made the scheduler checkpoint after clearing old metadata, which ensures that a new checkpoint is written as soon as at least one batch gets computed while recovering from a failure. This ensures that if there is a 2nd failure while recovering from the 1st failure, the system starts the 2nd recovery from a newer checkpoint.
| | |   - Modified the checkpoint writer to write checkpoints in a different thread.
| | |   - Added a check to make sure that compute for InputDStreams gets called only for strictly increasing times.
| | |   - Changed the implementation of slice to call getOrCompute on the parent DStream in time-increasing order.
| | |   - Added a testcase to test slice.
| | |   - Fixed the testGroupByKeyAndWindow testcase in JavaAPISuite to verify results with expected output in an order-independent manner.
| | * | Made MasterFailureTest more robust. (Tathagata Das, 2013-02-15; 1 file, -4/+22)
| | | |