path: root/docs/streaming-programming-guide.md
Commit message | Author | Age | Files | Lines
* Updated docs for SparkConf and handled review comments (Matei Zaharia, 2013-12-30, 1 file, -2/+2)
|
* Various broken links in documentation (Patrick Wendell, 2013-12-07, 1 file, -4/+4)
|
* Add a `repartition` operator. (Patrick Wendell, 2013-10-24, 1 file, -0/+4)
|   This patch adds an operator called `repartition` with more straightforward semantics than the current `coalesce` operator. There are a few use cases where this operator is useful:
|
|   1. If a user wants to increase the number of partitions in the RDD. This is more common now with streaming, e.g. a user is ingesting data on one node but wants to add more partitions to ensure parallelism of subsequent operations across threads or the cluster. Right now they have to call rdd.coalesce(numSplits, shuffle = true), which is super confusing.
|   2. If a user has input data where the number of partitions is not known, e.g. sc.textFile("some file").coalesce(50). This is semantically vague (am I growing or shrinking this RDD?) and may not work correctly if the base RDD has fewer than 50 partitions.
|
|   The new operator forces a shuffle every time, so it will always produce exactly the requested number of partitions. It also throws an exception, rather than silently not working, if a bad input is passed. I am currently adding streaming tests (this requires refactoring some of the test suite to allow testing at partition granularity), so this is not ready for merge yet, but feedback is welcome.
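The two use cases in the commit message above can be sketched as follows. This is a hedged illustration, not code from the commit; it assumes a running `SparkContext` named `sc` and an input file path that is purely a placeholder.

```scala
// Assumes an existing SparkContext `sc`; the file path is a placeholder.
val rdd = sc.textFile("some file")  // number of partitions unknown up front

// Old, confusing way to grow the number of partitions:
val grown = rdd.coalesce(50, shuffle = true)

// New operator with straightforward semantics: always shuffles, and always
// yields exactly the requested number of partitions.
val repartitioned = rdd.repartition(50)
```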
* Add docs for standalone scheduler fault tolerance (Aaron Davidson, 2013-10-08, 1 file, -3/+2)
|   Also fixes a couple of HTML/Markdown issues in other files.
* More fixes (Matei Zaharia, 2013-09-01, 1 file, -2/+10)
|
* Fix more URLs in docs (Matei Zaharia, 2013-09-01, 1 file, -3/+3)
|
* Update docs for new package (Matei Zaharia, 2013-09-01, 1 file, -14/+14)
|
* Change build and run instructions to use assemblies (Matei Zaharia, 2013-08-29, 1 file, -2/+2)
|   This commit makes Spark invocation saner by using an assembly JAR to find all of Spark's dependencies, instead of adding all the JARs in lib_managed. It also packages the examples into an assembly and uses that as SPARK_EXAMPLES_JAR.
|
|   Finally, it replaces the old "run" script with two better-named scripts: "run-examples" for examples, and "spark-class" for Spark internal classes (e.g. the REPL, master, etc.). This is also designed to minimize the confusion people have when trying to use "run" to run their own classes; it is not meant to do that, but now at least if they look at it, they can modify run-examples to do a decent job for them.
|
|   As part of this, Bagel's examples are now properly moved to the examples package instead of bagel.
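As a rough illustration of the new invocation style described above: the script names come from the commit message, but the example class names and the master URL are assumptions, not taken from the commit.

```shell
# Launch a bundled example via the new wrapper script (previously done
# with the generic "run" script). Class name and master URL are assumed.
./run-examples spark.examples.SparkPi local

# Launch a Spark internal class, e.g. the standalone master (class name assumed):
./spark-class spark.deploy.master.Master
```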
* Linking custom receiver guide (Prashant Sharma, 2013-08-23, 1 file, -0/+3)
|
* Fixes typos in Spark Streaming Programming Guide (Andy Konwinski, 2013-07-12, 1 file, -2/+2)
|   These typos were reported on the spark-users mailing list; see: https://groups.google.com/d/msg/spark-users/SyLGgJlKCrI/LpeBypOkSMUJ
* Typos: cluser -> cluster (Andrew Ash, 2013-04-10, 1 file, -2/+2)
|
* More doc tweaks (Matei Zaharia, 2013-02-26, 1 file, -0/+1)
|
* Merge pull request #500 from pwendell/streaming-docs (Tathagata Das, 2013-02-25, 1 file, -2/+2)
|\
| |   Minor changes based on feedback
| * meta-data (Patrick Wendell, 2013-02-25, 1 file, -1/+1)
| |
| * One more change done with TD (Patrick Wendell, 2013-02-25, 1 file, -1/+1)
| |
| * Minor changes based on feedback (Patrick Wendell, 2013-02-25, 1 file, -2/+2)
| |
* | Merge branch 'master' of github.com:mesos/spark (Matei Zaharia, 2013-02-25, 1 file, -4/+6)
|\|
| * Some changes to streaming failure docs. (Patrick Wendell, 2013-02-25, 1 file, -4/+6)
| |   TD gave me the go-ahead to just make these changes:
| |   - Define stateful DStream
| |   - Some minor wording fixes
* | Allow passing sparkHome and JARs to the StreamingContext constructor (Matei Zaharia, 2013-02-25, 1 file, -7/+3)
| |   Also warns if spark.cleaner.ttl is not set in the version where you pass your own SparkContext.
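A minimal sketch of what this change enables. The exact parameter order of the constructor overload, the import path, and all values shown are assumptions based on the commit message, not taken from the commit itself.

```scala
import spark.streaming.{Seconds, StreamingContext}

// Hypothetical sketch: pass sparkHome and a JAR list directly to the
// constructor instead of constructing a SparkContext yourself.
// All values below are placeholders.
val ssc = new StreamingContext(
  "spark://master:7077",        // master URL (assumed)
  "MyStreamingApp",             // app name (assumed)
  Seconds(1),                   // batch duration
  "/opt/spark",                 // sparkHome (assumed path)
  Seq("target/my-app.jar"))     // JARs to ship to workers (assumed)
```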
* | Some tweaks to docs (Matei Zaharia, 2013-02-25, 1 file, -5/+5)
|/
* Fixed class paths and dependencies based on Matei's comments. (Tathagata Das, 2013-02-24, 1 file, -3/+3)
|
* Updated streaming programming guide with Java API info, and comments from Patrick. (Tathagata Das, 2013-02-23, 1 file, -11/+74)
|
* Change spark.cleaner.delay to spark.cleaner.ttl. Updated docs. (Tathagata Das, 2013-02-23, 1 file, -1/+1)
|
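Under the renamed property, configuration in that era's system-property style might look like this; the value and the use of `System.setProperty` are illustrative assumptions, not part of the commit.

```scala
// Property renamed by this commit (old name: spark.cleaner.delay).
// Set as a Java system property, the pre-SparkConf configuration style;
// the TTL value (in seconds) is illustrative.
System.setProperty("spark.cleaner.ttl", "3600")
```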
* Changed networkStream to socketStream, and pluggableNetworkStream to become networkStream, as a way to create streams from an arbitrary network receiver. (Tathagata Das, 2013-02-18, 1 file, -5/+5)
|
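Under the renamed API, creating a stream from a TCP socket might look like the sketch below. It assumes an existing `StreamingContext` named `ssc`; the host, port, and the text-oriented convenience variant `socketTextStream` are assumptions for illustration.

```scala
// Sketch of socket-based stream creation after the rename; assumes an
// existing StreamingContext `ssc`. Host and port are placeholders.
val lines = ssc.socketTextStream("localhost", 9999)
lines.print()  // dump the first few elements of each batch to stdout
```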
* Added checkpointing and fault-tolerance semantics to the programming guide. (Tathagata Das, 2013-02-18, 1 file, -52/+194)
|   Fixed the default checkpoint interval to be a multiple of the slide duration. Fixed the visibility of some classes and objects to clean up the docs.
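A brief sketch of enabling checkpointing as covered by this guide update; it assumes an existing `StreamingContext` named `ssc`, and the HDFS path is a placeholder.

```scala
// Enable checkpointing for fault tolerance; the directory is a placeholder.
// Per this commit, the checkpoint interval of a stateful stream should be
// a multiple of its slide duration.
ssc.checkpoint("hdfs://namenode:9000/spark/checkpoints")
```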
* Added documentation for PairDStreamFunctions. (Tathagata Das, 2013-01-13, 1 file, -20/+25)
|
* Renamed examples and added documentation. (Tathagata Das, 2013-01-07, 1 file, -7/+7)
|
* Updated Streaming Programming Guide. (Tathagata Das, 2013-01-01, 1 file, -13/+154)
|
* Improved jekyll and scala docs. Made many classes and methods private to remove them from the scala docs. (Tathagata Das, 2012-12-29, 1 file, -26/+30)
|
* Streaming programming guide. STREAMING-2 #resolve (Patrick Wendell, 2012-11-13, 1 file, -0/+163)