author     Patrick Wendell <pwendell@gmail.com>  2013-10-23 22:13:49 -0700
committer  Patrick Wendell <pwendell@gmail.com>  2013-10-24 14:31:33 -0700
commit     08c1a42d7d9edef02a24a3bc5045b2dce035a93b (patch)
tree       19c5f8e3be71eac820320aa84b7e89df27ef26a7 /docs
parent     1dc776b863663af713920d18cecaf57762c2fd77 (diff)
Add a `repartition` operator.
This patch adds an operator called repartition with more straightforward
semantics than the current `coalesce` operator. There are a few use cases
where this operator is useful:
1. If a user wants to increase the number of partitions in the RDD. This
is more common now with streaming. E.g. a user is ingesting data on one
node but they want to add more partitions to ensure parallelism of
subsequent operations across threads or the cluster.
Right now they have to call rdd.coalesce(numSplits, shuffle = true), which is
confusing.
2. If a user has input data where the number of partitions is not known. E.g.
> sc.textFile("some file").coalesce(50)....
This is semantically vague (am I growing or shrinking this RDD?) and it also
may not work correctly if the base RDD has fewer than 50 partitions.
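The second pitfall can be sketched with a small self-contained simulation. This is plain Python, not Spark code; the `coalesce` function and its list-of-lists partition representation are illustrative stand-ins for the operator's shuffle-less behavior:

```python
def coalesce(partitions, num_partitions):
    """Simulate coalesce WITHOUT a shuffle: it can only merge existing
    partitions, so the output is silently capped at the input partition
    count -- it can never grow the RDD. (Illustrative sketch only.)"""
    n = min(num_partitions, len(partitions))
    merged = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        merged[i % n].extend(part)
    return merged

# Asking 2 input partitions for 50: the result silently stays at 2.
result = coalesce([[1, 2], [3, 4]], 50)
assert len(result) == 2
```

This is exactly the "silently not-working" case: the caller asked for 50 partitions and got 2, with no error.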
The new operator forces a shuffle every time, so it will always produce exactly
the requested number of partitions. It also throws an exception rather than
silently doing nothing if a bad input is passed.
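The intended semantics can be sketched as a plain-Python simulation. The `repartition` function below is a hypothetical illustration of the behavior described above (always shuffle, exact output count, fail loudly on bad input), not Spark's implementation:

```python
def repartition(partitions, num_partitions):
    """Simulate repartition: a full shuffle that redistributes every
    element across exactly num_partitions output partitions, regardless
    of how many input partitions there were. (Illustrative sketch only.)"""
    if num_partitions <= 0:
        raise ValueError(
            "Number of partitions (%d) must be positive" % num_partitions)
    out = [[] for _ in range(num_partitions)]
    for part in partitions:
        for elem in part:
            # Hash partitioning: each element is reassigned independently
            # of which input partition it came from.
            out[hash(elem) % num_partitions].append(elem)
    return out

# Growing 2 partitions to 5 works, which a shuffle-less coalesce cannot do.
grown = repartition([[1, 2, 3], [4, 5, 6]], 5)
assert len(grown) == 5
assert sorted(x for p in grown for x in p) == [1, 2, 3, 4, 5, 6]
```

Unlike the coalesce path, a nonsensical argument raises immediately instead of being silently ignored.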
I am currently adding streaming tests (requires refactoring some of the test
suite to allow testing at partition granularity), so this is not ready for
merge yet. But feedback is welcome.
Diffstat (limited to 'docs')
-rw-r--r--  docs/streaming-programming-guide.md | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 835b257238..851e30fe76 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -73,6 +73,10 @@ DStreams support many of the transformations available on normal Spark RDD's:
   Iterator[T] => Iterator[U] when running on an DStream of type T. </td>
 </tr>
 <tr>
+  <td> <b>repartition</b>(<i>numPartitions</i>) </td>
+  <td> Changes the level of parallelism in this DStream by creating more or fewer partitions. </td>
+</tr>
+<tr>
   <td> <b>union</b>(<i>otherStream</i>) </td>
   <td> Return a new DStream that contains the union of the elements in the source DStream and the argument DStream. </td>
 </tr>