author    buzhihuojie <ren.weiluo@gmail.com>    2016-11-02 11:36:20 -0700
committer Reynold Xin <rxin@databricks.com>    2016-11-02 11:36:20 -0700
commit    742e0fea5391857964e90d396641ecf95cac4248 (patch)
tree      3fbd75cfa136e7ef9991f9431d88b99183a791f4 /sql/core
parent    4af0ce2d96de3397c9bc05684cad290a52486577 (diff)
[SPARK-17895] Improve doc for rangeBetween and rowsBetween
## What changes were proposed in this pull request?

Copied the description of row based and range based frame boundaries from
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala#L56
and added examples showing the different behavior of rangeBetween and rowsBetween when the ordering column contains duplicate values.

Author: buzhihuojie <ren.weiluo@gmail.com>

Closes #15727 from david-weiluo-ren/improveDocForRangeAndRowsBetween.
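To make the change concrete for reviewers, here is a minimal, self-contained sketch of the behavior the new docs describe. It is illustrative only and not part of the patch: it assumes a spark-shell style environment with an active SparkSession named `spark` (the in-doc examples below likewise assume `org.apache.spark.sql.functions.sum` and `spark.implicits._` are in scope):

```scala
// Illustrative sketch, not part of this commit. Assumes an active
// SparkSession named `spark`, e.g. inside spark-shell.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum
import spark.implicits._

val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
  .toDF("id", "category")

// Row frame: this row and the next row, counted by position.
val byRows  = Window.partitionBy('category).orderBy('id).rowsBetween(0, 1)
// Range frame: every row whose id lies in [current id, current id + 1].
val byRange = Window.partitionBy('category).orderBy('id).rangeBetween(0, 1)

df.withColumn("rowsSum", sum('id).over(byRows))
  .withColumn("rangeSum", sum('id).over(byRange))
  .show()
// On the duplicate ids in category "a" the two frames differ: rowsSum is
// 2 and 3 (each duplicate sees only one positional neighbor), while
// rangeSum is 4 for both (all rows with id in [1, 2] are included).
```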
Diffstat (limited to 'sql/core')
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala     | 55
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala | 55
2 files changed, 110 insertions(+), 0 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala b/sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala
index 0b26d863ca..327bc379d4 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/expressions/Window.scala
@@ -121,6 +121,32 @@ object Window {
* and [[Window.currentRow]] to specify special boundary values, rather than using integral
* values directly.
*
+ * A row based boundary is based on the position of the row within the partition.
+ * An offset indicates the number of rows above or below the current row at which the frame
+ * for the current row starts or ends. For instance, given a row based sliding frame with a
+ * lower bound offset of -1 and an upper bound offset of +2, the frame for the row at index 5
+ * would range from index 4 to index 7.
+ *
+ * {{{
+ * import org.apache.spark.sql.expressions.Window
+ * val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
+ * .toDF("id", "category")
+ * df.withColumn("sum",
+ * sum('id) over Window.partitionBy('category).orderBy('id).rowsBetween(0, 1))
+ * .show()
+ *
+ * +---+--------+---+
+ * | id|category|sum|
+ * +---+--------+---+
+ * | 1| b| 3|
+ * | 2| b| 5|
+ * | 3| b| 3|
+ * | 1| a| 2|
+ * | 1| a| 3|
+ * | 2| a| 2|
+ * +---+--------+---+
+ * }}}
+ *
* @param start boundary start, inclusive. The frame is unbounded if this is
* the minimum long value ([[Window.unboundedPreceding]]).
* @param end boundary end, inclusive. The frame is unbounded if this is the
@@ -144,6 +170,35 @@ object Window {
* and [[Window.currentRow]] to specify special boundary values, rather than using integral
* values directly.
*
+ * A range based boundary is based on the actual value of the ORDER BY
+ * expression(s). An offset is used to alter the value of the ORDER BY expression; for
+ * instance, if the current ORDER BY expression has a value of 10 and the lower bound offset
+ * is -3, the resulting lower bound for the current row will be 10 - 3 = 7. This, however,
+ * puts a number of constraints on the ORDER BY expressions: there can be only one expression,
+ * and this expression must have a numerical data type. An exception can be made when the
+ * offset is 0, because no value modification is needed; in this case multiple and non-numeric
+ * ORDER BY expressions are allowed.
+ *
+ * {{{
+ * import org.apache.spark.sql.expressions.Window
+ * val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
+ * .toDF("id", "category")
+ * df.withColumn("sum",
+ * sum('id) over Window.partitionBy('category).orderBy('id).rangeBetween(0, 1))
+ * .show()
+ *
+ * +---+--------+---+
+ * | id|category|sum|
+ * +---+--------+---+
+ * | 1| b| 3|
+ * | 2| b| 5|
+ * | 3| b| 3|
+ * | 1| a| 4|
+ * | 1| a| 4|
+ * | 2| a| 2|
+ * +---+--------+---+
+ * }}}
+ *
* @param start boundary start, inclusive. The frame is unbounded if this is
* the minimum long value ([[Window.unboundedPreceding]]).
* @param end boundary end, inclusive. The frame is unbounded if this is the
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala b/sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala
index 1e85b6e788..4a8ce695bd 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala
@@ -89,6 +89,32 @@ class WindowSpec private[sql](
* and [[Window.currentRow]] to specify special boundary values, rather than using integral
* values directly.
*
+ * A row based boundary is based on the position of the row within the partition.
+ * An offset indicates the number of rows above or below the current row at which the frame
+ * for the current row starts or ends. For instance, given a row based sliding frame with a
+ * lower bound offset of -1 and an upper bound offset of +2, the frame for the row at index 5
+ * would range from index 4 to index 7.
+ *
+ * {{{
+ * import org.apache.spark.sql.expressions.Window
+ * val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
+ * .toDF("id", "category")
+ * df.withColumn("sum",
+ * sum('id) over Window.partitionBy('category).orderBy('id).rowsBetween(0, 1))
+ * .show()
+ *
+ * +---+--------+---+
+ * | id|category|sum|
+ * +---+--------+---+
+ * | 1| b| 3|
+ * | 2| b| 5|
+ * | 3| b| 3|
+ * | 1| a| 2|
+ * | 1| a| 3|
+ * | 2| a| 2|
+ * +---+--------+---+
+ * }}}
+ *
* @param start boundary start, inclusive. The frame is unbounded if this is
* the minimum long value ([[Window.unboundedPreceding]]).
* @param end boundary end, inclusive. The frame is unbounded if this is the
@@ -111,6 +137,35 @@ class WindowSpec private[sql](
* and [[Window.currentRow]] to specify special boundary values, rather than using integral
* values directly.
*
+ * A range based boundary is based on the actual value of the ORDER BY
+ * expression(s). An offset is used to alter the value of the ORDER BY expression; for
+ * instance, if the current ORDER BY expression has a value of 10 and the lower bound offset
+ * is -3, the resulting lower bound for the current row will be 10 - 3 = 7. This, however,
+ * puts a number of constraints on the ORDER BY expressions: there can be only one expression,
+ * and this expression must have a numerical data type. An exception can be made when the
+ * offset is 0, because no value modification is needed; in this case multiple and non-numeric
+ * ORDER BY expressions are allowed.
+ *
+ * {{{
+ * import org.apache.spark.sql.expressions.Window
+ * val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
+ * .toDF("id", "category")
+ * df.withColumn("sum",
+ * sum('id) over Window.partitionBy('category).orderBy('id).rangeBetween(0, 1))
+ * .show()
+ *
+ * +---+--------+---+
+ * | id|category|sum|
+ * +---+--------+---+
+ * | 1| b| 3|
+ * | 2| b| 5|
+ * | 3| b| 3|
+ * | 1| a| 4|
+ * | 1| a| 4|
+ * | 2| a| 2|
+ * +---+--------+---+
+ * }}}
+ *
* @param start boundary start, inclusive. The frame is unbounded if this is
* the minimum long value ([[Window.unboundedPreceding]]).
* @param end boundary end, inclusive. The frame is unbounded if this is the
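As a closing usage note: the @param docs above recommend the special boundary values over raw integral offsets. A minimal sketch, under the same assumptions as the earlier snippet (an active SparkSession named `spark`), of a per-category running total built from those constants:

```scala
// Illustrative sketch: a running total per category using the special
// boundary values Window.unboundedPreceding and Window.currentRow.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum
import spark.implicits._

val df = Seq((1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"), (3, "b"))
  .toDF("id", "category")

val runningTotal = Window
  .partitionBy('category)
  .orderBy('id)
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

df.withColumn("runningSum", sum('id).over(runningTotal)).show()
```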