author     hyukjinkwon <gurwls223@gmail.com>    2016-11-02 20:56:30 -0700
committer  gatorsmile <gatorsmile@gmail.com>    2016-11-02 20:56:30 -0700
commit     7eb2ca8e338e04034a662920261e028f56b07395 (patch)
tree       60ba82749182efb7bc86408985dba150bf4e1b99 /sql/core/src/test
parent     3a1bc6f4780f8384c1211b1335e7394a4a28377e (diff)
[SPARK-17963][SQL][DOCUMENTATION] Add examples (extend) in each expression and improve documentation
## What changes were proposed in this pull request?

This PR proposes to change the documentation for functions. Please refer to the discussion at https://github.com/apache/spark/pull/15513

The changes include:

- Re-indenting the documentation
- Adding examples/arguments in `extended` where the arguments are multiple or take a specific format (e.g. xml/json)

For example, the documentation was updated as below:

### Functions with single-line usage

**Before**

- `pow`

  ```sql
  Usage: pow(x1, x2) - Raise x1 to the power of x2.
  Extended Usage:
      > SELECT pow(2, 3);
       8.0
  ```

- `current_timestamp`

  ```sql
  Usage: current_timestamp() - Returns the current timestamp at the start of query evaluation.
  Extended Usage:
      No example for current_timestamp.
  ```

**After**

- `pow`

  ```sql
  Usage: pow(expr1, expr2) - Raises `expr1` to the power of `expr2`.
  Extended Usage:
      Examples:
        > SELECT pow(2, 3);
         8.0
  ```

- `current_timestamp`

  ```sql
  Usage: current_timestamp() - Returns the current timestamp at the start of query evaluation.
  Extended Usage:
      No example/argument for current_timestamp.
  ```

### Functions with (already) multi-line usage

**Before**

- `approx_count_distinct`

  ```sql
  Usage: approx_count_distinct(expr) - Returns the estimated cardinality by HyperLogLog++.
      approx_count_distinct(expr, relativeSD=0.05) - Returns the estimated cardinality by
        HyperLogLog++ with relativeSD, the maximum estimation error allowed.
  Extended Usage:
      No example for approx_count_distinct.
  ```

- `percentile_approx`

  ```sql
  Usage:
      percentile_approx(col, percentage [, accuracy]) - Returns the approximate percentile value of
        numeric column `col` at the given percentage. The value of percentage must be between 0.0
        and 1.0. The `accuracy` parameter (default: 10000) is a positive integer literal which
        controls approximation accuracy at the cost of memory. Higher value of `accuracy` yields
        better accuracy, `1.0/accuracy` is the relative error of the approximation.

      percentile_approx(col, array(percentage1 [, percentage2]...) [, accuracy]) - Returns the
        approximate percentile array of column `col` at the given percentage array. Each value of
        the percentage array must be between 0.0 and 1.0. The `accuracy` parameter (default: 10000)
        is a positive integer literal which controls approximation accuracy at the cost of memory.
        Higher value of `accuracy` yields better accuracy, `1.0/accuracy` is the relative error of
        the approximation.
  Extended Usage:
      No example for percentile_approx.
  ```

**After**

- `approx_count_distinct`

  ```sql
  Usage:
      approx_count_distinct(expr[, relativeSD]) - Returns the estimated cardinality by
        HyperLogLog++. `relativeSD` defines the maximum estimation error allowed.
  Extended Usage:
      No example/argument for approx_count_distinct.
  ```

- `percentile_approx`

  ```sql
  Usage:
      percentile_approx(col, percentage [, accuracy]) - Returns the approximate percentile value of
        numeric column `col` at the given percentage. The value of percentage must be between 0.0
        and 1.0. The `accuracy` parameter (default: 10000) is a positive numeric literal which
        controls approximation accuracy at the cost of memory. Higher value of `accuracy` yields
        better accuracy, `1.0/accuracy` is the relative error of the approximation. When
        `percentage` is an array, each value of the percentage array must be between 0.0 and 1.0.
        In this case, returns the approximate percentile array of column `col` at the given
        percentage array.
  Extended Usage:
      Examples:
        > SELECT percentile_approx(10.0, array(0.5, 0.4, 0.1), 100);
         [10.0,10.0,10.0]
        > SELECT percentile_approx(10.0, 0.5, 100);
         10.0
  ```

## How was this patch tested?

Manually tested.

**When examples are multiple**

```sql
spark-sql> describe function extended reflect;
Function: reflect
Class: org.apache.spark.sql.catalyst.expressions.CallMethodViaReflection
Usage: reflect(class, method[, arg1[, arg2 ..]]) - Calls a method with reflection.
Extended Usage:
    Examples:
      > SELECT reflect('java.util.UUID', 'randomUUID');
       c33fb387-8500-4bfa-81d2-6e0e3e930df2
      > SELECT reflect('java.util.UUID', 'fromString', 'a5cf6c42-0c85-418f-af6c-3e4e5b1328f2');
       a5cf6c42-0c85-418f-af6c-3e4e5b1328f2
```

**When `Usage` is in a single line**

```sql
spark-sql> describe function extended min;
Function: min
Class: org.apache.spark.sql.catalyst.expressions.aggregate.Min
Usage: min(expr) - Returns the minimum value of `expr`.
Extended Usage:
    No example/argument for min.
```

**When `Usage` is already in multiple lines**

```sql
spark-sql> describe function extended percentile_approx;
Function: percentile_approx
Class: org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile
Usage:
    percentile_approx(col, percentage [, accuracy]) - Returns the approximate percentile value of
      numeric column `col` at the given percentage. The value of percentage must be between 0.0
      and 1.0. The `accuracy` parameter (default: 10000) is a positive numeric literal which
      controls approximation accuracy at the cost of memory. Higher value of `accuracy` yields
      better accuracy, `1.0/accuracy` is the relative error of the approximation. When `percentage`
      is an array, each value of the percentage array must be between 0.0 and 1.0. In this case,
      returns the approximate percentile array of column `col` at the given percentage array.
Extended Usage:
    Examples:
      > SELECT percentile_approx(10.0, array(0.5, 0.4, 0.1), 100);
       [10.0,10.0,10.0]
      > SELECT percentile_approx(10.0, 0.5, 100);
       10.0
```

**When an example/argument is missing**

```sql
spark-sql> describe function extended rank;
Function: rank
Class: org.apache.spark.sql.catalyst.expressions.Rank
Usage:
    rank() - Computes the rank of a value in a group of values. The result is one plus the number
      of rows preceding or equal to the current row in the ordering of the partition. The values
      will produce gaps in the sequence.
Extended Usage:
    No example/argument for rank.
```

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #15677 from HyukjinKwon/SPARK-17963-1.
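The placeholder text above ("No example/argument for ...") is emitted whenever an expression ships no extended documentation. As a rough, self-contained sketch of that formatting rule (the object and method names here are hypothetical illustrations, not Spark's actual internals, which live in the `ExpressionInfo`/`DESCRIBE FUNCTION` machinery):

```scala
// Hypothetical sketch of the "Extended Usage" fallback described in this PR.
// `DescribeFormat` and `formatExtended` are illustrative names only.
object DescribeFormat {
  def formatExtended(name: String, extended: Option[String]): String =
    extended match {
      // An expression with documented examples/arguments: print them verbatim.
      case Some(text) if text.trim.nonEmpty =>
        s"Extended Usage:\n$text"
      // No extended docs: fall back to the placeholder line shown above.
      case _ =>
        s"Extended Usage:\n    No example/argument for $name.\n"
    }

  def main(args: Array[String]): Unit = {
    print(formatExtended("min", None))
    print(formatExtended("pow",
      Some("    Examples:\n      > SELECT pow(2, 3);\n       8.0\n")))
  }
}
```

The `No example/argument for min.` string in the sketch matches the post-patch output shown in the manual test section above.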
Diffstat (limited to 'sql/core/src/test')

-rw-r--r--  sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala               7
-rw-r--r--  sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala  22

2 files changed, 18 insertions, 11 deletions
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
index 9a3d93cf17..6b517bc70f 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
@@ -85,15 +85,16 @@ class SQLQuerySuite extends QueryTest with SharedSQLContext {
checkKeywordsExist(sql("describe function extended upper"),
"Function: upper",
"Class: org.apache.spark.sql.catalyst.expressions.Upper",
- "Usage: upper(str) - Returns str with all characters changed to uppercase",
+ "Usage: upper(str) - Returns `str` with all characters changed to uppercase",
"Extended Usage:",
+ "Examples:",
"> SELECT upper('SparkSql');",
- "'SPARKSQL'")
+ "SPARKSQL")
checkKeywordsExist(sql("describe functioN Upper"),
"Function: upper",
"Class: org.apache.spark.sql.catalyst.expressions.Upper",
- "Usage: upper(str) - Returns str with all characters changed to uppercase")
+ "Usage: upper(str) - Returns `str` with all characters changed to uppercase")
checkKeywordsNotExist(sql("describe functioN Upper"), "Extended Usage")
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
index bde3c8a42e..22d4c929bf 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
@@ -1445,34 +1445,34 @@ class DDLSuite extends QueryTest with SharedSQLContext with BeforeAndAfterEach {
sql("DESCRIBE FUNCTION log"),
Row("Class: org.apache.spark.sql.catalyst.expressions.Logarithm") ::
Row("Function: log") ::
- Row("Usage: log(b, x) - Returns the logarithm of x with base b.") :: Nil
+ Row("Usage: log(base, expr) - Returns the logarithm of `expr` with `base`.") :: Nil
)
// predicate operator
checkAnswer(
sql("DESCRIBE FUNCTION or"),
Row("Class: org.apache.spark.sql.catalyst.expressions.Or") ::
Row("Function: or") ::
- Row("Usage: a or b - Logical OR.") :: Nil
+ Row("Usage: expr1 or expr2 - Logical OR.") :: Nil
)
checkAnswer(
sql("DESCRIBE FUNCTION !"),
Row("Class: org.apache.spark.sql.catalyst.expressions.Not") ::
Row("Function: !") ::
- Row("Usage: ! a - Logical not") :: Nil
+ Row("Usage: ! expr - Logical not.") :: Nil
)
// arithmetic operators
checkAnswer(
sql("DESCRIBE FUNCTION +"),
Row("Class: org.apache.spark.sql.catalyst.expressions.Add") ::
Row("Function: +") ::
- Row("Usage: a + b - Returns a+b.") :: Nil
+ Row("Usage: expr1 + expr2 - Returns `expr1`+`expr2`.") :: Nil
)
// comparison operators
checkAnswer(
sql("DESCRIBE FUNCTION <"),
Row("Class: org.apache.spark.sql.catalyst.expressions.LessThan") ::
Row("Function: <") ::
- Row("Usage: a < b - Returns TRUE if a is less than b.") :: Nil
+ Row("Usage: expr1 < expr2 - Returns true if `expr1` is less than `expr2`.") :: Nil
)
// STRING
checkAnswer(
@@ -1480,15 +1480,21 @@ class DDLSuite extends QueryTest with SharedSQLContext with BeforeAndAfterEach {
Row("Class: org.apache.spark.sql.catalyst.expressions.Concat") ::
Row("Function: concat") ::
Row("Usage: concat(str1, str2, ..., strN) " +
- "- Returns the concatenation of str1, str2, ..., strN") :: Nil
+ "- Returns the concatenation of `str1`, `str2`, ..., `strN`.") :: Nil
)
// extended mode
checkAnswer(
sql("DESCRIBE FUNCTION EXTENDED ^"),
Row("Class: org.apache.spark.sql.catalyst.expressions.BitwiseXor") ::
- Row("Extended Usage:\n> SELECT 3 ^ 5; 2") ::
+ Row(
+ """Extended Usage:
+ | Examples:
+ | > SELECT 3 ^ 5;
+ | 2
+ | """.stripMargin) ::
Row("Function: ^") ::
- Row("Usage: a ^ b - Bitwise exclusive OR.") :: Nil
+ Row("Usage: expr1 ^ expr2 - Returns the result of " +
+ "bitwise exclusive OR of `expr1` and `expr2`.") :: Nil
)
}
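The multi-line expected string in the DDLSuite hunk above is built with Scala's `stripMargin`, which removes each line's leading whitespace up to and including the `|` delimiter. A minimal standalone illustration (the exact indentation inside the string is illustrative, not copied from the suite):

```scala
// Demonstrates stripMargin as used in the DDLSuite expectation above:
// leading whitespace up to and including '|' is stripped from each line,
// so expected strings can be indented naturally in test source code.
object StripMarginDemo {
  val extended: String =
    """Extended Usage:
      |    Examples:
      |      > SELECT 3 ^ 5;
      |       2
      |""".stripMargin

  def main(args: Array[String]): Unit = print(extended)
}
```

This is why the test can compare a readable, indented literal against the single flat string the `DESCRIBE FUNCTION EXTENDED` command actually returns.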