author    | Daoyuan Wang <daoyuan.wang@intel.com> | 2016-05-23 23:29:15 -0700
committer | Andrew Or <andrew@databricks.com>     | 2016-05-23 23:29:15 -0700
commit    | d642b273544bb77ef7f584326aa2d214649ac61b (patch)
tree      | e2bf63cd2c378d285165a7bf5f829dad93322efe /python/pyspark
parent    | de726b0d533158d3ca08841bd6976bcfa26ca79d (diff)
[SPARK-15397][SQL] fix string udf locate as hive
## What changes were proposed in this pull request?
In Hive, `locate("aa", "aaa", 0)` yields 0, `locate("aa", "aaa", 1)` yields 1, and `locate("aa", "aaa", 2)` yields 2, while in Spark the same calls yield 1, 2, and 0 respectively. The discrepancy comes from a different interpretation of the third parameter of the `locate` UDF: it is the starting index and is 1-based, so a starting position of 0 should always return 0.
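The intended 1-based semantics can be sketched in plain Python (this is a reference model of the behavior described above, not the PySpark implementation itself):

```python
def locate(substr, s, pos=1):
    """Reference model of Hive's 1-based locate semantics.

    Returns the 1-based position of the first occurrence of `substr`
    in `s` at or after position `pos` (also 1-based). Returns 0 when
    `pos` is 0 or when no occurrence is found.
    """
    if pos < 1:
        return 0
    found = s.find(substr, pos - 1)  # str.find uses 0-based indices
    return found + 1  # shift back to 1-based; -1 (not found) maps to 0

# Matches the Hive behavior quoted above:
print(locate("aa", "aaa", 0))  # 0
print(locate("aa", "aaa", 1))  # 1
print(locate("aa", "aaa", 2))  # 2
```

Spark's previous behavior treated `pos` as 0-based, which is why `pos=0` returned 1 instead of 0; the one-line fix below changes the Python API default to `pos=1` to match.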
## How was this patch tested?
tested with modified `StringExpressionsSuite` and `StringFunctionsSuite`
Author: Daoyuan Wang <daoyuan.wang@intel.com>
Closes #13186 from adrian-wang/locate.
Diffstat (limited to 'python/pyspark')
-rw-r--r-- | python/pyspark/sql/functions.py | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 1f15eec645..64b8bc442d 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -1359,7 +1359,7 @@ def levenshtein(left, right):
 @since(1.5)
-def locate(substr, str, pos=0):
+def locate(substr, str, pos=1):
     """
     Locate the position of the first occurrence of substr in a string column,
     after position pos.