author: Dongjoon Hyun <dongjoon@apache.org> (2016-04-19 22:28:11 -0700)
committer: Davies Liu <davies.liu@gmail.com> (2016-04-19 22:28:11 -0700)
commit: 14869ae64eb27830179d4954a5dc3e0a1e1330b4 (patch)
tree: c294dde39b5d77c6086b3d08a726b1c8b401b95a /R/pkg/NAMESPACE
parent: 6f1ec1f2670cd55bc852a810ca9d5c6a2651a9f2 (diff)
[SPARK-14639] [PYTHON] [R] Add `bround` function in Python/R.
## What changes were proposed in this pull request?
This issue aims to expose the Scala `bround` function in the Python and R APIs.
The `bround` function was implemented in SPARK-14614 by extending the existing `round` function.
It follows the Hive semantics below.
```java
public static double bround(double input, int scale) {
  if (Double.isNaN(input) || Double.isInfinite(input)) {
    return input;
  }
  return BigDecimal.valueOf(input).setScale(scale, RoundingMode.HALF_EVEN).doubleValue();
}
}
```
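For reference, the HALF_EVEN ("banker's rounding") behavior above can be sketched in plain Python with the standard `decimal` module. This is only an illustration of the rounding semantics, not code from the patch:

```python
import math
from decimal import Decimal, ROUND_HALF_EVEN

def bround(value, scale=0):
    """Round half to even, mirroring the Hive semantics quoted above."""
    # NaN and infinity pass through unchanged, as in the Hive implementation.
    if math.isnan(value) or math.isinf(value):
        return value
    # Build the quantization exponent, e.g. scale=0 -> 1, scale=1 -> 0.1.
    exp = Decimal(1).scaleb(-scale)
    # str(value) mirrors BigDecimal.valueOf, which uses the double's
    # shortest decimal representation rather than its exact binary value.
    return float(Decimal(str(value)).quantize(exp, rounding=ROUND_HALF_EVEN))

print(bround(2.5))  # 2.0 -- the tie goes to the even neighbor
print(bround(3.5))  # 4.0
```

Note that ties round toward the even neighbor, which is why `bround(2.5)` yields 2.0 while ordinary `round` would yield 3.0.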
After this PR, PySpark and SparkR also support the `bround` function.
**PySpark**
```python
>>> from pyspark.sql.functions import bround
>>> sqlContext.createDataFrame([(2.5,)], ['a']).select(bround('a', 0).alias('r')).collect()
[Row(r=2.0)]
```
**SparkR**
```r
> df = createDataFrame(sqlContext, data.frame(x = c(2.5, 3.5)))
> head(collect(select(df, bround(df$x, 0))))
  bround(x, 0)
1            2
2            4
```
## How was this patch tested?
Pass the Jenkins tests (including new test cases).
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #12509 from dongjoon-hyun/SPARK-14639.
Diffstat (limited to 'R/pkg/NAMESPACE')
-rw-r--r-- | R/pkg/NAMESPACE | 1 |
1 file changed, 1 insertion(+), 0 deletions(-)
```diff
diff --git a/R/pkg/NAMESPACE b/R/pkg/NAMESPACE
index 10b9d16279..667fff7192 100644
--- a/R/pkg/NAMESPACE
+++ b/R/pkg/NAMESPACE
@@ -126,6 +126,7 @@ exportMethods("%in%",
               "between",
               "bin",
               "bitwiseNOT",
+              "bround",
               "cast",
               "cbrt",
               "ceil",
```