author    Dongjoon Hyun <dongjoon@apache.org>  2016-04-19 22:28:11 -0700
committer Davies Liu <davies.liu@gmail.com>  2016-04-19 22:28:11 -0700
commit 14869ae64eb27830179d4954a5dc3e0a1e1330b4 (patch)
tree   c294dde39b5d77c6086b3d08a726b1c8b401b95a /yarn
parent 6f1ec1f2670cd55bc852a810ca9d5c6a2651a9f2 (diff)
[SPARK-14639] [PYTHON] [R] Add `bround` function in Python/R.
## What changes were proposed in this pull request?

This issue aims to expose the Scala `bround` function in the Python/R API. The `bround` function was implemented in SPARK-14614 by extending the current `round` function. We used the following semantics from Hive.

```java
public static double bround(double input, int scale) {
  if (Double.isNaN(input) || Double.isInfinite(input)) {
    return input;
  }
  return BigDecimal.valueOf(input).setScale(scale, RoundingMode.HALF_EVEN).doubleValue();
}
```

After this PR, `pyspark` and `sparkR` also support the `bround` function.

**PySpark**
```python
>>> from pyspark.sql.functions import bround
>>> sqlContext.createDataFrame([(2.5,)], ['a']).select(bround('a', 0).alias('r')).collect()
[Row(r=2.0)]
```

**SparkR**
```r
> df = createDataFrame(sqlContext, data.frame(x = c(2.5, 3.5)))
> head(collect(select(df, bround(df$x, 0))))
  bround(x, 0)
1            2
2            4
```

## How was this patch tested?

Pass the Jenkins tests (including new test cases).

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #12509 from dongjoon-hyun/SPARK-14639.
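The examples above show why `bround(2.5)` is 2 while `bround(3.5)` is 4: HALF_EVEN rounding ("banker's rounding") sends ties to the nearest *even* neighbor instead of always rounding half up. As a minimal sketch (not Spark's implementation), the same semantics can be reproduced with Python's standard-library `decimal` module; the helper name `bround` and its signature here are chosen for illustration:

```python
import math
from decimal import Decimal, ROUND_HALF_EVEN

def bround(value: float, scale: int = 0) -> float:
    """Round half to even (banker's rounding), mirroring the Hive semantics
    quoted above: NaN and infinities pass through unchanged."""
    if math.isnan(value) or math.isinf(value):
        return value
    quantum = Decimal(10) ** -scale  # e.g. scale=0 -> 1, scale=2 -> 0.01
    # repr(value) plays the role of BigDecimal.valueOf: round the decimal
    # string form of the double, not its raw binary expansion.
    return float(Decimal(repr(value)).quantize(quantum, rounding=ROUND_HALF_EVEN))

print(bround(2.5))  # 2.0 -- tie goes to the even neighbor
print(bround(3.5))  # 4.0
```

Note the deliberate detour through `repr(value)`: quantizing `Decimal(2.675)` directly would expose the binary representation (slightly below 2.675) and break the tie the "wrong" way, just as the Hive code avoids by using `BigDecimal.valueOf`.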
Diffstat (limited to 'yarn')
0 files changed, 0 insertions, 0 deletions