author      Wojtek Szymanski <wk.szymanski@gmail.com>  2017-03-08 12:36:16 -0800
committer   Wenchen Fan <wenchen@databricks.com>       2017-03-08 12:36:16 -0800
commit      e9e2c612d58a19ddcb4b6abfb7389a4b0f7ef6f8 (patch)
tree        31f22bac0755a6384fef07155531f77423b242af /bin/pyspark.cmd
parent      f3387d97487cbef894b6963bc008f6a5c4294a85 (diff)
download    spark-e9e2c612d58a19ddcb4b6abfb7389a4b0f7ef6f8.tar.gz
            spark-e9e2c612d58a19ddcb4b6abfb7389a4b0f7ef6f8.tar.bz2
            spark-e9e2c612d58a19ddcb4b6abfb7389a4b0f7ef6f8.zip
[SPARK-19727][SQL] Fix for round function that modifies original column
## What changes were proposed in this pull request?

Fix for the SQL `round` function, which modified the original column when the underlying data frame was created from a local product:

    import org.apache.spark.sql.functions._

    case class NumericRow(value: BigDecimal)
    val df = spark.createDataFrame(Seq(NumericRow(BigDecimal("1.23456789"))))

    df.show()
    +--------------------+
    |               value|
    +--------------------+
    |1.234567890000000000|
    +--------------------+

    df.withColumn("value_rounded", round('value)).show()

    // before
    +--------------------+-------------+
    |               value|value_rounded|
    +--------------------+-------------+
    |1.000000000000000000|            1|
    +--------------------+-------------+

    // after
    +--------------------+-------------+
    |               value|value_rounded|
    +--------------------+-------------+
    |1.234567890000000000|            1|
    +--------------------+-------------+

## How was this patch tested?

New unit test added to the existing suite `org.apache.spark.sql.MathFunctionsSuite`.

Author: Wojtek Szymanski <wk.szymanski@gmail.com>

Closes #17075 from wojtek-szymanski/SPARK-19727.
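The corrected behavior in the "after" table matches plain `java.math.BigDecimal` semantics, where rounding returns a new value and never mutates the original. As a minimal sketch of that contract (plain Java, no Spark; the class name `RoundDemo` and the choice of `HALF_UP` here are illustrative, not taken from the patch):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundDemo {
    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("1.23456789");

        // setScale returns a NEW BigDecimal; the receiver is immutable
        // and keeps its original value after the call.
        BigDecimal rounded = original.setScale(0, RoundingMode.HALF_UP);

        System.out.println(original); // 1.23456789 -- unchanged
        System.out.println(rounded);  // 1
    }
}
```

The bug fixed here was precisely a violation of this contract: rounding a column also overwrote the source column's values, instead of producing an independent rounded result.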
Diffstat (limited to 'bin/pyspark.cmd')
0 files changed, 0 insertions, 0 deletions