Author:    Reynold Xin <rxin@databricks.com>  2016-07-12 10:07:23 -0700
Committer: Reynold Xin <rxin@databricks.com>  2016-07-12 10:07:23 -0700
commit: c377e49e38a290e5c4fbc178278069788674dfb7 (patch)
tree:   ef043d59d8ab9eb0b778fe6a703cf73bfee6cfaf /sql/core/src/test
parent: 5ad68ba5ce625c7005b540ca50ed001ca18de967 (diff)
[SPARK-16489][SQL] Guard against variable reuse mistakes in expression code generation
## What changes were proposed in this pull request?

In code generation, it is incorrect for an expression to reuse variable names across different instances of itself. As an example, SPARK-16488 reports a bug in which the pmod expression reuses the variable name "r". This patch updates the ExpressionEvalHelper test harness to always project two instances of the same expression, which will help us catch variable reuse problems in expression unit tests. This patch also fixes the bug in the crc32 expression.

## How was this patch tested?

This is a test harness change, but I also created a new test suite for testing the test harness.

Author: Reynold Xin <rxin@databricks.com>

Closes #14146 from rxin/SPARK-16489.
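The failure mode the harness change targets can be sketched without any Spark internals. This is a minimal, hypothetical illustration (the names `VariableReuseDemo`, `badCodegen`, `goodCodegen`, and `freshName` are made up for this sketch, not Spark APIs): when the same expression is projected twice, its generated code is inlined twice, so a hard-coded temporary name like `r` is declared twice and the generated Java would not compile, while a fresh name per instance stays valid.

```scala
object VariableReuseDemo {
  // Hypothetical "codegen" that hard-codes its temporary variable name "r",
  // mimicking the bug class reported in SPARK-16488.
  def badCodegen(input: String): String =
    s"long r = $input % 2; if (r < 0) r += 2;"

  // Fix sketch: draw a fresh name for every instance, in the spirit of
  // fresh-name generation in Spark's codegen framework.
  private var counter = 0
  private def freshName(prefix: String): String = {
    counter += 1
    s"$prefix$counter"
  }

  def goodCodegen(input: String): String = {
    val r = freshName("r")
    s"long $r = $input % 2; if ($r < 0) $r += 2;"
  }

  def main(args: Array[String]): Unit = {
    // Projecting two instances of the same expression concatenates two
    // copies of its generated code, as the updated test harness does.
    println(Seq("a", "a").map(badCodegen).mkString("\n"))  // "long r" declared twice: invalid Java
    println(Seq("a", "a").map(goodCodegen).mkString("\n")) // distinct names: still valid
  }
}
```

Always emitting two copies is what lets an ordinary expression unit test surface the duplicate declaration, instead of the bug only appearing in queries that happen to use the expression more than once.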
Diffstat (limited to 'sql/core/src/test')
-rw-r--r--  sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala | 14
1 file changed, 0 insertions, 14 deletions
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
index f706b20364..05935cec4b 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
@@ -449,20 +449,6 @@ class DataFrameReaderWriterSuite extends QueryTest with SharedSQLContext with Be
}
}
- test("pmod with partitionBy") {
- val spark = this.spark
- import spark.implicits._
-
- case class Test(a: Int, b: String)
- val data = Seq((0, "a"), (1, "b"), (1, "a"))
- spark.createDataset(data).createOrReplaceTempView("test")
- sql("select * from test distribute by pmod(_1, 2)")
- .write
- .partitionBy("_2")
- .mode("overwrite")
- .parquet(dir)
- }
-
private def testRead(
df: => DataFrame,
expectedResult: Seq[String],