author     Cheng Lian <lian@databricks.com>    2016-07-21 17:15:07 +0800
committer  Cheng Lian <lian@databricks.com>    2016-07-21 17:15:07 +0800
commit     8674054d3402b400a4766fe1c9214001cebf2106 (patch)
tree       86b683ae274455314ffd4386c348310537aa1956 /sql/core/src/test
parent     864b764eafa57a1418b683ccf6899b01bab28fba (diff)
[SPARK-16632][SQL] Use Spark requested schema to guide vectorized Parquet reader initialization
## What changes were proposed in this pull request?

In `SpecificParquetRecordReaderBase`, which is used by the vectorized Parquet reader, we convert the Parquet requested schema into a Spark schema to guide column reader initialization. However, the Parquet requested schema is tailored from the schema of the physical file being scanned, and may carry inaccurate type information due to bugs in other systems (e.g. HIVE-14294). On the other hand, we already set the real Spark requested schema into the Hadoop configuration in [`ParquetFileFormat`][1]. This PR simply reads that schema back out and uses it in place of the converted one.

## How was this patch tested?

New test case added in `ParquetQuerySuite`.

[1]: https://github.com/apache/spark/blob/v2.0.0-rc5/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L292-L294

Author: Cheng Lian <lian@databricks.com>

Closes #14278 from liancheng/spark-16632-simpler-fix.
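To make the idea concrete, here is a minimal sketch (not the actual `SpecificParquetRecordReaderBase` code) of what "read the Spark requested schema back out of the Hadoop configuration" looks like: instead of converting the Parquet requested schema taken from the file footer, the reader parses the schema string that the scan already placed in the configuration. The object name, the configuration key value, and the use of `DataType.fromJson` here are illustrative assumptions; the real constant lives in `ParquetReadSupport` and may differ across Spark versions.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.types.{DataType, StructType}

object RequestedSchemaSketch {
  // Hypothetical key name for illustration; the actual constant is defined in
  // ParquetReadSupport and its value may differ across Spark versions.
  val SparkRowRequestedSchemaKey = "org.apache.spark.sql.parquet.row.requested_schema"

  // Recover the Spark requested schema that the scan planner serialized into the
  // Hadoop configuration, instead of deriving it by converting the (possibly
  // mis-annotated) Parquet requested schema of the physical file being read.
  def requestedSparkSchema(conf: Configuration): StructType =
    DataType.fromJson(conf.get(SparkRowRequestedSchemaKey)).asInstanceOf[StructType]
}
```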
Diffstat (limited to 'sql/core/src/test')
-rw-r--r--  sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala  24
1 file changed, 24 insertions, 0 deletions
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
index 02b94452a1..7e83bcbb6e 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
@@ -680,6 +680,30 @@ class ParquetQuerySuite extends QueryTest with ParquetTest with SharedSQLContext
)
}
}
+
+ test("SPARK-16632: read Parquet int32 as ByteType and ShortType") {
+ withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> "true") {
+ withTempPath { dir =>
+ val path = dir.getCanonicalPath
+
+ // When being written to Parquet, `TINYINT` and `SMALLINT` should be converted into
+ // `int32 (INT_8)` and `int32 (INT_16)` respectively. However, Hive doesn't add the `INT_8`
+ // and `INT_16` annotation properly (HIVE-14294). Thus, when reading files written by Hive
+ // using Spark with the vectorized Parquet reader enabled, we may hit error due to type
+ // mismatch.
+ //
+ // Here we are simulating Hive's behavior by writing a single `INT` field and then read it
+ // back as `TINYINT` and `SMALLINT` in Spark to verify this issue.
+ Seq(1).toDF("f").write.parquet(path)
+
+ val withByteField = new StructType().add("f", ByteType)
+ checkAnswer(spark.read.schema(withByteField).parquet(path), Row(1: Byte))
+
+ val withShortField = new StructType().add("f", ShortType)
+ checkAnswer(spark.read.schema(withShortField).parquet(path), Row(1: Short))
+ }
+ }
+ }
}
object TestingUDT {