| author | Cheng Lian <lian@databricks.com> | 2015-10-21 09:02:20 +0800 |
| --- | --- | --- |
| committer | Cheng Lian <lian@databricks.com> | 2015-10-21 09:02:59 +0800 |
| commit | 89e6db6150704deab46232352d1986bc1449883b (patch) | |
| tree | d292f300840ea50c661bcf156384f8eed1fd3755 /external/kafka | |
| parent | aea7142c9802d1e855443c01621ebc8d57be8c5e (diff) | |
| download | spark-89e6db6150704deab46232352d1986bc1449883b.tar.gz spark-89e6db6150704deab46232352d1986bc1449883b.tar.bz2 spark-89e6db6150704deab46232352d1986bc1449883b.zip | |
[SPARK-11153][SQL] Disables Parquet filter push-down for string and binary columns
Due to PARQUET-251, `BINARY` columns in existing Parquet files may be written with corrupted statistics. These statistics are used by the filter push-down optimization, and since Spark 1.5 turns Parquet filter push-down on by default, we may end up with wrong query results. PARQUET-251 has been fixed in parquet-mr 1.8.1, but Spark 1.5 still uses parquet-mr 1.7.0.
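For deployments that cannot take the patch, a coarser stopgap is to switch off Parquet filter push-down altogether through the `spark.sql.parquet.filterPushdown` option (it defaults to `true` as of Spark 1.5). A minimal Spark 1.5-era sketch; the local master and app name are placeholders:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Placeholder master/app name; in practice reuse your existing contexts.
val sc = new SparkContext("local[*]", "parquet-pushdown-workaround")
val sqlContext = new SQLContext(sc)

// Disable Parquet filter push-down globally so that corrupted BINARY
// statistics (PARQUET-251) can never influence query results.
sqlContext.setConf("spark.sql.parquet.filterPushdown", "false")
```

The same setting can also be passed at submit time with `--conf spark.sql.parquet.filterPushdown=false`, at the cost of losing push-down for all column types, not just the affected ones.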
This affects all Spark SQL data types that can be mapped to Parquet `BINARY`, namely:
- `StringType`
- `BinaryType`
- `DecimalType`
(But Spark SQL doesn't support pushing down filters involving `DecimalType` columns for now.)
To avoid wrong query results, we should disable filter push-down for columns of `StringType` and `BinaryType` until we upgrade to parquet-mr 1.8.1.
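The shape of the workaround, as an illustrative sketch rather than the actual patch (`ParquetPushDownGuard` and `canPushDown` are hypothetical names): the Catalyst-to-Parquet predicate conversion simply refuses to produce filters for the affected types.

```scala
import org.apache.spark.sql.types.{BinaryType, DataType, StringType}

// Illustrative sketch only, not the actual Spark patch; `canPushDown`
// is a hypothetical helper gating predicate conversion on column type.
object ParquetPushDownGuard {
  // Reject predicates on types stored as Parquet BINARY: PARQUET-251
  // means their row-group statistics, as written by parquet-mr 1.7.0,
  // may be corrupted, so evaluating filters against them is unsafe.
  def canPushDown(dataType: DataType): Boolean = dataType match {
    case StringType | BinaryType => false
    case _                       => true
  }
}
```

`DecimalType` needs no case of its own here because, as noted above, Spark SQL does not push down filters on `DecimalType` columns in the first place.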
Author: Cheng Lian <lian@databricks.com>
Closes #9152 from liancheng/spark-11153.workaround-parquet-251.
(cherry picked from commit 0887e5e87891e8e22f534ca6d0406daf86ec2dad)
Signed-off-by: Cheng Lian <lian@databricks.com>
Diffstat (limited to 'external/kafka')
0 files changed, 0 insertions, 0 deletions