author: Cheng Hao <hao.cheng@intel.com> 2015-10-14 16:29:32 -0700
committer: Cheng Lian <lian@databricks.com> 2015-10-14 16:29:32 -0700
commit: 1baaf2b9bd7c949a8f95cd14fc1be2a56e1139b3 (patch)
tree: 686955f577440f49e07f6b5a9f3dad497c269702 /docs
parent: 2b5e31c7e97811ef7b4da47609973b7f51444346 (diff)
download: spark-1baaf2b9bd7c949a8f95cd14fc1be2a56e1139b3.tar.gz
          spark-1baaf2b9bd7c949a8f95cd14fc1be2a56e1139b3.tar.bz2
          spark-1baaf2b9bd7c949a8f95cd14fc1be2a56e1139b3.zip
[SPARK-10829] [SQL] Filter combining partition key and attribute doesn't work in DataSource scan
```scala
withSQLConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> "true") {
  withTempPath { dir =>
    val path = s"${dir.getCanonicalPath}/part=1"
    (1 to 3).map(i => (i, i.toString)).toDF("a", "b").write.parquet(path)
    // If the "part = 1" filter gets pushed down, this query will throw an exception since
    // "part" is not a valid column in the actual Parquet file
    checkAnswer(
      sqlContext.read.parquet(path).filter("a > 0 and (part = 0 or a > 1)"),
      (2 to 3).map(i => Row(i, i.toString, 1)))
  }
}
```

We expect the result to be:

```
2,1
3,1
```

But we got:

```
1,1
2,1
3,1
```

Author: Cheng Hao <hao.cheng@intel.com>

Closes #8916 from chenghao-intel/partition_filter.
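For reference, here is a minimal sketch, not part of the original patch, of why only rows 2 and 3 should survive the combined filter. It uses plain Scala collections and a hypothetical `Row` case class in place of the actual DataFrame:

```scala
// Hypothetical illustration only: evaluate the predicate
// "a > 0 and (part = 0 or a > 1)" by hand over the three rows written above,
// all of which live under the single partition part=1.
object PredicateSketch extends App {
  case class Row(a: Int, b: String, part: Int)

  // Rows (1,"1"), (2,"2"), (3,"3") written into the part=1 directory.
  val rows = (1 to 3).map(i => Row(i, i.toString, part = 1))

  // The predicate must be applied as a whole. With part=1 the "part = 0"
  // branch is always false, so only rows with a > 1 may pass.
  val kept = rows.filter(r => r.a > 0 && (r.part == 0 || r.a > 1))

  kept.foreach(r => println(s"${r.a},${r.part}"))
  // Prints:
  // 2,1
  // 3,1
}
```

Dropping the `(part = 0 or a > 1)` branch during partition handling and keeping only `a > 0` would admit all three rows, which is consistent with the buggy `1,1 / 2,1 / 3,1` output shown above.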
Diffstat (limited to 'docs')
0 files changed, 0 insertions, 0 deletions