author    Nong Li <nong@databricks.com>    2016-02-26 22:36:32 -0800
committer Reynold Xin <rxin@databricks.com>    2016-02-26 22:36:32 -0800
commit    7a0cb4e58728834b49050ce4fae418acc18a601f (patch)
tree      673bb9f7d919826c4eef8d86ac4988c91afb1304 /sql
parent    59e3e10be2f9a1c53979ca72c038adb4fa17ca64 (diff)
[SPARK-13518][SQL] Enable vectorized parquet scanner by default
## What changes were proposed in this pull request?

Change the default of the flag to enable this feature now that the implementation is complete.

## How was this patch tested?

The new Parquet reader should be a drop-in replacement, so it is exercised by the existing tests.

Author: Nong Li <nong@databricks.com>

Closes #11397 from nongli/spark-13518.
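Since the patch only flips a default, the previous behavior stays one configuration away. A minimal sketch of opting back out at runtime, assuming a spark-shell session of this era where `sqlContext` is the shell-provided SQLContext; the file path is a placeholder:

```scala
// Turn the vectorized Parquet reader back off for this session; the key
// matches the SQLConf entry changed in the diff below.
sqlContext.setConf("spark.sql.parquet.enableVectorizedReader", "false")

// Subsequent Parquet scans go through the row-at-a-time reader again.
val df = sqlContext.read.parquet("/path/to/data.parquet") // placeholder path
df.show()
```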
Diffstat (limited to 'sql')
-rw-r--r--  sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala  5
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 9a50ef77ef..1d1e288441 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -345,12 +345,9 @@ object SQLConf {
     defaultValue = Some(true),
     doc = "Enables using the custom ParquetUnsafeRowRecordReader.")
 
-  // Note: this can not be enabled all the time because the reader will not be returning UnsafeRows.
-  // Doing so is very expensive and we should remove this requirement instead of fixing it here.
-  // Initial testing seems to indicate only sort requires this.
   val PARQUET_VECTORIZED_READER_ENABLED = booleanConf(
     key = "spark.sql.parquet.enableVectorizedReader",
-    defaultValue = Some(false),
+    defaultValue = Some(true),
     doc = "Enables vectorized parquet decoding.")
 
   val ORC_FILTER_PUSHDOWN_ENABLED = booleanConf("spark.sql.orc.filterPushdown",
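As a quick sanity check after picking up this change, the effective value can be read back from a session. A hypothetical spark-shell snippet (pre-2.0 `SQLContext` API; `sqlContext` is the shell-provided instance):

```scala
// With this patch applied and no override, this prints "true".
println(sqlContext.getConf("spark.sql.parquet.enableVectorizedReader"))
```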