author	Cheng Lian <lian@databricks.com>	2015-10-08 16:18:35 -0700
committer	Cheng Lian <lian@databricks.com>	2015-10-08 16:18:35 -0700
commit	02149ff08eed3745086589a047adbce9a580389f (patch)
tree	bf68b33823e9fb25455185fb247ed20d3f42ff28 /sbt
parent	2816c89b6a304cb0b5214e14ebbc320158e88260 (diff)
[SPARK-8848] [SQL] Refactors Parquet write path to follow parquet-format
This PR refactors the Parquet write path to follow the parquet-format spec. It's a successor of PR #7679, but with fewer non-essential changes.

Major changes include:

1. Replaces `RowWriteSupport` and `MutableRowWriteSupport` with `CatalystWriteSupport`
   - Writes Parquet data using the standard layout defined in parquet-format. Specifically, we are now writing ...
     - ... arrays and maps in the standard 3-level structure with proper annotations and field names
     - ... decimals as `INT32` and `INT64` whenever possible, falling back to `FIXED_LEN_BYTE_ARRAY` as a last resort
   - Supports a legacy mode that is compatible with Spark 1.4 and prior versions. The legacy mode is off by default and can be turned on by flipping the SQL option `spark.sql.parquet.writeLegacyFormat` to `true` (see the sketch below)
   - Eliminates per-value data type dispatching costs via prebuilt composed writer functions
2. Cleans up the last pieces of the old Parquet support code

As pointed out by rxin previously, we probably want to rename all those `Catalyst*` Parquet classes to `Parquet*` for clarity, but I'd like to do that in a follow-up PR to minimize code review noise in this one.

Author: Cheng Lian <lian@databricks.com>

Closes #8988 from liancheng/spark-8848/standard-parquet-write-path.
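As a rough illustration of the behavior described above (not part of this commit's diff), the sketch below assumes a Spark 1.5-era `SQLContext`; the output paths and column names are made up for the example. It writes one DataFrame twice: once with the new standard layout and once with `spark.sql.parquet.writeLegacyFormat` enabled.

```scala
// Minimal sketch, assuming a Spark 1.5-era SQLContext.
// Paths and column names are illustrative only.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.DecimalType

val sc = new SparkContext(
  new SparkConf().setAppName("parquet-write-demo").setMaster("local[*]"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// A DataFrame with an array column and a small-precision decimal column.
val df = Seq((Seq(1, 2, 3), "12345.67"), (Seq(4, 5), "0.01"))
  .toDF("ints", "amount")
  .select($"ints", $"amount".cast(DecimalType(9, 2)).as("amount"))

// Default (standard) mode: arrays are written in the 3-level LIST layout,
// and decimal(9, 2) values fit into INT32.
df.write.parquet("/tmp/parquet-standard")

// Legacy mode: reproduces the Spark 1.4-and-earlier layout for readers
// that still expect the old format.
sqlContext.setConf("spark.sql.parquet.writeLegacyFormat", "true")
df.write.parquet("/tmp/parquet-legacy")
```

The flag only affects the write path; either layout remains readable by the Parquet read path, so it is mainly useful when older consumers need the pre-1.5 file layout.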
Diffstat (limited to 'sbt')
0 files changed, 0 insertions, 0 deletions