path: root/docs
author     Wenchen Fan <wenchen@databricks.com>  2017-03-20 21:43:14 -0700
committer  Xiao Li <gatorsmile@gmail.com>        2017-03-20 21:43:14 -0700
commit  68d65fae71e475ad811a9716098aca03a2af9532 (patch)
tree    8eb62ef41f500b43cdfe1325c35dc39498841020 /docs
parent  21e366aea5a7f49e42e78dce06ff6b3ee1e36f06 (diff)
[SPARK-19949][SQL] unify bad record handling in CSV and JSON
## What changes were proposed in this pull request?

Currently JSON and CSV have exactly the same logic for handling bad records. This PR abstracts that logic and moves it to an upper level to reduce code duplication. The overall idea: the JSON and CSV parsers throw a BadRecordException, and the upper level, FailureSafeParser, handles bad records according to the parse mode.

Behavior changes:
1. With PERMISSIVE mode, if the number of tokens doesn't match the schema, the CSV parser previously treated the record as legal and parsed as many tokens as possible. After this PR, we treat it as an illegal record and put the raw record string in a special column, while still parsing as many tokens as possible.
2. All logging is removed, as it was not very useful in practice.

## How was this patch tested?

Existing tests.

Author: Wenchen Fan <wenchen@databricks.com>
Author: hyukjinkwon <gurwls223@gmail.com>
Author: Wenchen Fan <cloud0fan@gmail.com>

Closes #17315 from cloud-fan/bad-record2.
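To illustrate the structure the PR describes, here is a minimal, hypothetical Python sketch (not the actual Scala implementation) of the idea: a raw parser that throws BadRecordException, and a single mode-aware FailureSafeParser that handles the bad record according to the parse mode. The class and field names mirror the PR's description; the toy CSV parser and corrupt-column name are illustrative assumptions.

```python
# Hypothetical sketch of the FailureSafeParser pattern described in this PR.
# The real implementation is in Spark SQL (Scala); this only shows the shape.

class BadRecordException(Exception):
    """Thrown by a low-level parser when a record cannot be fully parsed."""
    def __init__(self, record, partial=None):
        self.record = record    # the raw record string
        self.partial = partial  # tokens successfully parsed before failure


class FailureSafeParser:
    """Wraps a raw parser and applies a parse mode to bad records."""

    def __init__(self, raw_parser, mode, corrupt_column="_corrupt_record"):
        self.raw_parser = raw_parser
        self.mode = mode                      # PERMISSIVE / DROPMALFORMED / FAILFAST
        self.corrupt_column = corrupt_column  # illustrative column name

    def parse(self, records):
        for record in records:
            try:
                yield self.raw_parser(record)
            except BadRecordException as e:
                if self.mode == "PERMISSIVE":
                    # Keep whatever tokens were parsed, and put the raw
                    # record string in a special column (behavior change 1).
                    row = dict(e.partial or {})
                    row[self.corrupt_column] = e.record
                    yield row
                elif self.mode == "DROPMALFORMED":
                    continue  # silently drop the bad record
                else:  # FAILFAST
                    raise


def csv_like_parser(record):
    """Toy parser expecting exactly two fields, e.g. 'a,b'."""
    tokens = record.split(",")
    if len(tokens) != 2:
        # Token count doesn't match the schema: illegal record, but we
        # still surface the tokens we managed to parse.
        raise BadRecordException(record, partial={"a": tokens[0]})
    return {"a": tokens[0], "b": tokens[1]}
```

For example, `FailureSafeParser(csv_like_parser, "PERMISSIVE")` parsing `["1,2", "3"]` yields the good row plus a row carrying `"3"` in the corrupt column, while DROPMALFORMED yields only the good row.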
Diffstat (limited to 'docs')
0 files changed, 0 insertions, 0 deletions