| author | hyukjinkwon <gurwls223@gmail.com> | 2016-08-26 17:29:37 +0200 |
|---|---|---|
| committer | Herman van Hovell <hvanhovell@databricks.com> | 2016-08-26 17:29:37 +0200 |
| commit | 6063d5963fcf01768570c1a9b542be6175a3bcbc (patch) | |
| tree | 7e3e9843d0765987224ba734803563d0e2441b94 /tools | |
| parent | 341e0e778dff8c404b47d34ee7661b658bb91880 (diff) | |
[SPARK-16216][SQL][FOLLOWUP] Enable timestamp type tests for JSON and verify all unsupported types in CSV
## What changes were proposed in this pull request?
This PR enables the tests for `TimestampType` for JSON and unifies the logic for verifying the schema when writing in CSV.

In more detail, this PR:

- Enables the tests for `TimestampType` for JSON.

  These were previously disabled due to an issue in `DatatypeConverter.parseDateTime`, which parses some dates incorrectly, for example:
```scala
val d = javax.xml.bind.DatatypeConverter.parseDateTime("0900-01-01T00:00:00.000").getTime
println(d.toString)
```
```
Fri Dec 28 00:00:00 KST 899
```
However, since we now use `FastDateFormat`, this is no longer an issue:
```scala
val d = FastDateFormat.getInstance("yyyy-MM-dd'T'HH:mm:ss.SSS").parse("0900-01-01T00:00:00.000")
println(d)
```
```
Tue Jan 01 00:00:00 PST 900
```
- Verifies all unsupported types in CSV.

  `CSVFileFormat` has separate logic to verify schemas, but it was incomplete: in addition to `StructType`, `ArrayType`, and `MapType`, CSV also does not support `NullType` and `CalendarIntervalType`. This PR adds checks for both of these types.
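The unified check described above can be sketched as follows. This is a simplified sketch, not Spark's actual implementation: `verifySchema` is a hypothetical helper, and it assumes Spark SQL's `DataType` hierarchy is on the classpath.

```scala
import org.apache.spark.sql.types._

// Hypothetical sketch: reject schemas containing types that CSV cannot write.
// Complex types (StructType, ArrayType, MapType) and the two types added by
// this PR (NullType, CalendarIntervalType) are all unsupported.
def verifySchema(schema: StructType): Unit = {
  schema.fields.foreach { field =>
    field.dataType match {
      case _: StructType | _: ArrayType | _: MapType =>
        throw new UnsupportedOperationException(
          s"CSV data source does not support ${field.dataType.simpleString} data type.")
      case NullType | CalendarIntervalType =>
        throw new UnsupportedOperationException(
          s"CSV data source does not support ${field.dataType.simpleString} data type.")
      case _ => // atomic types (StringType, IntegerType, TimestampType, ...) are fine
    }
  }
}
```

For example, a schema of only atomic columns passes, while a schema containing a `NullType` or `CalendarIntervalType` column throws `UnsupportedOperationException`.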
## How was this patch tested?
Tests in `JsonHadoopFsRelation` and `CSVSuite`
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #14829 from HyukjinKwon/SPARK-16216-followup.
Diffstat (limited to 'tools')
0 files changed, 0 insertions, 0 deletions