author | 0x0FFF <programmerag@gmail.com> | 2015-10-07 23:12:35 -0700 |
---|---|---|
committer | Reynold Xin <rxin@databricks.com> | 2015-10-07 23:12:35 -0700 |
commit | b8f849b546739d3e4339563557509a51417fcb68 (patch) | |
tree | eae1788d5269037100b5a8ab76f64990b15e4d83 /docs/configuration.md | |
parent | 3aff0866a8601b4daf760d6bf175f68d5a0c8912 (diff) | |
[SPARK-7869][SQL] Adding Postgres JSON and JSONb data types support
This PR addresses [SPARK-7869](https://issues.apache.org/jira/browse/SPARK-7869)
Before this patch, an attempt to load a table containing a JSON/JSONB column from Postgres failed with `java.sql.SQLException: Unsupported type 1111`.
The Postgres data types JSON and JSONB are now mapped to String on the Spark side, so such columns can be loaded into a DataFrame and processed by Spark.
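The fix teaches Spark's Postgres JDBC dialect to resolve these column types by name. A minimal Python sketch of the idea (the real implementation lives in Spark's Scala `PostgresDialect`; the helper name here is hypothetical):

```python
JDBC_TYPE_OTHER = 1111  # java.sql.Types.OTHER, the type code from the exception above

def catalyst_type(sql_type, type_name):
    """Hypothetical helper: map a Postgres column to a Spark-side type name.

    JDBC reports json/jsonb columns as Types.OTHER (1111); resolving them
    by type name to a string type lets Spark read them as plain strings.
    """
    if sql_type == JDBC_TYPE_OTHER and type_name in ("json", "jsonb"):
        return "StringType"
    return None  # unhandled: fall back to the generic JDBC mapping
```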
Example
Postgres:
```sql
create table test_json (id int, value json);
create table test_jsonb (id int, value jsonb);
insert into test_json (id, value) values
(1, '{"field1":"value1","field2":"value2","field3":[1,2,3]}'::json),
(2, '{"field1":"value3","field2":"value4","field3":[4,5,6]}'::json),
(3, '{"field3":"value5","field4":"value6","field3":[7,8,9]}'::json);
insert into test_jsonb (id, value) values
(4, '{"field1":"value1","field2":"value2","field3":[1,2,3]}'::jsonb),
(5, '{"field1":"value3","field2":"value4","field3":[4,5,6]}'::jsonb),
(6, '{"field3":"value5","field4":"value6","field3":[7,8,9]}'::jsonb);
```
PySpark:
```python
>>> import json
>>> df1 = sqlContext.read.jdbc("jdbc:postgresql://127.0.0.1:5432/test?user=testuser", "test_json")
>>> df1.map(lambda x: (x.id, json.loads(x.value))).map(lambda pair: (pair[0], pair[1].get('field3'))).collect()
[(1, [1, 2, 3]), (2, [4, 5, 6]), (3, [7, 8, 9])]
>>> df2 = sqlContext.read.jdbc("jdbc:postgresql://127.0.0.1:5432/test?user=testuser", "test_jsonb")
>>> df2.map(lambda x: (x.id, json.loads(x.value))).map(lambda pair: (pair[0], pair[1].get('field1'))).collect()
[(4, u'value1'), (5, u'value3'), (6, None)]
```
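Because the JSON/JSONB values arrive on the Spark side as plain strings, any standard JSON parser can handle them. A standalone sketch of just the parsing step, using the `test_json` rows from above (no Postgres or Spark required):

```python
import json

# Rows as they would look after loading test_json over JDBC:
# the value column is an ordinary JSON string.
rows = [
    (1, '{"field1":"value1","field2":"value2","field3":[1,2,3]}'),
    (2, '{"field1":"value3","field2":"value4","field3":[4,5,6]}'),
    (3, '{"field3":"value5","field4":"value6","field3":[7,8,9]}'),
]

# For the duplicate "field3" key in row 3, json.loads keeps the last value.
parsed = [(i, json.loads(v).get('field3')) for i, v in rows]
print(parsed)  # [(1, [1, 2, 3]), (2, [4, 5, 6]), (3, [7, 8, 9])]
```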
Author: 0x0FFF <programmerag@gmail.com>
Closes #8948 from 0x0FFF/SPARK-7869.