author	Xiao Li <gatorsmile@gmail.com>	2017-03-16 12:06:20 +0800
committer	Wenchen Fan <wenchen@databricks.com>	2017-03-16 12:06:20 +0800
commit	1472cac4bb31c1886f82830778d34c4dd9030d7a (patch)
tree	fa3c0a07035ceed3d849080a2ccce00339dccc2d /mllib/src
parent	21f333c635465069b7657d788052d510ffb0779a (diff)
download	spark-1472cac4bb31c1886f82830778d34c4dd9030d7a.tar.gz
spark-1472cac4bb31c1886f82830778d34c4dd9030d7a.tar.bz2
spark-1472cac4bb31c1886f82830778d34c4dd9030d7a.zip
[SPARK-19830][SQL] Add parseTableSchema API to ParserInterface
### What changes were proposed in this pull request?

Specifying a table schema in DDL format is needed in several scenarios. For example:

- [specifying the schema in the SQL function `from_json` using DDL format](https://issues.apache.org/jira/browse/SPARK-19637), as suggested by marmbrus,
- [specifying customized JDBC data types](https://github.com/apache/spark/pull/16209).

Both PRs currently require users to specify the table schema in JSON format, which is not user friendly. This PR adds a `parseTableSchema` API to `ParserInterface`.

### How was this patch tested?

Added a test suite `TableSchemaParserSuite`.

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17171 from gatorsmile/parseDDLStmt.
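A minimal sketch of how the new API might be used, assuming `parseTableSchema` is reachable through the `CatalystSqlParser` singleton (the schema string and object name below are illustrative, not taken from the patch):

```scala
// Sketch: parsing a DDL-formatted schema string into a Catalyst StructType.
// Assumes the parseTableSchema API added by this PR is exposed via CatalystSqlParser.
import org.apache.spark.sql.catalyst.parser.CatalystSqlParser
import org.apache.spark.sql.types.StructType

object TableSchemaParsingSketch {
  def main(args: Array[String]): Unit = {
    // A table schema written in DDL format instead of JSON.
    val ddl = "id INT, name STRING, scores ARRAY<DOUBLE>"

    // parseTableSchema converts the DDL string into a StructType.
    val schema: StructType = CatalystSqlParser.parseTableSchema(ddl)

    // Print each field with its Catalyst data type.
    schema.fields.foreach(f => println(s"${f.name}: ${f.dataType.simpleString}"))
  }
}
```

Callers such as `from_json` or the JDBC data source could then accept a DDL string like `"a INT, b STRING"` directly instead of a JSON-serialized schema.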
Diffstat (limited to 'mllib/src')
0 files changed, 0 insertions, 0 deletions