path: root/external
author    Dongjoon Hyun <dongjoon@apache.org>  2016-07-08 16:07:12 -0700
committer Reynold Xin <rxin@databricks.com>  2016-07-08 16:07:12 -0700
commit    3b22291b5f0317609cd71ce7af78e4c5063d66e8 (patch)
tree      3c31c4439683523dd0dfca35f0208eeef974911d /external
parent    60ba436b7010436c77dfe5219a9662accc25bffa (diff)
[SPARK-16387][SQL] JDBC Writer should use dialect to quote field names.
## What changes were proposed in this pull request?

Currently, the JDBC Writer uses dialects to get data types, but does not use them to quote field names. This PR uses dialects to quote the field names as well.

**Reported Error Scenario (MySQL case)**

```scala
scala> val url = "jdbc:mysql://localhost:3306/temp"
scala> val prop = new java.util.Properties
scala> prop.setProperty("user", "root")
scala> val df = spark.createDataset(Seq("a", "b", "c")).toDF("order")
scala> df.write.mode("overwrite").jdbc(url, "temptable", prop)
...MySQLSyntaxErrorException: ... near 'order TEXT )
```

## How was this patch tested?

Pass the Jenkins tests and manually verify the scenario above.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #14107 from dongjoon-hyun/SPARK-16387.
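The fix hinges on routing identifier quoting through the per-database dialect, so that a reserved word like `order` is escaped in the generated `CREATE TABLE` statement. A minimal, self-contained sketch of that idea (the trait and object names here are illustrative stand-ins, not Spark's actual `JdbcDialect` classes):

```scala
// Sketch of dialect-based identifier quoting, mirroring the approach of this
// patch. Names (Dialect, schemaSql) are hypothetical; Spark's real hook is
// JdbcDialect.quoteIdentifier.
trait Dialect {
  // Default ANSI SQL quoting: wrap the name in double quotes.
  def quoteIdentifier(colName: String): String = s""""$colName""""
}

object MySQLDialect extends Dialect {
  // MySQL quotes identifiers with backticks, making reserved words legal.
  override def quoteIdentifier(colName: String): String = s"`$colName`"
}

// Build the column list of a CREATE TABLE statement through the dialect,
// so a column named "order" no longer yields a syntax error.
def schemaSql(dialect: Dialect, fields: Seq[(String, String)]): String =
  fields
    .map { case (name, tpe) => s"${dialect.quoteIdentifier(name)} $tpe" }
    .mkString(", ")
```

For example, `schemaSql(MySQLDialect, Seq("order" -> "TEXT"))` produces `` `order` TEXT `` instead of the unquoted `order TEXT` that triggered the reported `MySQLSyntaxErrorException`.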
Diffstat (limited to 'external')
0 files changed, 0 insertions, 0 deletions