| author | GraceH <93113783@qq.com> | 2016-08-13 11:39:58 +0100 |
|---|---|---|
| committer | Sean Owen <sowen@cloudera.com> | 2016-08-13 11:39:58 +0100 |
| commit | 8c8acdec9365136cba13060ce36c22b28e29b59b (patch) | |
| tree | 77d36550321fb0872d1c9a5ad4e4f8328a56cea4 /sql/core/src/test/scala | |
| parent | 7f7133bdccecaccd6dfb52f13c18c1e320d65f86 (diff) | |
[SPARK-16968] Add additional options in jdbc when creating a new table
## What changes were proposed in this pull request?
This PR allows the user to pass additional options when creating a new table through the JDBC writer.
The options can be table_options or partition_options, as in
"CREATE TABLE t (name string) ENGINE=InnoDB DEFAULT CHARSET=utf8".
Here is a usage example:
```
df.write.option("createTableOptions", "ENGINE=InnoDB DEFAULT CHARSET=utf8").jdbc(...)
```
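To make concrete where the option lands, here is an illustrative sketch (not Spark's actual code path; the helper name `createTableSql` is hypothetical): the value of `createTableOptions` is appended verbatim to the generated `CREATE TABLE` statement, after the column list.

```scala
// Hypothetical helper sketching how createTableOptions is placed in the DDL.
// This is an illustration, not the actual implementation in Spark's JDBC writer.
def createTableSql(table: String, schemaSql: String, createTableOptions: String): String =
  s"CREATE TABLE $table ($schemaSql) $createTableOptions".trim

// createTableSql("t", "name TEXT", "ENGINE=InnoDB DEFAULT CHARSET=utf8")
// yields: CREATE TABLE t (name TEXT) ENGINE=InnoDB DEFAULT CHARSET=utf8
```

Because the string is passed through as-is, it is the user's responsibility to supply options valid for the target database's dialect.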
## How was this patch tested?
A unit test (`createTableOptions`) is added to `JDBCWriteSuite`; see the diff below.
Author: GraceH <93113783@qq.com>
Closes #14559 from GraceH/jdbc_options.
Diffstat (limited to 'sql/core/src/test/scala')
-rw-r--r-- | sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala | 12 |
1 file changed, 12 insertions(+), 0 deletions(-)
```diff
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala
index d99b3cf975..ff3309874f 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala
@@ -174,6 +174,18 @@ class JDBCWriteSuite extends SharedSQLContext with BeforeAndAfter {
     JdbcDialects.unregisterDialect(testH2Dialect)
   }
 
+  test("createTableOptions") {
+    JdbcDialects.registerDialect(testH2Dialect)
+    val df = spark.createDataFrame(sparkContext.parallelize(arr2x2), schema2)
+
+    val m = intercept[org.h2.jdbc.JdbcSQLException] {
+      df.write.option("createTableOptions", "ENGINE tableEngineName")
+        .jdbc(url1, "TEST.CREATETBLOPTS", properties)
+    }.getMessage
+    assert(m.contains("Class \"TABLEENGINENAME\" not found"))
+    JdbcDialects.unregisterDialect(testH2Dialect)
+  }
+
   test("Incompatible INSERT to append") {
     val df = spark.createDataFrame(sparkContext.parallelize(arr2x2), schema2)
     val df2 = spark.createDataFrame(sparkContext.parallelize(arr2x3), schema3)
```