Fixes error `org.postgresql.util.PSQLException: Unable to find server array type for provided name decimal(38,18)`.
* Passes scale metadata to JDBC dialect for usage in type conversions.
* Strips the unused length/scale/precision qualifiers from the `typeName` passed to `createArrayOf` (for writing).
* Adds configurable precision and scale to Postgres `DecimalType` (for reading).
* Adds a new kind of test that verifies the schema written by `DataFrame.write.jdbc`.
Author: Brandon Bradley <bradleytastic@gmail.com>
Closes #10928 from blbradley/spark-12966.
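The core of the fix above is that Postgres' `Connection.createArrayOf` wants a bare element type name such as "numeric", not a qualified catalog string like "decimal(38,18)". A minimal sketch of that cleanup, assuming a hypothetical helper (this is illustrative, not Spark's actual dialect code):

```java
// Hypothetical sketch: strip "(precision,scale)" qualifiers from a catalog
// type string and map "decimal" to Postgres' "numeric" element type name.
public class ArrayTypeNames {
    static String toPostgresElementType(String catalogType) {
        // Drop a trailing "(...)" qualifier, e.g. "decimal(38,18)" -> "decimal".
        String base = catalogType.replaceAll("\\(.*\\)$", "").trim().toLowerCase();
        return base.equals("decimal") ? "numeric" : base;
    }

    public static void main(String[] args) {
        System.out.println(toPostgresElementType("decimal(38,18)")); // numeric
        System.out.println(toPostgresElementType("integer"));        // integer
    }
}
```

Passing the cleaned name avoids the "Unable to find server array type" error, since the driver looks the element type up by name in the server catalog.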
|
This patch changes Spark's build to make Scala 2.11 the default Scala version. To be clear, this does not mean that Spark will stop supporting Scala 2.10: users will still be able to compile Spark for Scala 2.10 by following the instructions on the "Building Spark" page; however, it does mean that Scala 2.11 will be the default Scala version used by our CI builds (including pull request builds).
The Scala 2.11 compiler is faster than 2.10, so I think we'll be able to look forward to a slight speedup in our CI builds (it looks like it's about 2X faster for the Maven compile-only builds, for instance).
After this patch is merged, I'll update Jenkins to add new compile-only jobs to ensure that Scala 2.10 compilation doesn't break.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #10608 from JoshRosen/SPARK-6363.
|
We can handle PostgreSQL-specific enum types as strings in JDBC.
So, we should just add tests and close the corresponding JIRA ticket.
Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
Closes #10596 from maropu/AddTestsInIntegration.
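The "enums as strings" behavior referenced above rests on the fact that the JDBC driver reports a Postgres enum column as `java.sql.Types.OTHER`, whose values can be read back as plain strings. A sketch of that decision, using a hypothetical helper rather than Spark's actual `PostgresDialect` code:

```java
import java.sql.Types;

// Illustrative sketch: when the driver reports an enum (or other exotic
// Postgres type) as Types.OTHER, fall back to treating it as a string.
public class EnumAsString {
    static String decideCatalystType(int sqlType, String typeName) {
        if (sqlType == Types.OTHER) {
            // Postgres enums surface as OTHER; read their values as text.
            return "StringType";
        }
        return "unhandled:" + typeName;
    }

    public static void main(String[] args) {
        System.out.println(decideCatalystType(Types.OTHER, "my_enum")); // StringType
    }
}
```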
|
https://issues.apache.org/jira/browse/SPARK-12747
The Postgres JDBC driver uses "FLOAT4" and "FLOAT8" as type names, not "real".
Author: Liang-Chi Hsieh <viirya@gmail.com>
Closes #10695 from viirya/fix-postgres-jdbc.
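The mapping implied by the fix can be sketched as follows; the helper and the type-name strings are illustrative, not the actual patch:

```java
// Sketch, assuming a name-based dialect lookup: map the driver-reported
// "FLOAT4"/"FLOAT8" names to single- and double-precision types instead
// of expecting Postgres to use "real".
public class FloatNames {
    static String catalystTypeFor(String pgTypeName) {
        switch (pgTypeName.toUpperCase()) {
            case "FLOAT4": return "FloatType";   // 4-byte single precision
            case "FLOAT8": return "DoubleType";  // 8-byte double precision
            default:       return "unhandled";
        }
    }

    public static void main(String[] args) {
        System.out.println(catalystTypeFor("float4")); // FloatType
        System.out.println(catalystTypeFor("FLOAT8")); // DoubleType
    }
}
```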
|
Author: Marcelo Vanzin <vanzin@cloudera.com>
Closes #10582 from vanzin/SPARK-3873-tests.
|
Author: Reynold Xin <rxin@databricks.com>
Closes #10387 from rxin/version-bump.
|
This commit fixes `docker-client` dependency issues which prevented the Docker-based JDBC integration tests from running in the Maven build.
Author: Mark Grover <mgrover@cloudera.com>
Closes #9876 from markgrover/master_docker.
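The usual way to sidestep `docker-client`'s transitive-dependency conflicts is to depend on its shaded artifact. A sketch of the Maven fragment, where the `shaded` classifier is the key detail and the exact coordinates are an assumption, not a quote from Spark's pom:

```xml
<!-- Illustrative sketch: the shaded docker-client artifact bundles and
     relocates its transitive dependencies so they cannot clash with
     Spark's own versions. Used only by the integration tests. -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>docker-client</artifactId>
  <classifier>shaded</classifier>
  <scope>test</scope>
</dependency>
```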
|
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9783 from cloud-fan/postgre.
|
Add ARRAY support to `PostgresDialect`.
Nested ARRAY is not allowed for now because it's hard to get the array dimension info. See http://stackoverflow.com/questions/16619113/how-to-get-array-base-type-in-postgres-via-jdbc
Thanks for the initial work from mariusvniekerk !
Close https://github.com/apache/spark/pull/9137
Author: Wenchen Fan <wenchen@databricks.com>
Closes #9662 from cloud-fan/postgre.
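One detail behind ARRAY support is how the element type is recovered: Postgres reports an array column's type name with a leading underscore (e.g. "_int4" for `int4[]`). A sketch of that recovery, handling only one dimension to match the "no nested arrays" limitation above (`elementTypeName` is a hypothetical helper, not Spark's actual code):

```java
// Illustrative sketch: strip the leading underscore Postgres uses to name
// array types, yielding the element type name for a one-dimensional array.
public class PgArrays {
    static String elementTypeName(String pgColumnTypeName) {
        if (pgColumnTypeName.startsWith("_")) {
            return pgColumnTypeName.substring(1); // "_int4" -> "int4"
        }
        throw new IllegalArgumentException("not an array type: " + pgColumnTypeName);
    }

    public static void main(String[] args) {
        System.out.println(elementTypeName("_int4")); // int4
        System.out.println(elementTypeName("_text")); // text
    }
}
```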
|
This patch re-enables tests for the Docker JDBC data source. These tests were reverted in #4872 due to transitive dependency conflicts introduced by the `docker-client` library. This patch should avoid those problems by using a version of `docker-client` which shades its transitive dependencies and by performing some build-magic to work around problems with that shaded JAR.
In addition, I significantly refactored the tests to simplify the setup and teardown code and to fix several Docker networking issues which caused problems when running in `boot2docker`.
Closes #8101.
Author: Josh Rosen <joshrosen@databricks.com>
Author: Yijie Shen <henry.yijieshen@gmail.com>
Closes #9503 from JoshRosen/docker-jdbc-tests.