author    Hossein <hossein@databricks.com>  2016-03-08 17:45:15 -0800
committer Michael Armbrust <michael@databricks.com>  2016-03-08 17:45:15 -0800
commit    cc4ab37ee78c888867e773d732e2b3ed89683fe2 (patch)
tree      307258f71eedb2b6ee624d322842ed1a136d1fa6 /sql/core/src/main/scala/org/apache
parent    982ef2b87e3a9e0f3f252a1a0f30970cafe58c52 (diff)
[SPARK-13754] Keep old data source name for backwards compatibility
## Motivation
The CSV data source was contributed by Databricks; it is the inlined version of https://github.com/databricks/spark-csv, where the data source name was `com.databricks.spark.csv`. As a result, many tables created on older versions of Spark reference that name as their source. For backwards compatibility we should keep the old name working.
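For context, existing applications reference the source by its old name, along the lines of the following sketch (the `sqlContext` handle and the file path are illustrative, not taken from this commit):

```scala
// Reading a CSV file through the old, externally-packaged provider name.
// Without a compatibility mapping, this lookup would fail once the
// source is inlined into Spark, because no class with that name exists
// on the classpath anymore.
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .load("/path/to/data.csv")
```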
## Proposed changes
`com.databricks.spark.csv` was added to the `backwardCompatibilityMap` in `DataSource.scala`, mapping the old name to the canonical class name of the built-in CSV implementation.
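The change amounts to an alias lookup performed before class resolution. A minimal, self-contained sketch of the idea (not Spark's actual code; names are illustrative):

```scala
// Illustrative alias table: old provider names map to the canonical
// class name of the current built-in implementation.
val backwardCompatibilityMap: Map[String, String] = Map(
  "com.databricks.spark.csv" ->
    "org.apache.spark.sql.execution.datasources.csv.DefaultSource"
)

// Resolve a user-supplied provider name, falling back to the name
// itself when no alias is registered.
def lookupProvider(provider: String): String =
  backwardCompatibilityMap.getOrElse(provider, provider)

// lookupProvider("com.databricks.spark.csv") yields the new class name,
// while an unaliased name such as "parquet" passes through unchanged.
```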
## Tests
A unit test was added to `CSVSuite` that parses a CSV file using the old name.
Author: Hossein <hossein@databricks.com>
Closes #11589 from falaki/SPARK-13754.
Diffstat (limited to 'sql/core/src/main/scala/org/apache')
-rw-r--r-- | sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
index e90e72dc8c..e048ee1441 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
@@ -75,7 +75,8 @@ case class DataSource(
     "org.apache.spark.sql.json" -> classOf[json.DefaultSource].getCanonicalName,
     "org.apache.spark.sql.json.DefaultSource" -> classOf[json.DefaultSource].getCanonicalName,
     "org.apache.spark.sql.parquet" -> classOf[parquet.DefaultSource].getCanonicalName,
-    "org.apache.spark.sql.parquet.DefaultSource" -> classOf[parquet.DefaultSource].getCanonicalName
+    "org.apache.spark.sql.parquet.DefaultSource" -> classOf[parquet.DefaultSource].getCanonicalName,
+    "com.databricks.spark.csv" -> classOf[csv.DefaultSource].getCanonicalName
   )

 /** Given a provider name, look up the data source class definition. */