author    Sean Zhong <seanzhong@databricks.com>  2016-06-14 09:10:27 -0700
committer Yin Huai <yhuai@databricks.com>        2016-06-14 09:10:27 -0700
commit    6e8cdef0cf36f6e921d9e1a65c61b66196935820 (patch)
tree      a15e8367d45bcac6308703b35453f7cafece9a7e /project
parent    53bb03084796231f724ff8369490df520e1ee33c (diff)
[SPARK-15914][SQL] Add deprecated method back to SQLContext for backward source code compatibility
## What changes were proposed in this pull request?

Revert partial changes in SPARK-12600, and add some deprecated methods back to SQLContext for backward source code compatibility.

## How was this patch tested?

Manual test.

Author: Sean Zhong <seanzhong@databricks.com>

Closes #13637 from clockfly/SPARK-15914.
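The compatibility situation this commit addresses can be sketched as follows. In Spark 2.0, `DataFrame` became a type alias for `Dataset[Row]`, so a restored deprecated method keeps compiling for callers (source compatibility) while its bytecode signature changes (binary incompatibility, which MiMa flags as `IncompatibleResultTypeProblem`). This is a minimal, self-contained sketch with hypothetical names (`CompatSketch`, `SQLContextLike`), not Spark's actual implementation:

```scala
// Minimal sketch of why a type alias preserves source compatibility
// but breaks binary compatibility. Names here are hypothetical.
object CompatSketch {
  class Dataset[T](val value: T)

  // In Spark 2.0, DataFrame is an alias: type DataFrame = Dataset[Row].
  // At the bytecode level the alias erases to Dataset, so the method's
  // binary return type differs from the pre-2.0 DataFrame class.
  type DataFrame = Dataset[String]

  class SQLContextLike {
    // A deprecated method "added back": old call sites still compile
    // against it, even though MiMa reports an incompatible result type.
    @deprecated("Use read.json() instead", "1.4.0")
    def jsonFile(path: String): DataFrame = new Dataset(path)
  }

  def main(args: Array[String]): Unit = {
    // Source-compatible usage: the old-style call still type-checks.
    val df: DataFrame = new SQLContextLike().jsonFile("people.json")
    println(df.value) // prints "people.json"
  }
}
```

This is why the commit pairs the restored methods with MiMa exclusions: the binary break is known and accepted, and the filters below silence it explicitly.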
Diffstat (limited to 'project')
-rw-r--r--  project/MimaExcludes.scala | 9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 9d0d9b1be0..a6209d78e1 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -778,6 +778,15 @@ object MimaExcludes {
) ++ Seq(
ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.mllib.linalg.Vector.asBreeze"),
ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.mllib.linalg.Matrix.asBreeze")
+ ) ++ Seq(
+ // [SPARK-15914] Binary compatibility is broken since consolidation of Dataset and DataFrame
+ // in Spark 2.0. However, source level compatibility is still maintained.
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.load"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.jsonRDD"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.jsonFile"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.jdbc"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.parquetFile"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.applySchema")
)
case v if v.startsWith("1.6") =>
Seq(