author    Reynold Xin <rxin@databricks.com>  2015-01-16 21:09:06 -0800
committer Reynold Xin <rxin@databricks.com>  2015-01-16 21:09:06 -0800
commit    61b427d4b1c4934bd70ed4da844b64f0e9a377aa
tree      5068b31119fa7e2256422d4fdf18703ae64d7ab2 /project/MimaExcludes.scala
parent    ee1c1f3a04dfe80843432e349f01178e47f02443
[SPARK-5193][SQL] Remove Spark SQL Java-specific API.
After the following patches, the main (Scala) API is now usable by Java users directly:
https://github.com/apache/spark/pull/4056
https://github.com/apache/spark/pull/4054
https://github.com/apache/spark/pull/4049
https://github.com/apache/spark/pull/4030
https://github.com/apache/spark/pull/3965
https://github.com/apache/spark/pull/3958
Author: Reynold Xin <rxin@databricks.com>
Closes #4065 from rxin/sql-java-api and squashes the following commits:
b1fd860 [Reynold Xin] Fix Mima
6d86578 [Reynold Xin] Ok one more attempt in fixing Python...
e8f1455 [Reynold Xin] Fix Python again...
3e53f91 [Reynold Xin] Fixed Python.
83735da [Reynold Xin] Fix BigDecimal test.
e9f1de3 [Reynold Xin] Use scala BigDecimal.
500d2c4 [Reynold Xin] Fix Decimal.
ba3bfa2 [Reynold Xin] Updated javadoc for RowFactory.
c4ae1c5 [Reynold Xin] [SPARK-5193][SQL] Remove Spark SQL Java-specific API.
Diffstat (limited to 'project/MimaExcludes.scala')
-rw-r--r--  project/MimaExcludes.scala  4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index d3ea594245..0ccbfcb0c4 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -78,6 +78,10 @@ object MimaExcludes {
           "org.apache.spark.TaskContext.taskAttemptId"),
         ProblemFilters.exclude[MissingMethodProblem](
           "org.apache.spark.TaskContext.attemptNumber")
+      ) ++ Seq(
+        // SPARK-5166 Spark SQL API stabilization
+        ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.Transformer.transform"),
+        ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.Estimator.fit")
       )

     case v if v.startsWith("1.2") =>
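For context on the hunk above: each `ProblemFilters.exclude[...]` entry tells MiMa (the Migration Manager binary-compatibility checker) to suppress one reported incompatibility, keyed by a problem type and a fully qualified member name. A minimal sketch of how such a filter list is built, using the MiMa core API (the `ExampleExcludes` object and its contents are illustrative, not the exact file contents):

    // Illustrative sketch of a MimaExcludes-style filter list, assuming the
    // sbt-mima-plugin core classes are on the build classpath.
    import com.typesafe.tools.mima.core._

    object ExampleExcludes {
      val excludes = Seq(
        // Suppress a report that a method's parameter types changed between releases:
        ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.Estimator.fit"),
        // Suppress a report that a method was removed outright:
        ProblemFilters.exclude[MissingMethodProblem]("org.apache.spark.TaskContext.attemptNumber")
      )
    }

The `++ Seq(...)` shape in the diff simply appends a new group of filters, with a comment naming the JIRA ticket that justifies them, to the existing per-version exclusion list.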