path: root/project/MimaExcludes.scala
author    Reynold Xin <rxin@databricks.com>  2015-01-13 17:16:41 -0800
committer Reynold Xin <rxin@databricks.com>  2015-01-13 17:16:41 -0800
commit    f9969098c8cb15e36c718b80c6cf5b534a6cf7c3 (patch)
tree      f7335abaafcd8e044d09565f6f1f21f24d5bc544 /project/MimaExcludes.scala
parent    14e3f114efb906937b2d7b7ac04484b2814a3b48 (diff)
[SPARK-5123][SQL] Reconcile Java/Scala API for data types.
Having two versions of the data type APIs (one for Java, one for Scala) requires downstream libraries to also have two versions of their APIs if they want to support both Java and Scala. I took a look at the Scala version of the data type APIs - it can actually work out pretty well for Java out of the box.

As part of the PR, I created a sql.types package and moved all type definitions there. I then removed the Java-specific data type API along with a lot of the conversion code.

This subsumes https://github.com/apache/spark/pull/3925

Author: Reynold Xin <rxin@databricks.com>

Closes #3958 from rxin/SPARK-5123-datatype-2 and squashes the following commits:

66505cc [Reynold Xin] [SPARK-5123] Expose only one version of the data type APIs (i.e. remove the Java-specific API).
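For illustration, a minimal sketch (not part of the patch) of what the unified API looks like after the move to sql.types: the same classes serve both Scala and Java callers, so downstream libraries need only one set of bindings. The schema below is a made-up example.

import org.apache.spark.sql.types._

// Scala callers construct schemas directly from the type classes.
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("name", StringType, nullable = true)))

// Java callers can build the same schema through the DataTypes factory methods,
// e.g. DataTypes.createStructField("id", DataTypes.IntegerType, false),
// so no separate Java-specific data type API is needed.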
Diffstat (limited to 'project/MimaExcludes.scala')
-rw-r--r--  project/MimaExcludes.scala  12
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 51e8bd4cf6..f6f9f491f4 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -60,6 +60,18 @@ object MimaExcludes {
ProblemFilters.exclude[IncompatibleResultTypeProblem](
"org.apache.spark.streaming.flume.sink.SparkAvroCallbackHandler." +
"removeAndGetProcessor")
+ ) ++ Seq(
+ // SPARK-5123 (SparkSQL data type change) - alpha component only
+ ProblemFilters.exclude[IncompatibleResultTypeProblem](
+ "org.apache.spark.ml.feature.HashingTF.outputDataType"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem](
+ "org.apache.spark.ml.feature.Tokenizer.outputDataType"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.ml.feature.Tokenizer.validateInputType"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.ml.classification.LogisticRegressionModel.validateAndTransformSchema"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem](
+ "org.apache.spark.ml.classification.LogisticRegression.validateAndTransformSchema")
)
case v if v.startsWith("1.2") =>
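For context, a minimal, self-contained sketch of how one of these MiMa exclusions is expressed; MimaExcludes.scala collects sequences of such filters per Spark version, and the wrapper value below is illustrative rather than copied from the file.

import com.typesafe.tools.mima.core._

// Suppresses the binary-compatibility warning for a result type that changed
// when the data types moved to org.apache.spark.sql.types (SPARK-5123).
val sparkSqlTypeExcludes = Seq(
  ProblemFilters.exclude[IncompatibleResultTypeProblem](
    "org.apache.spark.ml.feature.HashingTF.outputDataType"))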