author     Cheng Lian <lian@databricks.com>    2016-03-11 22:17:50 +0800
committer  Cheng Lian <lian@databricks.com>    2016-03-11 22:17:50 +0800
commit     6d37e1eb90054cdb6323b75fb202f78ece604b15 (patch)
tree       1a93192d453c0ad68929b38fd1346af82314131b /project
parent     07f1c5447753a3d593cd6ececfcb03c11b1cf8ff (diff)
[SPARK-13817][BUILD][SQL] Re-enable MiMA and remove object DataFrame
## What changes were proposed in this pull request?

PR #11443 temporarily disabled the MiMA check; this PR re-enables it.

One extra change is that `object DataFrame` is also removed. The only purpose of introducing `object DataFrame` was to use it as an internal factory for creating `Dataset[Row]`. By replacing this internal factory with `Dataset.newDataFrame`, both `DataFrame` and `DataFrame$` are removed from the API entirely, so a single `MissingClassProblem` filter in `MimaExcludes.scala` covers most DataFrame API changes.

## How was this patch tested?

Tested by the MiMA check triggered by Jenkins.

Author: Cheng Lian <lian@databricks.com>

Closes #11656 from liancheng/re-enable-mima.
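For context, the sketch below shows the general shape such an internal factory takes. It is a minimal illustration, not Spark's actual source: the `Dataset` constructor and `executePlan` it relies on are `private[sql]`, so code like this only compiles inside Spark's own `org.apache.spark.sql` package, and the exact signature is an assumption.

    package org.apache.spark.sql

    import org.apache.spark.sql.catalyst.encoders.RowEncoder
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

    object DataFrameFactorySketch {
      // Hypothetical stand-in for Dataset.newDataFrame: construct a Dataset[Row]
      // (i.e. a DataFrame) straight from a logical plan, making a dedicated
      // `object DataFrame` factory unnecessary.
      def newDataFrame(sqlContext: SQLContext, logicalPlan: LogicalPlan): Dataset[Row] = {
        val qe = sqlContext.executePlan(logicalPlan) // analyze and plan the query
        new Dataset[Row](sqlContext, qe, RowEncoder(qe.analyzed.schema))
      }
    }

With the factory folded into the `Dataset` companion, the public name `DataFrame` survives only as the alias `type DataFrame = Dataset[Row]`, which is why one `MissingClassProblem` filter per removed class suffices in the diff below.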
Diffstat (limited to 'project')
-rw-r--r--  project/MimaExcludes.scala  22
1 file changed, 22 insertions, 0 deletions
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 45776fbb9f..567a717b9d 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -296,6 +296,28 @@ object MimaExcludes {
// SPARK-12073: backpressure rate controller consumes events preferentially from lagging partitions
ProblemFilters.exclude[MissingMethodProblem]("org.apache.spark.streaming.kafka.KafkaTestUtils.createTopic"),
ProblemFilters.exclude[MissingMethodProblem]("org.apache.spark.streaming.kafka.DirectKafkaInputDStream.maxMessagesPerPartition")
+ ) ++ Seq(
+ // [SPARK-13244][SQL] Migrates DataFrame to Dataset
+ ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.DataFrameHolder.apply"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.DataFrameHolder.toDF"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.DataFrameHolder.toDF"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.DataFrameHolder.copy"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.DataFrameHolder.copy$default$1"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.DataFrameHolder.df$1"),
+ ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.DataFrameHolder.this"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.tables"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.tables"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.sql"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.baseRelationToDataFrame"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.SQLContext.table"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.DataFrame.apply"),
+
+ ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.DataFrame"),
+ ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.DataFrame$"),
+
+ ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.mllib.evaluation.MultilabelMetrics.this"),
+ ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.classification.LogisticRegressionSummary.predictions"),
+ ProblemFilters.exclude[MissingMethodProblem]("org.apache.spark.ml.classification.LogisticRegressionSummary.predictions")
)
case v if v.startsWith("1.6") =>
Seq(
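For readers unfamiliar with how these filters take effect: the build feeds the list returned by `MimaExcludes.excludes(version)` into sbt-mima-plugin, which compares the current artifacts against a previous release and suppresses every reported problem that matches a filter. Below is a minimal sbt sketch of that wiring; the key names follow typical sbt-mima-plugin usage of that era (Spark's real wiring lives in `MimaBuild.scala`), so treat them as assumptions rather than Spark's exact build code.

    // build.sbt sketch, assuming sbt-mima-plugin is on the plugin classpath.
    // Key names changed across plugin versions (e.g. binaryIssueFilters later
    // became mimaBinaryIssueFilters), so this shows the shape, not exact code.
    import com.typesafe.tools.mima.plugin.MimaPlugin.mimaDefaultSettings
    import com.typesafe.tools.mima.plugin.MimaKeys._

    mimaDefaultSettings

    // Compare the current build against the previous released artifact...
    previousArtifact := Some("org.apache.spark" %% "spark-sql" % "1.6.0")

    // ...and suppress the known, intentional breakages listed in MimaExcludes.
    binaryIssueFilters ++= MimaExcludes.excludes(version.value)

Running the check is then a matter of invoking the plugin's report task (`mimaReportBinaryIssues` in current plugin versions; Spark wraps the whole check in its `dev/mima` script), and any binary incompatibility not covered by a filter fails the build.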