author    Cheng Lian <lian@databricks.com>  2016-04-19 17:32:23 -0700
committer Yin Huai <yhuai@databricks.com>   2016-04-19 17:32:23 -0700
commit    10f273d8db999cdc2e6c73bdbe98757de5d11676 (patch)
tree      09150dc6ea97e6959b92aacdaed303c01207c611 /project/MimaExcludes.scala
parent    3664142350afb6bf40a8bcb3508b56670603dae4 (diff)
[SPARK-14407][SQL] Hides HadoopFsRelation related data source API into execution/datasources package #12178
## What changes were proposed in this pull request?
This PR moves `HadoopFsRelation` related data source API into `execution/datasources` package.
Note that to avoid conflicts, this PR is based on #12153. The effective changes for this PR consist only of the last three commits. Will rebase after merging #12153.
## How was this patch tested?
Existing tests.
Author: Yin Huai <yhuai@databricks.com>
Author: Cheng Lian <lian@databricks.com>
Closes #12361 from liancheng/spark-14407-hide-hadoop-fs-relation.
Diffstat (limited to 'project/MimaExcludes.scala')
-rw-r--r--  project/MimaExcludes.scala | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index b2c80afb53..7b15f58558 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -652,6 +652,10 @@ object MimaExcludes {
         ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.status.api.v1.TaskMetricDistributions.shuffleWriteMetrics"),
         ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.status.api.v1.TaskMetricDistributions.shuffleReadMetrics"),
         ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.status.api.v1.TaskMetricDistributions.this")
+      ) ++ Seq(
+        // [SPARK-14407] Hides HadoopFsRelation related data source API into execution package
+        ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.OutputWriter"),
+        ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.OutputWriterFactory")
       )
     case v if v.startsWith("1.6") =>
       Seq(
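The added filters follow MiMa's usual exclusion pattern: when a public class is moved out of a user-visible package, MiMa reports a `MissingClassProblem` at the class's old fully-qualified name, and the break is whitelisted in `MimaExcludes.scala` because the removal is intentional. A minimal sketch of what such an entry looks like is below; the `org.example.*` class names are illustrative placeholders, not identifiers from this PR, and the filter types are the same ones used in the diff above.

```scala
// Hypothetical sketch of a MiMa exclusion list, assuming the
// com.typesafe.tools.mima.core API shown in the diff above.
import com.typesafe.tools.mima.core._

object ExampleExcludes {
  lazy val excludes = Seq(
    // A class deliberately moved from a public package into an
    // internal one is reported as missing at its OLD location;
    // the exclusion acknowledges the intentional break.
    ProblemFilters.exclude[MissingClassProblem](
      "org.example.sources.MovedWriter"),
    // Signature changes on a surviving class are excluded
    // per-member rather than per-class.
    ProblemFilters.exclude[IncompatibleResultTypeProblem](
      "org.example.api.Metrics.summary")
  )
}
```

Exclusions like these only silence the binary-compatibility check; downstream code that still links against the old `org.apache.spark.sql.sources` names must be updated to the new package.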