| author | Nezih Yigitbasi <nyigitbasi@netflix.com> | 2016-04-19 14:35:26 -0700 |
|---|---|---|
| committer | Reynold Xin <rxin@databricks.com> | 2016-04-19 14:35:26 -0700 |
| commit | 3c91afec20607e0d853433a904105ee22df73c73 (patch) | |
| tree | 428fd278fbffed6115e6fbf6b6b2bd9a3903c0f4 /project | |
| parent | 0b8369d8548c0204b9c24d826c731063b72360b8 (diff) | |
[SPARK-14042][CORE] Add custom coalescer support
## What changes were proposed in this pull request?
This PR adds support for passing an optional custom coalescer to the `coalesce()` method. For now the feature is available only on the `RDD` API; once the details are settled, it can be extended to the other APIs (`Dataset`, etc.).
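The PR's exact trait signature is not shown in this excerpt (only the MiMa excludes for `org.apache.spark.rdd.PartitionCoalescer` appear below). As a rough illustration, a custom coalescer in the shape of the `PartitionCoalescer` abstraction that eventually shipped in Spark 2.0 might look like the following sketch; the class name `RoundRobinCoalescer` and the grouping strategy are invented for the example, and details in this PR may differ:

```scala
import org.apache.spark.rdd.{PartitionCoalescer, PartitionGroup, RDD}

// Hypothetical coalescer that packs the parent's partitions into at most
// maxPartitions groups in round-robin order. Real coalescers would usually
// also consider locality (PartitionGroup's preferred location).
class RoundRobinCoalescer extends PartitionCoalescer with Serializable {
  override def coalesce(maxPartitions: Int, parent: RDD[_]): Array[PartitionGroup] = {
    val groups = Array.fill(maxPartitions)(new PartitionGroup())
    parent.partitions.zipWithIndex.foreach { case (part, i) =>
      groups(i % maxPartitions).partitions += part
    }
    // Drop any empty groups (when the parent has fewer partitions than maxPartitions).
    groups.filter(_.numPartitions > 0)
  }
}

// Usage sketch: pass the coalescer through the new optional parameter.
// rdd.coalesce(4, shuffle = false, partitionCoalescer = Some(new RoundRobinCoalescer()))
```

This requires Spark on the classpath, so it is a compile-time sketch rather than a self-contained runnable program.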
## How was this patch tested?
Added a unit test for this functionality.
/cc rxin (per our discussion on the mailing list)
Author: Nezih Yigitbasi <nyigitbasi@netflix.com>
Closes #11865 from nezihyigitbasi/custom_coalesce_policy.
Diffstat (limited to 'project')
-rw-r--r-- | project/MimaExcludes.scala | 4 |
1 file changed, 4 insertions, 0 deletions
```diff
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index ff35dc010d..b2c80afb53 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -49,6 +49,10 @@ object MimaExcludes {
         "org.apache.spark.status.api.v1.ApplicationAttemptInfo.this"),
       ProblemFilters.exclude[MissingMethodProblem](
         "org.apache.spark.status.api.v1.ApplicationAttemptInfo.<init>$default$5"),
+      // SPARK-14042 Add custom coalescer support
+      ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.rdd.RDD.coalesce"),
+      ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.rdd.PartitionCoalescer$LocationIterator"),
+      ProblemFilters.exclude[IncompatibleTemplateDefProblem]("org.apache.spark.rdd.PartitionCoalescer"),
       // SPARK-12600 Remove SQL deprecated methods
       ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.SQLContext$QueryExecution"),
       ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.SQLContext$SparkPlanner"),
```