author: Andrew Or <andrew@databricks.com> (2015-08-13 17:42:01 -0700)
committer: Reynold Xin <rxin@databricks.com> (2015-08-13 17:42:01 -0700)
commit: 8187b3ae477e2b2987ae9acc5368d57b1d5653b2
tree: e80b71bbbfbf39b0fdca5a5bfca567ae8e0ca6a3 /project/MimaExcludes.scala
parent: c50f97dafd2d5bf5a8351efcc1c8d3e2b87efc72
[SPARK-9580] [SQL] Replace singletons in SQL tests
A fundamental limitation of the existing SQL tests is that *there is simply no way to create your own `SparkContext`*. This is a serious limitation because the user may wish to use a different master or config. As a case in point, `BroadcastJoinSuite` is entirely commented out because there is no way to make it pass with the existing infrastructure.
This patch removes the singletons `TestSQLContext` and `TestData`, and instead introduces a `SharedSQLContext` that starts a context per suite. Unfortunately the singletons were so ingrained in the SQL tests that this patch necessarily had to touch *all* the SQL test files.
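The per-suite context described above can be sketched as a ScalaTest mixin. This is a minimal illustration of the pattern, not Spark's actual `SharedSQLContext`; the trait name `SharedSQLContextSketch` and the `sparkConf` hook are assumptions for the sketch:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.scalatest.{BeforeAndAfterAll, Suite}

// Hypothetical sketch of a per-suite SQL context trait; Spark's real
// SharedSQLContext differs in detail.
trait SharedSQLContextSketch extends BeforeAndAfterAll { self: Suite =>

  private var _sc: SparkContext = _
  protected var sqlContext: SQLContext = _

  // Suites may override this to pick a different master or config,
  // which the old TestSQLContext singleton made impossible.
  protected def sparkConf: SparkConf =
    new SparkConf().setMaster("local[2]").setAppName(getClass.getSimpleName)

  override protected def beforeAll(): Unit = {
    super.beforeAll()
    _sc = new SparkContext(sparkConf)
    sqlContext = new SQLContext(_sc)
  }

  override protected def afterAll(): Unit = {
    try {
      if (_sc != null) _sc.stop()
    } finally {
      super.afterAll()
    }
  }
}
```

A suite mixing in this trait gets a fresh `SparkContext` in `beforeAll` and a guaranteed `stop()` in `afterAll`, so a test like `BroadcastJoinSuite` can supply its own config instead of being locked to a shared singleton.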
Author: Andrew Or <andrew@databricks.com>
Closes #8111 from andrewor14/sql-tests-refactor.
Diffstat (limited to 'project/MimaExcludes.scala')
-rw-r--r-- | project/MimaExcludes.scala | 10 |
1 file changed, 10 insertions, 0 deletions
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index 784f83c10e..88745dc086 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -179,6 +179,16 @@ object MimaExcludes {
             ProblemFilters.exclude[MissingMethodProblem](
               "org.apache.spark.SparkContext.supportDynamicAllocation")
           ) ++ Seq(
+            // SPARK-9580: Remove SQL test singletons
+            ProblemFilters.exclude[MissingClassProblem](
+              "org.apache.spark.sql.test.LocalSQLContext$SQLSession"),
+            ProblemFilters.exclude[MissingClassProblem](
+              "org.apache.spark.sql.test.LocalSQLContext"),
+            ProblemFilters.exclude[MissingClassProblem](
+              "org.apache.spark.sql.test.TestSQLContext"),
+            ProblemFilters.exclude[MissingClassProblem](
+              "org.apache.spark.sql.test.TestSQLContext$")
+          ) ++ Seq(
             // SPARK-9704 Made ProbabilisticClassifier, Identifiable, VectorUDT public APIs
             ProblemFilters.exclude[IncompatibleResultTypeProblem](
               "org.apache.spark.mllib.linalg.VectorUDT.serialize")