author | Xin Ren <iamshrek@126.com> | 2016-04-28 10:49:58 -0700
---|---|---
committer | Josh Rosen <joshrosen@databricks.com> | 2016-04-28 10:50:06 -0700
commit | 5743352a28fffbfbaca2201208ce7a1d7893f813 (patch) |
tree | b0c72f3dcbfff8e8e43e38335da819a80684c57f |
parent | bed0b002023441f8c4bd132e9d209222943b20f7 (diff) |
[SPARK-14935][CORE] DistributedSuite "local-cluster format" shouldn't actually launch clusters
https://issues.apache.org/jira/browse/SPARK-14935
In DistributedSuite, the "local-cluster format" test actually launches a series of clusters, but this isn't necessary for what should just be a unit test of a regex. We should clean up the code so that this is testable without actually launching a cluster, which should save us about 20 seconds per build.
Passed the unit test on my local machine.
Author: Xin Ren <iamshrek@126.com>
Closes #12744 from keypointt/SPARK-14935.
-rw-r--r-- | core/src/test/scala/org/apache/spark/DistributedSuite.scala | 27
1 file changed, 15 insertions(+), 12 deletions(-)
```diff
diff --git a/core/src/test/scala/org/apache/spark/DistributedSuite.scala b/core/src/test/scala/org/apache/spark/DistributedSuite.scala
index 2110d3d770..a0086e1843 100644
--- a/core/src/test/scala/org/apache/spark/DistributedSuite.scala
+++ b/core/src/test/scala/org/apache/spark/DistributedSuite.scala
@@ -51,18 +51,21 @@ class DistributedSuite extends SparkFunSuite with Matchers with LocalSparkContex
   }
 
   test("local-cluster format") {
-    sc = new SparkContext("local-cluster[2,1,1024]", "test")
-    assert(sc.parallelize(1 to 2, 2).count() == 2)
-    resetSparkContext()
-    sc = new SparkContext("local-cluster[2 , 1 , 1024]", "test")
-    assert(sc.parallelize(1 to 2, 2).count() == 2)
-    resetSparkContext()
-    sc = new SparkContext("local-cluster[2, 1, 1024]", "test")
-    assert(sc.parallelize(1 to 2, 2).count() == 2)
-    resetSparkContext()
-    sc = new SparkContext("local-cluster[ 2, 1, 1024 ]", "test")
-    assert(sc.parallelize(1 to 2, 2).count() == 2)
-    resetSparkContext()
+    import SparkMasterRegex._
+
+    val masterStrings = Seq(
+      "local-cluster[2,1,1024]",
+      "local-cluster[2 , 1 , 1024]",
+      "local-cluster[2, 1, 1024]",
+      "local-cluster[ 2, 1, 1024 ]"
+    )
+
+    masterStrings.foreach {
+      case LOCAL_CLUSTER_REGEX(numSlaves, coresPerSlave, memoryPerSlave) =>
+        assert(numSlaves.toInt == 2)
+        assert(coresPerSlave.toInt == 1)
+        assert(memoryPerSlave.toInt == 1024)
+    }
   }
 
   test("simple groupByKey") {
```
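The new test exercises Scala's regex extractor pattern: matching a master string against `LOCAL_CLUSTER_REGEX` binds the three capture groups directly. A minimal standalone sketch of the technique — the object name and the exact pattern here are assumptions for illustration and only approximate Spark's `SparkMasterRegex.LOCAL_CLUSTER_REGEX`:

```scala
// Hypothetical sketch: a regex extractor for "local-cluster[N, C, M]" master
// strings. The pattern below is an approximation of Spark's
// SparkMasterRegex.LOCAL_CLUSTER_REGEX; Spark's actual definition may differ.
object LocalClusterRegexSketch {
  // Three numeric fields, with optional whitespace around each one.
  private val LocalCluster =
    """local-cluster\[\s*([0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*\]""".r

  // Returns (numSlaves, coresPerSlave, memoryPerSlave) when the string matches.
  // In a match expression, a Regex extractor anchors to the whole string.
  def parse(master: String): Option[(Int, Int, Int)] = master match {
    case LocalCluster(n, c, m) => Some((n.toInt, c.toInt, m.toInt))
    case _                     => None
  }

  def main(args: Array[String]): Unit = {
    println(parse("local-cluster[ 2, 1, 1024 ]")) // Some((2,1,1024))
    println(parse("local[4]"))                    // None
  }
}
```

Because the parse step is a pure function of the string, it can be asserted on directly, with no `SparkContext` and no cluster launch.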