author     Shixiong Zhu <shixiong@databricks.com>  2016-08-31 23:25:20 -0700
committer  Shixiong Zhu <shixiong@databricks.com>  2016-08-31 23:25:20 -0700
commit     21c0a4fe9d8e21819ba96e7dc2b1f2999d3299ae (patch)
tree       496883a6da800226a51ea4564b3d18eaff26ebb2 /repl/scala-2.11
parent     aaf632b2132750c697dddd0469b902d9308dbf36 (diff)
[SPARK-17318][TESTS] Fix ReplSuite replicating blocks of object with class defined in repl again
## What changes were proposed in this pull request?

After digging into the logs, I noticed that the failure occurs because this test starts a local cluster with 2 executors, but when the SparkContext is created, the executors may not be up yet. If one of the executors is not up while the job runs, the blocks won't be replicated. This PR adds a wait loop before running the job to fix the flaky test.

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #14905 from zsxwing/SPARK-17318-2.
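For reference, the wait loop this patch adds can be read as a small standalone helper. The sketch below is illustrative only and is not part of the patch: the helper name `awaitExecutors` and its parameters are hypothetical, and it assumes a running `SparkContext` (the REPL's `sc`) on a Spark version where `getExecutorStorageStatus` is still available (it was removed in later releases).

```scala
import java.util.concurrent.TimeoutException
import org.apache.spark.SparkContext

// Hypothetical helper mirroring the patch's wait loop: poll until the
// expected number of storage statuses is reported, or give up after the
// timeout. getExecutorStorageStatus counts the driver as well, so a
// local-cluster[2,1,1024] with 2 executors yields an expected size of 3.
def awaitExecutors(sc: SparkContext, expected: Int, timeoutMs: Long = 60000L): Unit = {
  val start = System.currentTimeMillis
  while (sc.getExecutorStorageStatus.size != expected &&
      (System.currentTimeMillis - start) < timeoutMs) {
    Thread.sleep(10)
  }
  if (System.currentTimeMillis - start >= timeoutMs) {
    throw new TimeoutException(s"Executors were not up in $timeoutMs ms")
  }
}

// Usage in the test would then be: awaitExecutors(sc, expected = 3)
```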
Diffstat (limited to 'repl/scala-2.11')
-rw-r--r--  repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala  9
1 file changed, 9 insertions, 0 deletions
diff --git a/repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala b/repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala
index f1284b1df3..f7d7a4f041 100644
--- a/repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala
+++ b/repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala
@@ -399,6 +399,15 @@ class ReplSuite extends SparkFunSuite {
test("replicating blocks of object with class defined in repl") {
val output = runInterpreter("local-cluster[2,1,1024]",
"""
+ |val timeout = 60000 // 60 seconds
+ |val start = System.currentTimeMillis
+ |while(sc.getExecutorStorageStatus.size != 3 &&
+ | (System.currentTimeMillis - start) < timeout) {
+ | Thread.sleep(10)
+ |}
+ |if (System.currentTimeMillis - start >= timeout) {
+ | throw new java.util.concurrent.TimeoutException("Executors were not up in 60 seconds")
+ |}
|import org.apache.spark.storage.StorageLevel._
|case class Foo(i: Int)
|val ret = sc.parallelize((1 to 100).map(Foo), 10).persist(MEMORY_AND_DISK_2)