| author | Josh Rosen <joshrosen@databricks.com> | 2015-11-18 15:55:41 -0800 |
|---|---|---|
| committer | Andrew Or <andrew@databricks.com> | 2015-11-18 15:55:41 -0800 |
| commit | 3a9851936ddfe5bcb6a7f364d535fac977551f5d (patch) | |
| tree | 1de10ca1795de6eeafe65bfa6c09697f0505b507 /core/src/test | |
| parent | 7e987de1770f4ab3d54bc05db8de0a1ef035941d (diff) | |
[SPARK-11649] Properly set Akka frame size in SparkListenerSuite test
SparkListenerSuite's _"onTaskGettingResult() called when result fetched remotely"_ test was extremely slow (1 to 4 minutes to run) and recently became extremely flaky, frequently failing with OutOfMemoryError.

The root cause was that the test used `System.setProperty` to set the Akka frame size, which did not actually modify the frame size. As a result, the test allocated far more data than necessary. The fix is simply to configure the frame size through `SparkConf` instead.
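The failure mode can be illustrated without Spark or Akka at all. The sketch below uses hypothetical names (`Transport`, `demo.frameSize` — not real Spark code): a component that snapshots a system property at construction time never sees a value set afterwards, which is why `System.setProperty` inside the test was a no-op for the already-started actor system.

```scala
// Minimal sketch (hypothetical names, not Spark/Akka code) of why setting a
// system property after initialization has no effect: the value is read once,
// at construction time, and never re-read.
object FrameSizeDemo {
  // Mimics a subsystem that reads its maximum frame size when it starts up.
  class Transport {
    val frameSizeMb: Int = sys.props.getOrElse("demo.frameSize", "128").toInt
  }

  def main(args: Array[String]): Unit = {
    val transport = new Transport              // settings snapshotted here
    System.setProperty("demo.frameSize", "1")  // too late: has no effect
    println(transport.frameSizeMb)             // still the 128 default
  }
}
```

Passing the setting up front (via `SparkConf`, as the patch does) guarantees it is in place before the actor system is created.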
Author: Josh Rosen <joshrosen@databricks.com>
Closes #9822 from JoshRosen/SPARK-11649.
Diffstat (limited to 'core/src/test')
-rw-r--r-- | core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala | 5 |
1 file changed, 3 insertions, 2 deletions
```diff
diff --git a/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
index 53102b9f1c..84e545851f 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
@@ -269,14 +269,15 @@ class SparkListenerSuite extends SparkFunSuite with LocalSparkContext with Match
   }

   test("onTaskGettingResult() called when result fetched remotely") {
-    sc = new SparkContext("local", "SparkListenerSuite")
+    val conf = new SparkConf().set("spark.akka.frameSize", "1")
+    sc = new SparkContext("local", "SparkListenerSuite", conf)
     val listener = new SaveTaskEvents
     sc.addSparkListener(listener)

     // Make a task whose result is larger than the akka frame size
-    System.setProperty("spark.akka.frameSize", "1")
     val akkaFrameSize = sc.env.actorSystem.settings.config.getBytes("akka.remote.netty.tcp.maximum-frame-size").toInt
+    assert(akkaFrameSize === 1024 * 1024)
     val result = sc.parallelize(Seq(1), 1)
       .map { x => 1.to(akkaFrameSize).toArray }
       .reduce { case (x, y) => x }
```
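The new assertion pins down a unit conversion: `spark.akka.frameSize` is given in megabytes, while Akka's `maximum-frame-size` setting is reported in bytes, so a frame size of 1 must come back as 1024 * 1024. A standalone sketch of that check (plain Scala, no Spark dependency; `frameSizeBytes` is a hypothetical helper, not a Spark API):

```scala
// spark.akka.frameSize is configured in MB; Akka reports the limit in bytes.
object FrameSizeUnits {
  def frameSizeBytes(frameSizeMb: Int): Int = frameSizeMb * 1024 * 1024

  def main(args: Array[String]): Unit = {
    println(frameSizeBytes(1))  // 1048576, i.e. 1024 * 1024
  }
}
```

Asserting the converted value inside the test is what makes the fix verifiable: had the `SparkConf` setting silently failed to apply, the assertion would fail fast instead of the test slowly allocating against the much larger default frame size.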