author    | Andrew Or <andrew@databricks.com> | 2015-02-06 10:54:23 -0800
committer | Andrew Or <andrew@databricks.com> | 2015-02-06 10:55:13 -0800
commit    | fe3740c4c859d087b714c666741a29061bba5f58 (patch)
tree      | d88dd2b9c1407861441aa0554161793f994b78a0 /core
parent    | 1a88f20de798030a7d5713bd267f612ba5617fca (diff)
[SPARK-5636] Ramp up faster in dynamic allocation
A recent patch #4051 made the initial number of executors default to 0. With that change, any Spark application using dynamic allocation's default settings ramps up very slowly. Since we never request more executors than are needed to saturate the pending tasks, it is safe to ramp up quickly. The current default interval of 60 seconds may be too slow.
Author: Andrew Or <andrew@databricks.com>
Closes #4409 from andrewor14/dynamic-allocation-interval and squashes the following commits:
d3cc485 [Andrew Or] Lower request interval
Diffstat (limited to 'core')
-rw-r--r-- | core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
index 5d5288bb6e..8b38366e03 100644
--- a/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
+++ b/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
@@ -76,15 +76,15 @@ private[spark] class ExecutorAllocationManager(
   private val maxNumExecutors = conf.getInt("spark.dynamicAllocation.maxExecutors",
     Integer.MAX_VALUE)
 
-  // How long there must be backlogged tasks for before an addition is triggered
+  // How long there must be backlogged tasks for before an addition is triggered (seconds)
   private val schedulerBacklogTimeout = conf.getLong(
-    "spark.dynamicAllocation.schedulerBacklogTimeout", 60)
+    "spark.dynamicAllocation.schedulerBacklogTimeout", 5)
 
   // Same as above, but used only after `schedulerBacklogTimeout` is exceeded
   private val sustainedSchedulerBacklogTimeout = conf.getLong(
     "spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", schedulerBacklogTimeout)
 
-  // How long an executor must be idle for before it is removed
+  // How long an executor must be idle for before it is removed (seconds)
   private val executorIdleTimeout = conf.getLong(
     "spark.dynamicAllocation.executorIdleTimeout", 600)
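To see why lowering the interval matters, here is a rough back-of-the-envelope sketch (not part of Spark; the helper name is hypothetical) of how long a fully backlogged application waits before a given number of executors has been requested, assuming the allocation manager adds one executor after the initial backlog timeout and then doubles the batch size after every sustained-backlog interval:

```python
def seconds_to_reach(target, backlog_timeout, sustained_timeout):
    """Estimate seconds until `target` executors have been requested,
    assuming a 1, 2, 4, 8, ... exponential ramp-up per interval."""
    requested = 0
    batch = 1
    elapsed = backlog_timeout      # first addition fires after the initial timeout
    requested += batch
    while requested < target:
        batch *= 2                 # batch size doubles each sustained interval
        elapsed += sustained_timeout
        requested += batch
    return elapsed

# Old default (60s intervals): about 7 minutes to request 64 executors.
print(seconds_to_reach(64, backlog_timeout=60, sustained_timeout=60))  # 420
# New default (5s intervals): about half a minute.
print(seconds_to_reach(64, backlog_timeout=5, sustained_timeout=5))    # 35
```

The faster ramp-up is safe precisely because of the invariant stated in the commit message: the manager never requests more executors than are needed to saturate the pending tasks, so a short interval cannot over-allocate.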