author    Ryan Blue <blue@apache.org>  2016-06-23 14:03:46 -0500
committer Tom Graves <tgraves@yahoo-inc.com>  2016-06-23 14:03:46 -0500
commit    738f134bf4bf07bafb17e7066cf1a36e315872c2 (patch)
tree      6a1b71c2743206fe22c82ed44df56d80f6611741 /docs/running-on-yarn.md
parent    a410814c87b120cb5cfbf095b1bd94b1de862844 (diff)
[SPARK-13723][YARN] Change behavior of --num-executors with dynamic allocation.
## What changes were proposed in this pull request?

This changes the behavior of --num-executors and spark.executor.instances when using dynamic allocation. Instead of turning dynamic allocation off, the value is used as the initial number of executors. This change was discussed on [SPARK-13723](https://issues.apache.org/jira/browse/SPARK-13723). I highly recommend making this change while we can still change the behavior for 2.0.0. In practice, the 1.x behavior surprises users (it is not clear that it disables dynamic allocation) and wastes cluster resources because users rarely notice the log message.

## How was this patch tested?

This patch updates existing tests and adds a test for Utils.getDynamicAllocationInitialExecutors.

Author: Ryan Blue <blue@apache.org>

Closes #13338 from rdblue/SPARK-13723-num-executors-with-dynamic-allocation.
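The rule this patch describes (exercised by the new test for Utils.getDynamicAllocationInitialExecutors) can be pictured as taking the maximum of the settings that have a claim on the initial executor count. The sketch below is illustrative only, not Spark's actual implementation; the object name, the method shape, and the zero defaults are assumptions.

```scala
import org.apache.spark.SparkConf

object DynamicAllocationSketch {
  // Sketch of the new behavior: with dynamic allocation enabled,
  // spark.executor.instances (set by --num-executors) no longer disables
  // dynamic allocation; it only raises the initial executor count.
  def initialExecutors(conf: SparkConf): Int = {
    Seq(
      conf.getInt("spark.dynamicAllocation.minExecutors", 0),
      conf.getInt("spark.dynamicAllocation.initialExecutors", 0),
      conf.getInt("spark.executor.instances", 0)
    ).max
  }

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf(false)             // don't load system properties
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.executor.instances", "10")    // e.g. from --num-executors 10
    println(initialExecutors(conf))             // 10, with allocation still dynamic
  }
}
```

Under the previous 1.x behavior, setting spark.executor.instances in this situation would instead have switched the job back to a static pool of 10 executors.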
Diffstat (limited to 'docs/running-on-yarn.md')
-rw-r--r--  docs/running-on-yarn.md  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 9833806716..dbd46cc48c 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -244,7 +244,7 @@ To use a custom metrics.properties for the application master and executors, upd
<td><code>spark.executor.instances</code></td>
<td><code>2</code></td>
<td>
- The number of executors. Note that this property is incompatible with <code>spark.dynamicAllocation.enabled</code>. If both <code>spark.dynamicAllocation.enabled</code> and <code>spark.executor.instances</code> are specified, dynamic allocation is turned off and the specified number of <code>spark.executor.instances</code> is used.
+ The number of executors for static allocation. With <code>spark.dynamicAllocation.enabled</code>, the initial set of executors will be at least this large.
</td>
</tr>
<tr>
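To round out the documented semantics above, here is a hypothetical configuration (the values are made up) showing that an explicit instance count now coexists with dynamic allocation:

```scala
import org.apache.spark.SparkConf

object YarnConfExample extends App {
  // Hypothetical settings: dynamic allocation stays on, and the explicit
  // instance count only determines the size of the initial executor set,
  // which will be "at least this large" per the updated doc text.
  val conf = new SparkConf()
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.dynamicAllocation.minExecutors", "2")
    .set("spark.executor.instances", "10")  // same effect as --num-executors 10
}
```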