path: root/common
author: zhonghaihua <793507405@qq.com>  2016-04-01 16:23:14 -0500
committer: Tom Graves <tgraves@yahoo-inc.com>  2016-04-01 16:23:14 -0500
commit: bd7b91cefb0d192d808778e6182dcdd2c143e132 (patch)
tree: ed8f76bab3aa5042e7f3fa88b4ef2dcd5eb0ddcd /common
parent: 3e991dbc310a4a33eec7f3909adce50bf8268d04 (diff)
[SPARK-12864][YARN] initialize executorIdCounter after ApplicationMaster killed for max n…
Currently, when the number of executor failures reaches `maxNumExecutorFailures`, the `ApplicationMaster` is killed and a new one registers in its place. A new `YarnAllocator` instance is then created, but its `executorIdCounter` property is reset to `0`, so the IDs of new executors start again from `1`. These IDs clash with executors that were created earlier, which causes a `FetchFailedException`. This only happens in yarn-client mode, so it is a yarn-client-mode issue. For more details, see [SPARK-12864](https://issues.apache.org/jira/browse/SPARK-12864). This PR introduces a mechanism to initialize `executorIdCounter` after the `ApplicationMaster` is killed.

Author: zhonghaihua <793507405@qq.com>

Closes #10794 from zhonghaihua/initExecutorIdCounterAfterAMKilled.
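A minimal sketch of one way such a mechanism can work, not the code from this patch: the driver keeps track of the highest executor ID it has allocated, and a freshly created allocator asks the driver for that value before handing out new IDs. The names `RetrieveLastAllocatedExecutorId`, `DriverEndpointSketch`, and `YarnAllocatorSketch` are illustrative and are not Spark's actual messages or classes.

```scala
// Hypothetical message a new allocator sends to the driver endpoint.
case object RetrieveLastAllocatedExecutorId

class DriverEndpointSketch {
  // Driver-side record of the highest executor ID handed out so far.
  @volatile private var currentExecutorIdCounter: Int = 0

  def registerExecutor(id: Int): Unit = synchronized {
    if (id > currentExecutorIdCounter) currentExecutorIdCounter = id
  }

  // Reply with the last allocated ID so a restarted AM can continue from it.
  def receiveAndReply(msg: Any): Int = msg match {
    case RetrieveLastAllocatedExecutorId => currentExecutorIdCounter
  }
}

class YarnAllocatorSketch(driver: DriverEndpointSketch) {
  // Instead of starting from 0 on every (re)creation, seed the counter from
  // the driver so new IDs do not collide with executors that already exist.
  private var executorIdCounter: Int =
    driver.receiveAndReply(RetrieveLastAllocatedExecutorId)

  def nextExecutorId(): String = {
    executorIdCounter += 1
    driver.registerExecutor(executorIdCounter)
    executorIdCounter.toString
  }
}

object Demo extends App {
  val driver = new DriverEndpointSketch
  val alloc1 = new YarnAllocatorSketch(driver)
  alloc1.nextExecutorId(); alloc1.nextExecutorId()   // IDs 1, 2
  // The AM is killed; a new allocator is created and continues from 2, not 0.
  val alloc2 = new YarnAllocatorSketch(driver)
  println(alloc2.nextExecutorId())                   // prints 3
}
```

The point of the sketch is the seeding step: because the new allocator starts from the driver's last allocated ID, executor IDs remain unique across ApplicationMaster restarts.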
Diffstat (limited to 'common')
0 files changed, 0 insertions, 0 deletions