author    Shixiong Zhu <shixiong@databricks.com>    2016-04-05 22:32:37 -0700
committer Andrew Or <andrew@databricks.com>         2016-04-05 22:32:37 -0700
commit    48467f4eb02209a884adbcf052670a057a75fcbd (patch)
tree      e64b8b6f4ef8565540c1e2600c945f34c580fb33 /yarn
parent    adbfdb878dd1029738db3d1955d08b33de1aa8a9 (diff)
[SPARK-14416][CORE] Add thread-safe comments for CoarseGrainedSchedulerBackend's fields
## What changes were proposed in this pull request?

While reviewing #12078, I found that most of CoarseGrainedSchedulerBackend's mutable fields don't have any comments about their thread-safety assumptions, which makes it hard to figure out which parts of the code must be protected by the lock. This PR adds comments/annotations for those fields and also tightens the access modifiers of some of them.

## How was this patch tested?

Existing unit tests.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #12188 from zsxwing/comments.
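The annotation style the PR describes can be sketched roughly as follows. This is an illustrative example only: ExampleSchedulerBackend and its fields are hypothetical, not Spark's real class, and it assumes the javax.annotation.concurrent.GuardedBy annotation (JSR-305) is on the classpath.

```scala
import javax.annotation.concurrent.GuardedBy

import scala.collection.mutable

class ExampleSchedulerBackend {
  // Total number of executors that have registered so far.
  // Read and written only while holding this backend's monitor lock.
  @GuardedBy("this")
  private var totalRegisteredExecutors = 0

  // Executors we have asked to remove but whose shutdown is not yet confirmed.
  // Also guarded by the backend's monitor lock.
  @GuardedBy("this")
  private val executorsPendingToRemove = mutable.HashSet.empty[String]

  def registerExecutor(executorId: String): Unit = synchronized {
    totalRegisteredExecutors += 1
  }

  def requestRemoval(executorId: String): Unit = synchronized {
    executorsPendingToRemove += executorId
  }

  def registeredCount: Int = synchronized { totalRegisteredExecutors }
}
```

Documenting the lock on each field (and keeping the fields private) makes it obvious which accesses must go through `synchronized`, which is the readability problem the PR is addressing.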
Diffstat (limited to 'yarn')
-rw-r--r--    yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala    9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala b/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
index 5aeaf44732..8720ee57fe 100644
--- a/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
+++ b/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
@@ -39,9 +39,12 @@ private[spark] abstract class YarnSchedulerBackend(
sc: SparkContext)
extends CoarseGrainedSchedulerBackend(scheduler, sc.env.rpcEnv) {
- if (conf.getOption("spark.scheduler.minRegisteredResourcesRatio").isEmpty) {
- minRegisteredRatio = 0.8
- }
+ override val minRegisteredRatio =
+ if (conf.getOption("spark.scheduler.minRegisteredResourcesRatio").isEmpty) {
+ 0.8
+ } else {
+ super.minRegisteredRatio
+ }
protected var totalExpectedExecutors = 0
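The hunk above replaces a constructor-time reassignment of minRegisteredRatio with an overridden immutable value. A rough sketch of that pattern, using hypothetical class and configuration-key names rather than Spark's actual API, might look like this:

```scala
abstract class BaseBackend(conf: Map[String, String]) {
  // Exposed as a def so a subclass can override it with a val and still read
  // the base default via super.
  def minRegisteredRatio: Double =
    conf.get("scheduler.minRegisteredResourcesRatio").map(_.toDouble).getOrElse(0.0)
}

class YarnLikeBackend(conf: Map[String, String]) extends BaseBackend(conf) {
  // Fix the value once at construction time instead of mutating inherited
  // state in the constructor body; fall back to 0.8 when the user set nothing.
  override val minRegisteredRatio: Double =
    if (conf.get("scheduler.minRegisteredResourcesRatio").isEmpty) 0.8
    else super.minRegisteredRatio
}

object MinRatioDemo {
  def main(args: Array[String]): Unit = {
    println(new YarnLikeBackend(Map.empty).minRegisteredRatio) // 0.8 (fallback default)
    println(new YarnLikeBackend(
      Map("scheduler.minRegisteredResourcesRatio" -> "0.5")).minRegisteredRatio) // 0.5 (user setting wins)
  }
}
```

Making the field a val rather than a mutated var means there is no post-construction write to reason about, which is consistent with the PR's goal of making the thread-safety story of each field explicit.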