author     Shixiong Zhu <shixiong@databricks.com>    2016-04-08 17:18:19 -0700
committer  Shixiong Zhu <shixiong@databricks.com>    2016-04-08 17:18:19 -0700
commit     4d7c35926371f9e016577987c037abcf984443d9 (patch)
tree       7824fdcaa5d50d2a336e9830d1786945b5ea2529 /project
parent     906eef4c7a380419f2d089262afdcf39454fe31e (diff)
[SPARK-14437][CORE] Use the address that NettyBlockTransferService listens to create BlockManagerId
## What changes were proposed in this pull request?

Here is why SPARK-14437 happens: `BlockManagerId` is created using `NettyBlockTransferService.hostName`, which comes from `customHostname`, and `Executor` sets `customHostname` to the hostname detected by the driver. However, the driver may not be able to detect the correct address in some complicated network setups (Netty's `Channel.remoteAddress` doesn't always return a connectable address). In such a case, `BlockManagerId` is created with a wrong hostname.

To fix this issue, this PR uses the `hostname` provided to `SparkEnv.create` to create the `NettyBlockTransferService` and sets `NettyBlockTransferService.hostName` to it directly. A bonus of this approach is that `NettyBlockTransferService` is no longer bound to `0.0.0.0`, which is much safer.

## How was this patch tested?

Manually checked the bound address using local-cluster.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #12240 from zsxwing/SPARK-14437.
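Below is a minimal, self-contained Scala sketch of the approach described above. The class and object names, the port value, and all signatures are hypothetical simplifications, not the actual Spark sources; it only illustrates passing the hostname from the environment creator into the transfer service so the resulting `BlockManagerId` carries the listening address rather than a driver-detected one.

```scala
// Hypothetical, simplified stand-ins for the real Spark classes (sketch only).
case class BlockManagerId(executorId: String, host: String, port: Int)

class NettyBlockTransferServiceSketch(val hostName: String) {
  // In the real service this would be the port Netty actually bound to after init().
  val boundPort: Int = 7077

  def blockManagerId(executorId: String): BlockManagerId =
    // Built from the address the service listens on, so it is always connectable.
    BlockManagerId(executorId, hostName, boundPort)
}

object SparkEnvSketch {
  // SparkEnv.create-style wiring: pass the known hostname straight into the service,
  // instead of deriving it later from a Netty channel's remote address.
  def create(hostname: String): NettyBlockTransferServiceSketch =
    new NettyBlockTransferServiceSketch(hostname)

  def main(args: Array[String]): Unit = {
    val service = create("executor-host.internal")
    // Prints: BlockManagerId(exec-1,executor-host.internal,7077)
    println(service.blockManagerId("exec-1"))
  }
}
```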
Diffstat (limited to 'project')
-rw-r--r--  project/MimaExcludes.scala | 3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/project/MimaExcludes.scala b/project/MimaExcludes.scala
index a53161dc9a..f240c30427 100644
--- a/project/MimaExcludes.scala
+++ b/project/MimaExcludes.scala
@@ -615,6 +615,9 @@ object MimaExcludes {
// [SPARK-13430][ML] moved featureCol from LinearRegressionModelSummary to LinearRegressionSummary
ProblemFilters.exclude[MissingMethodProblem]("org.apache.spark.ml.regression.LinearRegressionSummary.this")
) ++ Seq(
+ // [SPARK-14437][Core] Use the address that NettyBlockTransferService listens to create BlockManagerId
+ ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.network.netty.NettyBlockTransferService.this")
+ ) ++ Seq(
// [SPARK-13048][ML][MLLIB] keepLastCheckpoint option for LDA EM optimizer
ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.clustering.DistributedLDAModel.this")
)