From a8ced76f16523c571284dccfc73d655e89ad570f Mon Sep 17 00:00:00 2001
From: Shuai Lin
Date: Wed, 7 Dec 2016 07:40:54 +0800
Subject: [SPARK-18171][MESOS] Show correct framework address in mesos master
 web ui when the advertised address is used

## What changes were proposed in this pull request?

In [SPARK-4563](https://issues.apache.org/jira/browse/SPARK-4563) we added support for the driver to advertise a hostname/ip (`spark.driver.host`) to the executors that is different from the hostname/ip the driver actually binds to (`spark.driver.bindAddress`). But the mesos webui's frameworks page still shows the driver's bind hostname/ip (though the web ui link is correct). We should fix it to make them consistent.

Before:

![mesos-spark](https://cloud.githubusercontent.com/assets/717363/19835148/4dffc6d4-9eb8-11e6-8999-f80f10e4c3f7.png)

After:

![mesos-spark2](https://cloud.githubusercontent.com/assets/717363/19835149/54596ae4-9eb8-11e6-896c-230426acd1a1.png)

This PR doesn't affect the behavior on the spark side; it only makes the display on the mesos master side more consistent.

## How was this patch tested?

Manual test.

- Build the package and build a docker image (spark:2.1-test)

``` sh
./dev/make-distribution.sh -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn -Pmesos
```

- Then run the spark driver inside a docker container.

``` sh
docker run --rm -it \
  --name=spark \
  -p 30000-30010:30000-30010 \
  -e LIBPROCESS_ADVERTISE_IP=172.17.42.1 \
  -e LIBPROCESS_PORT=30000 \
  -e MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-1.0.0.so \
  -e MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos-1.0.0.so \
  -e SPARK_HOME=/opt/dist \
  spark:2.1-test
```

- Inside the container, launch the spark driver, making use of the advertised address:

``` sh
/opt/dist/bin/spark-shell \
  --master mesos://zk://172.17.42.1:2181/mesos \
  --conf spark.driver.host=172.17.42.1 \
  --conf spark.driver.bindAddress=172.17.0.1 \
  --conf spark.driver.port=30001 \
  --conf spark.driver.blockManager.port=30002 \
  --conf spark.ui.port=30003 \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=2 \
  --conf spark.executor.cores=1 \
  --conf spark.executor.memory=1g \
  --conf spark.mesos.executor.docker.image=spark:2.1-test
```

- Run several spark jobs to ensure everything is running fine.

``` scala
val rdd = sc.textFile("file:///opt/dist/README.md")
rdd.cache().count
```

Author: Shuai Lin

Closes #15684 from lins05/spark-18171-show-correct-host-name-in-mesos-master-web-ui.

---
 .../org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala b/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
index 9cb6023704..1d742fefbb 100644
--- a/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
+++ b/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
@@ -80,6 +80,8 @@ trait MesosSchedulerUtils extends Logging {
     frameworkId.foreach { id =>
       fwInfoBuilder.setId(FrameworkID.newBuilder().setValue(id).build())
     }
+    fwInfoBuilder.setHostname(Option(conf.getenv("SPARK_PUBLIC_DNS")).getOrElse(
+      conf.get(DRIVER_HOST_ADDRESS)))
     conf.getOption("spark.mesos.principal").foreach { principal =>
       fwInfoBuilder.setPrincipal(principal)
       credBuilder.setPrincipal(principal)
--
cgit v1.2.3
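
Note: below is a minimal standalone Scala sketch, not part of the patch, of the hostname-selection precedence the two added lines implement: `SPARK_PUBLIC_DNS`, if set, takes priority over `spark.driver.host`. The `chooseFrameworkHostname` helper and its plain `Map`/`String` parameters are hypothetical stand-ins for `SparkConf.getenv` and `conf.get(DRIVER_HOST_ADDRESS)`.

``` scala
// Hypothetical standalone sketch (not part of the patch): the precedence used when
// choosing the hostname reported to the Mesos master via FrameworkInfo.setHostname.
object FrameworkHostnameSketch {

  // `env` stands in for SparkConf.getenv; `driverHost` stands in for
  // conf.get(DRIVER_HOST_ADDRESS), i.e. spark.driver.host.
  def chooseFrameworkHostname(env: Map[String, String], driverHost: String): String =
    env.getOrElse("SPARK_PUBLIC_DNS", driverHost)

  def main(args: Array[String]): Unit = {
    // Advertised address configured via spark.driver.host (as in the manual test above):
    println(chooseFrameworkHostname(Map.empty, "172.17.42.1"))
    // SPARK_PUBLIC_DNS, when set, overrides spark.driver.host:
    println(chooseFrameworkHostname(Map("SPARK_PUBLIC_DNS" -> "public.example.com"), "172.17.42.1"))
  }
}
```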