path: root/sql/hive-thriftserver
authorYadong Qi <qiyadong2010@gmail.com>2015-06-29 22:34:38 -0700
committerYin Huai <yhuai@databricks.com>2015-06-29 22:34:38 -0700
commite6c3f7462b3fde220ec0084b52388dd4dabb75b9 (patch)
treed428c9c762cb6ed6971d46d015141727fdd5daa2 /sql/hive-thriftserver
parentf79410c49b2225b2acdc58293574860230987775 (diff)
[SPARK-8650] [SQL] Use the user-specified app name priority in SparkSQLCLIDriver or HiveThriftServer2

When running `./bin/spark-sql --name query1.sql`:

[Before] ![before](https://cloud.githubusercontent.com/assets/1400819/8370336/fa20b75a-1bf8-11e5-9171-040049a53240.png)
[After] ![after](https://cloud.githubusercontent.com/assets/1400819/8370189/dcc35cb4-1bf6-11e5-8796-a0694140bffb.png)

Author: Yadong Qi <qiyadong2010@gmail.com>

Closes #7030 from watermen/SPARK-8650 and squashes the following commits:

51b5134 [Yadong Qi] Improve code and add comment.
e3d7647 [Yadong Qi] Use spark.app.name priority.
Diffstat (limited to 'sql/hive-thriftserver')
-rw-r--r-- sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala | 7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
index 79eda1f512..1d41c46131 100644
--- a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
+++ b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
@@ -38,9 +38,14 @@ private[hive] object SparkSQLEnv extends Logging {
val sparkConf = new SparkConf(loadDefaults = true)
val maybeSerializer = sparkConf.getOption("spark.serializer")
val maybeKryoReferenceTracking = sparkConf.getOption("spark.kryo.referenceTracking")
+ // If user doesn't specify the appName, we want to get [SparkSQL::localHostName] instead of
+ // the default appName [SparkSQLCLIDriver] in cli or beeline.
+ val maybeAppName = sparkConf
+ .getOption("spark.app.name")
+ .filterNot(_ == classOf[SparkSQLCLIDriver].getName)
sparkConf
- .setAppName(s"SparkSQL::${Utils.localHostName()}")
+ .setAppName(maybeAppName.getOrElse(s"SparkSQL::${Utils.localHostName()}"))
.set(
"spark.serializer",
maybeSerializer.getOrElse("org.apache.spark.serializer.KryoSerializer"))
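The fallback logic in the patch can be sketched in isolation. The snippet below is a minimal, self-contained illustration (not the actual SparkSQLEnv code): it uses a plain `Map[String, String]` in place of `SparkConf`, and `cliDriverClassName` stands in for `classOf[SparkSQLCLIDriver].getName`, the default app name that spark-submit fills in when the user does not pass `--name`.

```scala
// Sketch of the app-name resolution in the patch, assuming a plain Map
// stands in for SparkConf and cliDriverClassName for the CLI driver class.
object AppNameFallback {
  val cliDriverClassName =
    "org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver"

  // Keep spark.app.name only when the user actually set it; the default
  // value (the CLI driver's class name) is discarded so that
  // "SparkSQL::<hostname>" is used instead.
  def resolveAppName(conf: Map[String, String], hostName: String): String =
    conf.get("spark.app.name")
      .filterNot(_ == cliDriverClassName)
      .getOrElse(s"SparkSQL::$hostName")
}
```

With a user-supplied `--name query1.sql`, `spark.app.name` survives the `filterNot` and wins; when the property is absent or equals the CLI driver class name, the `SparkSQL::<hostname>` default is used.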