author    Dongjoon Hyun <dongjoon@apache.org>   2017-03-20 10:07:31 -0700
committer Marcelo Vanzin <vanzin@cloudera.com>  2017-03-20 10:07:31 -0700
commit    fc7554599a4b6e5c22aa35e7296b424a653a420b (patch)
tree      72298d92c7a53a4b03b8e53660e4b08eaaeb1cde /sql/hive
parent    7ce30e00b236e77b5175f797f9c6fc6cf4ca7e93 (diff)
[SPARK-19970][SQL] Table owner should be USER instead of PRINCIPAL in kerberized clusters
## What changes were proposed in this pull request?
In a kerberized Hadoop cluster, when Spark creates a table, the table owner is filled with the PRINCIPAL string instead of the USER name. This is inconsistent with Hive and causes problems when using [ROLE](https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization) in Hive. We should fix this.
**BEFORE**
```scala
scala> sql("create table t(a int)").show
scala> sql("desc formatted t").show(false)
...
|Owner: |spark@EXAMPLE.COM | |
```
**AFTER**
```scala
scala> sql("create table t(a int)").show
scala> sql("desc formatted t").show(false)
...
|Owner: |spark | |
```
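The fix makes the owner the short user name rather than the full principal. The patch itself delegates to Hive's `SessionState.get().getAuthenticator().getUserName()`; as an illustrative sketch only, deriving a short user name from a Kerberos principal string (the `shortUserName` helper below is hypothetical, not part of the patch) looks like:

```scala
// Hypothetical helper: derive the short user name from a Kerberos
// principal string such as "spark@EXAMPLE.COM" or
// "spark/host.example.com@EXAMPLE.COM". The actual patch does NOT parse
// the principal; it asks Hive's session authenticator for the user name.
def shortUserName(principal: String): String =
  principal.split('@').head.split('/').head

// Usage:
shortUserName("spark@EXAMPLE.COM")                   // "spark"
shortUserName("spark/node1.example.com@EXAMPLE.COM") // "spark"
```

In real deployments the principal-to-user mapping is governed by Hadoop's `hadoop.security.auth_to_local` rules, which is why deferring to the authenticator is more robust than string parsing.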
## How was this patch tested?
Manually tested with `create table` and `desc formatted`, because this only happens in Kerberized clusters.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #17311 from dongjoon-hyun/SPARK-19970.
Diffstat (limited to 'sql/hive')
-rw-r--r-- | sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
index 989fdc5564..13edcd0517 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
@@ -851,7 +851,7 @@ private[hive] object HiveClientImpl {
       hiveTable.setFields(schema.asJava)
     }
     hiveTable.setPartCols(partCols.asJava)
-    conf.foreach(c => hiveTable.setOwner(c.getUser))
+    conf.foreach { _ => hiveTable.setOwner(SessionState.get().getAuthenticator().getUserName()) }
     hiveTable.setCreateTime((table.createTime / 1000).toInt)
     hiveTable.setLastAccessTime((table.lastAccessTime / 1000).toInt)
     table.storage.locationUri.map(CatalogUtils.URIToString(_)).foreach { loc =>
```