path: root/R/pkg/inst
author    Wenchen Fan <wenchen@databricks.com>    2016-06-13 14:57:35 -0700
committer    Yin Huai <yhuai@databricks.com>    2016-06-13 14:57:35 -0700
commit    c4b1ad020962c42be804d3a1a55171d9b51b01e7 (patch)
tree    567f143a02a0b657ddc29deb37c24701b7c59fcc /R/pkg/inst
parent    c654ae2140bc184adb407fd02072b653c5359ee5 (diff)
download    spark-c4b1ad020962c42be804d3a1a55171d9b51b01e7.tar.gz
spark-c4b1ad020962c42be804d3a1a55171d9b51b01e7.tar.bz2
spark-c4b1ad020962c42be804d3a1a55171d9b51b01e7.zip
[SPARK-15887][SQL] Bring back the hive-site.xml support for Spark 2.0
## What changes were proposed in this pull request?

Right now, Spark 2.0 does not load hive-site.xml. Based on users' feedback, it seems to make sense to still load this conf file. This PR adds a `hadoopConf` API to `SharedState`, which is `sparkContext.hadoopConfiguration` by default. When users are under a Hive context, `SharedState.hadoopConf` loads hive-site.xml and appends its configs to `sparkContext.hadoopConfiguration`. When we need to read a Hadoop config in Spark SQL, we should call `SessionState.newHadoopConf`, which combines `sparkContext.hadoopConfiguration`, hive-site.xml, and the SQL configs.

## How was this patch tested?

A new test in `HiveDataFrameSuite`.

Author: Wenchen Fan <wenchen@databricks.com>

Closes #13611 from cloud-fan/hive-site.
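As a rough illustration of the restored behaviour (a minimal sketch, not the test from the patch: the property name `hive.in.test` and the local-mode setup are assumptions for demonstration), a session built with Hive support should end up with entries from a hive-site.xml on the classpath visible in `sparkContext.hadoopConfiguration`:

```scala
// Sketch only: assumes a hive-site.xml on the classpath that sets
// "hive.in.test" to "true"; property name and setup are illustrative.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[2]")
  .enableHiveSupport()
  .getOrCreate()

// With Hive support enabled, SharedState loads hive-site.xml and appends its
// entries to sparkContext.hadoopConfiguration, so the value is visible here.
assert(spark.sparkContext.hadoopConfiguration.get("hive.in.test") == "true")

spark.stop()
```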
Diffstat (limited to 'R/pkg/inst')
0 files changed, 0 insertions, 0 deletions