author     woj-i <wojciechindyk@gmail.com>        2015-12-01 11:05:45 -0800
committer  Marcelo Vanzin <vanzin@cloudera.com>   2015-12-01 11:05:45 -0800
commit     6a8cf80cc8ef435ec46138fa57325bda5d68f3ce (patch)
tree       a1534cac8bbd7b12c81492f309e9d9e3706c1386 /docs
parent     0a7bca2da04aefff16f2513ec27a92e69ceb77f6 (diff)
[SPARK-11821] Propagate Kerberos keytab for all environments
andrewor14 the same PR as in branch 1.5
harishreedharan

Author: woj-i <wojciechindyk@gmail.com>

Closes #9859 from woj-i/master.
Diffstat (limited to 'docs')
-rw-r--r--  docs/running-on-yarn.md        | 4
-rw-r--r--  docs/sql-programming-guide.md  | 7
2 files changed, 6 insertions, 5 deletions
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 925a1e0ba6..06413f83c3 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -358,14 +358,14 @@ If you need a reference to the proper location to put log files in the YARN so t
<td>
The full path to the file that contains the keytab for the principal specified above.
This keytab will be copied to the node running the YARN Application Master via the Secure Distributed Cache,
- for renewing the login tickets and the delegation tokens periodically.
+ for renewing the login tickets and the delegation tokens periodically. (Also works with the "local" master.)
</td>
</tr>
<tr>
<td><code>spark.yarn.principal</code></td>
<td>(none)</td>
<td>
- Principal to be used to login to KDC, while running on secure HDFS.
- Principal to be used to login to KDC, while running on secure HDFS. (Also works with the "local" master.)
</td>
</tr>
<tr>
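For reference, the keytab and principal settings above map onto `spark-submit`'s `--keytab` and `--principal` flags. A minimal sketch of a secure submission, where the principal name, keytab path, and application jar are placeholders:

    ./bin/spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --principal user@EXAMPLE.COM \
      --keytab /path/to/user.keytab \
      --class org.apache.spark.examples.SparkPi \
      lib/spark-examples.jar 10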
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index d7b205c2fa..7b1d97baa3 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1614,7 +1614,8 @@ This command builds a new assembly jar that includes Hive. Note that this Hive a
on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
(SerDes) in order to access data stored in Hive.
-Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`. Please note when running
+Configuration of Hive is done by placing your `hive-site.xml`, `core-site.xml` (for security configuration),
+and `hdfs-site.xml` (for HDFS configuration) files in `conf/`. Please note when running
the query on a YARN cluster (`cluster` mode), the `datanucleus` jars under the `lib_managed/jars` directory
and `hive-site.xml` under `conf/` directory need to be available on the driver and all executors launched by the
YARN cluster. The convenient way to do this is adding them through the `--jars` option and `--file` option of the
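To make the configuration change above concrete, a hedged sketch of staging the three client config files into Spark's `conf/` directory; the source paths under `/etc` are assumptions and depend on your Hadoop and Hive installation:

    # copy the Hive and Hadoop client configs into Spark's conf/ directory
    # (source paths are assumptions; adjust to your installation)
    cp /etc/hive/conf/hive-site.xml   conf/
    cp /etc/hadoop/conf/core-site.xml conf/
    cp /etc/hadoop/conf/hdfs-site.xml conf/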
@@ -2028,7 +2029,7 @@ Beeline will ask you for a username and password. In non-secure mode, simply ent
your machine and a blank password. For secure mode, please follow the instructions given in the
[beeline documentation](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients).
-Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.
+Configuration of Hive is done by placing your `hive-site.xml`, `core-site.xml` and `hdfs-site.xml` files in `conf/`.
You may also use the beeline script that comes with Hive.
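For reference, connecting with the beeline script shipped with Spark typically looks like the following; the host and port reflect the Thrift server defaults and may differ in your deployment:

    ./bin/beeline
    beeline> !connect jdbc:hive2://localhost:10000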
@@ -2053,7 +2054,7 @@ To start the Spark SQL CLI, run the following in the Spark directory:
./bin/spark-sql
-Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.
+Configuration of Hive is done by placing your `hive-site.xml`, `core-site.xml` and `hdfs-site.xml` files in `conf/`.
You may run `./bin/spark-sql --help` for a complete list of all available
options.
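As a quick check that the copied configuration files are picked up, a minimal sketch of running a statement non-interactively; `-e` executes a single query and exits:

    ./bin/spark-sql -e "SHOW TABLES"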