author    Patrick Wendell <patrick@databricks.com>  2015-04-30 14:59:20 -0700
committer Patrick Wendell <patrick@databricks.com>  2015-04-30 14:59:20 -0700
commit    e0628f2fae7f99d096f9dd625876a60d11020d9b (patch)
tree      33739db91af80b01dddf1935f24a73a6267a1a43 /docs/security.md
parent    adbdb19a7d2cc939795f0cecbdc07c605dc946c1 (diff)
Revert "[SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS"
This reverts commit 6c65da6bb7d1213e6a4a9f7fd1597d029d87d07c.
Diffstat (limited to 'docs/security.md')
-rw-r--r--  docs/security.md  2
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/docs/security.md b/docs/security.md
index d4ffa60e59..c034ba12ff 100644
--- a/docs/security.md
+++ b/docs/security.md
@@ -32,8 +32,6 @@ SSL must be configured on each node and configured for each component involved i
### YARN mode
The key-store can be prepared on the client side and then distributed to and used by the executors as part of the application. This is possible because the user can deploy files before the application is started in YARN, using the `spark.yarn.dist.files` or `spark.yarn.dist.archives` configuration settings. Encrypting these files in transit is YARN's responsibility and is independent of Spark.
-For long-running apps like Spark Streaming apps to be able to write to HDFS, it is possible to pass a principal and keytab to `spark-submit` via the `--principal` and `--keytab` parameters respectively. The keytab passed in will be copied over to the machine running the Application Master via the Hadoop Distributed Cache (securely - if YARN is configured with SSL and HDFS encryption is enabled). The Kerberos login will be periodically renewed using this principal and keytab and the delegation tokens required for HDFS will be generated periodically so the application can continue writing to HDFS.
-
### Standalone mode
The user needs to provide key-stores and configuration options for the master and workers. They must be set by attaching the appropriate Java system properties to the `SPARK_MASTER_OPTS` and `SPARK_WORKER_OPTS` environment variables, or simply to `SPARK_DAEMON_JAVA_OPTS`. In this mode, the user may allow executors to use the SSL settings inherited from the worker that spawned them, by setting `spark.ssl.useNodeLocalConf` to `true`. If that parameter is set, the settings the user provides on the client side are not used by the executors.
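The paragraph removed by this revert described passing a principal and keytab to `spark-submit` via `--principal` and `--keytab`. A minimal sketch of that (since-reverted) usage — the realm, keytab path, class name, and jar below are illustrative placeholders, not values from the commit:

```shell
# Hedged sketch of submitting a long-running app with Kerberos credentials,
# per the removed docs paragraph. The --principal and --keytab flags are the
# ones named there; every concrete value below is a placeholder.
spark-submit \
  --master yarn-cluster \
  --principal user@EXAMPLE.COM \
  --keytab /etc/security/keytabs/user.keytab \
  --class com.example.StreamingApp \
  streaming-app.jar
```

Per the removed text, the keytab would be shipped to the Application Master through the Hadoop Distributed Cache and used to renew the Kerberos login and HDFS delegation tokens periodically.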
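The standalone-mode paragraph retained above can be illustrated as a `spark-env.sh` fragment. This is a sketch under assumptions: the key-store path and password are placeholders, and only the property names (`spark.ssl.*`, `SPARK_MASTER_OPTS`, `SPARK_WORKER_OPTS`, `SPARK_DAEMON_JAVA_OPTS`) come from the text:

```shell
# Illustrative spark-env.sh fragment for standalone-mode SSL; the key-store
# path and password are placeholders, not real values.
export SPARK_MASTER_OPTS="-Dspark.ssl.enabled=true \
  -Dspark.ssl.keyStore=/opt/spark/conf/keystore.jks \
  -Dspark.ssl.keyStorePassword=changeit"
export SPARK_WORKER_OPTS="$SPARK_MASTER_OPTS"

# Alternatively, set the same -D properties once for all daemons:
# export SPARK_DAEMON_JAVA_OPTS="-Dspark.ssl.enabled=true ..."
```

With `spark.ssl.useNodeLocalConf=true`, executors reuse the spawning worker's SSL settings instead of any client-side configuration, as the paragraph above notes.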