author     Thomas Graves <tgraves@apache.org>  2014-08-05 12:52:52 -0500
committer  Thomas Graves <tgraves@apache.org>  2014-08-05 12:52:52 -0500
commit     1c5555a23d3aa40423d658cfbf2c956ad415a6b1 (patch)
tree       b66cee1204610fca7374300b5229520613a1474b /docs
parent     2c0f705e26ca3dfc43a1e9a0722c0e57f67c970a (diff)
SPARK-1890 and SPARK-1891 - add admin and modify acls
It was easier to combine these two JIRAs since they touch many of the same places. This PR adds the following:
- adds modify ACLs
- adds admin ACLs (a list of admins/users that get added to both the view and modify ACLs)
- modifies the Kill button on the UI to take the modify ACLs into account
- changes the config name from spark.ui.acls.enable to spark.acls.enable, since the original name was chosen poorly. Backwards compatibility is kept so people can still use spark.ui.acls.enable. The ACLs should apply to any web UI as well as any CLI interfaces.
- sends the view and modify ACL information on to YARN so that YARN interfaces can use it (for example, the YARN CLI for killing applications).

Author: Thomas Graves <tgraves@apache.org>

Closes #1196 from tgravescs/SPARK-1890 and squashes the following commits:

8292eb1 [Thomas Graves] review comments
b92ec89 [Thomas Graves] remove unneeded variable from applistener
4c765f4 [Thomas Graves] Add in admin acls
72eb0ac [Thomas Graves] Add modify acls
Diffstat (limited to 'docs')
-rw-r--r--  docs/configuration.md   27
-rw-r--r--  docs/security.md         7
2 files changed, 27 insertions, 7 deletions
diff --git a/docs/configuration.md b/docs/configuration.md
index b3dee3f131..25adea210c 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -815,13 +815,13 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
- <td><code>spark.ui.acls.enable</code></td>
+ <td><code>spark.acls.enable</code></td>
<td>false</td>
<td>
- Whether Spark web ui acls should are enabled. If enabled, this checks to see if the user has
- access permissions to view the web ui. See <code>spark.ui.view.acls</code> for more details.
- Also note this requires the user to be known, if the user comes across as null no checks
- are done. Filters can be used to authenticate and set the user.
+ Whether Spark ACLs should be enabled. If enabled, this checks to see if the user has
+ access permissions to view or modify the job. Note this requires the user to be known,
+ so if the user comes across as null, no checks are done. Filters can be used with the UI
+ to authenticate and set the user.
</td>
</tr>
<tr>
@@ -832,6 +832,23 @@ Apart from these, the following properties are also available, and may be useful
user that started the Spark job has view access.
</td>
</tr>
+<tr>
+ <td><code>spark.modify.acls</code></td>
+ <td>Empty</td>
+ <td>
+ Comma-separated list of users that have modify access to the Spark job. By default, only the
+ user that started the Spark job has access to modify it (for example, to kill it).
+ </td>
+</tr>
+<tr>
+ <td><code>spark.admin.acls</code></td>
+ <td>Empty</td>
+ <td>
+ Comma-separated list of users/administrators that have view and modify access to all Spark jobs.
+ This can be used if you run on a shared cluster and have a set of administrators or developers who
+ help debug when things do not work.
+ </td>
+</tr>
</table>
#### Spark Streaming
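For illustration, here is a minimal sketch of how the ACL-related properties added to `configuration.md` above might be set programmatically. The property names come from the table; the application name, master, and user names are placeholder assumptions, and the same key/value pairs could equally be placed in `conf/spark-defaults.conf`.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AclsExample {
  def main(args: Array[String]): Unit = {
    // Enable ACL checking (the new name for spark.ui.acls.enable) and grant
    // view, modify, and admin access to placeholder users.
    val conf = new SparkConf()
      .setMaster("local[*]")                   // normally supplied by spark-submit
      .setAppName("acls-example")
      .set("spark.acls.enable", "true")
      .set("spark.ui.view.acls", "alice,bob")  // may view the web UI
      .set("spark.modify.acls", "alice")       // may modify the job, e.g. kill it
      .set("spark.admin.acls", "opsteam")      // view + modify access on all jobs

    val sc = new SparkContext(conf)
    // ... run the job ...
    sc.stop()
  }
}
```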
diff --git a/docs/security.md b/docs/security.md
index 90ba678033..8312f8d017 100644
--- a/docs/security.md
+++ b/docs/security.md
@@ -8,8 +8,11 @@ Spark currently supports authentication via a shared secret. Authentication can
* For Spark on [YARN](running-on-yarn.html) deployments, configuring `spark.authenticate` to `true` will automatically handle generating and distributing the shared secret. Each application will use a unique shared secret.
* For other types of Spark deployments, the Spark parameter `spark.authenticate.secret` should be configured on each of the nodes. This secret will be used by all the Master/Workers and applications.
-The Spark UI can also be secured by using [javax servlet filters](http://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html) via the `spark.ui.filters` setting. A user may want to secure the UI if it has data that other users should not be allowed to see. The javax servlet filter specified by the user can authenticate the user and then once the user is logged in, Spark can compare that user versus the view ACLs to make sure they are authorized to view the UI. The configs `spark.ui.acls.enable` and `spark.ui.view.acls` control the behavior of the ACLs. Note that the user who started the application always has view access to the UI.
-On YARN, the Spark UI uses the standard YARN web application proxy mechanism and will authenticate via any installed Hadoop filters.
+The Spark UI can also be secured by using [javax servlet filters](http://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html) via the `spark.ui.filters` setting. A user may want to secure the UI if it has data that other users should not be allowed to see. The javax servlet filter specified by the user can authenticate the user; once the user is logged in, Spark can compare that user against the view ACLs to make sure they are authorized to view the UI. The configs `spark.acls.enable` and `spark.ui.view.acls` control the behavior of the ACLs. Note that the user who started the application always has view access to the UI. On YARN, the Spark UI uses the standard YARN web application proxy mechanism and will authenticate via any installed Hadoop filters.
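As a rough illustration of the filter mechanism described above (not part of this patch), the sketch below shows what a user-supplied javax servlet filter might look like. It assumes a hypothetical setup where a trusted front end passes the authenticated user name in an `X-Forwarded-User` header; surfacing that name through `getRemoteUser()` is how the user typically reaches Spark's ACL checks.

```scala
import javax.servlet.{Filter, FilterChain, FilterConfig, ServletRequest, ServletResponse}
import javax.servlet.http.{HttpServletRequest, HttpServletRequestWrapper}

// Hypothetical example filter: trusts a fronting proxy to supply the authenticated
// user name in a header and exposes it via getRemoteUser() for Spark's ACL checks.
class HeaderAuthFilter extends Filter {
  override def init(config: FilterConfig): Unit = ()
  override def destroy(): Unit = ()

  override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
    val httpReq = req.asInstanceOf[HttpServletRequest]
    // May be null if the proxy did not set the header; per the docs above,
    // a null user means no ACL checks are performed.
    val user = httpReq.getHeader("X-Forwarded-User")
    val wrapped = new HttpServletRequestWrapper(httpReq) {
      override def getRemoteUser(): String = user
    }
    chain.doFilter(wrapped, res)
  }
}
```

The filter class would be placed on the classpath and named in `spark.ui.filters`; a production deployment would normally rely on an established authentication filter (for example, Hadoop's filters on YARN) rather than a header-trusting shim like this.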
+
+Spark also supports modify ACLs to control who has access to modify a running Spark application. This includes things like killing the application or a task. This is controlled by the configs `spark.acls.enable` and `spark.modify.acls`. Note that if you are authenticating the web UI, then in order to use the kill button on the web UI it might be necessary to add the users in the modify ACLs to the view ACLs as well. On YARN, the modify ACLs are passed in and control who has modify access via the YARN interfaces.
+
+Spark allows for a set of administrators to be specified in the ACLs who always have view and modify permissions on all the applications. This is controlled by the config `spark.admin.acls`. This is useful on a shared cluster where you might have administrators or support staff who help users debug applications.
If your applications are using event logging, the directory where the event logs go (`spark.eventLog.dir`) should be manually created and have the proper permissions set on it. If you want those log files secured, the permissions should be set to `drwxrwxrwxt` for that directory. The owner of the directory should be the super user who is running the history server and the group permissions should be restricted to super user group. This will allow all users to write to the directory but will prevent unprivileged users from removing or renaming a file unless they own the file or directory. The event log files will be created by Spark with permissions such that only the user and group have read and write access.
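To tie the settings on this page together, here is a minimal sketch of the related configuration an application might set. The secret and the event log path are placeholders, and the log directory is assumed to have been created beforehand with the permissions described above (sticky-bit mode, shown as `drwxrwxrwxt`).

```scala
import org.apache.spark.SparkConf

object SecurityConfExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .set("spark.authenticate", "true")
      // Only needed outside YARN; on YARN the shared secret is generated automatically.
      .set("spark.authenticate.secret", "change-me")
      .set("spark.eventLog.enabled", "true")
      // Placeholder path; create it up front with the permissions described above.
      .set("spark.eventLog.dir", "hdfs:///var/log/spark-events")
  }
}
```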