RBAC - Phase 1 #18178
Original comment by @kobelb: LINK REDACTED is the meta issue for all currently planned phases.
Original comment by @kobelb: Larry and I were talking on Zoom about what we want to do about roles that have Kibana application privileges, documented in this LINK REDACTED, being shown and edited via the Roles Management screens. Currently, multi-tenant Kibana uses different … We used to enumerate the potential …
Original comment by @epixa: So to clarify, the issue here is that we plan to introduce a new kibana-privileges management tool in the roles UI that should only work for kibana indices, and @kobelb is trying to figure out how we could reliably show that section for each .kibana index across the entire cluster within every Kibana install.

I think this overcomplicates things. Any given Kibana install should only treat its own indices as special in any way. Any index that isn't explicitly managed by the current install should be treated exactly the same as any other index in Elasticsearch. If someone wants to do this deployment, then they need to go to each Kibana separately to manage those privileges.

Remember, this isn't a deployment scenario that we want people to do. When we release Spaces, they will become the recommended way to handle "tenant" scenarios like this. If for some reason a person wants to still maintain different Kibana installs, they can, but the features in the product shouldn't be optimized for that deployment scenario.
Original comment by @kobelb:
👍 sounds like we're in agreement then, thanks!
@kobelb some answers & thoughts to the open questions above:
I tested a bit this morning, and disabling the native realm does not have any impact on role creation. Kibana will still create the roles on startup, and the roles appear in the role management screen as if the native realm is still enabled. If Kibana is able to determine that the native realm is disabled, then it'd probably be a good idea to let users know this in the UI, since their changes really won't have an impact.
Can we tell if ES is in the middle of a restoration? If so, we could have the Security plugin wait for the restoration to finish before creating roles/going green.
According to Tim's comment here, it appears superusers have all privileges on all applications, so there shouldn't be a need for a separate check within Kibana.
That's a good point, and we do have a mitigation in place for this as well. I added the …
That's a great question, that I don't have an answer to... we'll likely have to defer to the Elasticsearch team on this one.
Agreed, I should've checked this question off after testing it with the superuser changes.
Summary of the current approach of Phase 1

Between 6.4 and 7.0, every time Kibana starts up, Kibana (under the kibana_system role) will check the existing application privileges that are registered with the cluster. Currently, these privileges are either … The builtin …

When will the legacy fallback be invoked?
The legacy fallback is only invoked when a user has no permissions through the new system. As soon as a user is granted anything in the new system, the legacy fallback is no longer an option for them.

How are the custom privileges assigned to builtin roles and custom roles?
Kibana defines the custom privileges in Elasticsearch, and Kibana can assign the custom privileges to user-created roles via the UI, but Elasticsearch has to assign the custom privileges to the builtin roles, because of how builtin roles are defined within Elasticsearch.

Under what circumstances won't the privileges match what Kibana expects?
That will happen if, for example, users are upgrading from 6.4 to 6.5 (or any version in the future). The new version of Kibana may have a different set of privileges than the old version, so in that case, the new version will overwrite the existing privileges that the old version used.
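The startup reconciliation described above can be sketched as follows. This is an illustrative sketch only, not Kibana's actual implementation: `privilegesNeedUpdate` and the privilege names are hypothetical.

```javascript
// Illustrative sketch (hypothetical names, not Kibana's actual code).
// On startup, Kibana compares the application privileges registered in the
// cluster with the set this version expects, and overwrites them on mismatch.
function privilegesNeedUpdate(expected, actual) {
  const same =
    expected.length === actual.length &&
    expected.every((privilege) => actual.includes(privilege));
  return !same;
}

// e.g. a newer Kibana starting against privileges written by an older version
const registeredByOldVersion = ['all', 'read'];
const expectedByNewVersion = ['all', 'read', 'space_all'];
console.log(privilegesNeedUpdate(expectedByNewVersion, registeredByOldVersion)); // true
```

When the sets differ, the newer version simply overwrites what the older version registered, which matches the upgrade behavior described above.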
Rather than do this for every request, what do you think about doing the legacy check at initial authentication time? The idea being that the entire user's session is tagged for the new model when it is created, and then at request time we only check the auth model that is appropriate for that session. The upside to this approach is that any given user cannot have their session influenced by both new and legacy privileges. For example, under the original legacy fallback proposal, if I understand it correctly: if a user has read/write through the old system and only read through the new system, then when creating a dashboard we would first check the new system, see no write permission, then check the old and see write permission, so we allow the write. In reality we want any usage of the new system to completely invalidate any of the old rules, right?
The legacy fallback is only invoked when a user has no permissions through the new system. As soon as a user is granted anything in the new system, the legacy fallback is no longer an option for them. So in your example above, if a user has read/write through the old system and only read through the new system, then they would not be allowed to create a dashboard. We can certainly investigate moving the check to login time, but our gut feeling is that it'd be a non-trivial amount of effort. If the only motivation is to prevent a split authZ model, then it might not be necessary to pursue, given the way the check is structured today.
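The rule described in this exchange can be sketched as a small function. The names here (`isAuthorized`, the `user` shape) are hypothetical illustrations of the described behavior, not Kibana's actual API: any grant at all in the new system disables the legacy fallback entirely.

```javascript
// Sketch of the legacy-fallback rule (hypothetical names).
// The legacy path is consulted only when the user has *no* privileges
// at all through the new application-privilege system.
function isAuthorized(user, requiredPrivilege) {
  if (user.applicationPrivileges.length > 0) {
    // Any grant in the new system invalidates the legacy rules entirely.
    return user.applicationPrivileges.includes(requiredPrivilege);
  }
  // Legacy fallback: fall back to direct index privileges on .kibana.
  return user.legacyIndexPrivileges.includes(requiredPrivilege);
}

// The example from the discussion: read/write via the old system,
// but only read via the new one — the legacy write grant is ignored.
const user = {
  applicationPrivileges: ['read'],
  legacyIndexPrivileges: ['read', 'write'],
};
console.log(isAuthorized(user, 'write')); // false
```

This is why the split-authZ concern doesn't arise: the two systems are never consulted for the same user on the same request.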
Awesome. This alleviates my concern. |
Original comment by @kobelb:
Phase 1 - Remove access to .kibana from end-users
Prior to these changes, end-users have had direct access to the .kibana index, which prevents us from applying the granular access control of OLS and RBAC. The first step in preparing for OLS and RBAC requires us to no longer allow end-users direct access to the .kibana index, but instead to force all requests to go through the Kibana server, which will enforce its own access control.
These changes will have negligible impact on most end-users. However, if they are using DLS/FLS to provide read-only access to Kibana, this will break their implementation, and objects that were private will now be visible to all authorized users of Kibana. The following built-in roles will no longer have privileges to the .kibana index, but will instead have the following Kibana custom privileges:
The role management page in Kibana will be modified to allow users to assign the Kibana custom privileges to roles, and any custom Kibana end-user roles will need to be modified to match the built-in roles. All Kibana server code that reads/writes to the .kibana index will need to be modified to use the internal Kibana user and enforce access control based on the custom privileges.
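For illustration, a custom role granting Kibana application privileges instead of direct `.kibana` index access might look roughly like the body below. This is a sketch using the Elasticsearch role format (`applications` with `privileges` and `resources`); the application name `kibana-.kibana` and the `all` privilege shown here are assumptions for illustration.

```json
{
  "cluster": [],
  "indices": [],
  "applications": [
    {
      "application": "kibana-.kibana",
      "privileges": ["all"],
      "resources": ["*"]
    }
  ]
}
```

Note the absence of any `indices` entry for `.kibana`: the end-user role no longer touches the index directly, and the Kibana server performs all reads and writes on the user's behalf.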
If we wish for this and/or subsequent phases to be shipped in a minor release, we’ll have to create separate kibana_user and kibana_dashboard_only_user roles, and the user would have to opt in to this functionality via a kibana.yml setting.

Legacy Fallback
#19824 introduces a "legacy fallback" feature which allows RBAC Phase 1 to ship in a minor release without introducing a breaking change, and without requiring users to opt-in via a kibana.yml setting.
Authorization Flow

1. Kibana first checks whether the user has been granted the required Kibana application privileges.
   a. If yes, access is granted, and the request is allowed to continue.
   b. If not, Kibana will then check to see if the user has direct access to the Kibana index. The user's privileges on the index (e.g. `read`) are used to determine if the request is allowed to continue, and a deprecation warning is logged.

Example auth flows
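The per-request flow can be sketched as follows. This is a simplification of the behavior described in this issue, with hypothetical names (`authorizeRequest`, the `user` shape, the log message), not Kibana's actual code; its point is the ordering of the two checks and the deprecation warning on the legacy path.

```javascript
// Sketch of the per-request authorization flow (hypothetical names).
function authorizeRequest(user, action, log) {
  // Step 1: check the new application privileges first.
  if (user.applicationPrivileges.includes(action)) {
    return true;
  }
  // Step 2: legacy fallback, only if the user has *nothing* in the new
  // system — direct index privileges still work, but are deprecated.
  if (
    user.applicationPrivileges.length === 0 &&
    user.legacyIndexPrivileges.includes(action)
  ) {
    log('Deprecation: user authorized via direct access to the Kibana index');
    return true;
  }
  return false;
}

// A user still authorized purely through legacy index privileges:
const user = { applicationPrivileges: [], legacyIndexPrivileges: ['write'] };
console.log(authorizeRequest(user, 'write', console.warn)); // true
```

The guard on `applicationPrivileges.length === 0` is what prevents the mixed-model scenario discussed earlier: a user with any grant in the new system never reaches the legacy branch.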
Tasks

- `kibana_dashboard_only_user` and `kibana_user` privileges on the index
- `default` resources

Questions