This repository has been archived by the owner on Jun 24, 2020. It is now read-only.

Principle of least privilege should be applied in operator (Cluster)Roles #282

Closed
Cynocracy opened this issue Feb 3, 2020 · 9 comments · Fixed by #291

@Cynocracy
Contributor

Cynocracy commented Feb 3, 2020

Describe the bug
Today the operator is, by default, granted all permissions on all resources cluster-wide by a blanket ClusterRole.

Expected behavior
Only those permissions that are necessary to create a functional Knative Serving setup should be granted, using escalate and/or bind to create roles or rolebindings that the operator itself does not need directly but that Knative Serving needs transitively.

To Reproduce

  1. Install Knative Serving Operator.
  2. kubectl describe clusterrole knative-serving-operator
Name:         knative-serving-operator
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"knative-serving-operator"},"rules":...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
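
Purely as an illustration of the expected behavior above (none of these rules are taken from an actual manifest), a narrower ClusterRole could enumerate what the operator actually touches instead of the wildcard rule shown in the output, for example:

```yaml
# Hedged sketch only: the real list of API groups/resources would have to be
# derived from what the operator and the knative-serving manifest need.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-serving-operator
rules:
  - apiGroups: ["operator.knative.dev"]
    resources: ["*"]
    verbs: ["*"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles", "clusterrolebindings"]
    # escalate/bind allow creating RBAC objects the operator does not hold itself
    verbs: ["get", "list", "create", "update", "escalate", "bind"]
```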
@knative-prow-robot
Contributor

@Cynocracy: The label(s) kind/bug cannot be applied, because the repository doesn't have them

In response to this:

/kind bug


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jcrossley3
Contributor

The primary reason the operator is given cluster-admin privs is because the knative-serving release manifest employs aggregated clusterroles. At the time, this feature required cluster-admin privs. I'm not sure if that's still the case or not. /cc @pmorie
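
For context, an aggregated ClusterRole carries an aggregationRule, and the controller-manager fills in its rules from every ClusterRole matching the selector. A minimal sketch, with a name and selector label that are assumptions rather than quotes from the knative-serving manifest:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-serving-admin                   # assumed name
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        serving.knative.dev/controller: "true"  # assumed selector label
rules: []   # populated automatically from matching ClusterRoles
```

Creating the member ClusterRoles that feed such an aggregate trips the RBAC escalation check unless the creator already holds those permissions or has the escalate verb, which fits the history described here.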

@Cynocracy
Contributor Author

Thanks for the context! That makes sense.

It looks like we may be able to have the Operator config set up the aggregated clusterrole, and grant the Operator service account bind on that role.
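
A minimal sketch of what that grant could look like, assuming the aggregated role is named knative-serving-admin (the names here are assumptions): the bind verb, scoped with resourceNames, lets a subject create bindings to that one role without holding its permissions.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-serving-operator-bind        # hypothetical name
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles"]
    resourceNames: ["knative-serving-admin"] # assumed aggregated role name
    verbs: ["bind"]
```

The escalation check for (Cluster)RoleBindings passes if the requester either already holds everything in the referenced role or has bind on it, so this avoids handing the operator the aggregated permissions directly.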

@pmorie
Member

pmorie commented Feb 4, 2020

@Cynocracy
Contributor Author

Thanks for the link; I'm still very new to the space, so it's quite helpful for me :)

For now, I'll start a PR adding a comment to the */* permission. It'd be nice if we could get a fix upstream that removes the requirement, but at least with the current setup, I found the very explicit list followed by */* quite confusing.

@Cynocracy
Contributor Author

Cynocracy commented Feb 5, 2020

Circling back to this, a couple of different things:

@pmorie it looks like https://github.com/kubernetes/kubernetes/blob/3b618af0d435628feedf06f97bd1c69340d07d95/pkg/registry/rbac/clusterrole/policybased/storage.go#L69 early-exits the escalation check when the user has escalate on clusterroles, so theoretically we could give the Operator the power to grant all powers without directly granting it everything (which would be nice from a we-don't-accidentally-reconcile-the-world perspective).
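
Concretely, that escape hatch corresponds to a rule along these lines (the role name is hypothetical); note that escalate on clusterroles is itself a very powerful grant, since it allows writing roles with arbitrary rules:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-serving-operator-escalate    # hypothetical name
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles"]
    # escalate skips the check that the creator already holds the permissions
    # being written into the role on create/update
    verbs: ["create", "update", "escalate"]
```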

What I was thinking was actually a little different. If we could:

  1. Add the creation of the aggregated clusterroles to the Operator configuration. This forces the installer of the Operator to be a cluster-admin (or have escalate on clusterroles).
  2. Remove the clusterrole creation from the manifestival install (via label, maybe?).
  3. Grant the Operator the ability to bind the aggregated clusterrole, which is later authz'd by https://github.com/kubernetes/kubernetes/blob/3b618af0d435628feedf06f97bd1c69340d07d95/pkg/registry/rbac/clusterrolebinding/policybased/storage.go#L70
  4. Now we have bootstrapped the Operator into creating the right clusterrolebindings (roughly as sketched below) without it needing to be cluster-admin!
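
Under that scheme, the binding the operator would end up creating might look roughly like this; the service account and role names are assumptions for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: knative-serving-controller-admin   # illustrative name
subjects:
  - kind: ServiceAccount
    name: controller                       # assumed knative-serving controller SA
    namespace: knative-serving
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: knative-serving-admin              # assumed aggregated role from step 1
```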

@Cynocracy
Contributor Author

/assign Cynocracy

@jcrossley3
Contributor

jcrossley3 commented Feb 5, 2020

  2. Remove the clusterrole creation from the manifestival install (via label, maybe?).

This needs to be addressed upstream. The embedded manifest is taken directly from upstream's official release, and the only edits we should ever make to our embedded copy are those we know will be included in upstream's next release (ideally referencing a merged PR). Otherwise, we complicate our upgrade process (even more than it already is) by having to post process the official manifest for every future release.

@Cynocracy
Contributor Author

Ack, I think the change to grant escalate is a little less invasive, and still an improvement (in order to use the Operator as a confused deputy, you would need it to create a role and rolebinding, whereas today you merely need to get it to apply something for you).

Do you agree? If so, I can go ahead and start testing out that change.
