
[feature request] rbac role manifest #266

Closed
jhorwit2 opened this issue Feb 13, 2017 · 14 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

@jhorwit2

It's unclear what privileges the ingress controllers require.

@whereisaaron
Contributor

Couldn't agree more @jhorwit2. Ideally all cluster services should ship sample RBAC roles, or at least a manifest of API resources and verbs they rely on.

It's the old chestnut from SELinux and AppArmor: if the software authors don't publish the security requirements, every security-conscious end-user has to do something close to a code audit to engineer a suitable least-privilege policy.

Is this something the automated build and test tools could do? E.g. monitor API calls during testing and emit a manifest, or a generated RBAC policy covering exactly what the tests need to run. That could be both a good leg-up for writing RBAC policies and useful as an audit tool.

As k8s matures it seems (to me) especially important for the core project to set a positive, secure-by-default example. Right now I see lots of early k8s users who are surprised, and sometimes shocked, when you tell them that every one of their containers gets, by default, an unrestricted, admin-strength ServiceAccount token. Container break-outs make news; it is probably only a matter of time before k8s gets unnecessary negative PR for its default security stance.

@aledbf aledbf added backend/generic help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Feb 25, 2017
@InAnimaTe

Platforms like Tectonic already enable RBAC by default, and having to fall back on a very permissive global cluster role to make things like nginx-ingress and kube-lego work is worrying. I'd love to see more effort on this and, eventually, an included nginx-rbac.yaml example for people deploying this, as well as documentation, which I can help with!

@ankon
Contributor

ankon commented Mar 9, 2017

Maybe helpful as a starting point for people who end up here after googling: this set of RBAC policies seems to work for me with the nginx controller 0.9.0-beta.2.

```yaml
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRole
metadata:
  name: ingress
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: Role
metadata:
  name: ingress-ns
  namespace: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: RoleBinding
metadata:
  name: ingress-ns-binding
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-ns
subjects:
  - kind: ServiceAccount
    name: default
    namespace: nginx-ingress
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: ingress-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
  - kind: ServiceAccount
    name: default
    namespace: nginx-ingress
```

In my case all ingress-related resources are in the namespace 'nginx-ingress', similar to how https://github.com/jetstack/kube-lego/tree/master/examples/nginx/nginx is structured.

(EDIT: Locked down access to resources inside the nginx-ingress namespace as well)
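One note on the bindings above: they grant the roles to the `default` ServiceAccount in the `nginx-ingress` namespace, so every pod in that namespace inherits them. A tighter variant (a sketch, not part of the original manifests; the account name here is made up) is to create a dedicated ServiceAccount and point the `subjects` of both bindings at it:

```yaml
# Hypothetical dedicated ServiceAccount for the controller.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: nginx-ingress
```

The controller Deployment would then set `spec.template.spec.serviceAccountName` to this account, and the `subjects` entries in the bindings would name it instead of `default`.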

@ankon
Contributor

ankon commented Mar 14, 2017

After some more painful debugging:

```diff
diff --git a/nginx-ingress-controller.yml b/nginx-ingress-controller.yml
index 06e3c77..29f2502 100644
--- a/nginx-ingress-controller.yml
+++ b/nginx-ingress-controller.yml
@@ -14,14 +14,23 @@ rules:
   resources:
   - configmaps
   - secrets
-  - services
   - endpoints
   - ingresses
   - nodes
+  - pods
   verbs:
   - list
   - watch
 - apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - list
+  - watch
+  - get
+  - update
+- apiGroups:
   - extensions
   resources:
   - ingresses
```
The 'update services' permission was especially hard to understand; ultimately it is only needed when using named ports. See kubernetes-retired/contrib#766 (comment) for details.
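For readers unfamiliar with the term: a 'named port' is a Service whose `targetPort` refers to a port by name rather than number, so the controller has to resolve the name against each Pod's spec. A minimal illustrative sketch (all names invented):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: http   # named port; resolved against ports[].name in the Pod spec
```

Only in this case does the controller need `get`/`update` on Services — per the linked discussion, it records the resolved numeric port in a Service annotation.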

@whereisaaron
Contributor

Thank you for working this out @ankon ❤️, I know how painful it is to reverse engineer permissions from code. I hope we can get this into the Ingress Controller documentation or examples - could you make a PR to add it and see what @aledbf thinks?

> Especially the 'update services' was hard to understand why, ultimately it is only needed when using named ports. See kubernetes-retired/contrib#766 (comment) for details.

The Ingress Controller needs update permission on Services just to store, via annotations, a copy of the named-port information from the Pod spec. This apparently stems from a pragmatic workaround for the limitation that Endpoints have no place to store the named port specified in the Pod.

kubernetes-retired/contrib@d455524#r59898626


This workaround may also be a broken approach. The annotation appears to assume that all Pods for a Service use the same numeric port for a particular named port. A Service label selector could easily encompass Pods from two Deployments, one where e.g. named port 'foo' is numeric port 80 and another where it is 8080 (a blue/green deployment scenario). It might also break the Ingress during a regular rolling update, if the numeric port behind a Pod's named port changes in the update. It seems like this Service-annotation approach would fail in these cases. Or am I missing something @aledbf?
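To make the blue/green concern concrete, here is an illustrative sketch (all names invented): two Pods matched by the same Service selector map the named port 'foo' to different container ports, so a single per-Service annotation cannot be correct for both.

```yaml
# "Blue" Pod: named port 'foo' is container port 80
apiVersion: v1
kind: Pod
metadata:
  name: app-blue
  labels:
    app: example
spec:
  containers:
  - name: web
    image: example/web:blue
    ports:
    - name: foo
      containerPort: 80
---
# "Green" Pod: same named port, different number
apiVersion: v1
kind: Pod
metadata:
  name: app-green
  labels:
    app: example
spec:
  containers:
  - name: web
    image: example/web:green
    ports:
    - name: foo
      containerPort: 8080
```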

@aledbf
Member

aledbf commented Mar 14, 2017

> Or am I missing something @aledbf?

No. I need to review this and remove the annotation.

@aledbf
Member

aledbf commented Mar 14, 2017

> could you make a PR to add it and see what @aledbf thinks?

👍 to open a PR

@liggitt
Member

liggitt commented Apr 10, 2017

fwiw, I am working on an audit2rbac (shamelessly taken from audit2allow) webhook authorizer that you can run that will gather API requests per user and generate RBAC roles that would cover the requests that were made. Adding that to your authz chain and exercising an app would make it easier to develop those roles (though rarely used permissions might still be tricky to trigger/capture)

@mccormd

mccormd commented Apr 13, 2017

Hi

Thanks very much for this! I had to add the following to the ClusterRole in order to allow the ingress controller to read its ConfigMap:

```yaml
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
```

@nevetS

nevetS commented May 22, 2017

Just following up on this a bit. I ran into this issue over the weekend and saw that there were mentions of creating a pull request for an example, but none had been made or accepted. I created an example and pull request #747 based on what I found here and my own testing. I'd appreciate any feedback on what I put together.

nevetS pushed a commit to nevetS/ingress that referenced this issue May 23, 2017
aledbf added a commit that referenced this issue May 23, 2017
added rbac example discussed in kubernetes/ingress issue #266
@nevetS

nevetS commented May 27, 2017

Question - is this closable now that we have an example? Or does another step need to be taken?

@aledbf
Member

aledbf commented May 27, 2017

Closing. There is now an example showing how to configure RBAC.

@liggitt
Member

liggitt commented Sep 11, 2017

> fwiw, I am working on an audit2rbac (shamelessly taken from audit2allow)

For anyone interested, an initial version is now available at https://github.com/liggitt/audit2rbac

@garthenweb

If you are using Helm and the nginx-ingress chart, you just need to enable RBAC as defined in the chart's configuration section.

Just leaving this comment because it was not clear to me and I first tried to implement the example files on my own. Maybe it's helpful to someone else!
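Assuming the chart exposes the usual toggle for this (the exact key name is my assumption and should be verified against the chart's configuration section), enabling it is a one-line values override:

```yaml
# values.yaml fragment -- 'rbac.create' is an assumed key name, check the chart docs
rbac:
  create: true
```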

9 participants