Move executor container rbac operations to workflow controller #5067
We want the controller to have as few RBAC permissions as possible. Having the executor handle pods/log not only helps with scalability but also with RBAC granularity.
@simster7 From a security perspective, how is it better to give these permissions to the executor? Also, what about the patch permission? The workflow-controller already has that permission, so why does the executor handle the patching?
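For context, the patching in question is (as far as we understand the current executors) the wait container updating its own pod's annotations to report step outputs, which the controller then picks up. A minimal sketch of that shape, with a made-up value:

```yaml
# Illustrative shape of the self-patch the wait container applies to its own pod
# to hand step outputs back to the controller (the JSON value is made up here).
metadata:
  annotations:
    workflows.argoproj.io/outputs: |
      {"parameters": [{"name": "result", "value": "example-output"}]}
```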
I've recently also run into this situation/problem in my organization. We are moving toward a k8s cluster provided and managed by other team(s), and along with that come very heavy restrictions on what a developer, application, etc. can do in k8s, all managed through service accounts. We use service accounts to identify applications and manage their rights, so there is a 1-to-1 correspondence between a deployment (usually) and a service account, which allows us to lock our executables down very tightly.

Now, the problem in Argo's current setup is that the injected sidecars need rights like pods/log, pod patch, etc., and really, even sharing volumeMounts with the main container or potentially having access to /proc (pns executor) are almost deal breakers, because now we have foreign code running in what was supposed to be a locked-down container with minimal k8s access. I understand that these are needed to intercept output artifacts, but the pod management at least could be done externally, i.e. in the controller.

The other problem with having these rights in a sidecar is that one application pod now has rights to patch any pod in its namespace, not just the pod it's running on. As long as the sidecar is well behaved this doesn't matter, of course, but if it ever isn't (bug, hack, what have you) it could screw up other applications, even those from other teams.

We think that the rights to modify pods, read logs, etc. should be consolidated in the workflow controller (which needs these rights anyway) instead of delegated to the application pod, which leaks the permissions all over the place, since service accounts/roles, as far as we know, don't allow restricting to "this pod only".

I'm really curious how delegating RBAC is supposed to help with granularity, because, the way we see it, this only leaks permissions to lots of applications that shouldn't have them. I'm also curious why the workflow-controller should have as few permissions as possible. Of course that's good in general, but permissions that it definitely needs, like modifying pods, can never really be gotten rid of, so why duplicate them and introduce additional indirection, which weakens or circumvents k8s' own security model?
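To make the scope concrete, here is a rough sketch of the kind of namespaced Role an executor service account ends up needing. The name, namespace, and exact rule set are illustrative rather than the manifest Argo ships; the point is that every rule is namespace-wide and covers all pods, not just the workflow pod the sidecar runs in.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-executor-example   # illustrative name
  namespace: my-team                # illustrative namespace
rules:
# Each rule applies to every pod in the namespace, not only the executor's own pod.
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "patch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "watch"]
```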
The emissary does not need …
Duplicates #3961.
Well, the executor is instantiated by the controller, so any permissions it has must already be available to the controller, and I don't see how that reduces the scope of the controller's permissions. If scalability is the main concern, why not instantiate sibling pods to handle potential pod patch operations and leave the sidecar to handle outputs, logs, and other read-only operations? That way it still scales horizontally, but the workflow pod itself doesn't need any special permissions. I think the real problem is that the smallest scope provided by Kubernetes' RBAC model is the namespace. To my knowledge there isn't a way to specify a …
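To illustrate that limitation: RBAC can pin a rule to specific object names via `resourceNames`, but a workflow pod's name is only generated when the controller creates it, so a Role written ahead of time cannot express "only the pod this service account is mounted into". Everything named below is hypothetical.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: patch-own-pod-attempt   # hypothetical
  namespace: my-team            # hypothetical
rules:
- apiGroups: [""]
  resources: ["pods"]
  # resourceNames narrows a rule to named objects, but the workflow pod's name
  # is generated by the controller at runtime, so it can't be written into a
  # Role ahead of time; RBAC has no "only my own pod" selector.
  resourceNames: ["my-workflow-abc123"]   # hypothetical pod name
  verbs: ["patch"]
```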
Hi,
Currently the executor container needs RBAC permissions such as pods/log and patch.
Why are these operations not performed by the workflow controller instead? Is it due to scalability concerns?
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.