Prevent workflow code from exploiting pod `patch` permission to change non-workflow pods
#3961
Comments
I'll bring this up with the team
@simster7 Any update?
I also have some concerns with my Workflow requiring Pod patch permissions. I am developing a system that allows external users to run arbitrary workflows within my system. I have each user segregated into their own Namespace, so it wouldn't be absolutely devastating if they achieved Pod patch access, but it is still something that could potentially wreak havoc on my system. Currently, I am setting
I think we could address this another way. If the pod can patch the workflow, then we could directly update the status. To avoid conflicts, and to work with node offloading, we would need a new field to store the data in. I'm not sure how this scales with many patches, so we could introduce another CRD as discussed here: https://docs.google.com/document/d/18hg6PTejp1knp5QTaCwP4j4gUTsRu4KDeKHs-4l9shs/edit This is not a popular issue.
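For illustration only, here is a rough sketch of what such a dedicated result object could look like if the pod wrote its results to a separate CRD instead of patching itself or the Workflow. The kind, field names, and labels below are assumptions for the sake of the example, not the design from the linked document.

```yaml
# Hypothetical per-node result object created by the executor instead of a
# pod patch; the controller would reconcile these into the Workflow status.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTaskResult            # illustrative kind name
metadata:
  name: my-workflow-node-abc123     # one object per node (assumed)
  labels:
    workflows.argoproj.io/workflow: my-workflow
outputs:
  parameters:
    - name: result
      value: "42"
phase: Succeeded
```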
Ohhhh.... interesting!
In v3.1 you will be able to run workflows without the pod `patch` permission.
@alexec I can see that v3.1.2 is already available, but this issue is still Open and I can still see the Patch verb in the installation file https://github.com/argoproj/argo-workflows/blob/master/manifests/install.yaml. Could you please confirm that we are able to run workflows without the `patch` permission?
With the introduction of TaskSet we now have a way to replace the pod `patch` permission.
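As a rough sketch of what the narrowed executor permissions could look like once pod patching is replaced; the resource and verb names are based on later argo-workflows manifests, but treat the exact role as an assumption rather than the shipped definition:

```yaml
# Sketch: executor Role with no pods/patch rule; the pod reports results
# through a dedicated resource instead of patching itself.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: executor                      # name assumed
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflowtaskresults"]
    verbs: ["create", "patch"]
```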
We should test some attacks to verify this is true. Much of the pod spec is immutable; is it really true that you can change the image or args?
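For reference when testing: the Kubernetes API only lets a few pod spec fields be mutated after creation, notably `spec.containers[*].image`, while `command`/`args` are immutable. So an attacker with `pods/patch` could still swap images even though most of the spec is locked. A minimal strategic-merge patch body, with the target container name assumed:

```yaml
# Patch body a compromised pod with pods/patch could apply to another pod in
# the namespace. Changing the image is accepted; changing args would be
# rejected because that field is immutable after creation.
spec:
  containers:
    - name: main                      # target container name assumed
      image: attacker/malicious:latest
```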
Notes from PoC:
Does this refer to global workflow outputs or step outputs? We use step outputs to communicate things between steps, and I don't see how pod/patch is required for that. What's stopping the controller from just creating the workflow pod with the required volumes for passing outputs from
Correct. Outputs are patched onto the pod using annotations. No outputs, no need for annotations. You could pass outputs using a volume mounted to all pods in the workflow. This volume would need to be readable from the controller; not sure if that's possible. @jessesuen I think we only need `patch` for the result and logs; the exit code is found by the controller. We know that we need these in the controller. Maybe we should look at `pods/log` in more detail.
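For context on the annotation mechanism mentioned above, this is roughly the shape of the patch the `wait` container applies to its own pod to report outputs. The annotation key `workflows.argoproj.io/outputs` is the one Argo uses, but the payload shown is illustrative rather than the exact schema:

```yaml
# Approximate pod patch used today to report outputs via an annotation.
metadata:
  annotations:
    workflows.argoproj.io/outputs: |
      {"parameters":[{"name":"result","value":"42"}],"artifacts":[]}
```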
Reviewing this: the current solution, with the executor capturing logs, scales. Consider a 1000-node workflow: each pod captures its own logs. If this was moved to the controller, it would have to do much more work than it currently does. The controller is the wrong place to do heavy lifting, as it creates a single point of failure. On top of this, we don't know where to save the main.log (or any artifact) in the controller, because it does not have
We could write the outputs to the logs, rather than as an annotation, or to the container termination-log, but these all have different problems. Who else could do this? The agent, but that's just moving the problem.
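To make the termination-log alternative concrete: a container can surface data through its termination message, but Kubernetes caps it at 4096 bytes per container, which is one of the "different problems" mentioned. A minimal sketch with placeholder names:

```yaml
# Sketch: report a small output via the container termination message instead
# of a pod patch. The 4096-byte per-container limit rules this out for large
# outputs or logs.
spec:
  containers:
    - name: main
      image: argoproj/argosay:v2      # placeholder image
      command: [sh, -c]
      args: ["echo '{\"result\":\"42\"}' > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
```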
Summary

Find a way to prevent malicious code from exploiting the `patch` permission of the minimum RBAC privileges.

Details:

The minimum RBAC privileges of a workflow include the `patch` permission on pods, which seems to be a potential security issue. The `patch` permission allows actions like `kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'`, meaning it allows changing any image in the namespace, in other words bringing the namespace down. The problem is even greater since the role is set on the `pod` and not on the `container`, so it is not only Argo's `wait` container that gets this role, but also the user's `main` container. This means any malicious code that creeps into the pod can exploit this role. (A sketch of such a role is included after the Use Cases section below.)

I'm not sure how this can be done. I can say I tried the solution suggested here and it worked, but it's a big mess to make it work with `kustomize`, so I wish for a more elegant solution.

Use Cases
Always.
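To illustrate the Details section above, here is roughly the shape of the minimal workflow Role being described; the role name is assumed, and the point is that the `patch` verb on `pods` is granted to the pod's service account, so every container in the pod (not just `wait`) can use it:

```yaml
# Sketch of the minimal workflow RBAC described above. The patch verb on pods
# is the permission in question: any container running under this service
# account can patch any pod in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-role                 # name assumed
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "patch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "watch"]
```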
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.