PSP Replacement KEP #2582
> Note that policies are not guaranteed to be backwards compatible, and a newer restricted policy could require setting a field that doesn't exist in the current API version.
This seems like an option for a perma-break. However, I don't see a practical way of knowing the version of a cluster. I don't believe we require any particular behavior for `/version` for conformance purposes.
Discussed this in the 3/24 breakout meeting without arriving at a specific conclusion. We thought about various mechanisms for communicating the version of the cluster to the webhook:
- check `/version` (unclear that it is guaranteed to return the Kubernetes version)
- configure it in the webhook manifest (only works if you remember to keep the webhook in sync, makes skew during upgrade hard, and only works if a single API server is talking to the webhook)
- configure it in the webhook invocation (e.g. the webhook path)
- add the server's Kubernetes version into the admission review
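As a purely illustrative sketch of the "configure it in the webhook invocation" option, the calling cluster's version could be encoded in the webhook path of a `ValidatingWebhookConfiguration`. The service name, namespace, path, and version below are assumptions for illustration, not anything specified by the KEP:

```yaml
# Hypothetical sketch only: encode the API server's Kubernetes version in the
# webhook path so the webhook knows which "latest" policy version to resolve.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-security-webhook        # assumed name
webhooks:
  - name: pod-security.example.com  # assumed name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        namespace: pod-security-system   # assumed namespace
        name: pod-security-webhook       # assumed service
        # Version communicated via the invocation path (illustrative value).
        path: /validate/v1.22
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
```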
I thought that we did resolve this? The conclusion I drew from our discussion was that under the webhook implementation, policy versions would be tied to the webhook version, not the cluster version.
That's certainly the easiest, implementation-wise, but it makes for tricky ordering on upgrade: if you upgrade the webhook first, its latest restricted policy can start requiring fields to be set that the calling server might not be capable of setting yet.
I think the webhook implementation is likely secondary, so I'm ok saying that the webhook library version determines the meaning of `latest`, but we need to clearly document the expected upgrade order between the API server and the webhook.
Minor comments.
/lgtm
> The following audit annotations will be added:
>
> 1. `pod-security.kubernetes.io/enforce-policy = <policy_level>:<resolved_version>` Record which policy was evaluated
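For illustration only, an audit event carrying such an annotation might look roughly like the abridged sketch below; the `restricted` level and `v1.22` version are assumed example values, not taken from the KEP excerpt:

```yaml
# Abridged sketch of an audit log entry (audit.k8s.io/v1 Event); several
# required fields (auditID, timestamps, user, etc.) are omitted for brevity.
apiVersion: audit.k8s.io/v1
kind: Event
level: Metadata
stage: ResponseComplete
verb: create
objectRef:
  resource: pods
  namespace: example-ns   # assumed namespace
annotations:
  # "<policy_level>:<resolved_version>" munged into a single value
  pod-security.kubernetes.io/enforce-policy: "restricted:v1.22"
```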
Curious why level and version are munged into one key instead of two.
It cuts down on the size & verbosity of the logs. It's useful to keep them separate on labels because of the limitations of label selectors, but most log processors that I've seen can handle separating these values if need be. Is there a reason you'd want to see them separated?
This just seemed inconsistent with the labels used on namespaces.
> _Blocking for Beta._
>
> How long will old profiles be kept for? What is the removal policy?
+1 on keeping forever.
> - Using labels enables various workflows around policy management through kubectl, for example issuing queries like `kubectl get namespaces -l pod-security.kubernetes.io/enforce-version!=v1.22` to find namespaces where the enforcing policy isn't pinned to the most recent version.
> - Keeping the options on namespaces allows atomic create-and-set-policy, as opposed to creating a namespace and then creating a second object inside the namespace.
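As an illustration of the atomic create-and-set-policy point in the quoted excerpt above, a namespace manifest could carry the policy labels at creation time. The `restricted` level and the `v1.22` pin are assumed example values:

```yaml
# Example namespace created with its pod security labels set in one step.
# Level and version values are illustrative assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.22
```

A namespace that leaves `enforce-version` unset or set to a different value would then show up in the `kubectl get namespaces -l pod-security.kubernetes.io/enforce-version!=v1.22` query quoted above.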
If we have a separate cluster-scoped object called `PodSecurity` where the name of the object matches the name of the namespace, one interesting property is that authorization to write said `PodSecurity` is distinct from write on the namespace.
That being said, I like the flexibility and UX of the label-based approach.
So should we define extra authorization (SAR checks with virtual verbs similar to PSP's `use`) checks needed to set these labels and enforce them in admission?
> So should we define extra authorization (SAR checks with virtual verbs similar to PSP's `use`) checks needed to set these labels and enforce them in admission?

I think there is a use case for generic label policy, and I'd be interested in a proposal for it, but IMO we shouldn't implement something special for pod security.
> If we have a separate cluster-scoped object called `PodSecurity` where the name of the object matches the name of the namespace, one interesting property is that authorization to write said `PodSecurity` is distinct from write on the namespace.

I feel like we discussed this before, but I can't remember if there were any concerns aside from losing out on the label-selector UX. If we went this route, we'd probably want to add a custom kubectl command for it, but it would need to be fairly complicated to cover all the use cases we'd get for free with the label selector.
> If we have a separate cluster-scoped object called `PodSecurity` where the name of the object matches the name of the namespace, one interesting property is that authorization to write said `PodSecurity` is distinct from write on the namespace.
>
> I feel like we discussed this before, but I can't remember if there were any concerns aside from losing out on the label-selector UX. If we went this route, we'd probably want to add a custom kubectl command for it, but it would need to be fairly complicated to cover all the use cases we'd get for free with the label selector.

Note that this is mostly a thought exercise about what we lose by using label selectors instead of a distinct object.
> So should we define extra authorization (SAR checks with virtual verbs similar to PSP's `use`) checks needed to set these labels and enforce them in admission?
>
> I think there is a use case for generic label policy, and I'd be interested in a proposal for it, but IMO we shouldn't implement something special for pod security.

I do not know if I buy the "let us wait for the generic label policy" approach, especially since we want the pod security stuff to mostly be static. Having a one-off approach for pod security that encodes some well-known permissions for the pod security labels could provide significant value and safety to this feature, especially in environments where users are allowed to provision namespaces.
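For comparison, the PSP `use` pattern referenced in this thread is an RBAC rule on a virtual verb, as sketched below. Any analogous check gating who may set the `pod-security.kubernetes.io/*` labels would be a new construct that the KEP does not define; the role name and policy name here are assumptions:

```yaml
# Existing PodSecurityPolicy pattern: RBAC grants the virtual "use" verb on a
# specific policy. Shown only for comparison with the SAR-check idea above; a
# label-gating analog for this proposal is NOT defined by the KEP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted-user   # assumed name
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted"]   # assumed policy name
    verbs: ["use"]
```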
> We are targeting GA in v1.24 to allow for migration off PodSecurityPolicy before it is removed in v1.25.
While I understand the rationale, this seems aggressive for exactly the wrong reason. 😞
Yeah. We shouldn't rush it just for the sake of rushing it. If there are red flags, we'll hold it back. PSP is only beta, so I don't think it would be terrible if this was still beta in v1.24 (although I'd really like to get it past alpha).
/lgtm
/approve 🎉
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: BenTheElder, deads2k, enj, IanColdwater, tallclair. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Once this is merged, do we want to go back and tweak the blog article about it?
/lgtm
That's a wrap! /remove-hold
This change adds the runc wrapper, which has the purpose of applying policies set through Kubernetes namespaces. For now, it uses the namespace labels proposed in the PSP Replacement KEP [0]. The runc wrapper checks whether the container which is about to be started was scheduled by kubelet (by checking runc annotations set by kubelet / runtime servers). If some non-default policy is set, that policy is written for the particular container to the BPF map, so that BPF programs can be aware of that policy. For now, there is no way of proving that those annotations really come from Kubernetes. Coming up with some sane way of securely proving that will be something to implement in follow-up work. [0] kubernetes/enhancements#2582 Fixes: #3 Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
This KEP proposes a new policy mechanism to replace the use cases covered by PodSecurityPolicy.
This is an adaptation of the initial proposal that's been under discussion by members of sig-auth and sig-security, here: https://docs.google.com/document/d/1dpfDF3Dk4HhbQe74AyCpzUYMjp4ZhiEgGXSMpVWLlqQ/edit?usp=sharing
Most of the content is copied over from that doc, with the following additions:
@liggitt recorded a demo of this proposal, which you can find here: https://youtu.be/SRg_apFQaHE
Enhancement issue: #2579
Outstanding unresolved sections:
Alpha blockers:
- allow (`enforce`, `warning`, `audit`)

Implementation-time decisions:
Beta blockers:
Required approvals: