Provisioning each Control Plane Machine with Unique Credentials. #3782
Comments
Thinking somewhat generically about this with a bit of an AWS-tinted set of glasses, I would expect to be able to accomplish something along these lines using a combination of IAM profiles that allow access to the needed services for KMS, and the individual identities of the control plane instances. Of course that isn't overly applicable to all the places that Cluster API can run, or for supporting a KMS provider that isn't tied closely to the cloud provider (such as Hashicorp Vault), so I do think it makes sense to try to enable this type of workflow in a more generic way. |
/milestone Next |
Possibly a use case within #3761 . Similar to how we want to treat domain joins potentially. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
/lifecycle frozen |
Actually, should we close this? |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. |
@randomvariable @yastij Can this be part of machine attestation? |
it's completely orthogonal. machine attestation isn't part of control plane provisioning (because you're providing the machine with the cluster key material). we'll need to treat it as part of whatever happens if we revisit #4221 |
/lifecycle frozen |
/assign @randomvariable |
/area control-plane |
Reading through this again, I wonder if this is a use case for #5175 |
I think i'll take my name off this for now, as I'll be reviewing the area labels on a frequent basis. /unassign |
/triage accepted |
@fabriziopandini: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
(doing some cleanup on old issues without updates) |
@fabriziopandini: Closing this issue. |
User Story
As an operator, I would like to be able to provision each control plane machine with unique credentials. My use case is for KMS Plugins. KMS plugins need to run as static pods or services and can't rely on the api-server (configmaps, secrets). (Can't encrypt secrets if you rely on them!)
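For context, a KMS plugin is wired into the API server through an `EncryptionConfiguration` file that points at the plugin's local Unix socket, which is why the plugin and its credentials must already be on the machine before the API server starts. A minimal sketch (the plugin name, socket path, and credential details below are illustrative, not from this issue):

```yaml
# Illustrative EncryptionConfiguration for a KMS plugin.
# kube-apiserver reads this file at startup, so the plugin and any
# credentials it needs must exist on the machine beforehand -- they
# cannot be delivered via ConfigMaps or Secrets.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: example-kms-plugin               # hypothetical plugin name
          endpoint: unix:///var/run/kms-plugin/socket.sock
          cachesize: 1000
          timeout: 3s
      - identity: {}                             # fallback for reads of old data
```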
Today the best I can do is use KCP and add the credentials as a file, but the problem is that this file is shared by the whole KCP replica set rather than scoped to an individual machine. Ideally, each KMS plugin instance has its own "identity". In theory, I guess I could try appending to the Files list each time I scale up, but that sounds pretty messy and would result in control planes having other control planes' creds. |
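Concretely, the `files` list lives on the KubeadmControlPlane's shared `kubeadmConfigSpec`, so every replica receives an identical copy. A sketch of the limitation (resource names, the credential path, and the file content are hypothetical):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane        # hypothetical name
spec:
  replicas: 3
  kubeadmConfigSpec:
    files:
      # This file is rendered identically into the bootstrap data of
      # every control plane machine -- there is no per-machine
      # templating, so one credential ends up on all three replicas.
      - path: /etc/kubernetes/kms-credentials.json
        owner: root:root
        permissions: "0600"
        content: |
          { "keyId": "shared-not-unique" }
```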
I am not really sure how we would go about editing the KCP CRD to support this. Is there a pattern used today by other Kubernetes "replica sets" to achieve this?
Another solution I thought of was letting infrastructure providers edit the bootstrap KubeadmConfig before it's encoded as cloud-init. The infrastructure providers could then inject files or other configs. I don't really know if this route makes much sense, but it would be nice if infrastructure providers had some say in the bootstrap data: a way for them to always set needed configs without relying on higher-level input.
/kind feature