[AWS CCM] Permission to create SA token #11368
Conversation
/hold

I'm not 100% sure about these yet. I was thinking it might be better to use a Deployment so that the replica count is independent of the control plane node count, and a Deployment with 1 replica matches the kops default for other components like KCM slightly better (although I'm not sure which controller governs them). This means we will need to disable host networking, though, and I notice KCM is running with it enabled, so I'd like to understand whether there's any reason we need to run CCM the same way (getting metrics without a Service?). Or maybe it should be configurable.

Regarding the need to create service account tokens: I have a PR open to move the CCM to use the provided cloud provider libraries, and this, for some reason, requires permission to create a service account token. We might want to make this permission optional; we shouldn't need it in all cases (e.g. if we're using certificates for client identity), and even if we are using a service account, we aren't the token controller, so why do we need this permission? I need to look into this one a bit more but wanted to open this as a placeholder.
```yaml
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: aws-cloud-controller-manager
spec:
  replicas: 1
```
Wouldn't we want some redundancy for this, like we do for controller-manager?
Yeah, maybe start with a default of 2.
You can set replicas to 1 or 2 based on the number of control plane nodes here: pkg/model/components/cloudconfiguration.go
You'd just have to add the instance groups to the struct. These are created in upup/pkg/fi/cloudup/populate_cluster_spec.go
The concern I have with turning off host networking would be whether we'd get a dependency loop between this and the CNI.
The advantage of using a DaemonSet is that the level of redundancy for the CCM pod matches the redundancy of the control plane hosts: users with 1 control plane host get 1 of each control plane pod. We could also settle on using a template function to set the Deployment replicas to either 1 or 2 based on the number of control plane instance groups. We'll also want to use …

With a …
I don't think this should be a problem; CNIs shouldn't be dependent on the CCM. One thing that may be a concern is if it accesses the metadata API and the IMDSv2 max hop limit is set to 1.

KCM etc. are static pods, which is why they have host networking and why we have higher redundancy than we need for controllers with leader election.

Not a huge fan of template functions here, technically. But there are a number of controllers/operators where we could use this functionality.

My main concerns for a cyclic dependency would be the volume controller and the route controller. If the CNI can't get nodes in order to run its operators and get itself running, that would be a problem.
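The DaemonSet-on-control-plane approach discussed above could look roughly like the following. This is a hedged sketch, not the kops manifest: the image reference is a placeholder, and the service account name, node selector label, and flags are assumptions (the `node-role.kubernetes.io/master` label and taint were the common convention for control plane nodes in the 1.21 era):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: aws-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: aws-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: aws-cloud-controller-manager
    spec:
      # Pin to control plane nodes so CCM redundancy tracks control plane size.
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      # Host networking avoids depending on the CNI, sidestepping the
      # CCM <-> CNI bootstrapping loop raised in this thread.
      hostNetwork: true
      serviceAccountName: cloud-controller-manager  # assumed name
      containers:
        - name: aws-cloud-controller-manager
          image: example.registry/aws-cloud-controller-manager:v1.21.0  # placeholder
          args:
            - --cloud-provider=aws
            - --leader-elect=true
```

With leader election enabled, running one pod per control plane node gives failover without the replicas-vs-node-count coupling that a Deployment would have to manage explicitly.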
Force-pushed from 780f239 to 82ca7d7
I decided to remove the change from DaemonSet -> Deployment from this PR; if we decide it's still desired, we'll create a separate PR. The more important change is granting the permissions required for the controller's 1.21 changes, where we use the upstream libraries for controller initialization. These libraries (clientbuilder) require SA token create permissions because they create an SA token per controller. This is a good thing: it gives each controller its own identity, which helps with audit logs and reduces the blast radius of potential vulnerabilities in a single controller. SA per controller is enabled/disabled with …
In kubernetes/kubernetes#99291, Jordan Liggitt states:
But it seems this PR is granting broad "serviceaccounts/tokens" permissions, not just on the resourceNames for the expected service accounts.
Force-pushed from 82ca7d7 to 202f590
Thanks @johngmyers, I've restricted it to just token create permissions on "node-controller", "service-controller", and "route-controller".
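The restricted rule described above would look roughly like this in RBAC terms. This is a sketch, not the manifest from the PR; the ClusterRole name is illustrative. The key point is that `resourceNames` works here because the TokenRequest create happens on the `serviceaccounts/token` subresource of a named service account:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager  # illustrative name
rules:
  # Allow token creation only for the three controller identities,
  # instead of a blanket grant on all service accounts.
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    resourceNames: ["node-controller", "service-controller", "route-controller"]
    verbs: ["create"]
```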
@nckturner Thanks. You need to run …
* We need the ability to create service account tokens because this is required by the clientbuilder/controller-manager framework, which we will be using in 1.21.
* This is required for the CCM to use one SA per controller, which follows the principle of least privilege and makes audit logs easier to understand.
* Restricts token creation to the resource names "node-controller", "service-controller", and "route-controller".
Force-pushed from 202f590 to 0239dc1
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: johngmyers

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing …
/unhold
Permission to create service account tokens
Note: edited to remove the change from DaemonSet -> Deployment. I think that change might still be valuable, at least as an option, but it should be distinct from this change.