[Roadmap] Support node specific configuration #2367
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle frozen
We are already at v1beta4, and I just don't think v1 will have instance-specific configuration that introduces new structures like a NodeConfiguration that is stored in a ConfigMap for a given node and managed by kubeadm. We actually also haven't seen demand for that from users; it seems they are fine with the patches and with keeping the Init/JoinConfiguration YAML around. The only difficult aspect has been the kubelet instance configuration, and we are finally adding that. Maybe we can consider this for a vNext (e.g. v2), /shrug
My problem with the current patch support is that the parameter needs to be given again on upgrades. Keeping the path to the patches in the kubeadm-config ConfigMap, or in an annotation on the node, would help (an annotation might make the most sense). We often have the problem that people forget to apply the extra parameter when upgrading.

Edit: I'm referring to the kubelet config patches above, as they are the most commonly used ones. I also see that the setting can now be given in the InitConfiguration and JoinConfiguration, which I can't remember being the case previously (I know that there were warnings about passing these...). The non-kubelet patches might not have such an obvious place to save the path.

I see the note about the kubeletExtraArgs setting (we were mostly messing with /etc/default/kubelet instead, but often had to edit the kubeadm-generated file in the case of upgrades, e.g. migrating from Docker to containerd or removing deprecated/removed flags, which is also not ideal). (Using Ansible for much of it.)
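For illustration, the patches path mentioned above can be pinned in the kubeadm configuration instead of being repeated as a --patches flag on the command line; a minimal sketch, where the directory path is a placeholder and JoinConfiguration accepts the same field:

```yaml
# Illustrative InitConfiguration; kubeadm.k8s.io/v1beta3 added the "patches"
# field to both InitConfiguration and JoinConfiguration.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
patches:
  # Directory holding per-node patch files, e.g. kubeletconfiguration.yaml
  # (example path; any readable directory works).
  directory: /etc/kubernetes/patches
```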
One benefit of the patches approach is that it gives the user the control to not apply a certain patch (node config) to components when it is not needed on upgrade to version next. If during init/join a certain node config, or the patches for it, are written into the cluster, that node config becomes pinned, and to fix that the user must go and edit a ConfigMap before upgrade if it is no longer needed. To me it seems the UX is not necessarily better with this.
That is a user mistake, I would say.
Ideally, all user-provided kubelet flags should be gone (as they are deprecated in the kubelet), but we will keep kubeletExtraArgs, of course.
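As a concrete (hypothetical) example of the kubeletExtraArgs route mentioned above: in the v1beta4 API the field is a list of name/value pairs rather than a map; the flag name and value below are placeholders:

```yaml
# Illustrative per-node JoinConfiguration snippet (discovery section elided).
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # Typically written by kubeadm into /var/lib/kubelet/kubeadm-flags.env
    # on this node only.
    - name: node-labels
      value: "example.com/rack=r42"   # placeholder per-node label
```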
The settings that we have in the patches seem unlikely to change significantly between versions: cgroupDriver (I think all have been migrated to systemd by now, at least), maxPods, and resolvConf (we have had clusters with a mix of node operating systems, mostly for testing, and then that setting needs to be set per node). (criSocket would also make sense, but there is the existing annotation for that...)

(The non-kubelet configs are another story. I think you replied to the pre-edit version of my comment, before I remembered that patches are supported for more than just the kubelet; we only use them for the kubelet.) (Kubelets have a 1:1 mapping to Node objects, the control plane components not necessarily, although I do not think kubeadm supports control-plane nodes without a kubelet.)

I do not think that keeping the contents of the patch on the cluster makes sense, but the path to it might (the UpgradeConfiguration would help as well, though).
Currently, the kubelet flags that we go to a lot of effort to add are ... In the kubeadm cases, the playbook mainly removes the settings added previously (some of these clusters have been upgraded from ~1.15, so the parameters changed quite a bit over time). (I would prefer if that config was updated by kubeadm on upgrades.) (The runtime settings are updated elsewhere if the runtime changes.)
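For illustration, the per-node kubelet settings listed above could live in a single file in the patches directory; a minimal sketch, with the file name following kubeadm's target+patchtype naming convention and all values being examples:

```yaml
# /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml (example path/name)
# Applied by kubeadm on top of the cluster-wide kubelet configuration,
# for this node only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
maxPods: 200                                   # example per-node value
resolvConf: /run/systemd/resolve/resolv.conf   # example for systemd-resolved nodes
```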
The link in my previous post is the feature which adds a feature gate to not use the annotation and instead use a /var/lib/kubelet/instance-config.yaml (which is technically a patch).
I think it does help, as it adds consistency with --config.
The kubelet exposes it as a field as well.
We do perform cleanup, but only for the flags kubeadm maintains (i.e. not for --allowed-unsafe-sysctls). If we missed doing cleanup on a managed flag that is no longer needed, that's a kubeadm bug.
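A rough sketch of what such an instance configuration might contain, assuming it is a KubeletConfiguration fragment carrying the CRI socket that is today kept in the node annotation (the exact content and shape written by kubeadm may differ):

```yaml
# /var/lib/kubelet/instance-config.yaml (assumed path from the discussion above)
# Hypothetical node-local fragment merged over the shared kubelet --config file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # per-node CRI socket
```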
That config, if enabled by default or by a command-line parameter, would work for our EKS issue as well (where kubeadm is not involved and they generate the config from their node bootstrap script, called from the end of a cloud-init script, so changing settings in the config is hard there).
Correct 🎉
I do know it is in the config, but we can't control the config that AWS generates with EKS (for the kubeadm clusters that would be useful). We currently only set that for a specific customer's application where we need to tweak network buffers and max connections (whether the items are actually safe and missing from the default allowlist might be the real question). It might be possible to add an instance-config, but AWS might use that for their own purposes (an instance-config.d directory might be the other way to do it).
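For reference, the setting under discussion is a KubeletConfiguration field, so on kubeadm clusters it could also be carried in a per-node patch; the sysctl names below are just examples of the kind of network tuning mentioned:

```yaml
# Illustrative KubeletConfiguration fragment; pods scheduled on this node can
# then request these sysctls through their securityContext.sysctls.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
allowedUnsafeSysctls:
  - "net.core.somaxconn"   # example: connection backlog tuning
  - "net.ipv4.tcp_rmem"    # example: receive buffer tuning
```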
With kubeadm, those tend to go into /etc/default/kubelet. (It is a lot easier if just one tool messes with a config.) (We don't currently have extra settings on any of the kubeadm-deployed clusters.) I suspect that in our case, we mainly messed with it when switching existing servers from Docker to containerd. (I'm not sure if setting the annotation / patches and running ...)
Note, the kubelet has another way of doing instance config without kubeadm.
kubeadm is only going to clean up kubeadm-flags.env, as /etc/default/kubelet is user-owned.
The migration guide had instructions to modify the annotation and kubeadm-flags.env, FWIW.
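Assuming "another way" refers to the kubelet's drop-in configuration directory (the --config-dir flag), a per-node override could be dropped in as a fragment like the sketch below; path, file name, and value are illustrative:

```yaml
# /etc/kubernetes/kubelet.conf.d/90-local-overrides.conf (illustrative; drop-in
# files use the .conf suffix and are merged over the main --config file).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 150   # example per-node override, no kubeadm involvement needed
```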
Kubeadm as of today does not provide first-class support for node-specific configuration.
There are ways to achieve this, like using patches or, for the kubelet only, providing node-specific extra args, but in the end the UX is not ideal and we are not covering all the use cases.
As a consequence, I'm proposing to include in the discussions for the next kubeadm roadmap a definition of proper support for node-specific configuration in the kubeadm API, possibly in line with the outcomes of the ongoing discussion in kubernetes/enhancements#1439.
This should cover the core components managed by kubeadm (API server, controller-manager, scheduler, etcd, kubelet), but also add-ons when this makes sense.