
Fix version upgrade kubelet support #16932

Merged: 2 commits into kubernetes:master on Dec 4, 2024

Conversation

rsafonseca (Contributor)

Automatically preserve kubelet supported version skew on worker nodes while control plane is being updated

Fixes: #16907
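In effect, the kubernetes version applied to worker-node instance groups is capped at whatever version the control plane is actually running. A minimal sketch of that idea, not the PR's code (the helper name and fallback behaviour are illustrative):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/version"
)

// kubeletTargetVersion illustrates the skew rule this PR enforces: a kubelet
// must not be newer than the kube-apiserver, so while the control plane still
// runs an older version, worker nodes are held at that version.
func kubeletTargetVersion(desired, controlPlaneRunning string) string {
	d, errD := version.ParseSemantic(desired)
	cp, errCP := version.ParseSemantic(controlPlaneRunning)
	if errD != nil || errCP != nil {
		return desired // if either version fails to parse, fall back to the desired version
	}
	if cp.LessThan(d) {
		return controlPlaneRunning
	}
	return desired
}

func main() {
	// Control plane still on 1.30.2 while targeting 1.31.0: kubelets stay on 1.30.2.
	fmt.Println(kubeletTargetVersion("1.31.0", "1.30.2"))
}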

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 4, 2024
@k8s-ci-robot (Contributor)

Hi @rsafonseca. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@rsafonseca (Contributor, Author)

cc @danports @hakman @justinsb, as discussed in the office hours.
I've only had time to do some basic testing and haven't gone through an actual upgrade yet, but the outputs look as expected :)

@danports (Contributor) commented Nov 4, 2024

Thanks @rsafonseca! On the one hand, I can't say I'm the biggest fan of the ergonomics of this approach: it means we have to run identical kops update cluster and kops rolling-update cluster commands twice in our CD script, and not having a command line switch like --instance-group-role makes the script less self-documenting. On the other hand, I'm very happy to have any solution that works, and this one has the advantage of being safe by default, since it automatically enforces the version skew policy. An "uber command", as @rifelpet called it, would still be great; while I haven't looked at the code at all, I wonder whether one could implement it by making a rolling update (of control plane or worker nodes) just another type of task in the update dependency tree.
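For concreteness, assuming the behaviour this PR introduces, the doubled flow would look roughly like this (real kops commands, annotated with the expected effect):

kops update cluster --yes          # control plane gets the new version; worker instance groups are held back
kops rolling-update cluster --yes  # roll the control-plane (and api-server) nodes
kops update cluster --yes          # control plane now runs the new version, so worker instance groups pick it up
kops rolling-update cluster --yes  # roll the worker nodes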

@rsafonseca (Contributor, Author) commented Nov 4, 2024

Yeah, it's not perfect for all use cases, but it's a first step, and we can build on it for the so-called "uber" command :)
Like you said, it's safe by default, but the user can override this behaviour and keep the old one by using the --ignore-version-skew command line switch.

In addition, this produces no behaviour change for kops update cluster except specifically when the kubernetes version is being increased (e.g. kops updates will still proceed normally and update things like the containerRuntime without needing two invocations).

It also allows the instance groups to continue to be managed while the control plane isn't updated, e.g. if I need to make a change to an instance group but haven't finished rolling out all the control-plane nodes.

As for the CD script, we're probably the least impacted here, since we already run a kops update loop on all clusters to reconcile with CRDs from our operator, so this change will (in our scenario) make upgrades seamless and safe without our having to change anything :)

Edit: optionally, an easy way to maintain the current behaviour in existing CD workflows is to ensure the api-server nodes are rolled out first in the rolling-update command: after the masters have rolled out, trigger an update cluster before moving on to the other instance groups :)

if err != nil {
klog.Warningf("error checking control plane running verion: %v", err)
} else {
klog.Warningf("successfully checked control plane running version: %v", minControlPlaneRunningVersion)
Member:

Nit: any reason to make this a warning (as opposed to Infof)?

Contributor (Author):

Oh no... that was just for my testing. I fully intended to make it info or debug and then forgot :)

cmd/kops/update_cluster.go: outdated review thread (resolved)

func checkControlPlaneRunningVersion(ctx context.Context, clusterName string, version string) (string, error) {

configLoadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
Member:

Nit: I think we should try to share this "get kubeconfig" code with the rolling-update logic (I think that also connects to the cluster). Not a blocker though.

LabelSelector: "node-role.kubernetes.io/control-plane",
})
if err != nil {
return version, fmt.Errorf("cannot list nodes in cluster %q: %v", clusterName, err)
Member:

Ah - this is tricky. If the control plane is offline, we presumably don't want to update the nodes (?)

Contributor (Author):

If the control plane is offline, this is the least of our worries lol. But it might be an expected situation (e.g. when creating a new cluster), so we just log the error and continue as normal.
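Pieced together from the fragments quoted in this thread, the whole check might look roughly like the sketch below. This is a reconstruction rather than the merged code; in particular, the kubeconfig context handling and the use of apimachinery's version package are assumptions:

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	utilversion "k8s.io/apimachinery/pkg/util/version"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkControlPlaneRunningVersion returns the lowest kubelet version reported
// by any control-plane node; on error it returns the passed-in version so the
// caller can log and continue (e.g. when creating a brand-new cluster).
func checkControlPlaneRunningVersion(ctx context.Context, clusterName string, version string) (string, error) {
	configLoadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		configLoadingRules,
		&clientcmd.ConfigOverrides{CurrentContext: clusterName}, // assumes the kubeconfig context is named after the cluster
	).ClientConfig()
	if err != nil {
		return version, fmt.Errorf("cannot load kubeconfig settings for cluster %q: %v", clusterName, err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return version, fmt.Errorf("cannot build kube client for cluster %q: %v", clusterName, err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "node-role.kubernetes.io/control-plane",
	})
	if err != nil {
		return version, fmt.Errorf("cannot list nodes in cluster %q: %v", clusterName, err)
	}
	lowest, err := utilversion.ParseSemantic(version)
	if err != nil {
		return version, fmt.Errorf("cannot parse version %q: %v", version, err)
	}
	for _, node := range nodes.Items {
		nodeVersion, err := utilversion.ParseSemantic(node.Status.NodeInfo.KubeletVersion)
		if err != nil {
			continue // skip nodes reporting an unparseable kubelet version
		}
		if nodeVersion.LessThan(lowest) {
			lowest = nodeVersion
			version = node.Status.NodeInfo.KubeletVersion
		}
	}
	return version, nil
}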

return err
}
if len(assetBuilder.KubeletSupportedVersion) > 0 && assetBuilder.KubeletSupportedVersion != c.Cluster.Spec.KubernetesVersion {
c.AssetsSupportedKubelet, err = c.addFileAssets(assetBuilder, assetBuilder.KubeletSupportedVersion)
Member:

Ah - great point - we need different assets. Although I'm starting to worry about complexity now...

Contributor (Author):

Yes, the complexity here worried me a bit as well as I dug into how to achieve this. The code isn't perfect, but I tried to do it without having to refactor too many things; if you have any better ideas, we can either improve this down the line or I can make any adjustments :)

@justinsb (Member) commented Nov 5, 2024

Thanks for this! A few random comments on the code but I'm going to mull on the overall approach.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 9, 2024
@justinsb (Member)

So we have added the ability to update only some of the instance groups, and in #16939 I am proposing to make it easy to drive those operations in a predefined sequence.

One thing I don't like about that approach is that we have to predefine the sequence, and one of the things I like in this PR is that we're more principled about the sequence: rather than hard-coding it, we effectively infer it at run-time.

It might be that the predefined approach is good enough to unblock 1.31, but I would like to pursue this direction more. I'm wondering if we should make it more explicit, adding a field for the Kube version on each instance group and managing that.

@danports (Contributor)

@justinsb Could we combine the two approaches by having the anneal/reconcile cluster command call update & rolling-update repeatedly, until the update command reports that there are no tasks left that it had to skip due to version skew or other dependencies? Then the anneal/reconcile command wouldn't have to hard-code any sequences, and that structure might also be able to incorporate running the --prune logic as well.
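A hypothetical shape for that loop; the runUpdateCluster and runRollingUpdate helpers are invented for illustration and are not kops APIs:

// reconcileCluster sketches the anneal/reconcile idea: alternate update and
// rolling-update until the update step reports that nothing was skipped due
// to version skew or other dependencies.
func reconcileCluster(ctx context.Context) error {
	for {
		skipped, err := runUpdateCluster(ctx) // hypothetical: applies what it can, returns the number of skipped tasks
		if err != nil {
			return err
		}
		if err := runRollingUpdate(ctx); err != nil { // hypothetical: rolls any out-of-date nodes
			return err
		}
		if skipped == 0 {
			return nil // nothing left blocked; the cluster is fully reconciled
		}
	}
}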

@justinsb (Member)

@danports yes, I think we might need a combination. Something brought up in office hours was that we also need to support this for terraform, where we can't emit partial configuration (at least I don't think we can!). I've gone round in circles on other ways to make this work, but I'm feeling like the approach in this PR is probably the only one we can really do, short of introducing a kubernetesVersion field in the instance group, which is its own management problem.

I am doing some work to try to clean up our assets logic so that this PR doesn't have to plumb through multiple asset versions - that should make this PR smaller I hope!

@justinsb (Member)

OK, I think I've got something working, and it is based on this PR. The approach here is very clever, and because terraform will delete resources if we stop emitting them, I think this is the right way forward. The downside is that we will have to run update / rolling-update / update / rolling-update, but we can address that with messages to the user and potentially a command like #16939 (though that will have to be reworked)

I do think we should merge #16954 first though, as with that PR the code in this PR becomes much easier. The whole building of fileAssets looks to be unnecessarily complicated, which is part of the reason why this PR is somewhat complicated. I have a rebase of it after #16954 so once that merges I think I can probably just update this PR branch (as a maintainer).

cc @hakman

@rsafonseca (Contributor, Author)

Cool, let me know if you need a hand with anything @justinsb :)

@k8s-ci-robot k8s-ci-robot added area/api and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Nov 28, 2024
@justinsb (Member)

/ok-to-test

I rebased; it's simpler now that we can resolve the file assets "at the last minute".

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 28, 2024
@rsafonseca (Contributor, Author)

Awesome, much cleaner change now :D

@justinsb (Member)

> Awesome, much cleaner change now :D

Thanks @rsafonseca - I want to emphasize that you did the hard work here of figuring out the approach. I just cleaned up some of the existing over-complicated code in kOps that was resolving file assets in an awkward way.

cc @hakman if we get this merged I have a few small follow-ons (e.g. there's a problem with featuregates being removed upstream), but I think this is our best approach, given terraform.

@@ -163,6 +171,7 @@ func NewCmdUpdateCluster(f *util.Factory, out io.Writer) *cobra.Command {
cmd.RegisterFlagCompletionFunc("lifecycle-overrides", completeLifecycleOverrides)

cmd.Flags().BoolVar(&options.Prune, "prune", options.Prune, "Delete old revisions of cloud resources that were needed during an upgrade")
cmd.Flags().BoolVar(&options.IgnoreVersionSkew, "ignore-version-skew", options.IgnoreVersionSkew, "Setting this to true will force updating the kubernetes version on all instance groups, regardles of which control plane version is running")
Member (rifelpet):

Given how many policies are defined here: https://kubernetes.io/releases/version-skew-policy/, and that this only violates one of them, perhaps we should make the flag more specific? --ignore-kubelet-version-skew

Contributor (Author):

I see your point @rifelpet, but I think instead we could keep this naming and add an additional check for whether the user is trying to update more than 3 minor versions at a time, throwing an error in that case unless the --ignore-version-skew flag is set. That would take care of all the version skews listed there, and the flag name would then make sense. WDYT?
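A sketch of the extra check being proposed here (illustrative only; the function is hypothetical, and the Minor() comparison assumes the major version is unchanged, which holds for Kubernetes 1.x):

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/version"
)

// checkMinorVersionJump refuses to move more than three minor versions in a
// single update unless the user explicitly opts out with --ignore-version-skew.
func checkMinorVersionJump(running, target *version.Version, ignoreVersionSkew bool) error {
	if ignoreVersionSkew {
		return nil
	}
	if target.Minor() > running.Minor()+3 {
		return fmt.Errorf("updating from %s to %s crosses more than 3 minor versions; pass --ignore-version-skew to force", running, target)
	}
	return nil
}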

Member:

So I think the two most common skews are going to be people updating the control plane by more than one version at a time, and the one we have here where people want to update the nodes at the same time as the control plane.

I suspect we could therefore get away with two flags, like ignore-node-version-skew and ignore-controlplane-version-skew (though maybe something like ignore-controlplane-skip-versions would be more accurate). But I'm wondering if making this a string would be more ergonomic, so we would have ignore-version-skew=all and ignore-version-skew=kubelet.

The trick would be that we would accept a bare --ignore-version-skew to mean --ignore-version-skew=all, so we wouldn't actually have to define everything yet; we're just leaving the "flag API surface" open to future extensions. I'll check whether we can have a defaulting string...

Member:

So we can do this, using something like:

	// By default we enforce the version skew between control plane and worker nodes
	options.IgnoreVersionSkew = "DoNotIgnore"
	cmd.Flags().StringVar(&options.IgnoreVersionSkew, "ignore-version-skew", options.IgnoreVersionSkew, "Setting this to true will force updating the kubernetes version on all instance groups, regardless of which control plane version is running")
	cmd.Flags().Lookup("ignore-version-skew").NoOptDefVal = "IgnoreAllSkew"

The thing I don't like is that it won't be very discoverable, particularly once we have more options. The help with the above looks like this:

      --ignore-version-skew string[="IgnoreAllSkew"]   Setting this to true will force updating the kubernetes version on all instance groups, regardless of which control plane version is running (default "DoNotIgnore")

So we're going to be struggling to get a clear description here. Maybe that just doesn't matter - this is blocking the 1.31 beta release, so maybe this is good enough?

The alternative is that we have separate flags, --ignore-kubelet-version-skew, --ignore-upgrade-skip-versions and maybe that's all (but if not we just add more). We'd have more flags, but they would be simple.
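For comparison, the separate-flags alternative would just register two plain booleans next to the existing flags in cmd/kops/update_cluster.go; the options field names below are hypothetical:

	// Hypothetical field names; the flag names are the ones discussed above.
	cmd.Flags().BoolVar(&options.IgnoreKubeletVersionSkew, "ignore-kubelet-version-skew", options.IgnoreKubeletVersionSkew, "Update the kubernetes version on worker instance groups even if it is newer than the version skew policy allows for the running control plane")
	cmd.Flags().BoolVar(&options.IgnoreUpgradeSkipVersions, "ignore-upgrade-skip-versions", options.IgnoreUpgradeSkipVersions, "Allow the control plane to skip minor versions when upgrading")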

Member:

Having separate flags makes sense to me, because users may want to toggle them separately; that's easier to understand as independent flags than having to choose the most appropriate value for a combined flag.

Member:

Yeah, I think it does seem simpler as I don't think we're going to end up with too many of them (and if we do, we can always hide the flags and move to an aggregated flag)

Any objections to --ignore-kubelet-version-skew? I think I can push a change if we all agree. cc @rsafonseca

Contributor (Author):

No concerns from me, feel free to push the change @justinsb :)

@justinsb (Member) commented Dec 2, 2024

Thanks all - I renamed the flag and pushed!

cmd/kops/update_cluster.go: two outdated review threads (resolved)
rsafonseca and others added 2 commits on December 3, 2024:
…, while control plane is being updated

Co-authored-by: Peter Rifel <rifelpet@users.noreply.github.com>
@justinsb (Member) commented Dec 3, 2024

Thanks @rifelpet; I incorporated your log message tweaks. We do need some follow-on PRs (some node feature gates have been removed in 1.31), so we can also make more tweaks in follow-ons as we observe things (many of which will be of the "oh, we should consider the instance group version, not the cluster version" variety).

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Dec 4, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rifelpet

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 4, 2024
@k8s-ci-robot k8s-ci-robot merged commit 6dfbd46 into kubernetes:master Dec 4, 2024
23 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.31 milestone Dec 4, 2024