Don't try to migrate to new roles and rolebinding within 1.7 upgrades #53338
Conversation
cc @luxas

```go
if err := clusterinfo.CreateClusterInfoRBACRules(client); err != nil {
	errs = append(errs, err)
// Not needed for 1.7 upgrades
if k8sVersion.AtLeast(constants.UseEnableBootstrapTokenAuthFlagVersion) {
```
This is incorrect... we still need this permission set up for 1.7 installs. This version constant just controls whether we use the `--enable-bootstrap-token-auth` or `--experimental-bootstrap-token-auth` flag. In either case, we require the RBAC permissions.
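For readers following along, a minimal sketch of the distinction being made here. The helper name is hypothetical and the import paths are approximations of the 1.8 tree; `AtLeast` and `UseEnableBootstrapTokenAuthFlagVersion` are the identifiers referenced in the diff above.

```go
package main

import (
	"k8s.io/kubernetes/cmd/kubeadm/app/constants"
	"k8s.io/kubernetes/pkg/util/version"
)

// bootstrapTokenAuthFlag is a hypothetical helper illustrating what the
// version constant actually gates: which apiserver flag name to use, not
// whether the bootstrap-token RBAC rules get created (those are needed for
// both 1.7 and 1.8 installs).
func bootstrapTokenAuthFlag(k8sVersion *version.Version) string {
	if k8sVersion.AtLeast(constants.UseEnableBootstrapTokenAuthFlagVersion) {
		return "--enable-bootstrap-token-auth=true"
	}
	return "--experimental-bootstrap-token-auth=true"
}
```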
The role exists if 1.7.x was initialized with kubeadm, only with a different roleRef than 1.8 expects. Not running this step means the cluster stays in the state expected within 1.7.x (no role/binding migration to the 1.8 naming schema). That's what @luxas suggested; the first version of this PR migrated to the 1.8 naming schema even in the 1.7.x -> 1.7.x upgrade case.
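To make the intent concrete, a minimal sketch of the behaviour being described, not the exact code in this PR; `maybeMigrateRBAC` is an illustrative name and the import paths are approximations of the 1.8 tree:

```go
package main

import (
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/cmd/kubeadm/app/phases/bootstraptoken/clusterinfo"
	"k8s.io/kubernetes/pkg/util/version"
)

// maybeMigrateRBAC (illustrative name) only applies the 1.8 role/binding
// naming schema when the upgrade target is 1.8 or newer; a 1.7.x -> 1.7.y
// upgrade leaves the roles/bindings created by kubeadm 1.7 untouched, so the
// cluster stays in exactly the state a 1.7.x control plane expects.
func maybeMigrateRBAC(client clientset.Interface, target *version.Version) error {
	if target.LessThan(version.MustParseSemantic("v1.8.0")) {
		return nil // 1.7.x target: keep the old naming schema
	}
	return clusterinfo.CreateClusterInfoRBACRules(client)
}
```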
Ah, this is just the upgrade path. This still seems like a fragile way to determine what changes are needed, but I'll defer to @luxas.

Were all currently supported upgrade paths tested with this change? (A sketch of the expected decision for each path follows the list.)
- 1.7.x -> 1.7.y upgrade
- 1.7.x -> 1.8.x upgrade
- 1.8.0 -> 1.8.0 "upgrade"
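For reference, a small table-driven sketch of the expected migrate/no-migrate decision for each of these paths, assuming (as this PR does) that the decision keys off the target version only. This is illustrative and not a test that exists in the repository:

```go
package main

import (
	"testing"

	"k8s.io/kubernetes/pkg/util/version"
)

// TestRBACMigrationDecision is an illustrative check, not part of this PR:
// roles/bindings should only be migrated to the 1.8 naming schema when the
// upgrade target is 1.8 or newer.
func TestRBACMigrationDecision(t *testing.T) {
	cases := []struct {
		from, to string
		migrate  bool
	}{
		{"v1.7.3", "v1.7.7", false}, // 1.7.x -> 1.7.y: keep the 1.7 names
		{"v1.7.7", "v1.8.0", true},  // 1.7.x -> 1.8.x: migrate to the 1.8 schema
		{"v1.8.0", "v1.8.0", true},  // 1.8.0 -> 1.8.0: 1.8 schema already expected
	}
	for _, c := range cases {
		got := version.MustParseSemantic(c.to).AtLeast(version.MustParseSemantic("v1.8.0"))
		if got != c.migrate {
			t.Errorf("%s -> %s: migrate = %v, want %v", c.from, c.to, got, c.migrate)
		}
	}
}
```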
Yes, I've tested the following on my development cluster:
- install a fresh 1.7.3 cluster using kubeadm 1.7.5
  - old roles/bindings naming schema
- upgrade 1.7.3 to 1.7.7 using modified kubeadm 1.8.0
  - no role/binding changes, old names/values
- upgrade 1.7.7 to 1.8.0-rc.1 using modified kubeadm 1.8.0
  - roles/bindings migrated to the 1.8 schema
- upgrade 1.8.0-rc.1 to 1.8.0 using modified kubeadm 1.8.0
  - roles/bindings in the 1.8 naming schema
Heads up: we're planning to cut 1.8.1 early this week. If this is a critical fix that needs to be in 1.8.1, please let me know; otherwise we can aim for 1.8.2.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kad, luxas

Associated issue: 475

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing `/approve` in a comment.
@jpbetz Yes, I'd really like this to make v1.8.1; it's a bug in one upgrade case.
@jpbetz FWIW, the kubeadm e2e job is broken due to some test-infra bazel stuff, but the CI jobs are passing, so please ignore the PR job e2e failure above. Currently the job fails fast and doesn't even start testing anything. We're looking into it and working on it.
@luxas Is this 1.8.1 only, or has it also been committed to master? If it's 1.8.1 only, I'd like to make sure the e2e tests are healthy, since I don't have the master branch tests to give me more confidence. Can you provide any more detail on what exactly about the test-infra is failing? And it looks like only

/test pull-kubernetes-e2e-kubeadm-gce
Removing label
/retest

Review the full test history for this PR.
/test pull-kubernetes-unit
@jpbetz It is 1.8.1 only. This fix is for the 1.7.x -> 1.7.x upgrade case, and master (1.9) has already dropped support for 1.7.x.
Unit tests seem to be flaky. @jpbetz It's only the presubmit that hasn't been able to start for about a day now. But things are green on the v1.8 e2e postsubmit, which is the real indicator: https://k8s-testgrid.appspot.com/sig-cluster-lifecycle#periodic-kubeadm-gce-1.8
/retest

Review the full test history for this PR.
/test all

[submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue.
Commit found in the "release-1.8" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error, find help to get your PR picked.
@kad: The following test failed, say /retest to rerun it.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
What this PR does / why we need it:

If a user uses kubeadm 1.8.0 to upgrade within 1.7.x versions, don't try to migrate to the new RBAC rules and names. Doing so leads to errors like those described in kubernetes/kubeadm#475.

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): kubernetes/kubeadm#475

Special notes for your reviewer:

Release note: