Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Upgrading from 1.7.x to another 1.7 release using kubeadm v1.8.0 leads to errors.
Versions
kubeadm version (use kubeadm version): 1.7.5 and 1.8.0
Environment:
Kubernetes version (use kubectl version): 1.8.0
Cloud provider or hardware configuration: none
OS (e.g. from /etc/os-release): CentOS 7
What happened?
install a Kubernetes 1.7.x (e.g. 1.7.3) cluster using kubeadm 1.7.x
install kubeadm 1.8.0
run "kubeadm upgrade apply v1.7.7"
you will get errors like the following:
....
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
[upgrade/postupgrade] FATAL post-upgrade error: [unable to update RBAC clusterrolebinding: ClusterRoleBinding.rbac.authorization.k8s.io "kubeadm:node-autoapprove-bootstrap" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"system:certificates.k8s.io:certificatesigningrequests:nodeclient"}: cannot change roleRef, unable to update RBAC rolebinding: RoleBinding.rbac.authorization.k8s.io "kubeadm:bootstrap-signer-clusterinfo" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"kubeadm:bootstrap-signer-clusterinfo"}: cannot change roleRef]
What you expected to happen?
upgrade should succeed without errors
How to reproduce it (as minimally and precisely as possible)?
install kubeadm 1.7.5
kubeadm init --kubernetes-version v1.7.3
install kubeadm 1.8.0
kubeadm upgrade apply v1.7.7
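On CentOS 7 (the environment reported above), the reproduction steps can be sketched as follows. This is a sketch assuming the official Kubernetes yum repository is already configured; the versioned package names follow that repository's convention:

```shell
# Install kubeadm 1.7.5 and create a 1.7.3 cluster with it.
yum install -y kubeadm-1.7.5
kubeadm init --kubernetes-version v1.7.3

# Upgrade only the kubeadm binary to 1.8.0.
yum install -y kubeadm-1.8.0

# Attempt an in-series upgrade of the cluster itself; this is the
# step that fails with the "cannot change roleRef" errors above.
kubeadm upgrade apply v1.7.7
```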
Anything else we need to know?
Automatic merge from submit-queue.
Don't try to migrate to new roles and rolebinding within 1.7 upgrades
**What this PR does / why we need it**:
If a user runs kubeadm 1.8.0 to upgrade within the 1.7.x series, don't try to migrate to the new RBAC rules and names; doing so leads to errors like those described in kubernetes/kubeadm#475.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: kubernetes/kubeadm#475
**Special notes for your reviewer**:
**Release note**:
```release-note
- kubeadm 1.8 now properly handles upgrades from 1.7.x to a newer release in the 1.7 branch
```
In case anyone comes across the above-mentioned errors during a kubeadm upgrade, this is the workaround I used to get kubeadm upgrade apply to succeed:
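The comment's exact commands are not quoted here, but a workaround consistent with the error message can be sketched: roleRef is immutable in Kubernetes RBAC, so the stale bindings have to be deleted so kubeadm can recreate them on the next attempt. This is a sketch, not the commenter's verified procedure; the binding names are taken from the error output above, and the kube-public namespace for the RoleBinding is an assumption based on kubeadm's standard layout:

```shell
# Bindings named in the "cannot change roleRef" error above.
CRB="kubeadm:node-autoapprove-bootstrap"
RB="kubeadm:bootstrap-signer-clusterinfo"

# roleRef cannot be updated in place, so delete the old bindings
# and let kubeadm recreate them with the new roleRef.
kubectl delete clusterrolebinding "$CRB"
kubectl delete rolebinding -n kube-public "$RB"   # namespace assumed

# Re-run the upgrade, which recreates the bindings.
kubeadm upgrade apply v1.7.7
```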