Upgrading a 1.12 cluster through 1.13 to 1.14 fails. #1471
Comments
I also did:
so that it was all reloaded and ready
Experiencing the same issue that @neolit123 describes in #1469 (comment). The etcd manifests have not been manually altered; the cluster was created before k8s 1.12.
/assign
should be resolved in 1.14.1
I just upgraded a cluster through 1.13.x and 1.14.4, and that worked OK. I then tried to upgrade to 1.15.0 and it failed: it was referencing the external address. I do still see localhost entries in etcd:
Should I just update all references to 127.0.0.1 to the external address in /etc/kubernetes/manifests/etcd.yaml, or is there more to it?
Try patching it manually. This should not have happened, as we had a special case to handle the etcd upgrade not being on localhost.
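In case it helps, a minimal sketch of what patching it manually could look like, assuming the default kubeadm file layout and a node IP of 10.192.0.2; the flag values shown are examples rather than values from this cluster, and a 1.12-era etcd server certificate may also need regenerating so its SANs cover the new address:

```sh
# Sketch, assuming node IP 10.192.0.2 and default kubeadm paths.
# Edit the etcd static pod manifest so etcd also listens on the node IP, i.e.:
#   --listen-client-urls=https://127.0.0.1:2379,https://10.192.0.2:2379
#   --advertise-client-urls=https://10.192.0.2:2379
sudo vi /etc/kubernetes/manifests/etcd.yaml

# The kubelet recreates the static pod on its own; confirm the extra listener:
sudo ss -ltnp | grep 2379

# Check whether the etcd server cert already carries the node IP in its SANs;
# if not, it needs to be re-minted as well (see the workaround in the issue report).
sudo openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'
```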
I need to upgrade the EKS version from 1.12 to 1.14.
BUG REPORT

Versions

kubeadm version (use kubeadm version): 1.14.0

Environment:
Kubernetes version (use kubectl version): 1.13.5
uname -a: any

What happened?
In 1.12 we bound etcd to localhost on single-master setups, and we minted certificates whose SANs included only the hostname, localhost, and the 127.0.0.1 IP.
In 1.13 we changed that behavior and started binding etcd to both 127.0.0.1 and the node IP; we also updated the certificate generation to pick up the change.
You can upgrade a cluster from 1.12 to 1.13 with no issues, because kubeadm upgrade plan still tries to assess etcd on localhost.
When you then try to upgrade the 1.13 cluster to 1.14, the upgrade fails because in 1.14 kubeadm tries to assess etcd on the node IP. While it is a valid assumption that etcd would be bound to the node IP if the cluster had been created with 1.13, this cluster was originally created with 1.12, so etcd is only bound to 127.0.0.1.
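For illustration only, this is roughly how the two behaviors show up in the generated static pod manifest; the flag values assume a node IP of 10.192.0.2 and are not taken from an actual cluster:

```sh
# Assumed example: client URL flags in /etc/kubernetes/manifests/etcd.yaml
#
# Manifest as generated by kubeadm 1.12 (localhost only):
#   --listen-client-urls=https://127.0.0.1:2379
#   --advertise-client-urls=https://127.0.0.1:2379
#
# Manifest as generated by kubeadm 1.13+ (localhost plus the node IP):
#   --listen-client-urls=https://127.0.0.1:2379,https://10.192.0.2:2379
#   --advertise-client-urls=https://10.192.0.2:2379
#
# A cluster created with 1.12 keeps the first form even after upgrading to 1.13,
# so the 1.14 upgrade's probe of https://<node-ip>:2379 cannot connect.
grep -E 'listen-client-urls|advertise-client-urls' /etc/kubernetes/manifests/etcd.yaml
```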
What did you expect to happen?
That we would either try to determine which address etcd is actually bound to, or have the change in 1.13 also modify the existing etcd configuration so that we don't strand clusters created with 1.12.
How to reproduce it (as minimally and precisely as possible)?
bring up a 1.12 single-master cluster
upgrade it to 1.13
try to upgrade it to 1.14
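A rough command-level sketch of those steps; the patch versions are illustrative, and the matching kubeadm binary is assumed to be installed before each step:

```sh
# On the single control-plane node, with the 1.13 kubeadm binary installed:
kubeadm upgrade plan
kubeadm upgrade apply v1.13.5   # succeeds: 1.13 still assesses etcd on localhost

# Install the 1.14 kubeadm binary, then:
kubeadm upgrade plan            # fails: 1.14 assesses etcd on the node IP,
kubeadm upgrade apply v1.14.0   # but etcd is still bound only to 127.0.0.1
```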
Anything else we need to know?
You can work around this issue with the following.
Using the kubeadm binary for the Kubernetes version you are currently on, regenerate the etcd configuration so that etcd also binds to the node IP, where 10.192.0.2 is the node IP (a sketch follows below).
You should now see etcd's port 2379 bound to both 127.0.0.1 and 10.192.0.2.
Now you can grab the new kubeadm and upgrade.
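A minimal sketch of what that workaround might look like, assuming kubeadm 1.13's init-phase subcommands, the default certificate paths, and a node IP of 10.192.0.2; the exact phases and the need to move the old certificates aside are assumptions, so check kubeadm init phase --help for the version in use:

```sh
# Sketch only; run as root on the single control-plane node (node IP 10.192.0.2 assumed).
kubeadm config view > /tmp/kubeadm.yaml   # capture the stored cluster configuration

# Move the old etcd serving/peer certs aside so kubeadm re-mints them with the node IP SAN.
mv /etc/kubernetes/pki/etcd/server.crt /etc/kubernetes/pki/etcd/server.key /tmp/
mv /etc/kubernetes/pki/etcd/peer.crt   /etc/kubernetes/pki/etcd/peer.key   /tmp/

# Regenerate the etcd certificates and the etcd static pod manifest.
# (If kubeadm refuses to overwrite an existing etcd.yaml, move it aside first.)
kubeadm init phase certs etcd-server --config /tmp/kubeadm.yaml
kubeadm init phase certs etcd-peer   --config /tmp/kubeadm.yaml
kubeadm init phase etcd local        --config /tmp/kubeadm.yaml

# etcd should now be listening on both 127.0.0.1:2379 and 10.192.0.2:2379.
ss -ltnp | grep 2379
```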