[upgrade/postupgrade] FATAL post-upgrade error: unable to create/update the DNS service: services "kube-dns" not found (using coredns) #2358
Comments
hi,
the service that coredns uses is also called kube-dns, because that was part of the original transition plan in k8s from kube-dns to coredns. #sig-network on k8s slack knows more about this topic. the coredns / kube-dns manifests are here:
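For reference, a quick way to verify this naming on a kubeadm-managed cluster (a sketch; on clusters set up by other installers, such as kubespray, the Service may be named differently):

```
# Even with CoreDNS as the DNS addon, kubeadm keeps the legacy Service name.
kubectl -n kube-system get svc kube-dns

# ...while the Deployment carries the CoreDNS name.
kubectl -n kube-system get deploy coredns
```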
this version is not supported. you'd have to be at 1.18 soon, as older versions are going out of support. if the problem persists there, please re-open the ticket.
btw, the error is coming from here: the hard-to-reproduce aspect only means that somehow the service is not available at that particular moment, which is bad.
@neolit123 I have faced exactly the same issue during a cluster upgrade from 1.16.3 to 1.17.11. The upgrade command (invoked via Ansible, with default module parameters) was:

timeout -k 600s 600s /usr/local/bin/kubeadm upgrade apply -y v1.17.11 --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --allow-experimental-upgrades --etcd-upgrade=false --force

The cluster was also using coredns before the upgrade. Can you please advise how to resolve it?
my understanding of this problem:
you can try editing the CoreDNS ConfigMap and Deployment to use 1.6.2 (downgrade CoreDNS). if 1.6.7 is something that kubespray installs, please contact the kubespray team.
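A minimal sketch of that downgrade, assuming the Deployment and its container are both named coredns in kube-system and that the coredns/coredns image tag applies (verify both against your cluster):

```
# Point the CoreDNS Deployment back at the 1.6.2 image.
kubectl -n kube-system set image deployment/coredns coredns=coredns/coredns:1.6.2

# Review the Corefile for directives that only exist in newer releases.
kubectl -n kube-system edit configmap coredns
```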
Thank you for the update. The funny thing is that even if the upgrade process fails, coredns 1.6.7 is installed (there are 2 ReplicaSets: one is failing with the corefile-backup ConfigMap, and one is working fine).
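To tell the two ReplicaSets apart, something like the following should work (the k8s-app=kube-dns label is what kubeadm's CoreDNS manifests use; treating it as valid on a kubespray-managed cluster is an assumption):

```
# List the CoreDNS ReplicaSets and their ready counts.
kubectl -n kube-system get rs -l k8s-app=kube-dns

# Inspect the failing one; <failing-rs> is a placeholder for its name.
kubectl -n kube-system describe rs <failing-rs>
```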
I encountered the same issue (with the same cluster) upgrading from 1.17.12 to 1.18.10, using kubespray v2.14.2.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version): kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-12T21:06:11Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version): 1.14.1 (upgrading to 1.15.11)
Kernel (e.g. uname -a): Linux 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
After searching the kubernetes slack, we're not the first to encounter that problem:
(slack requires an account registration)
https://kubernetes.slack.com/archives/C2V9WJSJD/p1589988061457900
https://kubernetes.slack.com/archives/C2V9WJSJD/p1570707565021200
https://kubernetes.slack.com/archives/CDQSC941H/p1597386500263300
What happened?
After launching the following command:
/usr/local/bin/kubeadm upgrade apply -y v1.15.11 --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --allow-experimental-upgrades --allow-release-candidate-upgrades --etcd-upgrade=false --force
kubeadm fails at the post-upgrade stage with the following error:
[upgrade/postupgrade] FATAL post-upgrade error: unable to create/update the DNS service: services "kube-dns" not found
But it is configured to use CoreDNS. Furthermore, the cluster was already using CoreDNS, and there was no kube-dns Deployment or Service.
What you expected to happen?
kubeadm correctly recognizes coredns and does not try to find a kube-dns service.
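For context, the DNS addon choice lives in the kubeadm ClusterConfiguration; a fragment like the following is what a cluster of this era would carry (the values here are illustrative, matching the versions reported above):

```
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.11
dns:
  type: CoreDNS
```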
How to reproduce it (as minimally and precisely as possible)?
The reproduction seems quite hard. The linked slack messages mention that this happens sometimes, sometimes not (!).
We are updating several clusters with mostly the same method, and (as of now) have only encountered that problem on one. And the previous upgrade (1.13 -> 1.14) went fine, even though coredns was already used.
However, some facts that might be related:
After seeing that kubeadm set clusterDNS in the kubelet config to the address 10.x.x.10 (x depending on the service subnet), I noticed that this clusterIP was taken by another service (from an application running on the cluster), and that the creation date of that service fell between the previous upgrade and the one where we encountered the error (so at least the timing makes sense). That clusterDNS setting was not actually used by the kubelets, because kubespray handles the kubelet configuration (I think) and uses the third IP in the range (10.x.x.3). But maybe this somehow confuses kubeadm? I did not have the time to set up a reproducing scenario, unfortunately. If I do, I will update the issue.
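A quick way to check whether the DNS clusterIP that kubeadm expects is already taken (10.233.0.10 below is only a stand-in for your 10.x.x.10; adjust to your service subnet):

```
# List every Service and look for the IP kubeadm wants to assign to kube-dns.
kubectl get svc --all-namespaces | grep 10.233.0.10
```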
Anything else we need to know?
The workaround mentioned in one of the slack messages works well, and allowed us to perform our upgrade. I'll note it here since it's probably more accessible for future kubeadm users who could stumble upon this issue:
Workaround
Copy the coredns Service. Create a new kube-dns Service from the copy, changing the name (to "kube-dns") and forcing the clusterIP to 10.x.x.10 (matching what is in your kubeadm config). Then relaunch your command.
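A sketch of that workaround, assuming the existing Service is named coredns in kube-system and using 10.233.0.10 as a stand-in for your cluster's 10.x.x.10 (check both against your cluster and your kubeadm config):

```
# Dump the existing CoreDNS Service as a starting point.
kubectl -n kube-system get svc coredns -o yaml > kube-dns-svc.yaml

# Edit kube-dns-svc.yaml:
#   - set metadata.name to kube-dns
#   - set spec.clusterIP to 10.233.0.10 (the IP from your kubeadm config)
#   - remove metadata.uid, metadata.resourceVersion,
#     metadata.creationTimestamp and the status: section,
#     so the apiserver accepts it as a new object

kubectl apply -f kube-dns-svc.yaml

# Then relaunch the kubeadm upgrade command.
```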