Error during kubeadm init - addon phase with coreDNS #2699
I suspect it's a problem in the |
/kind support |
Likely a problem with this https://www.haproxy.com/documentation/kubernetes/latest/configuration/ingress/ and headers like @pacoxu mentioned. We assume we have a working HA LB guide at https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md @felipefrocha are you following the same guide or something else? If you think something can be noted in there, please help us by sending a PR. |
@neolit123, I was following these docs, but I didn't use keepalived, since my only HAProxy node was well known and set in /etc/hosts on my master nodes.

```
# cat /etc/hosts (identical on all masters)
xxx.xxx.xxx.xxx k8s-haproxy
```

```
$ ping k8s-haproxy
PING k8s-haproxy (xxx.xxx.xxx.xxx) 56(84) bytes of data.
64 bytes from k8s-haproxy (xxx.xxx.xxx.xxx): icmp_seq=1 ttl=64 time=0.287 ms
64 bytes from k8s-haproxy (xxx.xxx.xxx.xxx): icmp_seq=2 ttl=64 time=0.569 ms
64 bytes from k8s-haproxy (xxx.xxx.xxx.xxx): icmp_seq=3 ttl=64 time=0.684 ms
64 bytes from k8s-haproxy (xxx.xxx.xxx.xxx): icmp_seq=4 ttl=64 time=0.397 ms
```

The HAProxy config (`/etc/haproxy/haproxy.cfg`) follows the documentation you mentioned:
```
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0 info
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 1
    timeout http-request 10s
    timeout queue 20s
    timeout connect 5s
    timeout client 20s
    timeout server 20s
    timeout http-keep-alive 10s
    timeout check 10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:6443 mss 1500
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    # server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
    server k8s-master01 xxx.xxx.xxx.xxx:6443 check fall 3 rise 2
    server k8s-master02 xxx.xxx.xxx.xxx:6443 check fall 3 rise 2
    server k8s-master03 xxx.xxx.xxx.xxx:6443 check fall 3 rise 2
```
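The backend above declares both `option httpchk` and `option ssl-hello-chk` as health checks. Since kube-apiserver serves `/healthz` only over TLS, another way to probe it is to let the HTTP check itself negotiate TLS. The variant below is a sketch of my own, not from the kubeadm guide; it assumes HAProxy 2.x, where `check-ssl` wraps the probe in TLS and `verify none` skips validation of the apiserver's self-signed serving certificate:

```
backend apiserver
    mode tcp
    balance roundrobin
    option httpchk GET /healthz
    http-check expect status 200
    # check-ssl: send the /healthz probe over TLS
    # verify none: don't validate the apiserver's certificate
    server k8s-master01 xxx.xxx.xxx.xxx:6443 check check-ssl verify none fall 3 rise 2
```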
Besides this setup, I followed the instructions found at ha k8s |
It's not a bug in the kubeadm code. Does it happen every time? |
you might be able to get more responses on the support channels. i don't see a kubeadm bug, but if one is confirmed please open the issue with more details. |
The error does come from the haproxy. To resolve it, I skipped the proxy initialization part by using |
Maybe a very silly thing, but for anyone that may find this issue: I ran into this because |
If '--skip-phases=addon/kube-proxy' is used, it does let the install complete. Give it about 40 seconds and then run
to install the kube-proxy addon successfully. (retry if you need to wait a few more seconds) ... On CentOS 9 Stream I also had to copy the whole containerd default configuration, then modify the systemd line
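The commands in this workaround were truncated in the comment above; a rough sketch of what they might look like follows. The exact flags and paths here are my assumption, not the commenter's original text:

```
# 1. initialize without the kube-proxy addon
kubeadm init --skip-phases=addon/kube-proxy

# 2. ~40 seconds later, run just the kube-proxy addon phase
kubeadm init phase addon kube-proxy --kubeconfig /etc/kubernetes/admin.conf

# CentOS 9 Stream: regenerate the full default containerd config, then
# enable the systemd cgroup driver by setting SystemdCgroup = true under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
```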
|
For me, the tip to install kube-proxy later did not work on Ubuntu 22.04. Install logs:
some logs from journalctl:
|
@gaetanquentin were you able to resolve your issue? I'm running into the same thing. |
No. I had to switch to Canonical MicroK8s.
same problem here on ubuntu 22.04 |
I encountered same problem on Ubuntu 22.04 |
It worked for me on Ubuntu 22.04 Server. Additionally, I also had to clean up the Flannel CNI config files. Kubernetes: 1.24.1
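Cleaning up leftover Flannel CNI state usually means removing its config file and network interfaces before re-running kubeadm. The file name and interface names below are the Flannel defaults and are my assumption, since the comment doesn't list them:

```
# remove the leftover Flannel CNI config (default file name)
sudo rm -f /etc/cni/net.d/10-flannel.conflist
# delete the network interfaces Flannel created, if present
sudo ip link delete cni0
sudo ip link delete flannel.1
sudo systemctl restart containerd
```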
I did this as well and mine is working too.
What keywords did you search in kubeadm issues before filing this one?
coredns, addons, troubleshooting
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use `kubeadm version`):
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use `kubectl version`):
Client Version: v1.24.0 | Kustomize Version: v4.5.4
Cloud provider or hardware configuration:
dell optiplex 3070
OS (e.g. from /etc/os-release):
Debian GNU/Linux 11 (bullseye)
Kernel (e.g. `uname -a`):
5.10.0-14-amd64 #1 SMP Debian 5.10.113-1 (2022-04-29) x86_64 GNU/Linux
Container runtime (CRI) (e.g. containerd, cri-o):
revision="1.4.13~ds1-1~deb11u1" version="1.4.13~ds1"
Container networking plugin (CNI) (e.g. Calico, Cilium):
Others:
What happened?
During kubeadm initialization I keep receiving an error
What you expected to happen?
Initialization should happen without a problem
How to reproduce it (as minimally and precisely as possible)?
In a Debian environment, run initialization with the referenced tools
Anything else we need to know?