
Kubernetes 1.29 - node NotReady and pods still pending after weave deployment #3

Open
mvrk69 opened this issue Apr 14, 2024 · 10 comments


mvrk69 commented Apr 14, 2024

Hi,

I deployed a Kubernetes 1.29 cluster, and after deploying Weave the node is NotReady and the pods are stuck in Pending.
The same thing happens with Kubernetes 1.28.
The last version where it works fine is Kubernetes 1.27.
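For reference, the state was observed with the usual checks (exact output omitted here):

$ kubectl get nodes
$ kubectl get pods -A -o wide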

Anything else we need to know?

OS: Fedora CoreOS 39

Versions:

$ weave version 
latest
$ uname -a
Linux k8sm01 6.7.7-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Mar  1 16:53:59 UTC 2024 x86_64 GNU/Linux
$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3

Logs:

$ kubectl logs -n kube-system <weave-net-pod> weave
iptables backend mode: nft
DEBU: 2024/04/14 22:54:04.066975 [kube-peers] Checking peer "96:89:99:33:d7:08" against list &{[]}
Peer not in list; removing persisted data
INFO: 2024/04/14 22:54:04.137885 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:96:89:99:33:d7:08 nickname:k8sm01 no-dns:true no-masq-local:true port:6783]
INFO: 2024/04/14 22:54:04.137912 weave  2.8.6
INFO: 2024/04/14 22:54:04.619671 Bridge type is bridged_fastdp
INFO: 2024/04/14 22:54:04.619692 Communication between peers is unencrypted.
INFO: 2024/04/14 22:54:04.623727 Our name is 96:89:99:33:d7:08(k8sm01)
INFO: 2024/04/14 22:54:04.623764 Launch detected - using supplied peer list: [192.168.0.115]
INFO: 2024/04/14 22:54:04.623784 Using "no-masq-local" LocalRangeTracker
INFO: 2024/04/14 22:54:04.623788 Checking for pre-existing addresses on weave bridge
INFO: 2024/04/14 22:54:04.625340 [allocator 96:89:99:33:d7:08] No valid persisted data
INFO: 2024/04/14 22:54:04.630664 [allocator 96:89:99:33:d7:08] Initialising via deferred consensus
INFO: 2024/04/14 22:54:04.630734 Sniffing traffic on datapath (via ODP)
INFO: 2024/04/14 22:54:04.632382 ->[192.168.0.115:6783] attempting connection
INFO: 2024/04/14 22:54:04.633083 ->[192.168.0.115:32929] connection accepted
INFO: 2024/04/14 22:54:04.633616 ->[192.168.0.115:32929|96:89:99:33:d7:08(k8sm01)]: connection shutting down due to error: cannot connect to ourself
INFO: 2024/04/14 22:54:04.633737 ->[192.168.0.115:6783|96:89:99:33:d7:08(k8sm01)]: connection shutting down due to error: cannot connect to ourself
INFO: 2024/04/14 22:54:04.635961 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2024/04/14 22:54:04.635990 Listening for metrics requests on 0.0.0.0:6782
INFO: 2024/04/14 22:54:05.179948 [kube-peers] Added myself to peer list &{[{96:89:99:33:d7:08 k8sm01}]}
DEBU: 2024/04/14 22:54:05.186322 [kube-peers] Nodes that have disappeared: map[]
INFO: 2024/04/14 22:54:05.215320 adding entry 10.32.0.0/12 to weaver-no-masq-local of 0
INFO: 2024/04/14 22:54:05.215438 added entry 10.32.0.0/12 to weaver-no-masq-local of 0
10.32.0.1
DEBU: 2024/04/14 22:54:05.321394 registering for updates for node delete events
$ journalctl -u kubelet --no-pager

Apr 15 00:53:03 k8sm01 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:03 k8sm01 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_KUBEADM_ARGS
Apr 15 00:53:03 k8sm01 kubelet[2917]: E0415 00:53:03.875656    2917 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 15 00:53:03 k8sm01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 15 00:53:03 k8sm01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 15 00:53:14 k8sm01 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 15 00:53:14 k8sm01 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:14 k8sm01 (kubelet)[3142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_KUBEADM_ARGS
Apr 15 00:53:14 k8sm01 kubelet[3142]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:14 k8sm01 kubelet[3142]: E0415 00:53:14.094787    3142 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 15 00:53:14 k8sm01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 15 00:53:14 k8sm01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 15 00:53:24 k8sm01 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 15 00:53:24 k8sm01 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:24 k8sm01 (kubelet)[3316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_KUBEADM_ARGS
Apr 15 00:53:24 k8sm01 kubelet[3316]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:24 k8sm01 kubelet[3316]: E0415 00:53:24.373352    3316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 15 00:53:24 k8sm01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 15 00:53:24 k8sm01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 15 00:53:32 k8sm01 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:32 k8sm01 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --max-open-files has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:32 k8sm01 kubelet[3489]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.546409    3489 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.756956    3489 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.757036    3489 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.757383    3489 server.go:919] "Client rotation is on, will bootstrap in background"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.766146    3489 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.770252    3489 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://k8sm01.azar.pt:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.781521    3489 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.781777    3489 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.781946    3489 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.781974    3489 topology_manager.go:138] "Creating topology manager with none policy"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.781982    3489 container_manager_linux.go:301] "Creating device plugin manager"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.782084    3489 state_mem.go:36] "Initialized new in-memory state store"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.782169    3489 kubelet.go:396] "Attempting to sync node with API server"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.782186    3489 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.782208    3489 kubelet.go:312] "Adding apiserver pod source"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.782221    3489 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.783385    3489 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.2" apiVersion="v1"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.784087    3489 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 15 00:53:32 k8sm01 kubelet[3489]: W0415 00:53:32.784174    3489 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.784237    3489 plugins.go:612] "Error initializing dynamic plugin prober" err="error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system"
Apr 15 00:53:32 k8sm01 kubelet[3489]: W0415 00:53:32.784617    3489 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://k8sm01.azar.pt:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.784752    3489 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://k8sm01.azar.pt:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.784835    3489 server.go:1256] "Started kubelet"
Apr 15 00:53:32 k8sm01 kubelet[3489]: W0415 00:53:32.784885    3489 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://k8sm01.azar.pt:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8sm01&limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.784994    3489 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://k8sm01.azar.pt:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8sm01&limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.785331    3489 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.785758    3489 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.785945    3489 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.787244    3489 server.go:461] "Adding debug handlers to kubelet server"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.789399    3489 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.789580    3489 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://k8sm01.azar.pt:6443/api/v1/namespaces/default/events\": dial tcp 192.168.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{k8sm01.17c64766424059aa  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:k8sm01,UID:k8sm01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:k8sm01,},FirstTimestamp:2024-04-15 00:53:32.784802218 +0200 CEST m=+0.301702625,LastTimestamp:2024-04-15 00:53:32.784802218 +0200 CEST m=+0.301702625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:k8sm01,}"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.789665    3489 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.789771    3489 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.789852    3489 reconciler_new.go:29] "Reconciler: start to sync state"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.795155    3489 factory.go:221] Registration of the crio container factory successfully
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.795208    3489 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.795215    3489 factory.go:221] Registration of the systemd container factory successfully
Apr 15 00:53:32 k8sm01 kubelet[3489]: W0415 00:53:32.795258    3489 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://k8sm01.azar.pt:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.795322    3489 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://k8sm01.azar.pt:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.795334    3489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://k8sm01.azar.pt:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8sm01?timeout=10s\": dial tcp 192.168.0.115:6443: connect: connection refused" interval="200ms"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.804857    3489 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.810434    3489 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.810452    3489 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.810468    3489 state_mem.go:36] "Initialized new in-memory state store"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.812743    3489 policy_none.go:49] "None policy: Start"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.813247    3489 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.813266    3489 state_mem.go:35] "Initializing new in-memory state store"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.857161    3489 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.857497    3489 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.858559    3489 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k8sm01\" not found"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.888149    3489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.889764    3489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.889787    3489 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.889827    3489 kubelet.go:2329] "Starting kubelet main sync loop"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.889947    3489 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.891456    3489 kubelet_node_status.go:73] "Attempting to register node" node="k8sm01"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.892032    3489 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://k8sm01.azar.pt:6443/api/v1/nodes\": dial tcp 192.168.0.115:6443: connect: connection refused" node="k8sm01"
Apr 15 00:53:32 k8sm01 kubelet[3489]: W0415 00:53:32.892032    3489 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://k8sm01.azar.pt:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.892095    3489 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://k8sm01.azar.pt:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.0.115:6443: connect: connection refused
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.991118    3489 topology_manager.go:215] "Topology Admit Handler" podUID="a91f279b63c4ac15324b27bfca41a28a" podNamespace="kube-system" podName="etcd-k8sm01"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.992971    3489 topology_manager.go:215] "Topology Admit Handler" podUID="0e7098d194cab200193b45275d071676" podNamespace="kube-system" podName="kube-apiserver-k8sm01"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.994517    3489 topology_manager.go:215] "Topology Admit Handler" podUID="10e2689733ffb116d3f5338e71943654" podNamespace="kube-system" podName="kube-controller-manager-k8sm01"
Apr 15 00:53:32 k8sm01 kubelet[3489]: I0415 00:53:32.996799    3489 topology_manager.go:215] "Topology Admit Handler" podUID="29e3091c378b58020cf5ba1c223f47bd" podNamespace="kube-system" podName="kube-scheduler-k8sm01"
Apr 15 00:53:32 k8sm01 kubelet[3489]: E0415 00:53:32.997918    3489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://k8sm01.azar.pt:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8sm01?timeout=10s\": dial tcp 192.168.0.115:6443: connect: connection refused" interval="400ms"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091635    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e7098d194cab200193b45275d071676-k8s-certs\") pod \"kube-apiserver-k8sm01\" (UID: \"0e7098d194cab200193b45275d071676\") " pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091697    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-ca-certs\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091739    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-flexvolume-dir\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091782    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-kubeconfig\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091821    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29e3091c378b58020cf5ba1c223f47bd-kubeconfig\") pod \"kube-scheduler-k8sm01\" (UID: \"29e3091c378b58020cf5ba1c223f47bd\") " pod="kube-system/kube-scheduler-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091857    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a91f279b63c4ac15324b27bfca41a28a-etcd-certs\") pod \"etcd-k8sm01\" (UID: \"a91f279b63c4ac15324b27bfca41a28a\") " pod="kube-system/etcd-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091896    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a91f279b63c4ac15324b27bfca41a28a-etcd-data\") pod \"etcd-k8sm01\" (UID: \"a91f279b63c4ac15324b27bfca41a28a\") " pod="kube-system/etcd-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091956    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-etc-pki\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.091996    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-k8s-certs\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.092070    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e7098d194cab200193b45275d071676-ca-certs\") pod \"kube-apiserver-k8sm01\" (UID: \"0e7098d194cab200193b45275d071676\") " pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.092162    3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/0e7098d194cab200193b45275d071676-etc-pki\") pod \"kube-apiserver-k8sm01\" (UID: \"0e7098d194cab200193b45275d071676\") " pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.094482    3489 kubelet_node_status.go:73] "Attempting to register node" node="k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: E0415 00:53:33.096991    3489 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://k8sm01.azar.pt:6443/api/v1/nodes\": dial tcp 192.168.0.115:6443: connect: connection refused" node="k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: E0415 00:53:33.229189    3489 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://k8sm01.azar.pt:6443/api/v1/namespaces/default/events\": dial tcp 192.168.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{k8sm01.17c64766424059aa  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:k8sm01,UID:k8sm01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:k8sm01,},FirstTimestamp:2024-04-15 00:53:32.784802218 +0200 CEST m=+0.301702625,LastTimestamp:2024-04-15 00:53:32.784802218 +0200 CEST m=+0.301702625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:k8sm01,}"
Apr 15 00:53:33 k8sm01 kubelet[3489]: W0415 00:53:33.349155    3489 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda91f279b63c4ac15324b27bfca41a28a.slice/crio-1a768bff89f7b29b5c59410e97e29844cd8b806bb3a9a3c0dfd1655a89332290 WatchSource:0}: Error finding container 1a768bff89f7b29b5c59410e97e29844cd8b806bb3a9a3c0dfd1655a89332290: Status 404 returned error can't find the container with id 1a768bff89f7b29b5c59410e97e29844cd8b806bb3a9a3c0dfd1655a89332290
Apr 15 00:53:33 k8sm01 kubelet[3489]: W0415 00:53:33.390365    3489 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e3091c378b58020cf5ba1c223f47bd.slice/crio-376193eea558d2efdecb7a1bf0893207d482590c5cd01ed12c819ed64f27cf27 WatchSource:0}: Error finding container 376193eea558d2efdecb7a1bf0893207d482590c5cd01ed12c819ed64f27cf27: Status 404 returned error can't find the container with id 376193eea558d2efdecb7a1bf0893207d482590c5cd01ed12c819ed64f27cf27
Apr 15 00:53:33 k8sm01 kubelet[3489]: E0415 00:53:33.399974    3489 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://k8sm01.azar.pt:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8sm01?timeout=10s\": dial tcp 192.168.0.115:6443: connect: connection refused" interval="800ms"
Apr 15 00:53:33 k8sm01 kubelet[3489]: I0415 00:53:33.498542    3489 kubelet_node_status.go:73] "Attempting to register node" node="k8sm01"
Apr 15 00:53:33 k8sm01 kubelet[3489]: E0415 00:53:33.500079    3489 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://k8sm01.azar.pt:6443/api/v1/nodes\": dial tcp 192.168.0.115:6443: connect: connection refused" node="k8sm01"
Apr 15 00:53:34 k8sm01 kubelet[3489]: I0415 00:53:34.302270    3489 kubelet_node_status.go:73] "Attempting to register node" node="k8sm01"
Apr 15 00:53:35 k8sm01 kubelet[3489]: E0415 00:53:35.165731    3489 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"k8sm01\" not found" node="k8sm01"
Apr 15 00:53:35 k8sm01 kubelet[3489]: I0415 00:53:35.258586    3489 kubelet_node_status.go:76] "Successfully registered node" node="k8sm01"
Apr 15 00:53:35 k8sm01 kubelet[3489]: I0415 00:53:35.784853    3489 apiserver.go:52] "Watching apiserver"
Apr 15 00:53:35 k8sm01 kubelet[3489]: I0415 00:53:35.790078    3489 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Apr 15 00:53:35 k8sm01 kubelet[3489]: E0415 00:53:35.922731    3489 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-k8sm01\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:38 k8sm01 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 15 00:53:38 k8sm01 kubelet[3489]: I0415 00:53:38.681477    3489 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 15 00:53:38 k8sm01 systemd[1]: kubelet.service: Deactivated successfully.
Apr 15 00:53:38 k8sm01 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:38 k8sm01 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --max-open-files has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:38 k8sm01 kubelet[3658]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.795631    3658 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.802642    3658 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.802667    3658 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.802884    3658 server.go:919] "Client rotation is on, will bootstrap in background"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.804328    3658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.805802    3658 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.818424    3658 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.818641    3658 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.818856    3658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.818898    3658 topology_manager.go:138] "Creating topology manager with none policy"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.818907    3658 container_manager_linux.go:301] "Creating device plugin manager"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.818977    3658 state_mem.go:36] "Initialized new in-memory state store"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.819117    3658 kubelet.go:396] "Attempting to sync node with API server"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.819131    3658 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.819153    3658 kubelet.go:312] "Adding apiserver pod source"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.819165    3658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.820157    3658 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.2" apiVersion="v1"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.820312    3658 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 15 00:53:38 k8sm01 kubelet[3658]: W0415 00:53:38.820360    3658 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 15 00:53:38 k8sm01 kubelet[3658]: E0415 00:53:38.820391    3658 plugins.go:612] "Error initializing dynamic plugin prober" err="error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.820664    3658 server.go:1256] "Started kubelet"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.820868    3658 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.821692    3658 server.go:461] "Adding debug handlers to kubelet server"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.822075    3658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.822655    3658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.822900    3658 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.824611    3658 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.824710    3658 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.824845    3658 reconciler_new.go:29] "Reconciler: start to sync state"
Apr 15 00:53:38 k8sm01 kubelet[3658]: E0415 00:53:38.825840    3658 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.831768    3658 factory.go:221] Registration of the systemd container factory successfully
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.831986    3658 factory.go:221] Registration of the crio container factory successfully
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.832075    3658 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.856568    3658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.861594    3658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.861676    3658 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.861745    3658 kubelet.go:2329] "Starting kubelet main sync loop"
Apr 15 00:53:38 k8sm01 kubelet[3658]: E0415 00:53:38.861839    3658 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.876569    3658 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.876620    3658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.876686    3658 state_mem.go:36] "Initialized new in-memory state store"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.876819    3658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.876850    3658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.876861    3658 policy_none.go:49] "None policy: Start"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.877472    3658 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.877521    3658 state_mem.go:35] "Initializing new in-memory state store"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.877694    3658 state_mem.go:75] "Updated machine memory state"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.883216    3658 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.884825    3658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.962716    3658 topology_manager.go:215] "Topology Admit Handler" podUID="10e2689733ffb116d3f5338e71943654" podNamespace="kube-system" podName="kube-controller-manager-k8sm01"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.962845    3658 topology_manager.go:215] "Topology Admit Handler" podUID="29e3091c378b58020cf5ba1c223f47bd" podNamespace="kube-system" podName="kube-scheduler-k8sm01"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.962989    3658 topology_manager.go:215] "Topology Admit Handler" podUID="a91f279b63c4ac15324b27bfca41a28a" podNamespace="kube-system" podName="etcd-k8sm01"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.963123    3658 topology_manager.go:215] "Topology Admit Handler" podUID="0e7098d194cab200193b45275d071676" podNamespace="kube-system" podName="kube-apiserver-k8sm01"
Apr 15 00:53:38 k8sm01 kubelet[3658]: I0415 00:53:38.987989    3658 kubelet_node_status.go:73] "Attempting to register node" node="k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.001445    3658 kubelet_node_status.go:112] "Node was previously registered" node="k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.001638    3658 kubelet_node_status.go:76] "Successfully registered node" node="k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026022    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a91f279b63c4ac15324b27bfca41a28a-etcd-certs\") pod \"etcd-k8sm01\" (UID: \"a91f279b63c4ac15324b27bfca41a28a\") " pod="kube-system/etcd-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026074    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e7098d194cab200193b45275d071676-k8s-certs\") pod \"kube-apiserver-k8sm01\" (UID: \"0e7098d194cab200193b45275d071676\") " pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026102    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-ca-certs\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026140    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-k8s-certs\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026160    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29e3091c378b58020cf5ba1c223f47bd-kubeconfig\") pod \"kube-scheduler-k8sm01\" (UID: \"29e3091c378b58020cf5ba1c223f47bd\") " pod="kube-system/kube-scheduler-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026181    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a91f279b63c4ac15324b27bfca41a28a-etcd-data\") pod \"etcd-k8sm01\" (UID: \"a91f279b63c4ac15324b27bfca41a28a\") " pod="kube-system/etcd-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026204    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e7098d194cab200193b45275d071676-ca-certs\") pod \"kube-apiserver-k8sm01\" (UID: \"0e7098d194cab200193b45275d071676\") " pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026225    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/0e7098d194cab200193b45275d071676-etc-pki\") pod \"kube-apiserver-k8sm01\" (UID: \"0e7098d194cab200193b45275d071676\") " pod="kube-system/kube-apiserver-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026245    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-etc-pki\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026263    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-flexvolume-dir\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.026284    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10e2689733ffb116d3f5338e71943654-kubeconfig\") pod \"kube-controller-manager-k8sm01\" (UID: \"10e2689733ffb116d3f5338e71943654\") " pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.819549    3658 apiserver.go:52] "Watching apiserver"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.824963    3658 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Apr 15 00:53:39 k8sm01 kubelet[3658]: E0415 00:53:39.888686    3658 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-k8sm01\" already exists" pod="kube-system/kube-scheduler-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: E0415 00:53:39.890622    3658 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-k8sm01\" already exists" pod="kube-system/kube-controller-manager-k8sm01"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.923625    3658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-k8sm01" podStartSLOduration=1.923570465 podStartE2EDuration="1.923570465s" podCreationTimestamp="2024-04-15 00:53:38 +0200 CEST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 00:53:39.923461741 +0200 CEST m=+1.193845874" watchObservedRunningTime="2024-04-15 00:53:39.923570465 +0200 CEST m=+1.193954588"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.925608    3658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-k8sm01" podStartSLOduration=1.925565618 podStartE2EDuration="1.925565618s" podCreationTimestamp="2024-04-15 00:53:38 +0200 CEST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 00:53:39.907393752 +0200 CEST m=+1.177777895" watchObservedRunningTime="2024-04-15 00:53:39.925565618 +0200 CEST m=+1.195949731"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.938855    3658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-k8sm01" podStartSLOduration=1.9388123130000001 podStartE2EDuration="1.938812313s" podCreationTimestamp="2024-04-15 00:53:38 +0200 CEST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 00:53:39.938570404 +0200 CEST m=+1.208954527" watchObservedRunningTime="2024-04-15 00:53:39.938812313 +0200 CEST m=+1.209196446"
Apr 15 00:53:39 k8sm01 kubelet[3658]: I0415 00:53:39.968739    3658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-k8sm01" podStartSLOduration=1.9687024279999998 podStartE2EDuration="1.968702428s" podCreationTimestamp="2024-04-15 00:53:38 +0200 CEST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 00:53:39.953614485 +0200 CEST m=+1.223998598" watchObservedRunningTime="2024-04-15 00:53:39.968702428 +0200 CEST m=+1.239086541"
Apr 15 00:53:51 k8sm01 kubelet[3658]: I0415 00:53:51.418803    3658 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.32.0.0/20"
Apr 15 00:53:51 k8sm01 kubelet[3658]: I0415 00:53:51.419997    3658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.32.0.0/20"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.445389    3658 topology_manager.go:215] "Topology Admit Handler" podUID="834f6c23-d66d-4cd9-9106-02fc17c2bf3a" podNamespace="kube-system" podName="weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.458330    3658 topology_manager.go:215] "Topology Admit Handler" podUID="c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab" podNamespace="kube-system" podName="kube-proxy-f98bn"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514180    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xhl\" (UniqueName: \"kubernetes.io/projected/c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab-kube-api-access-z5xhl\") pod \"kube-proxy-f98bn\" (UID: \"c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab\") " pod="kube-system/kube-proxy-f98bn"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514237    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"weavedb\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-weavedb\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514257    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin2\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-cni-bin2\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514273    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-machine-id\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-cni-machine-id\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514292    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-lib-modules\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514309    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtkn\" (UniqueName: \"kubernetes.io/projected/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-kube-api-access-brtkn\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514325    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab-lib-modules\") pod \"kube-proxy-f98bn\" (UID: \"c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab\") " pod="kube-system/kube-proxy-f98bn"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514339    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-xtables-lock\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514354    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab-xtables-lock\") pod \"kube-proxy-f98bn\" (UID: \"c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab\") " pod="kube-system/kube-proxy-f98bn"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514368    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-conf\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-cni-conf\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514418    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-cni-bin\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514439    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/834f6c23-d66d-4cd9-9106-02fc17c2bf3a-dbus\") pod \"weave-net-79hkp\" (UID: \"834f6c23-d66d-4cd9-9106-02fc17c2bf3a\") " pod="kube-system/weave-net-79hkp"
Apr 15 00:53:52 k8sm01 kubelet[3658]: I0415 00:53:52.514457    3658 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab-kube-proxy\") pod \"kube-proxy-f98bn\" (UID: \"c02ea42b-31c4-4ec0-a2f6-b179aa1a63ab\") " pod="kube-system/kube-proxy-f98bn"
Apr 15 00:53:53 k8sm01 kubelet[3658]: I0415 00:53:53.926576    3658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f98bn" podStartSLOduration=1.92651722 podStartE2EDuration="1.92651722s" podCreationTimestamp="2024-04-15 00:53:52 +0200 CEST" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-15 00:53:53.924342588 +0200 CEST m=+15.194726731" watchObservedRunningTime="2024-04-15 00:53:53.92651722 +0200 CEST m=+15.196901373"
Apr 15 00:53:58 k8sm01 kubelet[3658]: I0415 00:53:58.806501    3658 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 15 00:54:03 k8sm01 kubelet[3658]: I0415 00:54:03.936522    3658 scope.go:117] "RemoveContainer" containerID="f74a332c8da5c71df58ecffbc7138a09d9b050a7c8e5e83bdc5bdc57ea11b186"
Apr 15 00:54:04 k8sm01 kubelet[3658]: I0415 00:54:04.960767    3658 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/weave-net-79hkp" podStartSLOduration=1.919008008 podStartE2EDuration="12.960725353s" podCreationTimestamp="2024-04-15 00:53:52 +0200 CEST" firstStartedPulling="2024-04-15 00:53:52.796978642 +0200 CEST m=+14.067362755" lastFinishedPulling="2024-04-15 00:54:03.838695957 +0200 CEST m=+25.109080100" observedRunningTime="2024-04-15 00:54:04.95985965 +0200 CEST m=+26.230243763" watchObservedRunningTime="2024-04-15 00:54:04.960725353 +0200 CEST m=+26.231109466"
Apr 15 00:55:38 k8sm01 kubelet[3658]: E0415 00:55:38.841815    3658 kubelet_node_status.go:456] "Node not becoming ready in time after startup"
Apr 15 00:55:38 k8sm01 kubelet[3658]: E0415 00:55:38.917183    3658 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
Apr 15 00:55:43 k8sm01 kubelet[3658]: E0415 00:55:43.918188    3658 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
Apr 15 00:55:48 k8sm01 kubelet[3658]: E0415 00:55:48.919628    3658 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
Apr 15 00:55:53 k8sm01 kubelet[3658]: E0415 00:55:53.920443    3658 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
...
...
...

$ kubectl get events
LAST SEEN   TYPE     REASON                    OBJECT        MESSAGE
13m         Normal   Starting                  node/k8sm01   Starting kubelet.
13m         Normal   NodeAllocatableEnforced   node/k8sm01   Updated Node Allocatable limit across pods
13m         Normal   NodeHasSufficientMemory   node/k8sm01   Node k8sm01 status is now: NodeHasSufficientMemory
13m         Normal   NodeHasNoDiskPressure     node/k8sm01   Node k8sm01 status is now: NodeHasNoDiskPressure
13m         Normal   NodeHasSufficientPID      node/k8sm01   Node k8sm01 status is now: NodeHasSufficientPID
13m         Normal   RegisteredNode            node/k8sm01   Node k8sm01 event: Registered Node k8sm01 in Controller
13m         Normal   Starting                  node/k8sm01

Any idea what might be wrong?

@rajch
Owner

rajch commented Apr 15, 2024

From the logs, it seems that the weave initialisation procedure has not been able to write a file called /etc/cni/net.d/10-weave.conflist, and therefore the CNI plugin is not enabled. Probably a permissions issue, especially because this is CoreOS. This should affect all versions of Kubernetes, not just later ones.

To confirm, could you please do the following for me?

  1. Run the following, and paste the results here:
$ kubectl logs -n kube-system <weave-net-pod> init
  2. You could also manually create the file on all your nodes (a shell sketch for creating it follows this list). The contents are:
{
    "cniVersion": "1.0.0",
    "name": "weave",
    "disableCheck": true,
    "plugins": [
        {
            "name": "weave",
            "type": "weave-net",
            "hairpinMode": true
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true},
            "snat": true
        }
    ]
}
  3. How did you set up Kubernetes? Using kubeadm?
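
For point 2, a minimal sketch of creating that file manually on a node, assuming the default CNI configuration directory /etc/cni/net.d and root access (the contents are just the JSON shown above):

# write the weave CNI configuration file on the node
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-weave.conflist > /dev/null <<'EOF'
{
    "cniVersion": "1.0.0",
    "name": "weave",
    "disableCheck": true,
    "plugins": [
        {
            "name": "weave",
            "type": "weave-net",
            "hairpinMode": true
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true},
            "snat": true
        }
    ]
}
EOF
# make sure it is readable by the runtime
sudo chmod 644 /etc/cni/net.d/10-weave.conflist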

@mvrk69
Author

mvrk69 commented Apr 15, 2024

1:

kubectl logs -n kube-system weave-net-gftst init
error: container init is not valid for pod weave-net-gftst

The container name seems to be weave-init, not init, but there are no logs; the command returns empty output:
kubectl logs -n kube-system weave-net-gftst weave-init

2:
I see the file there:

root@k8sm01:~# ll /etc/cni/net.d/
total 8
-rw-r--r--. 1 root root 344 Apr 15 16:06 10-weave.conflist
-rw-r--r--. 1 root root 393 Apr 15 16:05 11-crio-ipv4-bridge.conflist

And the file seems to have the same contents you posted:

root@k8sm01:~# cat /etc/cni/net.d/10-weave.conflist
{
    "cniVersion": "1.0.0",
    "name": "weave",
    "disableCheck": true,
    "plugins": [
        {
            "name": "weave",
            "type": "weave-net",
            "hairpinMode": true
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true},
            "snat": true
        }
    ]
}

3: Yes, I used kubeadm:

kubeadm init --config kubeadm-config.yml --upload-certs

kubeadm-config.yml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.3
networking:
  podSubnet: "10.32.0.0/16"
  serviceSubnet: "172.16.16.0/22"
controlPlaneEndpoint: k8sm01.azar.pt:6443
controllerManager:
  extraArgs:
    flex-volume-plugin-dir: "/etc/kubernetes/kubelet-plugins/volume/exec"
    node-cidr-mask-size: "20"
    allocate-node-cidrs: "true"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  imagePullPolicy: "IfNotPresent"
  kubeletExtraArgs:
    cgroup-driver: "systemd"
    resolv-conf: "/run/systemd/resolve/resolv.conf"
    max-pods: "4096"
    max-open-files: "20000000"

@rajch
Owner

rajch commented Apr 15, 2024

Sorry, the container name was indeed weave-init. Okay. So the weave initialisation completed without issues, and the weave pod is also in the Running state and producing logs. At this point, weave is ready, and the NotReady taint should be removed automatically by the kubelet. Also, the coredns pods should go into the ContainerCreating state.

Can we check the output of kubectl get pods -n kube-system -o wide and kubectl describe node k8sm01? Also, just to re-check the permissions problem, ls -l /etc/cni/net.d (it should have 744 permissions).
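
For the taint specifically, a couple of quick checks (a sketch, assuming the node name k8sm01) would also help:

# show any taints currently on the node
kubectl describe node k8sm01 | grep -A1 -i taints
# or the raw taint list from the node spec
kubectl get node k8sm01 -o jsonpath='{.spec.taints}'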

@mvrk69
Author

mvrk69 commented Apr 15, 2024

root@k8sm01:~# kubectl  get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS      AGE   IP              NODE     NOMINATED NODE   READINESS GATES
coredns-76f75df574-8stlr          0/1     Pending   0             61s   <none>          <none>   <none>           <none>
coredns-76f75df574-d64zf          0/1     Pending   0             61s   <none>          <none>   <none>           <none>
etcd-k8sm01                       1/1     Running   0             76s   192.168.0.115   k8sm01   <none>           <none>
kube-apiserver-k8sm01             1/1     Running   0             77s   192.168.0.115   k8sm01   <none>           <none>
kube-controller-manager-k8sm01    1/1     Running   0             78s   192.168.0.115   k8sm01   <none>           <none>
kube-proxy-xffc2                  1/1     Running   0             61s   192.168.0.115   k8sm01   <none>           <none>
kube-scheduler-k8sm01             1/1     Running   0             77s   192.168.0.115   k8sm01   <none>           <none>
metrics-server-84989b68d9-w8fhf   0/1     Pending   0             61s   <none>          <none>   <none>           <none>
weave-net-bdjmv                   2/2     Running   1 (54s ago)   61s   192.168.0.115   k8sm01   <none>           <none>
root@k8sm01:~# kubectl describe node k8sm01
Name:               k8sm01
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8sm01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 15 Apr 2024 16:53:35 +0200
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8sm01
  AcquireTime:     <unset>
  RenewTime:       Mon, 15 Apr 2024 16:55:31 +0200
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 15 Apr 2024 16:54:08 +0200   Mon, 15 Apr 2024 16:54:08 +0200   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
Addresses:
  InternalIP:  192.168.0.115
  Hostname:    k8sm01
Capacity:
  cpu:                4
  ephemeral-storage:  8846316Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8121560Ki
  pods:               4096
Allocatable:
  cpu:                4
  ephemeral-storage:  8152764813
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8019160Ki
  pods:               4096
System Info:
  Machine ID:                 9d0900841b2d4a5c9d930f8e1c805869
  System UUID:                9d090084-1b2d-4a5c-9d93-0f8e1c805869
  Boot ID:                    974b3842-0ba5-4aa4-bddf-6e4487920fe7
  Kernel Version:             6.7.7-200.fc39.x86_64
  OS Image:                   Fedora CoreOS 39.20240309.3.0
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.29.2
  Kubelet Version:            v1.29.3
  Kube-Proxy Version:         v1.29.3
PodCIDR:                      10.32.0.0/20
PodCIDRs:                     10.32.0.0/20
Non-terminated Pods:          (6 in total)
  Namespace                   Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                              ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-k8sm01                       100m (2%)     0 (0%)      100Mi (1%)       0 (0%)         2m
  kube-system                 kube-apiserver-k8sm01             250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m1s
  kube-system                 kube-controller-manager-k8sm01    200m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
  kube-system                 kube-proxy-xffc2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
  kube-system                 kube-scheduler-k8sm01             100m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
  kube-system                 weave-net-bdjmv                   100m (2%)     0 (0%)      0 (0%)           0 (0%)         105s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (18%)  0 (0%)
  memory             100Mi (1%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From             Message
  ----    ------                   ----                 ----             -------
  Normal  Starting                 104s                 kube-proxy
  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node k8sm01 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node k8sm01 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node k8sm01 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 2m                   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  2m                   kubelet          Node k8sm01 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node k8sm01 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m                   kubelet          Node k8sm01 status is now: NodeHasSufficientPID
  Normal  RegisteredNode           106s                 node-controller  Node k8sm01 event: Registered Node k8sm01 in Controller
root@k8sm01:~# ls -l /etc/cni/net.d
total 8
-rw-r--r--. 1 root root 344 Apr 15 16:54 10-weave.conflist
-rw-r--r--. 1 root root 393 Apr 15 16:52 11-crio-ipv4-bridge.conflist

@rajch
Owner

rajch commented Apr 15, 2024

Yes, weave net is up and running, but the taint is not gone. I think a kubelet restart or a node restart will solve the problem.
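
On Fedora CoreOS that would be roughly the following (a sketch, assuming kubelet runs as a systemd unit, as your journalctl output suggests):

# restart just the kubelet
sudo systemctl restart kubelet
# or restart the whole node
sudo systemctl reboot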

I currently test Weave Net with Kubernetes versions 1.27 through 1.29, on clusters running Debian Linux on amd64 and arm64. It works in all those cases. I also use kubeadm, with settings pretty similar to yours. Perhaps I should add CoreOS to the test mix.

@mvrk69
Author

mvrk69 commented Apr 15, 2024

I tried restarting kubelet and also rebooting the node, and still no go.

@rajch
Owner

rajch commented Apr 16, 2024

I apologise for the inconvenience. I'll try to replicate your environment and see if I can find the problem. The only thing out of place in your logs is that kubelet has still not detected the CNI setup, even though Weave Net has indicated that the network is available. You can see that in the Conditions: section of the kubectl describe node output. Quoting the relevant part below:

Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 15 Apr 2024 16:54:08 +0200   Mon, 15 Apr 2024 16:54:08 +0200   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Mon, 15 Apr 2024 16:54:10 +0200   Mon, 15 Apr 2024 16:53:35 +0200   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
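
In the meantime, it might help to ask the container runtime directly what it reports for the network, since kubelet only relays the runtime's status. A rough sketch, assuming CRI-O and that crictl and jq are available on the node:

# NetworkReady condition as reported by CRI-O over the CRI API
sudo crictl info | jq '.status.conditions'
# check whether CRI-O has been pointed at a non-default CNI config directory
grep -ri network_dir /etc/crio/ 2>/dev/null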

@rajch
Owner

rajch commented Apr 21, 2024

I tried setting up a cluster on CoreOS, and faced exactly the same problem. But after a long time (over 15 minutes), the node became Ready. This happened with a different CNI as well.
I wasn't able to diagnose what the problem was. Every log gave the expected responses: no errors, no warnings. Only kubelet kept logging that there were no configuration files in /etc/cni/net.d/, but the files were very much there. After a long-ish time, suddenly kubelet reported that the node was now ready.
I have not observed this behaviour on other operating systems. I'm tempted to just blame CoreOS, because weave does everything it is supposed to do - and the same symptoms can be seen for at least one other CNI plugin. But I will observe some more, and report back here.
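
For anyone trying to reproduce this, a rough way to watch for the eventual transition (run from the control-plane node) is:

# watch the node condition flip from NotReady to Ready
kubectl get nodes -w
# in another terminal, follow kubelet's view of the CNI configuration
journalctl -u kubelet -f | grep -iE 'cni|networkready'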

@mvrk69
Author

mvrk69 commented Apr 21, 2024

Yes, this seems to happen with flannel and weave on clusters above 1.27 on CoreOS, though it works fine with calico.

@rajch
Owner

rajch commented Apr 22, 2024

Also with OpenShift OVN-Kubernetes.
