
CgroupsV2 errors on Fedora 35 with podman #1633

Closed
4 tasks done
jiridanek opened this issue Apr 2, 2022 · 7 comments
Labels
bug Something isn't working

@jiridanek

Before creating an issue, make sure you've checked the following:

  • You are running the latest released version of k0s
  • Make sure you've searched for existing issues, both open and closed
  • Make sure you've searched for PRs too, a fix might've been merged already
  • You're looking at docs for the released version, "main" branch docs are usually ahead of released versions.

Version

v1.23.5+k0s.0

Platform

LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description:    Fedora release 35 (Thirty Five)
Release:        35
Codename:       ThirtyFive

What happened?

When starting the k0s controller under podman, the cluster never finishes starting and no nodes ever appear.

Steps to reproduce

sudo podman run --name k0s --hostname k0s --privileged --replace -p 6443:6443 docker.io/k0sproject/k0s:latest k0s controller --enable-worker

Watch the logs, and try to list the nodes with

$ sudo podman exec k0s kubectl get nodes
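
"Watch the logs" presumably refers to following the container's output with podman's log command, e.g.:

```
sudo podman logs -f k0s
```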

Expected behavior

Node comes up.

Actual behavior

$ sudo podman exec k0s kubectl get nodes
No resources found

Screenshots and logs

k0spodmanlogs.txt

time="2022-04-02 12:05:08" level=info msg="I0402 12:05:08.116004     204 server.go:230] \"unsupported configuration:KubeletCgroups is not within KubeReservedCgroup\"" component=kubelet

In a previous run, I saw

time="2022-04-02 11:56:09" level=info msg="I0402 11:56:09.758869    1029 kubelet_network_linux.go:57] \"Initialized protocol iptables rules.\" protocol=IPv4" component=kubelet
time="2022-04-02 11:56:09" level=info msg="W0402 11:56:09.764823    1029 watcher.go:93] Error while processing event (\"/sys/fs/cgroup/kubepods\": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/kubepods: no such file or directory" component=kubelet
time="2022-04-02 11:56:09" level=info msg="E0402 11:56:09.764858    1029 node_container_manager_linux.go:61] \"Failed to create cgroup\" err=\"cannot enter cgroupv2 \\\"/sys/fs/cgroup/kubepods\\\" with domain controllers -- it is in an invalid state\" cgroupName=[kubepods]" component=kubelet
time="2022-04-02 11:56:09" level=info msg="E0402 11:56:09.764877    1029 kubelet.go:1431] \"Failed to start ContainerManager\" err=\"cannot enter cgroupv2 \\\"/sys/fs/cgroup/kubepods\\\" with domain controllers -- it is in an invalid state\"" component=kubelet
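
As a hedged sanity check (not part of the original report): the "invalid state" error above means the kubelet could not turn /sys/fs/cgroup/kubepods into a cgroup v2 domain cgroup with controllers enabled. On a unified-hierarchy host, the v2 interface files involved can be inspected like this:

```shell
# On a cgroup v2 (unified hierarchy) host such as Fedora 35, the root of
# /sys/fs/cgroup exposes these interface files; controllers must be
# delegated via cgroup.subtree_control before a child such as "kubepods"
# can use them as a domain cgroup.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    cat /sys/fs/cgroup/cgroup.controllers      # controllers available at the root
    cat /sys/fs/cgroup/cgroup.subtree_control  # controllers delegated to children
else
    echo "not a cgroup v2 unified hierarchy"
fi
```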

Additional context

My issue is similar to #1195.

@jiridanek jiridanek added the bug Something isn't working label Apr 2, 2022
@ncopa
Collaborator

ncopa commented Apr 4, 2022

Does it work if you bind mount /sys/fs/cgroup?

@jiridanek
Author

Does it work if you bind mount /sys/fs/cgroup?

sudo podman run -v /sys/fs/cgroup:/sys/fs/cgroup --name k0s --hostname k0s --privileged --replace -p 6443:6443 docker.io/k0sproject/k0s:latest k0s controller --enable-worker

It helps. I get the node in kubectl get nodes, but it never transitions out of NotReady status.

I am using btrfs with a LUKS-encrypted / partition, if that is relevant. Some other k8s distros currently have open issues about that. I did not find any of those symptoms in the k0s logs, though.
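
The backing device and filesystem of / can be confirmed with a generic check like the following (not part of the original comment; the exact output depends on the machine):

```shell
# Show which device backs / and its filesystem type; on a setup like the
# one described here it would report a /dev/mapper/luks-... device with
# fstype btrfs. The stat fallback covers hosts without findmnt installed.
findmnt -no SOURCE,FSTYPE / 2>/dev/null || stat -fc %T /
```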

When poking at things, I found

# kubectl get all -A
NAMESPACE     NAME                                  READY   STATUS              RESTARTS   AGE
kube-system   pod/coredns-8565977d9b-58l8w          0/1     Pending             0          5m16s
kube-system   pod/kube-proxy-hjg4b                  0/1     ContainerCreating   0          5m11s
kube-system   pod/kube-router-x45sd                 0/1     Init:0/2            0          5m11s
kube-system   pod/metrics-server-74c967d8d4-6gggb   0/1     Pending             0          5m9s

NAMESPACE     NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP                  5m47s
kube-system   service/kube-dns         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   5m17s
kube-system   service/metrics-server   ClusterIP   10.105.100.173   <none>        443/TCP                  5m9s

NAMESPACE     NAME                                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/konnectivity-agent   0         0         0       0            0           kubernetes.io/os=linux   5m24s
kube-system   daemonset.apps/kube-proxy           1         1         0       1            0           kubernetes.io/os=linux   5m18s
kube-system   daemonset.apps/kube-router          1         1         0       1            0           <none>                   5m18s

NAMESPACE     NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns          0/1     1            0           5m17s
kube-system   deployment.apps/metrics-server   0/1     1            0           5m9s

NAMESPACE     NAME                                        DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-8565977d9b          1         1         0       5m16s
kube-system   replicaset.apps/metrics-server-74c967d8d4   1         1         0       5m9s

and the containers fail to start because of

# kubectl describe pod/kube-proxy-hjg4b -nkube-system
[...]
  Warning  FailedCreatePodSandBox  1s (x10 over 2m7s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/40/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
time="2022-04-04 16:01:17" level=info msg="time=\"2022-04-04T16:01:17.133629500Z\" level=warning msg=\"cleanup warnings time=\\\"2022-04-04T16:01:17Z\\\" level=info msg=\\\"starting signal loop\\\" namespace=k8s.io pid=2311\\ntime=\\\"2022-04-04T16:01:17Z\\\" level=warning msg=\\\"failed to read init pid file\\\" error=\\\"open /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/528cb2db4ad5e63b632464177dc3dfb7c2b24d39c62b711913a98e7639be10f7/init.pid: no such file or directory\\\"\\n\"" component=containerd
time="2022-04-04 16:01:17" level=info msg="time=\"2022-04-04T16:01:17.134890736Z\" level=error msg=\"copy shim log\" error=\"read /proc/self/fd/16: file already closed\"" component=containerd
time="2022-04-04 16:01:17" level=info msg="time=\"2022-04-04T16:01:17.148822858Z\" level=error msg=\"RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjg4b,Uid:618987ab-547f-49c6-8b87-c91f63c022cd,Namespace:kube-system,Attempt:0,} failed, error\" error=\"failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\"" component=containerd
time="2022-04-04 16:01:17" level=info msg="E0404 16:01:17.149381     193 remote_runtime.go:209] \"RunPodSandbox from runtime service failed\" err=\"rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\"" component=kubelet
time="2022-04-04 16:01:17" level=info msg="E0404 16:01:17.149533     193 kuberuntime_sandbox.go:70] \"Failed to create sandbox for pod\" err=\"rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\" pod=\"kube-system/kube-proxy-hjg4b\"" component=kubelet
time="2022-04-04 16:01:17" level=info msg="E0404 16:01:17.149636     193 kuberuntime_manager.go:833] \"CreatePodSandbox for pod failed\" err=\"rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\" pod=\"kube-system/kube-proxy-hjg4b\"" component=kubelet
time="2022-04-04 16:01:17" level=info msg="E0404 16:01:17.149819     193 pod_workers.go:949] \"Error syncing pod, skipping\" err=\"failed to \\\"CreatePodSandbox\\\" for \\\"kube-proxy-hjg4b_kube-system(618987ab-547f-49c6-8b87-c91f63c022cd)\\\" with CreatePodSandboxError: \\\"Failed to create sandbox for pod \\\\\\\"kube-proxy-hjg4b_kube-system(618987ab-547f-49c6-8b87-c91f63c022cd)\\\\\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/54/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\\\"\" pod=\"kube-system/kube-proxy-hjg4b\" podUID=618987ab-547f-49c6-8b87-c91f63c022cd" component=kubelet
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.038286149Z\" level=info msg=\"RunPodsandbox for &PodSandboxMetadata{Name:kube-router-x45sd,Uid:1a2a90df-7717-4f54-b428-f9d3fbff1fc2,Namespace:kube-system,Attempt:0,}\"" component=containerd
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.053875     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/352f3efb-639f-423d-934d-bc3376d4fe3b/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054004     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/8a26e81d-ae98-4461-8e74-402ee627f356/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054076     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/741795a2-cc96-4e3f-8033-bbbffd56a034/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054182     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/6e999cea-b99d-4a3e-9b63-96ce3037fb8e/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054244     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/c784bfde-f30b-400e-bd8e-dc4270a27501/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054336     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/398fe08c-51cd-47a1-b25a-3c4e21b3ab96/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054399     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/ad5644ab-a2f0-4f6e-94e1-61ccf3ccb68e/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.054462     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/2390774f-ca4b-4bb8-834d-f11b26df6b30/volumes\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.109810226Z\" level=info msg=\"starting signal loop\" namespace=k8s.io path=/run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/4cc65bf82b34c3bd084201235dc0bc1c0e6323a538f944ef97847b04ef7ea5b7 pid=2333" component=containerd
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.115315     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods pod6e999cea-b99d-4a3e-9b63-96ce3037fb8e] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.115133160Z\" level=info msg=\"shim disconnected\" id=4cc65bf82b34c3bd084201235dc0bc1c0e6323a538f944ef97847b04ef7ea5b7" component=containerd
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.115245727Z\" level=warning msg=\"cleaning up after shim disconnected\" id=4cc65bf82b34c3bd084201235dc0bc1c0e6323a538f944ef97847b04ef7ea5b7 namespace=k8s.io" component=containerd
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.115332483Z\" level=info msg=\"cleaning up dead shim\"" component=containerd
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.117298     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods podc784bfde-f30b-400e-bd8e-dc4270a27501] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.119402     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods burstable pod741795a2-cc96-4e3f-8033-bbbffd56a034] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.119540     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort podad5644ab-a2f0-4f6e-94e1-61ccf3ccb68e] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.120737     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod352f3efb-639f-423d-934d-bc3376d4fe3b] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.120901     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods burstable pod2390774f-ca4b-4bb8-834d-f11b26df6b30] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.124113     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod398fe08c-51cd-47a1-b25a-3c4e21b3ab96] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="I0404 16:01:19.126596     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod8a26e81d-ae98-4461-8e74-402ee627f356] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.149933783Z\" level=warning msg=\"cleanup warnings time=\\\"2022-04-04T16:01:19Z\\\" level=info msg=\\\"starting signal loop\\\" namespace=k8s.io pid=2343\\ntime=\\\"2022-04-04T16:01:19Z\\\" level=warning msg=\\\"failed to read init pid file\\\" error=\\\"open /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/4cc65bf82b34c3bd084201235dc0bc1c0e6323a538f944ef97847b04ef7ea5b7/init.pid: no such file or directory\\\"\\n\"" component=containerd
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.151639003Z\" level=error msg=\"copy shim log\" error=\"read /proc/self/fd/16: file already closed\"" component=containerd
time="2022-04-04 16:01:19" level=info msg="time=\"2022-04-04T16:01:19.163144792Z\" level=error msg=\"RunPodSandbox for &PodSandboxMetadata{Name:kube-router-x45sd,Uid:1a2a90df-7717-4f54-b428-f9d3fbff1fc2,Namespace:kube-system,Attempt:0,} failed, error\" error=\"failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\"" component=containerd
time="2022-04-04 16:01:19" level=info msg="E0404 16:01:19.163651     193 remote_runtime.go:209] \"RunPodSandbox from runtime service failed\" err=\"rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="E0404 16:01:19.163801     193 kuberuntime_sandbox.go:70] \"Failed to create sandbox for pod\" err=\"rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\" pod=\"kube-system/kube-router-x45sd\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="E0404 16:01:19.163890     193 kuberuntime_manager.go:833] \"CreatePodSandbox for pod failed\" err=\"rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\" pod=\"kube-system/kube-router-x45sd\"" component=kubelet
time="2022-04-04 16:01:19" level=info msg="E0404 16:01:19.164063     193 pod_workers.go:949] \"Error syncing pod, skipping\" err=\"failed to \\\"CreatePodSandbox\\\" for \\\"kube-router-x45sd_kube-system(1a2a90df-7717-4f54-b428-f9d3fbff1fc2)\\\" with CreatePodSandboxError: \\\"Failed to create sandbox for pod \\\\\\\"kube-router-x45sd_kube-system(1a2a90df-7717-4f54-b428-f9d3fbff1fc2)\\\\\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to mount rootfs component &{overlay overlay [index=off workdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/work upperdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/55/fs lowerdir=/var/lib/k0s/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown\\\"\" pod=\"kube-system/kube-router-x45sd\" podUID=1a2a90df-7717-4f54-b428-f9d3fbff1fc2" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038194     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/c784bfde-f30b-400e-bd8e-dc4270a27501/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038270     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/352f3efb-639f-423d-934d-bc3376d4fe3b/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038310     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/398fe08c-51cd-47a1-b25a-3c4e21b3ab96/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038343     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/8a26e81d-ae98-4461-8e74-402ee627f356/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038385     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/ad5644ab-a2f0-4f6e-94e1-61ccf3ccb68e/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038446     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/2390774f-ca4b-4bb8-834d-f11b26df6b30/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038514     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/741795a2-cc96-4e3f-8033-bbbffd56a034/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.038634     193 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/6e999cea-b99d-4a3e-9b63-96ce3037fb8e/volumes\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.102638     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods pod6e999cea-b99d-4a3e-9b63-96ce3037fb8e] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.103466     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods podc784bfde-f30b-400e-bd8e-dc4270a27501] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.107017     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod398fe08c-51cd-47a1-b25a-3c4e21b3ab96] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.107171     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods burstable pod741795a2-cc96-4e3f-8033-bbbffd56a034] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.107321     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods burstable pod2390774f-ca4b-4bb8-834d-f11b26df6b30] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.108129     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort podad5644ab-a2f0-4f6e-94e1-61ccf3ccb68e] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.109951     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod352f3efb-639f-423d-934d-bc3376d4fe3b] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="I0404 16:01:21.110001     193 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod8a26e81d-ae98-4461-8e74-402ee627f356] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:01:21" level=info msg="E0404 16:01:21.193658     193 kubelet.go:2347] \"Container runtime network not ready\" networkReady=\"NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized\"" component=kubelet

@jiridanek
Author

jiridanek commented Apr 4, 2022

Following https://docs.k0sproject.io/head/troubleshooting/#k0s-will-not-start-on-zfs-based-systems, I see

bash-5.1# k0s ctr -a /run/k0s/containerd.sock plugins ls
TYPE                            ID                       PLATFORMS      STATUS    
io.containerd.content.v1        content                  -              ok        
io.containerd.snapshotter.v1    aufs                     linux/amd64    skip      
io.containerd.snapshotter.v1    btrfs                    linux/amd64    skip
io.containerd.snapshotter.v1    devmapper                linux/amd64    error     
io.containerd.snapshotter.v1    native                   linux/amd64    ok        
io.containerd.snapshotter.v1    overlayfs                linux/amd64    ok        
io.containerd.snapshotter.v1    zfs                      linux/amd64    skip
[...]

so the solution should be to somehow get the btrfs snapshotter unskipped.
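
For context (an editor's hedged note, not from the original comment): containerd reports a snapshotter as "skip" when its backend is unusable, and for the btrfs snapshotter that happens when the snapshotter's root directory is not on a btrfs filesystem. A quick check, using the containerd directory that appears in the log paths above:

```shell
# If this does not print "btrfs", the btrfs snapshotter stays skipped.
# /var/lib/k0s/containerd is the containerd root seen in the logs above;
# the fallback covers hosts where that directory does not exist.
stat -fc %T /var/lib/k0s/containerd 2>/dev/null || stat -fc %T /var/lib
```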

@jiridanek
Author

time="2022-04-04 16:41:33" level=info msg="W0404 16:41:33.671196     194 fs.go:599] Unable to get btrfs mountpoint IDs: stat failed on /dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca with error: no such file or directory" component=kubelet

For that, as suggested in openshift/microshift#629 and kubernetes-sigs/kind#2411, I am adding

-v /var/lib/k0s -v /dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca:/dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca -v /dev/dm-0:/dev/dm-0
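
Spelled out in full, the invocation with all the mounts accumulated so far in this thread would look something like this (the LUKS device paths are specific to this machine and will differ elsewhere):

```
sudo podman run \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  -v /var/lib/k0s \
  -v /dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca:/dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca \
  -v /dev/dm-0:/dev/dm-0 \
  --name k0s --hostname k0s --privileged --replace \
  -p 6443:6443 \
  docker.io/k0sproject/k0s:latest k0s controller --enable-worker
```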

Next, I get the following in the logs:

bb6d-1c7ed6393476/4ffc20f2e9994e5a1e559dc95a987905caab0c5f839eff2600db5f0aef39dc9c WatchSource:0}: container \"4ffc20f2e9994e5a1e559dc95a987905caab0c5f839eff2600db5f0aef39dc9c\" in namespace \"k8s.io\": not found" component=kubelet
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.242952902Z\" level=info msg=\"RunPodsandbox for &PodSandboxMetadata{Name:kube-router-dhwr8,Uid:62d1311e-bd77-4647-96f1-61d1e8ff98a2,Namespace:kube-system,Attempt:0,}\"" component=containerd
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.256545     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/352f3efb-639f-423d-934d-bc3376d4fe3b/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.256669     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/2390774f-ca4b-4bb8-834d-f11b26df6b30/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.256728     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/741795a2-cc96-4e3f-8033-bbbffd56a034/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.256791     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/6e999cea-b99d-4a3e-9b63-96ce3037fb8e/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.256865     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/c784bfde-f30b-400e-bd8e-dc4270a27501/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.256955     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/398fe08c-51cd-47a1-b25a-3c4e21b3ab96/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.257045     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/8a26e81d-ae98-4461-8e74-402ee627f356/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.257105     227 kubelet_getters.go:300] \"Path does not exist\" path=\"/var/lib/k0s/kubelet/pods/ad5644ab-a2f0-4f6e-94e1-61ccf3ccb68e/volumes\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.300240276Z\" level=info msg=\"starting signal loop\" namespace=k8s.io path=/run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970 pid=1388" component=containerd
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.317917     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods pod6e999cea-b99d-4a3e-9b63-96ce3037fb8e] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.322025     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods podc784bfde-f30b-400e-bd8e-dc4270a27501] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.324347     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod398fe08c-51cd-47a1-b25a-3c4e21b3ab96] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.324544     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods burstable pod741795a2-cc96-4e3f-8033-bbbffd56a034] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.325959     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods burstable pod2390774f-ca4b-4bb8-834d-f11b26df6b30] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.326828     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod8a26e81d-ae98-4461-8e74-402ee627f356] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.327678     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort pod352f3efb-639f-423d-934d-bc3376d4fe3b] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="I0404 16:48:55.327678     227 pod_container_manager_linux.go:184] \"Failed to kill all the processes attached to cgroup\" cgroupName=[kubepods besteffort podad5644ab-a2f0-4f6e-94e1-61ccf3ccb68e] err=\"os: process not initialized\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.382522649Z\" level=error msg=\"loading cgroup2 for 1409\" error=\"cgroups: invalid group path\"" component=containerd
time="2022-04-04 16:48:55" level=info msg="panic: runtime error: invalid memory address or nil pointer dereference" component=containerd
time="2022-04-04 16:48:55" level=info msg="[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x68fa78]" component=containerd
time="2022-04-04 16:48:55" level=info component=containerd
time="2022-04-04 16:48:55" level=info msg="goroutine 40 [running]:" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/cgroups/v2.(*Manager).RootControllers(0xc00005a240)" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/vendor/github.com/containerd/cgroups/v2/manager.go:225 +0x18" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/containerd/runtime/v2/runc/v2.(*service).Start(0xc000124280, {0x90ea88, 0xc0001809c0}, 0xc0001a8140)" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/runtime/v2/runc/v2/service.go:392 +0x2a5" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/containerd/runtime/v2/task.RegisterTaskService.func3({0x90ea88, 0xc0001809c0}, 0xc000190200)" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/runtime/v2/task/shim.pb.go:3470 +0x98" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/ttrpc.defaultServerInterceptor({0x90ea88, 0xc0001809c0}, 0x17, 0xc0001ac0c0, 0x5)" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/vendor/github.com/containerd/ttrpc/interceptor.go:45 +0x26" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/ttrpc.(*serviceSet).dispatch(0xc00018a030, {0x90ea88, 0xc0001809c0}, {0xc000198060, 0x17}, {0xc000194068, 0x5}, {0xc0001bc000, 0x42, 0x50})" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/vendor/github.com/containerd/ttrpc/services.go:95 +0x1be" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/ttrpc.(*serviceSet).call(0x0, {0x90ea88, 0xc0001809c0}, {0xc000198060, 0x0}, {0xc000194068, 0x0}, {0xc0001bc000, 0x42, 0x50})" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/vendor/github.com/containerd/ttrpc/services.go:64 +0x71" component=containerd
time="2022-04-04 16:48:55" level=info msg="github.com/containerd/ttrpc.(*serverConn).run.func2(0x3)" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/vendor/github.com/containerd/ttrpc/server.go:438 +0xe5" component=containerd
time="2022-04-04 16:48:55" level=info msg="created by github.com/containerd/ttrpc.(*serverConn).run" component=containerd
time="2022-04-04 16:48:55" level=info msg="\t/go/src/github.com/containerd/containerd/vendor/github.com/containerd/ttrpc/server.go:434 +0x808" component=containerd
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.402852645Z\" level=info msg=\"shim disconnected\" id=06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970" component=containerd
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.402950108Z\" level=warning msg=\"cleaning up after shim disconnected\" id=06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970 namespace=k8s.io" component=containerd
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.402971275Z\" level=info msg=\"cleaning up dead shim\"" component=containerd
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.403217203Z\" level=error msg=\"Failed to delete sandbox container \\\"06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970\\\"\" error=\"ttrpc: closed: unknown\"" component=containerd
time="2022-04-04 16:48:55" level=info msg="time=\"2022-04-04T16:48:55.413240140Z\" level=error msg=\"RunPodSandbox for &PodSandboxMetadata{Name:kube-router-dhwr8,Uid:62d1311e-bd77-4647-96f1-61d1e8ff98a2,Namespace:kube-system,Attempt:0,} failed, error\" error=\"failed to start sandbox container task \\\"06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970\\\": ttrpc: closed: unknown\"" component=containerd
time="2022-04-04 16:48:55" level=info msg="E0404 16:48:55.413700     227 remote_runtime.go:209] \"RunPodSandbox from runtime service failed\" err=\"rpc error: code = Unknown desc = failed to start sandbox container task \\\"06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970\\\": ttrpc: closed: unknown\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="E0404 16:48:55.413826     227 kuberuntime_sandbox.go:70] \"Failed to create sandbox for pod\" err=\"rpc error: code = Unknown desc = failed to start sandbox container task \\\"06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970\\\": ttrpc: closed: unknown\" pod=\"kube-system/kube-router-dhwr8\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="E0404 16:48:55.413893     227 kuberuntime_manager.go:833] \"CreatePodSandbox for pod failed\" err=\"rpc error: code = Unknown desc = failed to start sandbox container task \\\"06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970\\\": ttrpc: closed: unknown\" pod=\"kube-system/kube-router-dhwr8\"" component=kubelet
time="2022-04-04 16:48:55" level=info msg="E0404 16:48:55.414065     227 pod_workers.go:949] \"Error syncing pod, skipping\" err=\"failed to \\\"CreatePodSandbox\\\" for \\\"kube-router-dhwr8_kube-system(62d1311e-bd77-4647-96f1-61d1e8ff98a2)\\\" with CreatePodSandboxError: \\\"Failed to create sandbox for pod \\\\\\\"kube-router-dhwr8_kube-system(62d1311e-bd77-4647-96f1-61d1e8ff98a2)\\\\\\\": rpc error: code = Unknown desc = failed to start sandbox container task \\\\\\\"06bb44e30679ce3c6f736113410056b2571d18e9fcb03f1c1722c9f57b9b6970\\\\\\\": ttrpc: closed: unknown\\\"\" pod=\"kube-system/kube-router-dhwr8\" podUID=62d1311e-bd77-4647-96f1-61d1e8ff98a2" component=kubelet
bash-5.1# k0s ctr -a /run/k0s/containerd.sock images pull docker.io/library/alpine:latest
docker.io/library/alpine:latest:                                                  resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:f22945d45ee2eb4dd463ed5a431d9f04fcd80ca768bb1acf898d91ce51f7bf04:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:1e014f84205d569a5cc3be4e108ca614055f7e21d11928946113ab3f36054801: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:76c8fb57b6fc8599de38027112c47170bd19f99e7945392bd78d6816db01f4ad:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:40e059520d199e1a1a259089077f2a0c879951c9a4540490bad3a0d7714c6ae7:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 6.1 s                                                                    total:  2.7 Mi (451.1 KiB/s)                                     
unpacking linux/amd64 sha256:f22945d45ee2eb4dd463ed5a431d9f04fcd80ca768bb1acf898d91ce51f7bf04...
done: 387.723396ms
bash-5.1# k0s ctr -a /run/k0s/containerd.sock run -t --rm docker.io/library/alpine:latest foo
/ # Error: ttrpc: closed: unknown

I haven't figured out how to deal with that.

@jnummelin
Copy link
Member

hmm, dunno much about podman but at least when running k0s in Docker you also need some extra cgroups flags if the host is running cgroups v2:

--cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw

See https://docs.k0sproject.io/head/k0s-in-docker/#1-initiate-k0s

@jiridanek
Copy link
Author

jiridanek commented Apr 11, 2022

@jnummelin Thanks, that worked!

bash-5.1# k0s kubectl get nodes
NAME   STATUS   ROLES           AGE    VERSION
k0s    Ready    control-plane   115s   v1.23.5+k0s

So, my command ended up being

sudo podman run \
  --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  -v /var/lib/k0s -v /dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca:/dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca -v /dev/dm-0:/dev/dm-0 \
  --name k0s --hostname k0s --privileged --replace -p 6443:6443 docker.io/k0sproject/k0s:latest k0s controller --enable-worker

The -v /var/lib/k0s [...] line works around openshift/microshift#629 and kubernetes-sigs/kind#2411.
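For anyone else landing here: the extra flags are only needed when the host uses cgroups v2, which you can detect with `stat -fc %T /sys/fs/cgroup` (it prints `cgroup2fs` on a v2 host, `tmpfs` on a v1 host). A small sketch of that check as a helper (the `cgroup_flags` function name is mine, not anything from k0s or podman):

```shell
#!/bin/sh
# Return the extra podman flags needed for the given cgroup
# filesystem type, as reported by: stat -fc %T /sys/fs/cgroup
cgroup_flags() {
  case "$1" in
    cgroup2fs) # cgroups v2 host: share the host cgroup namespace
      echo "--cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw" ;;
    tmpfs)     # cgroups v1 host: no extra flags needed
      echo "" ;;
    *)
      echo "unrecognized cgroup fs type: $1" >&2
      return 1 ;;
  esac
}

# Example use (commented out so the snippet is safe to source):
#   flags=$(cgroup_flags "$(stat -fc %T /sys/fs/cgroup)")
#   sudo podman run $flags --name k0s --hostname k0s --privileged \
#     --replace -p 6443:6443 docker.io/k0sproject/k0s:latest \
#     k0s controller --enable-worker
```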

@jnummelin
Copy link
Member

@jiridanek glad it worked out. I've added the cgroupns details into the docs too now.
