Failed to startup pod after "re-install" #19

Closed
liyimeng opened this issue Feb 7, 2019 · 11 comments

Comments

@liyimeng
Contributor

liyimeng commented Feb 7, 2019

My initial run goes OK. However, when I realize I am running out of disk,
I delete /var/lib/rancher, mount a ZFS volume onto this directory, and restart k3s. After that I always get errors like this (a rough sketch of those steps follows after the log):

INFO[2019-02-07T20:27:43.854571633+01:00] shim reaped id=31bf8c59ef07ec0e779fdb210dce68ae9743ef549820cd212cd19d6b28b306dd
ERRO[2019-02-07T20:27:44.012645877+01:00] RunPodSandbox for &PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,} failed, error error="failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown"
INFO[2019-02-07T20:27:55.742334514+01:00] RunPodSandbox with config &PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,},Hostname:coredns-7748f7f6df-6p99f,LogDirectory:/var/log/pods/8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,DnsConfig:&DNSConfig{Servers:[147.214.252.30 147.214.9.30],Searches:[ki.sw.ericsson.se],Options:[],},PortMappings:[&PortMapping{Protocol:UDP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:9153,HostPort:0,HostIp:,}],Labels:map[string]string{io.kubernetes.pod.name: coredns-7748f7f6df-6p99f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,k8s-app: kube-dns,pod-template-hash: 7748f7f6df,},Annotations:map[string]string{kubernetes.io/config.seen: 2019-02-07T20:21:20.14626935+01:00,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},}
INFO[2019-02-07T20:27:55.852741235+01:00] shim containerd-shim started address=/containerd-shim/k8s.io/ce6315d62ca83305c1a7426bdcf4a865336c5a2906e6ab7156b55c51954536c6/shim.sock debug=false pid=7104
INFO[2019-02-07T20:27:55.856834137+01:00] shim reaped id=ce6315d62ca83305c1a7426bdcf4a865336c5a2906e6ab7156b55c51954536c6
ERRO[2019-02-07T20:27:56.024630042+01:00] RunPodSandbox for &PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,} failed, error error="failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown"
INFO[2019-02-07T20:28:10.742295323+01:00] RunPodSandbox with config &PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:coredns-7748f7f6df-6p99f,Uid:8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,Namespace:kube-system,Attempt:0,},Hostname:coredns-7748f7f6df-6p99f,LogDirectory:/var/log/pods/8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,DnsConfig:&DNSConfig{Servers:[147.214.252.30 147.214.9.30],Searches:[ki.sw.ericsson.se],Options:[],},PortMappings:[&PortMapping{Protocol:UDP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:9153,HostPort:0,HostIp:,}],Labels:map[string]string{io.kubernetes.pod.name: coredns-7748f7f6df-6p99f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,k8s-app: kube-dns,pod-template-hash: 7748f7f6df,},Annotations:map[string]string{kubernetes.io/config.seen: 2019-02-07T20:21:20.14626935+01:00,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod8c47a1c3-2b0d-11e9-9734-3417ebd33b3b,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},}
INFO[2019-02-07T20:28:10.834958570+01:00] shim containerd-shim started address=/containerd-shim/k8s.io/5cebff3e7e0aecd3d7c6e7ef8bdfe3c449a5fa26ad821700fc004ad5f23ed7c7/shim.sock debug=false pid=7371
INFO[2019-02-07T20:28:10.838955158+01:00] shim reaped
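
For reference, a rough sketch of the steps described above (the pool/dataset names are placeholders and the exact commands are approximations, not a verbatim history):

# stop k3s (it was started with `sudo nohup ./k3s server`)
pkill -f "k3s server"

# free up space and move the state directory onto ZFS
rm -rf /var/lib/rancher
zfs create -o mountpoint=/var/lib/rancher tank/rancher   # "tank/rancher" is a placeholder

# start k3s again
sudo nohup ./k3s server &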

@ibuildthecloud
Contributor

Does the agent/server die? Is there any FATAL message?

@ibuildthecloud
Contributor

@liyimeng Can you try again with rc3, which I just built? Quite a few issues have been addressed, but specifically there are some fixes in containerd that may address this issue.

@liyimeng
Contributor Author

liyimeng commented Feb 8, 2019

The agent should be running:

ps -ef | grep k3s
root      4112 21296  0 07:36 pts/18   00:00:00 sudo nohup ./k3s server
root      4113  4112 22 07:36 pts/18   00:00:25 ./k3s server
root      4287  4113  1 07:36 pts/18   00:00:01 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd

rc3 has the same problem. I will attach log files soon.

@liyimeng
Contributor Author

liyimeng commented Feb 8, 2019

pod.log

Here is the pod status

@liyimeng
Contributor Author

liyimeng commented Feb 8, 2019

server.log

@liyimeng
Contributor Author

liyimeng commented Feb 8, 2019

Note: this is running rc3.

@ibuildthecloud
Contributor

ibuildthecloud commented Feb 8, 2019

@liyimeng It looks like you are running k3s on top of overlayfs or a different overlay filesystem. overlayfs on most kernels will not nest nicely. Specifically, /var/lib/rancher/k3s needs to be on a regular filesystem like ext4/xfs. If you are running this inside Docker, that means you need to add -v /var/lib/rancher/k3s.
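
For example, something along these lines (a sketch only; the image name, tag, and published port are placeholders for whatever release and setup you actually use):

# -v gives /var/lib/rancher/k3s its own Docker volume, so k3s state does
# not sit on the container's overlay filesystem
docker run -d --name k3s-server --privileged \
  -v /var/lib/rancher/k3s \
  -p 6443:6443 \
  rancher/k3s:v0.1.0 server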

@liyimeng
Contributor Author

Thanks, I guess so as well. However, my filesystem is ZFS, and I run directly on the host.

I will try again later to see if the problem persists.

@tomoyat1

tomoyat1 commented Mar 17, 2019

I run ZFS on Linux, I run k3s directly on the host, and I seem to be getting the same error as well.
Also, AFAIK, the ZFS implementation on Linux has nothing to do with any overlay filesystems.
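
One quick way to confirm which filesystem actually backs the state directory (a sketch; the path is the one from the logs above):

# print the filesystem type behind k3s's state directory
stat -f -c %T /var/lib/rancher/k3s    # e.g. "zfs", "ext2/ext3", or "overlayfs"
df -T /var/lib/rancher/k3s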

@tomoyat1

The snapshotter for ZFS (https://github.com/containerd/zfs) seems not to be contained in the main containerd repository. Maybe k3s doesn't bundle external snapshotter plugins, and that's why k3s fails to work on ZFS?
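
One way to check which snapshotters the bundled containerd actually exposes (a sketch; the socket path is taken from the ps output earlier in this thread, and the ctr binary has to match k3s's containerd version):

# list containerd plugins: a snapshotter that is not compiled in will not
# appear at all, and one that failed to initialise shows an error status
ctr -a /run/k3s/containerd/containerd.sock plugins ls | grep snapshotter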

@deniseschannon

I'm closing this in favor of #66

Per @ibuildthecloud

We specifically removed the ZFS snapshotter. The reason is that we don't intend to include the ZFS user space, as I believe it is not portable across kernel versions. So we can include the ZFS snapshotter, and you would be required to first install the ZFS tools, which is commonplace already.
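
As a sketch of the prerequisite step that quote refers to, assuming a Debian/Ubuntu host (package names differ on other distributions):

# install the ZFS user-space tools, then confirm ZFS is usable
apt-get update && apt-get install -y zfsutils-linux
modprobe zfs
zfs list    # lists existing pools/datasets if ZFS is working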
