This repository has been archived by the owner on Dec 7, 2023. It is now read-only.

Running ignite on ZoL (ZFS on Linux) fails with: failed to mount /tmp/containerd-mount #630

Open
morancj opened this issue Jun 29, 2020 · 9 comments
Labels: area/runtime (Issues related to container runtimes), kind/bug (Categorizes issue or PR as related to a bug)

Comments

@morancj (Member) commented Jun 29, 2020

Using containerd:

➜ sudo ignite ps -a
VM ID	IMAGE	KERNEL	SIZE	CPUS	MEMORY	CREATED	STATUS	IPS	PORTS	NAME

➜ LOG_START=$( date '+%Y-%m-%d %H:%M:%S' ) ; export LOG_START

➜ sudo ignite ps -a
VM ID	IMAGE	KERNEL	SIZE	CPUS	MEMORY	CREATED	STATUS	IPS	PORTS	NAME

➜ sudo ignite run --runtime containerd weaveworks/ignite-ubuntu \
  --interactive \
  --name containerd-ignite-ubuntu \
  --cpus 4 \
  --ssh \
  --memory 2GB \
  --size 10G
INFO[0001] Created VM with ID "58196d97b056a6c2" and name "containerd-ignite-ubuntu"
FATA[0001] failed to start container for VM "58196d97b056a6c2": failed to mount /tmp/containerd-mount263297979: invalid argument

➜ sudo ignite ps -a
VM ID			IMAGE				KERNEL				SIZE	CPUS	MEMORY	CREATED	STATUS	IPS	PORTS	NAME
58196d97b056a6c2	weaveworks/ignite-ubuntu:latest	weaveworks/ignite-kernel:4.19.125	10.0 GB	4	2.0 GB	4s ago	Stopped			containerd-ignite-ubuntu

➜ sudo ignite vm rm containerd-ignite-ubuntu
INFO[0000] Removing the container with ID "ignite-58196d97b056a6c2" from the "cni" network
INFO[0000] CNI failed to retrieve network namespace path: container "ignite-58196d97b056a6c2" in namespace "firecracker": not found
INFO[0000] Removed VM with name "containerd-ignite-ubuntu" and ID "58196d97b056a6c2"

➜ journalctl -k -S "$LOG_START" --no-pager
-- Logs begin at Thu 2020-01-09 12:05:37 GMT, end at Mon 2020-06-29 13:12:37 BST. --
Jun 29 13:12:31 myhost kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
Jun 29 13:12:32 myhost kernel: overlayfs: upper fs does not support RENAME_WHITEOUT.
Jun 29 13:12:32 myhost kernel: overlayfs: upper fs missing required features.

➜
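
Those two overlayfs lines in the kernel log look like the root cause: the mount containerd attempts fails because the overlay upperdir ends up on a ZFS dataset, and ZFS does not support RENAME_WHITEOUT for overlay upper layers. For reference, I believe the same failure can be reproduced outside ignite/containerd with a plain overlay mount whose upper/work dirs sit on the ZFS-backed /var/lib (paths below are only an example, not from the session above):

# sketch only: any directories on a ZFS dataset will do
sudo mkdir -p /var/lib/containerd/overlay-test/{lower,upper,work,merged}
sudo mount -t overlay overlay \
  -o lowerdir=/var/lib/containerd/overlay-test/lower,upperdir=/var/lib/containerd/overlay-test/upper,workdir=/var/lib/containerd/overlay-test/work \
  /var/lib/containerd/overlay-test/merged
# expected on ZFS: "invalid argument", with the same "upper fs does not support
# RENAME_WHITEOUT" / "upper fs missing required features" lines in dmesg
sudo umount /var/lib/containerd/overlay-test/merged 2>/dev/null
sudo rm -r /var/lib/containerd/overlay-test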

Using Docker:

➜ LOG_START=$( date '+%Y-%m-%d %H:%M:%S' ) ; export LOG_START

➜ sudo ignite ps -a
VM ID	IMAGE	KERNEL	SIZE	CPUS	MEMORY	CREATED	STATUS	IPS	PORTS	NAME

➜ sudo ignite run --runtime docker weaveworks/ignite-ubuntu \
  --interactive \
  --name docker-ignite-ubuntu \
  --cpus 4 \
  --ssh \
  --memory 2GB \
  --size 10G
INFO[0001] Created VM with ID "beebb8f7112b1703" and name "docker-ignite-ubuntu"
INFO[0002] Networking is handled by "cni"
INFO[0002] Started Firecracker VM "beebb8f7112b1703" in a container with ID "aa2eccd08da5a6ce5134b6eb86beb047739b626b50018e5395a1b149c10425f2"
INFO[0002] Waiting for the ssh daemon within the VM to start...
beebb8f7112b1703

Ubuntu 20.04 LTS beebb8f7112b1703 ttyS0

beebb8f7112b1703 login: read escape sequence

➜ sudo ignite ps -a
VM ID			IMAGE				KERNEL					SIZE	CPUS	MEMORY	CREATED	STATUS	IPS		PORTS	NAME
beebb8f7112b1703	weaveworks/ignite-ubuntu:latest	weaveworks/ignite-kernel:4.19.125	10.0 GB	4	2.0 GB	10s ago	Up 10s	10.61.0.10		docker-ignite-ubuntu

➜ sudo ignite vm --runtime docker stop docker-ignite-ubuntu
INFO[0000] Removing the container with ID "ignite-beebb8f7112b1703" from the "cni" network
INFO[0002] Stopped VM with name "docker-ignite-ubuntu" and ID "beebb8f7112b1703"

➜ sudo ignite vm --runtime docker rm docker-ignite-ubuntu
INFO[0000] Removing the container with ID "ignite-beebb8f7112b1703" from the "cni" network
INFO[0000] CNI failed to retrieve network namespace path: Error: No such container: ignite-beebb8f7112b1703
INFO[0000] Removed VM with name "docker-ignite-ubuntu" and ID "beebb8f7112b1703"

➜ journalctl -k -S "$LOG_START" --no-pager
-- Logs begin at Thu 2020-01-09 12:05:37 GMT, end at Mon 2020-06-29 13:19:36 BST. --
Jun 29 13:19:08 myhost kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
Jun 29 13:19:10 myhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7cc424bc: link becomes ready
Jun 29 13:19:10 myhost kernel: ignite0: port 1(veth7cc424bc) entered blocking state
Jun 29 13:19:10 myhost kernel: ignite0: port 1(veth7cc424bc) entered disabled state
Jun 29 13:19:10 myhost kernel: device veth7cc424bc entered promiscuous mode
Jun 29 13:19:10 myhost kernel: ignite0: port 1(veth7cc424bc) entered blocking state
Jun 29 13:19:10 myhost kernel: ignite0: port 1(veth7cc424bc) entered forwarding state
Jun 29 13:19:10 myhost kernel: br_eth0: port 1(vm_eth0) entered blocking state
Jun 29 13:19:10 myhost kernel: br_eth0: port 1(vm_eth0) entered disabled state
Jun 29 13:19:10 myhost kernel: device vm_eth0 entered promiscuous mode
Jun 29 13:19:10 myhost kernel: br_eth0: port 1(vm_eth0) entered blocking state
Jun 29 13:19:10 myhost kernel: br_eth0: port 1(vm_eth0) entered forwarding state
Jun 29 13:19:10 myhost kernel: br_eth0: port 2(eth0) entered blocking state
Jun 29 13:19:10 myhost kernel: br_eth0: port 2(eth0) entered disabled state
Jun 29 13:19:10 myhost kernel: device eth0 entered promiscuous mode
Jun 29 13:19:10 myhost kernel: br_eth0: port 2(eth0) entered blocking state
Jun 29 13:19:10 myhost kernel: br_eth0: port 2(eth0) entered forwarding state
Jun 29 13:19:23 myhost kernel: br_eth0: port 2(eth0) entered disabled state
Jun 29 13:19:23 myhost kernel: ignite0: port 1(veth7cc424bc) entered disabled state
Jun 29 13:19:23 myhost kernel: device eth0 left promiscuous mode
Jun 29 13:19:23 myhost kernel: br_eth0: port 2(eth0) entered disabled state
Jun 29 13:19:23 myhost kernel: device veth7cc424bc left promiscuous mode
Jun 29 13:19:23 myhost kernel: ignite0: port 1(veth7cc424bc) entered disabled state
Jun 29 13:19:24 myhost kernel: br_eth0: port 1(vm_eth0) entered disabled state
Jun 29 13:19:26 myhost kernel: device vm_eth0 left promiscuous mode
Jun 29 13:19:26 myhost kernel: br_eth0: port 1(vm_eth0) entered disabled state

➜

Config:

➜ sudo cat /etc/docker/daemon.json
{
    "storage-driver": "zfs",
    "log-driver": "journald"
}

➜ containerd config dump | grep -i -B1 -A3 zfs
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "zfs"
      default_runtime_name = "runc"
      no_pivot = false
      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]

➜ zfs list -r -t all -o quota,mountpoint zfsroot/containerd
QUOTA  MOUNTPOINT
  10G  /var/lib/containerd/io.containerd.snapshotter.v1.zfs

➜ ls -la /var/lib/containerd/io.containerd.snapshotter.v1.zfs
total 2
drwxr-xr-x  2 root root  2 Jun 29 10:33 .
drwx--x--x 11 root root 11 Jun 29 11:29 ..

➜ lsb_release -a ; uname -o -r -m -s
LSB Version:	1.4
Distributor ID:	Arch
Description:	Arch Linux
Release:	rolling
Codename:	n/a
Linux 5.7.6-arch1-1 x86_64 GNU/Linux
➜ ignite version
Ignite version: version.Info{Major:"0", Minor:"7", GitVersion:"v0.7.0", GitCommit:"0e3459476130fa360fcd058d4cf8a8ef7fdb68a0", GitTreeState:"clean", BuildDate:"2020-06-02T23:22:10Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"linux/amd64", SandboxImage:version.Image{Name:"weaveworks/ignite", Tag:"v0.7.0", Delimeter:":"}, KernelImage:version.Image{Name:"weaveworks/ignite-kernel", Tag:"4.19.125", Delimeter:":"}}
Firecracker version: v0.21.1
Runtime: containerd


➜ sudo ignited version
Ignite version: version.Info{Major:"0", Minor:"7", GitVersion:"v0.7.0", GitCommit:"0e3459476130fa360fcd058d4cf8a8ef7fdb68a0", GitTreeState:"clean", BuildDate:"2020-06-02T23:22:15Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"linux/amd64", SandboxImage:version.Image{Name:"weaveworks/ignite", Tag:"v0.7.0", Delimeter:":"}, KernelImage:version.Image{Name:"weaveworks/ignite-kernel", Tag:"4.19.125", Delimeter:":"}}
Firecracker version: v0.21.1
Runtime: containerd

➜ docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 100
 Server Version: 19.03.11-ce
 Storage Driver: zfs
  Zpool: zfsroot
  Zpool Health: ONLINE
  Parent Dataset: zfsroot/docker
  Space Used By Parent: 15529604608
  Space Available: 16682650112
  Parent Quota: 32212254720
  Compression: off
 Logging Driver: journald
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d76c121f76a5fc8a462dc64594aea72fe18e1178.m
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.7.6-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.22GiB
 Name: myhost
 ID: YL7Z:MUXV:SUE5:BDII:NV6F:26JS:RUXA:34D4:7KNK:Q6IJ:2OQT:REHK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
➜
@morancj (Member Author) commented Jun 29, 2020

[EDIT: ctr does not seem to respect the snapshotter setting in /etc/containerd/config.toml]


This looks like a problem with containerd, or my config for it:

➜ sudo ctr run --rm --tty --snapshotter native docker.io/library/alpine:latest alpine-latest
/ # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.0
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/ # uname -a
Linux myhost 5.7.6-arch1-1 #1 SMP PREEMPT Thu, 25 Jun 2020 00:14:47 +0000 x86_64 Linux
/ # %

➜ sudo ctr run --rm --tty --snapshotter zfs docker.io/library/alpine:latest alpine-latest
/ # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.0
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/ # uname -a
Linux myhost 5.7.6-arch1-1 #1 SMP PREEMPT Thu, 25 Jun 2020 00:14:47 +0000 x86_64 Linux
/ # %

➜ sudo ctr run --rm --tty docker.io/library/alpine:latest alpine-latest
ctr: failed to mount /tmp/containerd-mount194254420: invalid argument

➜
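
For context, ctr doesn't read the CRI plugin section of /etc/containerd/config.toml at all; it defaults to the overlayfs snapshotter unless told otherwise via --snapshotter (or, if I remember correctly, the CONTAINERD_SNAPSHOTTER environment variable). That would explain why the run without a flag fails while the explicit zfs/native runs work. A quick sanity check (a sketch, not from my session):

# list the snapshotter plugins containerd has loaded
sudo ctr plugins ls | grep snapshotter
# pick the snapshotter via the environment instead of a flag
sudo CONTAINERD_SNAPSHOTTER=zfs ctr run --rm --tty \
  docker.io/library/alpine:latest alpine-env-test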

@twelho (Contributor) commented Jun 29, 2020

Thanks for reporting! Ignite has not been tested on ZFS before, so it might also be some configuration mistake on our end 😅 Just to paint a more complete picture for reference, could you also post the output of your ignite run command with --log-level trace set?

@twelho added the area/runtime and kind/bug labels on Jun 29, 2020
@morancj (Member Author) commented Jun 29, 2020

Sure:

➜ sudo ignite ps -a
VM ID	IMAGE	KERNEL	SIZE	CPUS	MEMORY	CREATED	STATUS	IPS	PORTS	NAME

➜ sudo ignite run --runtime containerd weaveworks/ignite-ubuntu \
  --interactive \
  --name containerd-ignite-ubuntu \
  --cpus 4 \
  --ssh \
  --memory 2GB \
  --size 10G \
  --log-level trace
TRAC[0000] Populating providers...
TRAC[0000] Initializing the containerd runtime provider...
TRAC[0000] Initializing the CNI provider...
DEBU[0000] Ensuring image weaveworks/ignite-ubuntu:latest exists, or importing it...
TRAC[0000] Client.Find; GVK: ignite.weave.works/__internal, Kind=Image
TRAC[0000] index: counted 0 Image object(s)
TRAC[0000] cache: miss when listing: ignite.weave.works/__internal, Kind=Image
TRAC[0000] cache: Get Image with UID "b9df7be7f853bfa5"
TRAC[0000] index: cache miss for Image with UID "b9df7be7f853bfa5"
TRAC[0000] index: storing Image object with UID "weaveworks/ignite-ubuntu:latest", meta: false
DEBU[0000] Found image with UID b9df7be7f853bfa5
DEBU[0000] Ensuring kernel weaveworks/ignite-kernel:4.19.125 exists, or importing it...
TRAC[0000] Client.Find; GVK: ignite.weave.works/__internal, Kind=Kernel
TRAC[0000] index: counted 0 Kernel object(s)
TRAC[0000] cache: miss when listing: ignite.weave.works/__internal, Kind=Kernel
TRAC[0000] cache: Get Kernel with UID "dce4f17790c4b419"
TRAC[0000] index: cache miss for Kernel with UID "dce4f17790c4b419"
TRAC[0000] index: storing Kernel object with UID "weaveworks/ignite-kernel:4.19.125", meta: false
DEBU[0000] Found kernel with UID dce4f17790c4b419
TRAC[0000] index: counted 0 VM object(s)
TRAC[0000] cache: miss when listing: ignite.weave.works/__internal, Kind=VM
TRAC[0000] index: counted 0 VM object(s)
TRAC[0000] cache: miss when listing: ignite.weave.works/__internal, Kind=VM
TRAC[0000] Client.Set; UID: "57b9e0bf6cf0228a", GVK: ignite.weave.works/__internal, Kind=VM
TRAC[0000] cache: Set VM with UID "57b9e0bf6cf0228a"
TRAC[0000] index: storing VM object with UID "containerd-ignite-ubuntu", meta: false
TRAC[0000] Client.Find; GVK: ignite.weave.works/__internal, Kind=Image
TRAC[0000] index: counted 1 Image object(s)
TRAC[0000] cache: hit when listing: ignite.weave.works/__internal, Kind=Image
TRAC[0000] index: listing ignite.weave.works/__internal, Kind=Image objects, meta: true
TRAC[0000] cacheObject: "weaveworks/ignite-ubuntu:latest" checksum: "1593438566484574090"
TRAC[0000] cache: Get Image with UID "b9df7be7f853bfa5"
TRAC[0000] index: cache hit for Image with UID "b9df7be7f853bfa5"
TRAC[0000] cacheObject: "weaveworks/ignite-ubuntu:latest" checksum: "1593438566484574090"
TRAC[0000] Client.Find; GVK: ignite.weave.works/__internal, Kind=Image
TRAC[0000] index: counted 1 Image object(s)
TRAC[0000] cache: hit when listing: ignite.weave.works/__internal, Kind=Image
TRAC[0000] index: listing ignite.weave.works/__internal, Kind=Image objects, meta: true
TRAC[0000] cacheObject: "weaveworks/ignite-ubuntu:latest" checksum: "1593438566484574090"
TRAC[0000] cache: Get Image with UID "b9df7be7f853bfa5"
TRAC[0000] index: cache hit for Image with UID "b9df7be7f853bfa5"
TRAC[0000] cacheObject: "weaveworks/ignite-ubuntu:latest" checksum: "1593438566484574090"
TRAC[0000] Client.Find; GVK: ignite.weave.works/__internal, Kind=Kernel
TRAC[0000] index: counted 1 Kernel object(s)
TRAC[0000] cache: hit when listing: ignite.weave.works/__internal, Kind=Kernel
TRAC[0000] index: listing ignite.weave.works/__internal, Kind=Kernel objects, meta: true
TRAC[0000] cacheObject: "weaveworks/ignite-kernel:4.19.125" checksum: "1593438582041233891"
TRAC[0000] cache: Get Kernel with UID "dce4f17790c4b419"
TRAC[0000] index: cache hit for Kernel with UID "dce4f17790c4b419"
TRAC[0000] cacheObject: "weaveworks/ignite-kernel:4.19.125" checksum: "1593438582041233891"
INFO[0001] Created VM with ID "57b9e0bf6cf0228a" and name "containerd-ignite-ubuntu"
TRAC[0001] Client.Find; GVK: ignite.weave.works/__internal, Kind=Image
TRAC[0001] index: counted 1 Image object(s)
TRAC[0001] cache: hit when listing: ignite.weave.works/__internal, Kind=Image
TRAC[0001] index: listing ignite.weave.works/__internal, Kind=Image objects, meta: true
TRAC[0001] cacheObject: "weaveworks/ignite-ubuntu:latest" checksum: "1593438566484574090"
TRAC[0001] cache: Get Image with UID "b9df7be7f853bfa5"
TRAC[0001] index: cache hit for Image with UID "b9df7be7f853bfa5"
TRAC[0001] cacheObject: "weaveworks/ignite-ubuntu:latest" checksum: "1593438566484574090"
TRAC[0001] Client.Find; GVK: ignite.weave.works/__internal, Kind=Kernel
TRAC[0001] index: counted 1 Kernel object(s)
TRAC[0001] cache: hit when listing: ignite.weave.works/__internal, Kind=Kernel
TRAC[0001] index: listing ignite.weave.works/__internal, Kind=Kernel objects, meta: true
TRAC[0001] cacheObject: "weaveworks/ignite-kernel:4.19.125" checksum: "1593438582041233891"
TRAC[0001] cache: Get Kernel with UID "dce4f17790c4b419"
TRAC[0001] index: cache hit for Kernel with UID "dce4f17790c4b419"
TRAC[0001] cacheObject: "weaveworks/ignite-kernel:4.19.125" checksum: "1593438582041233891"
DEBU[0001] containerd: Inspecting image "weaveworks/ignite:v0.7.0"
DEBU[0001] Writing "/var/lib/firecracker/vm/57b9e0bf6cf0228a/runtime.containerd.resolv.conf" with new hash: "fbe74b53a9d2380c8212bed5097146735021280bbc84d9e176139552a999fd25", old hash: ""
FATA[0001] failed to start container for VM "57b9e0bf6cf0228a": failed to mount /tmp/containerd-mount920509976: invalid argument

➜

@morancj (Member Author) commented Jun 29, 2020

ZFS config:

➜ zfs list -d 1 -o space,quota,com.sun:auto-snapshot,mountpoint | grep -E '^NAME|var'
NAME                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  QUOTA  COM.SUN:AUTO-SNAPSHOT  MOUNTPOINT
zfsroot/containerd     9.99G  5.86M       13K     88K             0B      5.76M    10G  false                  /var/lib/containerd/io.containerd.snapshotter.v1.zfs
zfsroot/docker         15.5G  14.5G     10.0M    554M             0B      13.9G    30G  false                  /var/lib/docker
zfsroot/firecracker    32.8G   392M        0B    392M             0B         0B    50G  false                  /var/lib/firecracker

These datasets have quotas and com.sun:auto-snapshot=false set since I currently use zfs-auto-snapshot, and I don't want snapshots containing /var getting hugely bloated by added/removed images.
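
For completeness, those quotas and the auto-snapshot opt-out were set with something along these lines (a sketch, dataset names as above):

sudo zfs set quota=10G zfsroot/containerd
sudo zfs set com.sun:auto-snapshot=false zfsroot/containerd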

@morancj (Member Author) commented Jul 30, 2020

I know you're very busy with other projects: if I can be of further help (perhaps you want me to create and mount an ext4 filesystem somewhere), let me know.

@morancj (Member Author) commented Jan 20, 2021

Workaround

Use an ext4 filesystem for /var/lib/containerd:

  • Stop containerd
  • Change snapshotter from zfs to overlayfs
  • Create a ZFS volume, format as ext4, mount at /var/lib/containerd
  • Restart containerd
  • Verify snapshotter is overlayfs
  • Create VM with ignite

Example

➜ sudo systemctl stop containerd.service
➜ sudo sed -i 's/snapshotter = "zfs"/snapshotter = "overlayfs"/g' /etc/containerd/config.toml
➜ sudo zfs create -b 4096 -o com.sun:auto-snapshot=false -V 8G zfsroot/volumes/containerd.ext4
➜ sudo mkfs.ext4 -b 4096 -L containerd /dev/zvol/zfsroot/volumes/containerd.ext4
➜ sudo mv /var/lib/containerd /var/lib/containerd.zfsparent
➜ sudo mkdir /var/lib/containerd
➜ sudo mount /dev/zvol/zfsroot/volumes/containerd.ext4 /var/lib/containerd
➜ sudo systemctl start containerd.service
➜ containerd config dump | grep -i -B1 overlay    
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
➜ 

sudo ignite run ... now runs as expected. Let me know if you want me to switch back to ZFS for testing.

I'm guessing openzfs/zfs#9414 would permanently resolve this; otherwise, detecting ZFS and changing the overlayfs behaviour is probably required.
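
A cheap version of that detection would be to check the filesystem type backing containerd's root before assuming overlayfs will work; just a sketch, not what ignite currently does:

# stat -f prints the filesystem type backing a path ("zfs", "ext2/ext3", ...)
fstype="$(stat -f -c %T /var/lib/containerd)"
if [ "$fstype" = "zfs" ]; then
  echo "containerd root is on ZFS; the overlayfs snapshotter will fail here" >&2
fi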

Note

For testing, one could mount a new ext4 filesystem at /var/lib/containerd.ext4, stop containerd, update the config, and run sudo containerd --root /var/lib/containerd.ext4 instead, remembering to remove local ignite artifacts after changing containerd's root. This is as per @bboreham's issue: containerd/containerd#2402 (comment). To aid testing, one could also create one config.toml file for ext4 and one for ZFS, create both filesystems, and switch configs with a stop containerd → update symlink → start containerd loop (sketched below).
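
Something like this, assuming /etc/containerd/config.toml is a symlink to one of the two variant files (the helper name is made up):

# stop containerd, repoint the config symlink, start containerd again
switch_containerd_backend() {
  local variant="$1"   # "ext4" or "zfs"
  sudo systemctl stop containerd.service
  sudo ln -sf "/etc/containerd/config.toml.${variant}" /etc/containerd/config.toml
  sudo systemctl start containerd.service
}
# e.g. switch_containerd_backend zfs before a ZFS test run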

My mounts for these look like this:

➜ findmnt --submounts /var/lib
TARGET                                                       SOURCE                                                                      FSTYPE OPTIONS
/var/lib                                                     zfsroot/ROOT/arch-linux/var/lib                                             zfs    rw,nosuid,xattr,posixacl
├─/var/lib/containerd.zfs                                    zfsroot/ROOT/arch-linux/var/lib/containerd                                  zfs    rw,nosuid,xattr,posixacl
│ └─/var/lib/containerd.zfs/io.containerd.snapshotter.v1.zfs zfsroot/ROOT/arch-linux/var/lib/containerd/io.containerd.snapshotter.v1.zfs zfs    rw,nosuid,xattr,posixacl
├─/var/lib/libvirt                                           zfsroot/ROOT/arch-linux/var/lib/libvirt                                     zfs    rw,nosuid,noexec,relatime,xattr,posixacl
├─/var/lib/docker                                            zfsroot/docker                                                              zfs    rw,relatime,xattr,noacl
│ └─/var/lib/docker/volumes                                  zfsroot/volumes/docker                                                      zfs    rw,relatime,xattr,noacl
└─/var/lib/containerd.ext4                                   /dev/zd16                                                                   ext4   rw,relatime

and the config diff:

➜ diff /etc/containerd/config.toml.ext4 /etc/containerd/config.toml.zfs
2c2
< root = "/var/lib/containerd.ext4"
---
> root = "/var/lib/containerd.zfs"
5c5
< disabled_plugins = ["io.containerd.snapshotter.v1.btrfs", "io.containerd.snapshotter.v1.aufs", "io.containerd.snapshotter.v1.zfs"]
---
> disabled_plugins = ["io.containerd.snapshotter.v1.btrfs", "io.containerd.snapshotter.v1.aufs", "io.containerd.snapshotter.v1.devmapper"]
72c72
<       snapshotter = "overlayfs"
---
>       snapshotter = "zfs"

(only disabling the various snapshotter plugins due to paranoia)

@hh commented Dec 17, 2022

@morancj I'm hitting this as well.

@hh commented Dec 17, 2022

Found similar prior research in rkt and containerd:

From a function added to rkt:
https://github.com/rkt/rkt/pull/2600/files#diff-a70f47b70b2d542f3eaa01a097d0f6d97e1f49cee567e258d4afc2a79f07af92R324-L339

package fsutil

import "syscall"

// Linux filesystem magic numbers (see statfs(2)), mirroring the constants rkt
// defines; the package clause, import and constants are added here so the
// snippet compiles standalone.
const (
	FsMagicAUFS = 0x61756673 // AUFS_SUPER_MAGIC
	FsMagicZFS  = 0x2fc12fc1 // ZFS_SUPER_MAGIC
)

// FSSupportsOverlay checks whether the filesystem under which
// a specified path resides is compatible with OverlayFS
func FSSupportsOverlay(path string) bool {
	var data syscall.Statfs_t
	err := syscall.Statfs(path, &data)
	if err != nil {
		return false
	}

	if data.Type == FsMagicAUFS ||
		data.Type == FsMagicZFS {
		return false
	}

	return true
}

@hh commented Dec 17, 2022

I was able to use the zfs snapshotter as long as /var/lib/containerd itself was ext4, with a ZFS legacy mount at /var/lib/containerd/io.containerd.snapshotter.v1.zfs.

systemctl stop containerd.service
mv /var/lib/containerd /var/lib/containerd.zfsparent

# create a zvol and format it as ext4
zfs create -b 4096 -o com.sun:auto-snapshot=false -V 80G rpool/containerd.ext4
mkfs.ext4 -b 4096 -L containerd /dev/zvol/rpool/containerd.ext4


# Mount ext4 under /var/lib/containerd
mkdir /var/lib/containerd
mount /dev/zvol/rpool/containerd.ext4 /var/lib/containerd


# Mount zfs under io.containerd.snapshotter.v1.zfs
mkdir /var/lib/containerd/io.containerd.snapshotter.v1.zfs
zfs create -o mountpoint=legacy rpool/containerd-zfs
mount -t zfs rpool/containerd-zfs /var/lib/containerd/io.containerd.snapshotter.v1.zfs


# containerd should come up with snapshotter zfs
systemctl start containerd.service

Config dump at this point shows the zfs snapshotter enabled:

containerd config dump | grep -i -B6 snapshotter\ =
  [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "zfs"
