crun 1.4+ CreateContainer in sandbox failed for volumeDevices container #917

Closed · Omar007 opened this issue May 6, 2022 · 10 comments · Fixed by #960

Omar007 commented May 6, 2022

Starting with crun 1.4 and up, Kubernetes is unable to initialize containers that use a volumeDevice, such as the Rook/Ceph OSD blkdevmapper initContainer. Creation fails with the following log messages:

May 06 22:14:52 othala kubelet[3591]: E0506 22:14:52.032881    3591 remote_runtime.go:416] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = container create failed: mknod `/scsi-hdd-data-7msn99`: Not a directory\n" podSandboxID="f3c28d182d527dd7b18c8043fddadfb09fe6643c2a2bf757f34e7904396c1acc"
May 06 22:14:52 othala kubelet[3591]: E0506 22:14:52.033126    3591 kuberuntime_manager.go:919] init container &Container{Name:blkdevmapper,Image:quay.io/ceph/ceph:v16.2.7,Command:[/bin/bash -c
May 06 22:14:52 othala kubelet[3591]: set -xe
May 06 22:14:52 othala kubelet[3591]: PVC_SOURCE=/scsi-hdd-data-7msn99
May 06 22:14:52 othala kubelet[3591]: PVC_DEST=/var/lib/ceph/osd/ceph-7/block-tmp
May 06 22:14:52 othala kubelet[3591]: CP_ARGS=(--archive --dereference --verbose)
May 06 22:14:52 othala kubelet[3591]: if [ -b "$PVC_DEST" ]; then
May 06 22:14:52 othala kubelet[3591]:         PVC_SOURCE_MAJ_MIN=$(stat --format '%t%T' $PVC_SOURCE)
May 06 22:14:52 othala kubelet[3591]:         PVC_DEST_MAJ_MIN=$(stat --format '%t%T' $PVC_DEST)
May 06 22:14:52 othala kubelet[3591]:         if [[ "$PVC_SOURCE_MAJ_MIN" == "$PVC_DEST_MAJ_MIN" ]]; then
May 06 22:14:52 othala kubelet[3591]:                 CP_ARGS+=(--no-clobber)
May 06 22:14:52 othala kubelet[3591]:         else
May 06 22:14:52 othala kubelet[3591]:                 echo "PVC's source major/minor numbers changed"
May 06 22:14:52 othala kubelet[3591]:                 CP_ARGS+=(--remove-destination)
May 06 22:14:52 othala kubelet[3591]:         fi
May 06 22:14:52 othala kubelet[3591]: fi
May 06 22:14:52 othala kubelet[3591]: cp "${CP_ARGS[@]}" "$PVC_SOURCE" "$PVC_DEST"
May 06 22:14:52 othala kubelet[3591]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},memory: {{4294967296 0} {<nil>} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},memory: {{4294967296 0} {<nil>} 4Gi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scsi-hdd-data-7msn99-bridge,ReadOnly:false,MountPath:/var/lib/ceph/osd/ceph-7,SubPath:ceph-7,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lg65k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[MKNOD],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{VolumeDevice{Name:scsi-hdd-data-7msn99,DevicePath:/scsi-hdd-data-7msn99,},},StartupProbe:nil,} start failed in pod rook-ceph-osd-7-699b8484fc-4t8wp_rook-ceph(ac446563-9566-4f3d-b69b-1b52cfe68b47): CreateContainerError: container create failed: mknod `/scsi-hdd-data-7msn99`: Not a directory
May 06 22:14:52 othala kubelet[3591]: E0506 22:14:52.033222    3591 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"blkdevmapper\" with CreateContainerError: \"container create failed: mknod `/scsi-hdd-data-7msn99`: Not a directory\\n\"" pod="rook-ceph/rook-ceph-osd-7-699b8484fc-4t8wp" podUID=ac446563-9566-4f3d-b69b-1b52cfe68b47

After reverting to crun 1.3, the containers come online normally.

Since reverting only crun to 1.3, without touching anything else, resolves the issue, I suspect the problem is in crun.
However, I don't have much more information to go on at this time, so if any additional information is desired, I'd gladly take pointers on where to look or what to supply.
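
For context, a volumeDevice surfaces a block-mode PVC as a raw device node inside the container, which is the code path failing here. A minimal PodSpec exercising it might look like the sketch below; every name in it is illustrative rather than taken from this cluster:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: volumedevice-repro          # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: repro
    image: quay.io/ceph/ceph:v16.2.7
    command: ["ls", "-l", "/repro-disk"]
    volumeDevices:
    - name: data
      devicePath: /repro-disk       # device node created at the rootfs root
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: repro-block-pvc    # assumes a PVC with volumeMode: Block
EOF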

giuseppe (Member) commented May 9, 2022

could you share the pod spec you've used to create the Kubernetes pods?

If it is easy to reproduce for you, would it be possible to run a git bisect on crun to see exactly what commit introduced the problem?
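
A sketch of that workflow, assuming the tags and the autotools build used by the crun repo:

git clone https://github.com/containers/crun && cd crun
git bisect start
git bisect bad 1.4        # first known-bad release
git bisect good 1.3       # last known-good release
# at each step: build, install on the node, retry the pod, then mark it
./autogen.sh && ./configure && make
git bisect good           # or: git bisect bad / git bisect skip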

Omar007 (Author) commented May 9, 2022

The Rook container spec is essentially in the log above, but I'll see if I can create a minimal test PodSpec that strips it down to the bare minimum that still fails, hopefully sometime tomorrow.

It's not what I'd call hard to reproduce: just switch out crun, and Kubernetes either starts these containers successfully or it doesn't (with the above error/logs as a result). Nothing changes other than the crun version used by the cluster/node hosting the container (see the sketch below for one way to swap it).
The hard/time-consuming part is going to be bisecting crun: switching crun out underneath the cluster/node for each bisect version and then checking whether Kubernetes starts the containers. Shouldn't be too difficult though 🤔
I'll have to check if I can make a start on that later this week.
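
One way to do that swap on a CRI-O node is a runtime drop-in; the file name and binary path here are assumptions, not taken from this cluster:

# point CRI-O at a locally built crun, then restart it
cat > /etc/crio/crio.conf.d/99-crun-bisect.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/local/bin/crun-bisect"
EOF
systemctl restart crio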

giuseppe (Member) commented

would it be easier for you to share the config.json that crun creates under /run/crun/$CONTAINER_ID/config.json?

Omar007 (Author) commented May 14, 2022

I have bisected between tags 1.3 and 1.4. I had to mark three commits bad because they didn't even build, and I am left with commit 4eb1f03 as the first bad one, which sadly also didn't build:

...
  CCLD     tests/tests_libcrun_fuzzer
  CCLD     libcrun.la
  CCLD     python_crun.la
  CCLD     crun
/usr/bin/ld: src/crun-crun.o: in function `print_version':
crun.c:(.text+0x11d): undefined reference to `print_handlers_feature_tags'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:1252: crun] Error 1
make[2]: Leaving directory '/build/crun/src/crun'
make[1]: *** [Makefile:2164: all-recursive] Error 1
make[1]: Leaving directory '/build/crun/src/crun'
make: *** [Makefile:893: all] Error 2
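
As an aside, git bisect has a dedicated verdict for commits that cannot be built or tested; marking them bad instead can steer the bisection toward the wrong first-bad commit, which may explain why the result landed on an unbuildable one:

git bisect skip   # untestable commit: counts as neither good nor bad
# bisect then tries a nearby commit instead; with several skips the final
# answer may be a small range of candidates rather than a single commit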

giuseppe (Member) commented

Thanks for doing this. Could you please tell me the previous commits you tested? I doubt this commit could have introduced the regression. Is there an easy way to reproduce the issue with a podman command line? Or could you share the pod spec and image you used?

Omar007 (Author) commented May 14, 2022

The config.json for an instance of the container from the original message is this one:

{
	"ociVersion": "1.0.2-dev",
	"process": {
		"user": {
			"uid": 0,
			"gid": 0
		},
		"args": [
			"/bin/bash",
			"-c",
			"\nset -xe\n\nPVC_SOURCE=/scsi-hdd-data-7msn99\nPVC_DEST=/var/lib/ceph/osd/ceph-7/block-tmp\nCP_ARGS=(--archive --dereference --verbose)\n\nif [ -b \"$PVC_DEST\" ]; then\n\tPVC_SOURCE_MAJ_MIN=$(stat --format '%t%T' $PVC_SOURCE)\n\tPVC_DEST_MAJ_MIN=$(stat --format '%t%T' $PVC_DEST)\n\tif [[ \"$PVC_SOURCE_MAJ_MIN\" == \"$PVC_DEST_MAJ_MIN\" ]]; then\n\t\tCP_ARGS+=(--no-clobber)\n\telse\n\t\techo \"PVC's source major/minor numbers changed\"\n\t\tCP_ARGS+=(--remove-destination)\n\tfi\nfi\n\ncp \"${CP_ARGS[@]}\" \"$PVC_SOURCE\" \"$PVC_DEST\"\n"
		],
		"env": [
			"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
			"TERM=xterm",
			"HOSTNAME=rook-ceph-osd-7-699b8484fc-ft6wr",
			"KUBERNETES_SERVICE_PORT=443",
			"ROOK_CEPH_NFS_CEPH_NFS_A_PORT_2049_TCP_PROTO=tcp",
			"ROOK_CEPH_MGR_DASHBOARD_SERVICE_PORT=7000",
			"ROOK_CEPH_MGR_PORT_9283_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_P_PORT_6789_TCP_ADDR=10.50.19.189",
			"ROOK_CEPH_MON_U_SERVICE_HOST=10.50.16.56",
			"ROOK_CEPH_MON_P_PORT_6789_TCP_PROTO=tcp",
			"ROOK_CEPH_MGR_DASHBOARD_PORT_7000_TCP_PORT=7000",
			"KUBERNETES_PORT=tcp://10.50.16.1:443",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8081_TCP_ADDR=10.50.16.248",
			"CSI_RBDPLUGIN_METRICS_SERVICE_PORT=8080",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_PORT_80_TCP_ADDR=10.50.17.6",
			"ROOK_CEPH_MON_U_PORT=tcp://10.50.16.56:6789",
			"CSI_RBDPLUGIN_METRICS_PORT_8080_TCP_PORT=8080",
			"ROOK_CEPH_NFS_CEPH_NFS_A_SERVICE_PORT_NFS=2049",
			"ROOK_CEPH_MON_T_SERVICE_PORT_TCP_MSGR1=6789",
			"ROOK_CEPH_MGR_DASHBOARD_PORT_7000_TCP_PROTO=tcp",
			"CSI_RBDPLUGIN_METRICS_SERVICE_HOST=10.50.19.173",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8081_TCP_PROTO=tcp",
			"CSI_RBDPLUGIN_METRICS_PORT_8081_TCP=tcp://10.50.19.173:8081",
			"CSI_RBDPLUGIN_METRICS_PORT_8081_TCP_PORT=8081",
			"ROOK_CEPH_MGR_DASHBOARD_PORT=tcp://10.50.18.72:7000",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_PORT=tcp://10.50.17.6:80",
			"CSI_RBDPLUGIN_METRICS_PORT_8080_TCP=tcp://10.50.19.173:8080",
			"CSI_RBDPLUGIN_METRICS_PORT_8080_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_T_PORT_3300_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_U_SERVICE_PORT=6789",
			"ROOK_CEPH_MON_T_SERVICE_PORT_TCP_MSGR2=3300",
			"CSI_CEPHFSPLUGIN_METRICS_SERVICE_PORT_CSI_GRPC_METRICS=8081",
			"ROOK_CEPH_MGR_PORT_9283_TCP_PORT=9283",
			"ROOK_CEPH_MGR_PORT_9283_TCP=tcp://10.50.17.148:9283",
			"ROOK_CEPH_ADMISSION_CONTROLLER_PORT_443_TCP=tcp://10.50.16.222:443",
			"ROOK_CEPH_MON_P_PORT_6789_TCP_PORT=6789",
			"CSI_CEPHFSPLUGIN_METRICS_SERVICE_PORT=8080",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8080_TCP=tcp://10.50.16.248:8080",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8080_TCP_ADDR=10.50.16.248",
			"CSI_RBDPLUGIN_METRICS_PORT_8080_TCP_ADDR=10.50.19.173",
			"ROOK_CEPH_MGR_PORT=tcp://10.50.17.148:9283",
			"ROOK_CEPH_MON_P_PORT_3300_TCP_ADDR=10.50.19.189",
			"ROOK_CEPH_MON_U_SERVICE_PORT_TCP_MSGR1=6789",
			"CSI_RBDPLUGIN_METRICS_PORT_8081_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_U_PORT_6789_TCP_PORT=6789",
			"ROOK_CEPH_MON_U_PORT_3300_TCP_ADDR=10.50.16.56",
			"ROOK_CEPH_ADMISSION_CONTROLLER_SERVICE_PORT=443",
			"ROOK_CEPH_MGR_DASHBOARD_SERVICE_HOST=10.50.18.72",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8081_TCP_PORT=8081",
			"ROOK_CEPH_ADMISSION_CONTROLLER_PORT_443_TCP_ADDR=10.50.16.222",
			"ROOK_CEPH_NFS_CEPH_NFS_A_PORT_2049_TCP=tcp://10.50.17.203:2049",
			"ROOK_CEPH_MON_T_PORT=tcp://10.50.18.185:6789",
			"ROOK_CEPH_MON_P_PORT_3300_TCP_PROTO=tcp",
			"CSI_CEPHFSPLUGIN_METRICS_SERVICE_HOST=10.50.16.248",
			"ROOK_CEPH_ADMISSION_CONTROLLER_PORT_443_TCP_PORT=443",
			"ROOK_CEPH_MON_T_PORT_6789_TCP_PORT=6789",
			"ROOK_CEPH_MON_T_PORT_3300_TCP_ADDR=10.50.18.185",
			"KUBERNETES_PORT_443_TCP=tcp://10.50.16.1:443",
			"ROOK_CEPH_MON_U_SERVICE_PORT_TCP_MSGR2=3300",
			"ROOK_CEPH_MGR_SERVICE_PORT=9283",
			"ROOK_CEPH_MON_P_SERVICE_PORT=6789",
			"ROOK_CEPH_MON_P_PORT=tcp://10.50.19.189:6789",
			"ROOK_CEPH_MON_T_PORT_6789_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_P_PORT_6789_TCP=tcp://10.50.19.189:6789",
			"ROOK_CEPH_MGR_DASHBOARD_PORT_7000_TCP=tcp://10.50.18.72:7000",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_SERVICE_HOST=10.50.17.6",
			"ROOK_CEPH_NFS_CEPH_NFS_A_PORT_2049_TCP_ADDR=10.50.17.203",
			"KUBERNETES_SERVICE_HOST=10.50.16.1",
			"ROOK_CEPH_MON_U_PORT_6789_TCP=tcp://10.50.16.56:6789",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8080_TCP_PORT=8080",
			"ROOK_CEPH_MON_T_PORT_3300_TCP_PORT=3300",
			"KUBERNETES_SERVICE_PORT_HTTPS=443",
			"KUBERNETES_PORT_443_TCP_ADDR=10.50.16.1",
			"ROOK_CEPH_NFS_CEPH_NFS_A_PORT=tcp://10.50.17.203:2049",
			"ROOK_CEPH_MON_T_SERVICE_PORT=6789",
			"KUBERNETES_PORT_443_TCP_PORT=443",
			"ROOK_CEPH_MON_U_PORT_3300_TCP_PROTO=tcp",
			"CSI_RBDPLUGIN_METRICS_SERVICE_PORT_CSI_HTTP_METRICS=8080",
			"CSI_RBDPLUGIN_METRICS_SERVICE_PORT_CSI_GRPC_METRICS=8081",
			"ROOK_CEPH_MON_U_PORT_3300_TCP=tcp://10.50.16.56:3300",
			"ROOK_CEPH_MON_T_SERVICE_HOST=10.50.18.185",
			"ROOK_CEPH_MGR_DASHBOARD_SERVICE_PORT_HTTP_DASHBOARD=7000",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_PORT_80_TCP=tcp://10.50.17.6:80",
			"KUBERNETES_PORT_443_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_P_PORT_3300_TCP_PORT=3300",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_PORT_80_TCP_PORT=80",
			"ROOK_CEPH_MON_U_PORT_3300_TCP_PORT=3300",
			"ROOK_CEPH_MGR_SERVICE_PORT_HTTP_METRICS=9283",
			"ROOK_CEPH_ADMISSION_CONTROLLER_SERVICE_HOST=10.50.16.222",
			"ROOK_CEPH_MON_P_SERVICE_HOST=10.50.19.189",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_SERVICE_PORT_HTTP=80",
			"ROOK_CEPH_MON_U_PORT_6789_TCP_PROTO=tcp",
			"ROOK_CEPH_ADMISSION_CONTROLLER_PORT_443_TCP_PROTO=tcp",
			"ROOK_CEPH_MON_T_PORT_6789_TCP=tcp://10.50.18.185:6789",
			"CSI_CEPHFSPLUGIN_METRICS_SERVICE_PORT_CSI_HTTP_METRICS=8080",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8080_TCP_PROTO=tcp",
			"CSI_CEPHFSPLUGIN_METRICS_PORT_8081_TCP=tcp://10.50.16.248:8081",
			"ROOK_CEPH_NFS_CEPH_NFS_A_PORT_2049_TCP_PORT=2049",
			"ROOK_CEPH_MON_P_SERVICE_PORT_TCP_MSGR2=3300",
			"CSI_RBDPLUGIN_METRICS_PORT=tcp://10.50.19.173:8080",
			"ROOK_CEPH_MGR_SERVICE_HOST=10.50.17.148",
			"ROOK_CEPH_MGR_PORT_9283_TCP_ADDR=10.50.17.148",
			"ROOK_CEPH_NFS_CEPH_NFS_A_SERVICE_PORT=2049",
			"ROOK_CEPH_MON_P_PORT_3300_TCP=tcp://10.50.19.189:3300",
			"CSI_RBDPLUGIN_METRICS_PORT_8081_TCP_ADDR=10.50.19.173",
			"ROOK_CEPH_MON_T_PORT_6789_TCP_ADDR=10.50.18.185",
			"ROOK_CEPH_MGR_DASHBOARD_PORT_7000_TCP_ADDR=10.50.18.72",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_PORT_80_TCP_PROTO=tcp",
			"CSI_CEPHFSPLUGIN_METRICS_PORT=tcp://10.50.16.248:8080",
			"ROOK_CEPH_ADMISSION_CONTROLLER_PORT=tcp://10.50.16.222:443",
			"ROOK_CEPH_NFS_CEPH_NFS_A_SERVICE_HOST=10.50.17.203",
			"ROOK_CEPH_MON_U_PORT_6789_TCP_ADDR=10.50.16.56",
			"ROOK_CEPH_MON_T_PORT_3300_TCP=tcp://10.50.18.185:3300",
			"ROOK_CEPH_MON_P_SERVICE_PORT_TCP_MSGR1=6789",
			"ROOK_CEPH_RGW_CEPH_OBJECTSTORE_OSD_EC_6_2_HDD_SERVICE_PORT=80",
			"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
			"I_AM_IN_A_CONTAINER=1",
			"CEPH_VERSION=pacific",
			"CEPH_POINT_RELEASE=-16.2.7",
			"CEPH_DEVEL=false",
			"CEPH_REF=pacific",
			"OSD_FLAVOR=default"
		],
		"cwd": "/",
		"capabilities": {
			"bounding": [
				"CAP_MKNOD",
				"CAP_CHOWN",
				"CAP_DAC_OVERRIDE",
				"CAP_FSETID",
				"CAP_FOWNER",
				"CAP_SETGID",
				"CAP_SETUID",
				"CAP_SETPCAP",
				"CAP_NET_BIND_SERVICE",
				"CAP_KILL"
			],
			"effective": [
				"CAP_MKNOD",
				"CAP_CHOWN",
				"CAP_DAC_OVERRIDE",
				"CAP_FSETID",
				"CAP_FOWNER",
				"CAP_SETGID",
				"CAP_SETUID",
				"CAP_SETPCAP",
				"CAP_NET_BIND_SERVICE",
				"CAP_KILL"
			],
			"inheritable": [
				"CAP_MKNOD",
				"CAP_CHOWN",
				"CAP_DAC_OVERRIDE",
				"CAP_FSETID",
				"CAP_FOWNER",
				"CAP_SETGID",
				"CAP_SETUID",
				"CAP_SETPCAP",
				"CAP_NET_BIND_SERVICE",
				"CAP_KILL"
			],
			"permitted": [
				"CAP_MKNOD",
				"CAP_CHOWN",
				"CAP_DAC_OVERRIDE",
				"CAP_FSETID",
				"CAP_FOWNER",
				"CAP_SETGID",
				"CAP_SETUID",
				"CAP_SETPCAP",
				"CAP_NET_BIND_SERVICE",
				"CAP_KILL"
			]
		},
		"oomScoreAdj": -997
	},
	"root": {
		"path": "/var/lib/containers/storage/btrfs/subvolumes/f6e9a78dfcef7aa700b4d19ada4624a0e10fe4ebd2ee0b5de705aa25c8a1ede3"
	},
	"hostname": "rook-ceph-osd-7-699b8484fc-ft6wr",
	"mounts": [
		{
			"destination": "/proc",
			"type": "proc",
			"source": "proc",
			"options": [
				"nosuid",
				"noexec",
				"nodev"
			]
		},
		{
			"destination": "/dev",
			"type": "tmpfs",
			"source": "tmpfs",
			"options": [
				"nosuid",
				"strictatime",
				"mode=755",
				"size=65536k"
			]
		},
		{
			"destination": "/dev/pts",
			"type": "devpts",
			"source": "devpts",
			"options": [
				"nosuid",
				"noexec",
				"newinstance",
				"ptmxmode=0666",
				"mode=0620",
				"gid=5"
			]
		},
		{
			"destination": "/dev/mqueue",
			"type": "mqueue",
			"source": "mqueue",
			"options": [
				"nosuid",
				"noexec",
				"nodev"
			]
		},
		{
			"destination": "/sys",
			"type": "sysfs",
			"source": "sysfs",
			"options": [
				"nosuid",
				"noexec",
				"nodev",
				"ro"
			]
		},
		{
			"destination": "/sys/fs/cgroup",
			"type": "cgroup",
			"source": "cgroup",
			"options": [
				"nosuid",
				"noexec",
				"nodev",
				"relatime",
				"ro"
			]
		},
		{
			"destination": "/dev/shm",
			"type": "bind",
			"source": "/dev/shm",
			"options": [
				"rw",
				"bind"
			]
		},
		{
			"destination": "/etc/resolv.conf",
			"type": "bind",
			"source": "/run/containers/storage/btrfs-containers/9a0acc14281429862d6c5fe57cf019c8b3a4316f30930afdd4197c5fa688543c/userdata/resolv.conf",
			"options": [
				"rw",
				"bind",
				"nodev",
				"nosuid",
				"noexec"
			]
		},
		{
			"destination": "/etc/hostname",
			"type": "bind",
			"source": "/run/containers/storage/btrfs-containers/9a0acc14281429862d6c5fe57cf019c8b3a4316f30930afdd4197c5fa688543c/userdata/hostname",
			"options": [
				"rw",
				"bind"
			]
		},
		{
			"destination": "/run/.containerenv",
			"type": "bind",
			"source": "/run/containers/storage/btrfs-containers/9a0acc14281429862d6c5fe57cf019c8b3a4316f30930afdd4197c5fa688543c/userdata/.containerenv",
			"options": [
				"rw",
				"bind"
			]
		},
		{
			"destination": "/etc/hosts",
			"type": "bind",
			"source": "/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/etc-hosts",
			"options": [
				"rw",
				"rbind",
				"rprivate",
				"bind"
			]
		},
		{
			"destination": "/dev/termination-log",
			"type": "bind",
			"source": "/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/containers/blkdevmapper/118f603b",
			"options": [
				"rw",
				"rbind",
				"rprivate",
				"bind"
			]
		},
		{
			"destination": "/var/lib/ceph/osd/ceph-7",
			"type": "bind",
			"source": "/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/volume-subpaths/scsi-hdd-data-7msn99-bridge/blkdevmapper/0",
			"options": [
				"rw",
				"rbind",
				"rprivate",
				"bind"
			]
		},
		{
			"destination": "/var/run/secrets/kubernetes.io/serviceaccount",
			"type": "bind",
			"source": "/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/volumes/kubernetes.io~projected/kube-api-access-5zqhg",
			"options": [
				"ro",
				"rbind",
				"rprivate",
				"bind"
			]
		}
	],
	"annotations": {
		"io.container.manager": "cri-o",
		"io.kubernetes.container.hash": "1c132dac",
		"io.kubernetes.container.name": "blkdevmapper",
		"io.kubernetes.container.restartCount": "36",
		"io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
		"io.kubernetes.container.terminationMessagePolicy": "File",
		"io.kubernetes.cri-o.Annotations": "{\"io.kubernetes.container.hash\":\"1c132dac\",\"io.kubernetes.container.restartCount\":\"36\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}",
		"io.kubernetes.cri-o.ContainerID": "b0ccdef6af6079e97fc3dc6e27ed43a188acbf901b32647d4518aa635c9e6891",
		"io.kubernetes.cri-o.ContainerType": "container",
		"io.kubernetes.cri-o.Created": "2022-05-14T19:40:11.398847975+02:00",
		"io.kubernetes.cri-o.IP.0": "10.50.6.154",
		"io.kubernetes.cri-o.IP.1": "2a10:3781:9a:50:10:50:0:69a",
		"io.kubernetes.cri-o.Image": "cc266d6139f4d044d28ace2308f7befcdfead3c3e88bc3faed905298cae299ef",
		"io.kubernetes.cri-o.ImageName": "quay.io/ceph/ceph:v16.2.7",
		"io.kubernetes.cri-o.ImageRef": "cc266d6139f4d044d28ace2308f7befcdfead3c3e88bc3faed905298cae299ef",
		"io.kubernetes.cri-o.Labels": "{\"io.kubernetes.container.name\":\"blkdevmapper\",\"io.kubernetes.pod.name\":\"rook-ceph-osd-7-699b8484fc-ft6wr\",\"io.kubernetes.pod.namespace\":\"rook-ceph\",\"io.kubernetes.pod.uid\":\"8a74643a-984c-4ff7-a38b-8aaf8cbc6045\"}",
		"io.kubernetes.cri-o.LogPath": "/var/log/pods/rook-ceph_rook-ceph-osd-7-699b8484fc-ft6wr_8a74643a-984c-4ff7-a38b-8aaf8cbc6045/blkdevmapper/36.log",
		"io.kubernetes.cri-o.Metadata": "{\"name\":\"blkdevmapper\",\"attempt\":36}",
		"io.kubernetes.cri-o.MountPoint": "/var/lib/containers/storage/btrfs/subvolumes/f6e9a78dfcef7aa700b4d19ada4624a0e10fe4ebd2ee0b5de705aa25c8a1ede3",
		"io.kubernetes.cri-o.Name": "k8s_blkdevmapper_rook-ceph-osd-7-699b8484fc-ft6wr_rook-ceph_8a74643a-984c-4ff7-a38b-8aaf8cbc6045_36",
		"io.kubernetes.cri-o.ResolvPath": "/run/containers/storage/btrfs-containers/9a0acc14281429862d6c5fe57cf019c8b3a4316f30930afdd4197c5fa688543c/userdata/resolv.conf",
		"io.kubernetes.cri-o.SandboxID": "9a0acc14281429862d6c5fe57cf019c8b3a4316f30930afdd4197c5fa688543c",
		"io.kubernetes.cri-o.SandboxName": "k8s_rook-ceph-osd-7-699b8484fc-ft6wr_rook-ceph_8a74643a-984c-4ff7-a38b-8aaf8cbc6045_0",
		"io.kubernetes.cri-o.SeccompProfilePath": "",
		"io.kubernetes.cri-o.Stdin": "false",
		"io.kubernetes.cri-o.StdinOnce": "false",
		"io.kubernetes.cri-o.TTY": "false",
		"io.kubernetes.cri-o.Volumes": "[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/containers/blkdevmapper/118f603b\",\"readonly\":false},{\"container_path\":\"/var/lib/ceph/osd/ceph-7\",\"host_path\":\"/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/volume-subpaths/scsi-hdd-data-7msn99-bridge/blkdevmapper/0\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8a74643a-984c-4ff7-a38b-8aaf8cbc6045/volumes/kubernetes.io~projected/kube-api-access-5zqhg\",\"readonly\":true}]",
		"io.kubernetes.pod.name": "rook-ceph-osd-7-699b8484fc-ft6wr",
		"io.kubernetes.pod.namespace": "rook-ceph",
		"io.kubernetes.pod.terminationGracePeriod": "30",
		"io.kubernetes.pod.uid": "8a74643a-984c-4ff7-a38b-8aaf8cbc6045",
		"kubernetes.io/config.seen": "2022-05-14T19:31:25.514066608+02:00",
		"kubernetes.io/config.source": "api",
		"org.systemd.property.After": "['crio.service']",
		"org.systemd.property.CollectMode": "'inactive-or-failed'",
		"org.systemd.property.DefaultDependencies": "true",
		"org.systemd.property.TimeoutStopUSec": "uint64 30000000"
	},
	"linux": {
		"resources": {
			"devices": [
				{
					"allow": false,
					"access": "rwm"
				},
				{
					"allow": true,
					"type": "b",
					"major": 254,
					"minor": 6,
					"access": "mrw"
				}
			],
			"memory": {
				"limit": 4294967296,
				"swap": 4294967296
			},
			"cpu": {
				"shares": 1024,
				"quota": 200000,
				"period": 100000
			},
			"pids": {
				"limit": 1024
			},
			"hugepageLimits": [
				{
					"pageSize": "2MB",
					"limit": 0
				},
				{
					"pageSize": "1GB",
					"limit": 0
				}
			]
		},
		"cgroupsPath": "kubepods-burstable-pod8a74643a_984c_4ff7_a38b_8aaf8cbc6045.slice:crio:b0ccdef6af6079e97fc3dc6e27ed43a188acbf901b32647d4518aa635c9e6891",
		"namespaces": [
			{
				"type": "pid"
			},
			{
				"type": "network",
				"path": "/run/netns/92f2da59-038a-41ff-b69e-75f8d3459519"
			},
			{
				"type": "ipc",
				"path": "/run/ipcns/92f2da59-038a-41ff-b69e-75f8d3459519"
			},
			{
				"type": "uts",
				"path": "/run/utsns/92f2da59-038a-41ff-b69e-75f8d3459519"
			},
			{
				"type": "mount"
			},
			{
				"type": "cgroup"
			}
		],
		"devices": [
			{
				"path": "/scsi-hdd-data-7msn99",
				"type": "b",
				"major": 254,
				"minor": 6,
				"uid": 0,
				"gid": 0
			}
		],
		"maskedPaths": [
			"/proc/acpi",
			"/proc/kcore",
			"/proc/keys",
			"/proc/latency_stats",
			"/proc/timer_list",
			"/proc/timer_stats",
			"/proc/sched_debug",
			"/proc/scsi",
			"/sys/firmware"
		],
		"readonlyPaths": [
			"/proc/asound",
			"/proc/bus",
			"/proc/fs",
			"/proc/irq",
			"/proc/sys",
			"/proc/sysrq-trigger"
		]
	}
}
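
The relevant piece is the linux.devices entry near the end: a block device node to be created directly at the rootfs root (/scsi-hdd-data-7msn99), which is where the mknod error points. A minimal podman reproduction along those lines might be the following; the host device and image are assumptions, and any host block device should do:

# expected on crun 1.4.x: create fails with
#   mknod `/repro-disk`: Not a directory
sudo podman run --rm --device /dev/loop0:/repro-disk \
    quay.io/ceph/ceph:v16.2.7 ls -l /repro-disk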

Omar007 (Author) commented May 14, 2022

The commit bisect gave me before 4eb1f03 was commit 8523d6b, and that one was good.

The other earlier ones it gave me during the bisect process were good aside from fd0e171 and e6fda97, which didn't build either.

Terminal excerpt from the bisection (excluding the initial good and bad markings of the 1.3 and 1.4 tags respectively); the first good mark here was for commit 9c014c6:

...
%  git bisect good
Bisecting: 35 revisions left to test after this (roughly 5 steps)
[e6fda97a22ad98bb472433f69fd3f865f27a940d] build: define CRUN_LIBDIR
%  git bisect bad
Bisecting: 17 revisions left to test after this (roughly 4 steps)
[74a21ed84730023c4341c34ddaf2f6494f3e11a0] ebpf: handle missing access string
%  git bisect good
Bisecting: 8 revisions left to test after this (roughly 3 steps)
[f918fda3636a0d0b3f7e10918a89e4c8f03c4dab] Merge pull request #804 from giuseppe/handler-cleanups
%  git bisect good
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[fd0e171a6ebd1f87f095bfbcd669afeb83763e94] handler: split libcrun_configure_wasm
%  git bisect bad
Bisecting: 1 revision left to test after this (roughly 1 step)
[8523d6b0aef4e483f3cef3141bb778b415072b96] Merge pull request #805 from hydai/update_wasmedge_header_path
%  git bisect good
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[4eb1f037a767e6a06d19d5d75dd77c3a303b8d98] container: move custom handlers code to new file
%  git bisect bad
4eb1f037a767e6a06d19d5d75dd77c3a303b8d98 is the first bad commit

giuseppe (Member) commented

Thanks for the extra information. I am out of office next week, but I will try to take a deeper look as soon as possible.

giuseppe added a commit to giuseppe/crun that referenced this issue Jul 3, 2022
fix the creation of device nodes in the container rootfs.

commit d583bdc introduced the regression.

Closes: containers#917

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
giuseppe (Member) commented Jul 3, 2022

sorry for the delay, opened a PR here: #960

Omar007 (Author) commented Jul 4, 2022

Tried a run with a build of the latest master containing it (3417536), LGTM 👍
