support default-cgroupns-mode=private #720
Comments
FYI, it's a showstopper for …
+1 to this feature.
👍
I think the more compelling argument is that related issues are coming up with buildx (#764 / rancher-sandbox/rancher-desktop#5363) that also appear to trace back to cgroups under alpine/openrc, not just with kind. It appears that cgroups are off under this distro, and that should either be fixed or the distro switched. AFAICT lima + debian/ubuntu VMs work fine.
@acuteaura I'm assuming you posted that link to point to the documentation on how to enable cgroupsv2? Have you tried enabling cgroups via the rc.conf file and also running those commands? I tried many things the other day and it doesn't seem possible.
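For reference, the OpenRC knobs being discussed are roughly these (a sketch of the documented OpenRC/Alpine settings; whether colima's Alpine image actually honors them is exactly what's in question here):

```sh
# In /etc/rc.conf inside the VM, ask OpenRC for a pure cgroup v2 hierarchy:
#   rc_cgroup_mode="unified"      # other modes: "legacy", "hybrid"

# then make sure the cgroups service is enabled and running, and reboot
rc-update add cgroups
rc-service cgroups start
```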
I haven't had the time to spend debugging this specifically for Alpine, but looking at my notes from when I did so for Fedora (we were working on a reproducible kind+cilium+kube-proxy-less setup), Docker does something quite specific to Ubuntu and is/was broken on Fedora. I don't think Alpine is likely to blame for this, and adjusting the mount point should let it run with pure cgroup2.
I haven't tried too hard, but the documented way to mount …
My motivation for this is also to get cilium working with docker-in-docker, but with nomad rather than kind/k8s. I'd also be in support of having a systemd-based distro as mentioned in #369, as I'd like to be able to run dind with the docker setting …
/sys/fs/cgroup is the standard mount point for v1* and v2; it's not Ubuntu-specific. In v2-only mode, cgroups should be mounted there as well. /sys/fs/cgroup/unified only holds v2 cgroups in the mixed v1+v2 "hybrid" mode, which pretty much no project recommends.
e.g. RHEL also documents v2-only at /sys/fs/cgroup: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-cgroups-v2-to-control-distribution-of-cpu-time-for-applications_managing-monitoring-and-updating-the-kernel
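A quick way to tell which layout a host ended up with (the commented outputs are what a v2-only host typically shows):

```sh
# "cgroup2fs" means the unified v2-only hierarchy is mounted at the
# standard location; "tmpfs" points at a legacy/hybrid v1 layout instead
stat -fc %T /sys/fs/cgroup

# the mount table shows the same thing, e.g.:
#   cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
mount | grep cgroup
```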
I do not believe this is a feature request at this point but a bug. It affects two things I have recently run into when working with colima:
Right now the temporary solution for 2 above is to use …
Various users are switching from colima to other docker environment solutions in order to avoid these errors; see similar issue threads like dagger/dagger#5593.
+1
There is a workaround for this I discovered in another thread related to this issue. If you are on …, run:

```sh
docker buildx create \
  --name fixed_builder \
  --driver-opt 'image=moby/buildkit:v0.12.1-rootless' \
  --bootstrap --use
```

Before running the above command, you can remove the older buildx runners with … Solution from here: … Using a rootless buildx builder is a good security practice anyway.
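For completeness, the usual way to list and remove existing builders (the builder name below is illustrative, not from the original comment):

```sh
# list existing builders, then remove a stale one by name
docker buildx ls
docker buildx rm old_builder
```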
Description
Private cgroup namespace modes are required for some (privileged) workloads, like Cilium's eBPF kube-proxy replacement:
Official documentation: https://docs.cilium.io/en/v1.13/installation/kind/#install-cilium (in the large notice box)
Related issue at Cilium: cilium/cilium#25479
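For context, this maps to an upstream dockerd option; on a plain Docker host it is set like this (documented dockerd flag / daemon.json key, shown only for comparison with colima's config):

```sh
# one-off, as a daemon flag ("host" is the default; "private" is wanted here)
dockerd --default-cgroupns-mode=private

# or persistently via the daemon config:
#   /etc/docker/daemon.json -> { "default-cgroupns-mode": "private" }
```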
Setting

```yaml
default-cgroupns-mode: private
```

inside the docker key in colima's config leads to a runc crash: …

You can confirm cgroupv2 per-container slices by running the above command twice and confirming that different IDs are assigned; colima without options produces the same ID.
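A command of the sort being described for confirming per-container cgroup namespaces (an assumed illustration on my part, not necessarily the reporter's exact snippet):

```sh
# prints the container's cgroup namespace ID, e.g. cgroup:[4026532185];
# with default-cgroupns-mode=private two runs should print different IDs,
# while the default host mode prints the same ID every time
docker run --rm alpine readlink /proc/self/ns/cgroup
```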