The podman driver should not require sudo or root #7480
For the docker driver, we require that the user is a member of the root-equivalent `docker` group (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user), so that they can run `docker ps`. For the podman driver, we instead require that the user has passwordless sudo access, so that they can run `sudo podman ps`. This allows basically the same level of root access as docker: you still need to be an admin. Neither driver should try to wrap every command in sudo itself.

Running minikube (or Kubernetes) entirely as a non-root user is currently not supported; see e.g. https://github.com/rootless-containers/usernetes. Running minikube requires a user with enough privileges.
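The passwordless-sudo requirement described above can be sketched as a small wrapper that picks the command prefix based on the invoking uid. This is a hypothetical helper for illustration, not minikube's actual code; the function name and the use of `sudo -n` are assumptions:

```shell
# Hypothetical helper: choose how to invoke podman for a given uid.
# Root runs podman directly; everyone else goes through passwordless
# sudo (-n fails fast instead of prompting if no sudoers rule exists).
podman_cmd() {
  uid="$1"
  if [ "$uid" -eq 0 ]; then
    echo "podman ps"
  else
    echo "sudo -n podman ps"
  fi
}

podman_cmd 0      # root: no wrapping needed
podman_cmd 1000   # regular user: wrapped in sudo
```

With this approach, only the `podman` invocations are elevated; minikube itself keeps running as the regular user.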
ping @medyagh
Related thread: https://twitter.com/rawkode/status/1239885013241470979

We can detect the necessity to run as root via:
We don't support running docker-in-podman with rootless podman, only with the regular root one. Eventually we might default to crio-in-podman, but for now that requires using
Now works OK, when running the kubectl commands:

```
$ ./out/minikube kubectl get nodes
E0413 17:36:02.552882   31952 api_server.go:169] unable to get freezer state: cat: /sys/fs/cgroup/freezer/libpod_parent/libpod-a2ebc4baa2e317a42afb6ff01d07fab6ec608a5ab2893425e6e58b8203748c3d/kubepods/burstable/pod6c3eb4fb6e1f774f1248d50ac61483b5/076cdc3a21654ba0d67ccf4a21af54e7f2a31a3edaf0d719947635c36586f55a/freezer.state: No such file or directory
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   20s   v1.18.0
```

```
$ ./out/minikube kubectl -- get pods --all-namespaces
E0413 17:36:23.219211   32401 api_server.go:169] unable to get freezer state: cat: /sys/fs/cgroup/freezer/libpod_parent/libpod-a2ebc4baa2e317a42afb6ff01d07fab6ec608a5ab2893425e6e58b8203748c3d/kubepods/burstable/pod6c3eb4fb6e1f774f1248d50ac61483b5/076cdc3a21654ba0d67ccf4a21af54e7f2a31a3edaf0d719947635c36586f55a/freezer.state: No such file or directory
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-4sktx           1/1     Running   0          30s
kube-system   coredns-66bff467f8-fkgt9           1/1     Running   0          30s
kube-system   etcd-minikube                      1/1     Running   0          31s
kube-system   kube-apiserver-minikube            1/1     Running   0          31s
kube-system   kube-controller-manager-minikube   1/1     Running   0          31s
kube-system   kube-proxy-49sxp                   1/1     Running   0          31s
kube-system   kube-scheduler-minikube            1/1     Running   0          31s
kube-system   storage-provisioner                1/1     Running   0          35s
```

Hangs when trying to use

I think we might want to hide the cgroups thing (freezer). Or something similar; anyway, the subdirectory is not available. Only:
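The freezer errors above come from `cat`-ing a cgroup path that does not exist inside the container. One way to hide them could be a guard like the following; this is a sketch of the idea, an assumption rather than minikube's actual patch:

```shell
# Hypothetical guard: report UNKNOWN instead of failing hard when the
# freezer.state file for a pod's cgroup is missing.
freezer_state() {
  state_file="$1"
  if [ -r "$state_file" ]; then
    cat "$state_file"
  else
    echo "UNKNOWN"
  fi
}

# A path that is absent (as in the error output above) no longer errors:
freezer_state /nonexistent/freezer.state
```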
I think the current podman behaviour is actually a layering bug...

podman var volume:

docker var volume:

So it has the first few items, from the lower layer. And then there is that UX bug of not being able to add mount options for any anonymous volumes. Plus the main difference that any mount in podman containers starts out as `noexec,nodev,nosuid`. But I suppose we could give it a name, instead of a cache path like now?
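Since any podman mount starts out `noexec,nodev,nosuid`, a named volume would let the mount options be stated explicitly. A sketch of building such a `--volume` flag; the volume name `minikube-var` and the choice of passing `:exec` are assumptions for illustration:

```shell
# Hypothetical: build a --volume flag for a named volume, requesting
# exec explicitly since podman mounts default to noexec,nodev,nosuid.
volume_flag() {
  name="$1"
  mountpoint="$2"
  echo "--volume ${name}:${mountpoint}:exec"
}

# Would be passed to e.g. `podman run $(volume_flag minikube-var /var) ...`
volume_flag minikube-var /var
```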
I went back to it. With a regular named mount, the image boots up just fine; no missing files.
Hi @afbjorklund, I am trying to run minikube with podman on RHEL 8.1 and I cannot use the podman driver without sudo. Is there a workaround so I don't have to wrap my minikube (and subsequent kubectl) commands in sudo when using this driver?
Can you open up a new report about it? We have tried with Fedora 32, but not with RHEL 8 yet.
Even though podman is run with sudo (docker uses a group instead), the driver should not be.
This will lead to the same kind of ownership and path issues that are plaguing the none driver...
Instead, only the `podman` commands should be wrapped in `sudo` (to not try to run as rootless).