vfs: pod start: Error: crun: make /path private: OCI permission denied #20332
Grumble. Exactly the same context (remote root, vfs, same subtest, same step in the subtest)... but a different message:
(That is: original report was "make vfs-dir private: EACCES", this one is "open vfs-dir: EACCES"). Even though the error is different, I'm betting it's the same root cause, so I'm assigning to the same issue. |
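For reference, a minimal sketch (not podman or crun source, and the path is hypothetical) of the two syscalls those messages appear to correspond to: a propagation remount of the vfs directory versus a plain open() of it:

```go
// Illustrative only: not podman or crun code, and the path is a placeholder.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	dir := "/var/lib/containers/storage/vfs/dir/example" // hypothetical vfs dir

	// Variant 1 ("make /path private"): a MS_PRIVATE propagation remount.
	if err := unix.Mount("", dir, "", unix.MS_PRIVATE, ""); err != nil {
		fmt.Fprintf(os.Stderr, "make %s private: %v\n", dir, err)
	}

	// Variant 2 ("open vfs-dir"): a plain open() of the same directory.
	if _, err := os.Open(dir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```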
SELinux? |
It does not look like SELinux. This is today's failure:
...and here is its corresponding journal:
That's a broad selection, from before and after the failure. The story so far:
Seen in: sys remote fedora-38 root host boltdb+sqlite
I've added the |
@giuseppe Any thoughts on why this could happen? |
One reason could be that 8ac2aa7 does not affect vfs, since we do not create a new mount point on the host as we do with other drivers. |
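To make that difference concrete, here is a rough heuristic check (an illustration only, not the actual podman/containers-storage code, and the paths are placeholders): an overlay container root is a separate mount point, while a vfs directory is just an ordinary directory on the host filesystem:

```go
// Heuristic illustration: a path whose device differs from its parent's is
// (usually) its own mount point. Not the code podman actually uses.
package main

import (
	"fmt"
	"path/filepath"
	"syscall"
)

func isMountPoint(path string) (bool, error) {
	var st, parent syscall.Stat_t
	if err := syscall.Stat(path, &st); err != nil {
		return false, err
	}
	if err := syscall.Stat(filepath.Dir(path), &parent); err != nil {
		return false, err
	}
	// Note: this misses bind mounts that stay on the same device; it is only
	// meant to show the overlay-vs-vfs difference.
	return st.Dev != parent.Dev, nil
}

func main() {
	// Placeholder paths: an overlay "merged" dir is a mount, a vfs dir is not.
	for _, p := range []string{
		"/var/lib/containers/storage/overlay/ID/merged",
		"/var/lib/containers/storage/vfs/dir/ID",
	} {
		mounted, err := isMountPoint(p)
		fmt.Println(p, mounted, err)
	}
}
```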
Should we just make this a no-op if driver=vfs? Or does VFS always return not mounted? |
The patch above doesn't cause this issue; it just doesn't solve the problem with vfs, since there is never a mount. Since the flake always happens with podman remote and pods, it looks like the same scenario that 8ac2aa7 fixed for drivers that create a mount. I need to look into it more; I am not sure at the moment how we could do the same thing with vfs. We kill the podman cleanup process randomly, and the mount counter gets out of sync. |
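As a toy model of that scenario (purely illustrative, not the real containers/storage bookkeeping): mounts are reference-counted, and if the cleanup process is killed between updating the counter and doing the actual unmount, the persisted count and the kernel state diverge:

```go
// Toy model: shows how a persisted mount counter can drift from the actual
// mount state when a cleanup process is killed part-way through.
package main

import "fmt"

type layer struct {
	mountCount int  // what gets persisted to storage
	mounted    bool // what the kernel actually sees
}

func (l *layer) mount() {
	l.mountCount++
	l.mounted = true
}

func main() {
	l := &layer{}
	l.mount() // container started: count=1, mounted=true

	// Simulate the cleanup process being killed after it decremented the
	// counter but before it performed the unmount.
	l.mountCount--

	fmt.Printf("count=%d mounted=%v\n", l.mountCount, l.mounted)
	// Prints count=0 mounted=true: any later step that trusts the counter
	// now disagrees with the kernel, which is the kind of desync described
	// above.
}
```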
A friendly reminder that this issue had no activity for 30 days. |
@giuseppe any update? |
Another one. Here's the list so far. It consistently seems to be
|
New variation:
(that is: the string "make /path private" does not appear in the error message).
|
Pretty please?
|
Still happening with new VMs: see the f39 failure today. This suggests that the failure is related to the
|
The root cause is that we have no mount for vfs, so 8ac2aa7 does not work for vfs. We could use a bind mount, but that kind of defeats the point of using vfs, which doesn't require one (it is only needed because of the weird interaction between the podman cleanup process and pod cgroups). Maybe a topic for our next cabal, but how much do we care about testing VFS in CI? The VFS driver is a last resort; it shouldn't really be used for anything serious. |
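For completeness, a minimal sketch of the bind-mount idea mentioned above (an assumption about what it could look like, not an actual podman change; the path is a placeholder and it would need root): bind-mounting the vfs directory onto itself turns it into a real mount point, so a propagation change on it then has something to act on:

```go
// Sketch only: a self bind mount turns a plain directory into a mount point.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	dir := "/var/lib/containers/storage/vfs/dir/example" // hypothetical vfs dir

	// Bind-mount the directory onto itself so it becomes its own mount point.
	if err := unix.Mount(dir, dir, "", unix.MS_BIND, ""); err != nil {
		log.Fatalf("bind mount %s: %v", dir, err)
	}

	// The MS_PRIVATE propagation change now targets a real mount instead of
	// a plain directory.
	if err := unix.Mount("", dir, "", unix.MS_PRIVATE, ""); err != nil {
		log.Fatalf("make %s private: %v", dir, err)
	}
}
```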
Another VFS flake. Only one so far, in remote root: