Podman may be leaking storage files after cleanup (rootless) #3799
Comments
Our first recommendation in these cases is usually to avoid using VFS, and instead use fuse-overlayfs.
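For anyone hitting the same thing, a minimal sketch of what switching rootless storage from vfs to fuse-overlayfs can look like, assuming fuse-overlayfs is installed and the existing storage can be thrown away (config keys as commonly documented for containers-storage; verify against your version):

```bash
# Sketch: point rootless storage at the overlay driver backed by fuse-overlayfs.
# Assumes existing storage is disposable, since the driver cannot be changed in place.
mkdir -p ~/.config/containers
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
EOF

# Confirm which graph driver podman now reports.
podman info | grep -i graphdriver
```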
No, nothing lingering there:
Thanks for the hint about which backend is preferred. IIRC a while back I was running into errors and set vfs to work around them. Unfortunately, I no longer remember the exact error and didn't record it. Since my plan is to eventually move my container storage to another physical disk, I will try fuse-overlayfs when I do so, but after I've tried to gather sufficient info here.
Could these be images built with buildah or podman build?
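A quick way to check for leftover build containers, sketched on the assumption that buildah is installed alongside podman:

```bash
# List working containers, including ones created by other tools such as podman build.
buildah containers --all

# Remove them if they are no longer needed, then see how much storage remains.
buildah rm --all
du -sh ~/.local/share/containers/storage
```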
@phlogistonjohn Can you install
@phlogistonjohn Any idea which version of podman was initially installed when you started working on containers on this machine? I had also faced this issue, but somehow couldn't look into it. But I remember this happened after I upgraded my podman version.
Yeah, it certainly could be an upgrade thing. There was at least one other time when I hit issues with podman after an update of the package but unfortunately I don't know what version. |
Same issue here. Lots of directories under ~/.local/share/containers/storage/vfs/dir/ that I cannot get rid of, and I am running out of disk space. I also tried various prune commands like above (with only 2 images and no running containers) without effect. I just installed podman/buildah today, pulled one base image and iterated through various buildah commit/podman run cycles (all as non-root), so it should not be upgrade related either. All these directories contain various states of the root file system of the image I was working on. I am using podman/buildah under Ubuntu 19.04.

Is there some workaround? Can I just remove these directories if no container is running, or is there some manual way to check which of them are still used?
Our strong recommendation continues to be to use fuse-overlayfs. As for the VFS issue... @nalind PTAL
Try removing the files in that directory. I am sure they are owned by different users in your User Namespace.
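A minimal sketch of that removal, done from inside the rootless user namespace so the subordinate UIDs map back to the current user. The path is the vfs directory mentioned above, and the command is destructive, so double-check it first:

```bash
# Enter the user namespace, where the leaked layer directories become removable,
# and delete them. Destructive: verify the path before running.
podman unshare rm -rf ~/.local/share/containers/storage/vfs/dir
```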
I removed all files and installed fuse-overlayfs. This works fine, thanks!
This is happening on Fedora Silverblue 30. Is there any way to clean up these files, short of nuking the entire directory? Edit: Running
Same on Fedora 31.
@XVilka are you seeing these leaks with fuse-overlay?
@rhatdan yes, still reproducible with fuse-overlay too.
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
Can you show what directories are left? I've just tried to run 1000 containers, but once I do a
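For reference, a rough sketch of the kind of reproduction loop described above (image name and counts are arbitrary):

```bash
# Record storage usage, run a batch of short-lived rootless containers,
# prune everything, and compare.
du -sh ~/.local/share/containers/storage
for i in $(seq 1 100); do
    podman run --rm alpine true
done
podman system prune -a -f
du -sh ~/.local/share/containers/storage   # should be back near the starting size
```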
Hi there, is it possible that the container directories are not cleaned because of wrong/missing rights for removing a subdirectory? I tried as a podman-using user. The reason for this is found with a
When I build an image for uploading into a GitLab container registry, some of these directories are actually used and some are updated.
You need to enter the user namespace.
@rhatdan Thank you for the answer! But after reading your article https://podman.io/blogs/2018/10/03/podman-remove-content-homedir.html I did as I wrote above already:
I understood that as root (sudo) or as buildah-unshare root I could handle those files the *nix way. As I did all the container building and registry pushing as the user "myMyselAndI", I would imagine I am already inside the right user namespace. But the question remains: e.g. looking like
We have added
A friendly reminder that this issue had no activity for 30 days.
Why was this closed? This issue still exists and still isn't fixed. |
Are you saying
Nevermind. Seems like I need
I just had the same issue on Ubuntu 20.04 with Podman 2.1.1. However, I don't fully understand. Could anybody explain to me the difference between the commands? Shouldn't they do the same thing (image-wise) due to the
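For what it is worth, a short comparison of the pruning commands being discussed, as they are commonly documented (behavior can differ slightly between Podman versions):

```bash
podman image prune        # removes only dangling (untagged) images
podman image prune -a     # also removes images not referenced by any container
podman container prune    # removes all stopped containers
podman system prune -a    # stopped containers, unused networks and images; add --volumes for volumes
```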
Seems likely that we have a bug.
On Fedora 33 with Podman:
and Buildah:
Doing the
You can now remove containers created by other engines using podman rm
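A sketch of what that looks like, assuming a Podman version new enough to list external containers with --external (older releases used --storage for the same listing); the container ID is a placeholder:

```bash
# Show containers that exist only in storage, e.g. ones left behind by buildah or podman build.
podman ps --all --external

# Remove one of them by ID even though podman did not create it (placeholder ID).
podman rm --force <container-id>
```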
Does this also fix the problem?
I still do not believe that would clean containers/images in use by another container engine.
What do you mean by "another container engine"? Right now I am only running podman, no docker or anything. Thus, I should be fine, right?
Sometimes podman, which uses buildah internally, can leave a buildah container behind when doing podman build.
Same here, I was running in a WSL
Would need more information on what happened. If this happens again, could you gather
Today I spotted the same behavior.
Could you attach that, to see if there is anything interesting under there?
Just thought I'd report that
I had a similar issue on WSL2 with Ubuntu 18.04. The leakage still seems to be there after removing all those files in .local/share/xxxx.
Ran into this issue and this document helped clean it up: https://podman.io/blogs/2018/10/03/podman-remove-content-homedir.html
Neither the podman pruning commands nor podman system reset helped.
I just also faced this issue (Podman 4.1.1, vfs); podman was consuming ~100 GB with old images in
I've run all the suggested commands, including
@Luap99 PTAL
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Unable to fully clean up the container storage of podman (rootless) using podman commands. In an effort to clean my homedir I tried to use podman commands to "clean up after itself" but a number of fairly large-ish dirs are left behind.
Willing to be told this is PEBKAC and that I missed a command, but I couldn't find one in the docs that jumped out at me. I sort of assumed that
podman system prune -a
would be the ultimate clean up command, but 18G of data are still left behind in ~/.local/share/containers/storage.

Steps to reproduce the issue:
podman container prune
podman image prune
podman image rm --force ...
podman system prune -a
Describe the results you received:
18G still used in ~/.local/share/containers/storage
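A sketch of how that leftover usage can be broken down, assuming a Podman version that already has podman system df (it may not exist in the version originally reported):

```bash
# What podman believes it is storing
podman system df

# What is actually on disk, and which layer directories are largest
du -sh ~/.local/share/containers/storage
du -sh ~/.local/share/containers/storage/vfs/dir/* | sort -h | tail
```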
Describe the results you expected:
Storage usage in the megabytes or below range.
Additional information you deem important (e.g. issue happens only occasionally):
du info and paths:
Output of podman version:

Output of podman info --debug:

Additional environment details (AWS, VirtualBox, physical, etc.):
Fedora 29, physical host