
Podman may be leaking storage files after cleanup (rootless) #3799

Closed
phlogistonjohn opened this issue Aug 12, 2019 · 44 comments
Labels
do-not-close, kind/bug (Categorizes issue or PR as related to a bug), locked - please file new issue/PR (Assist humans wanting to comment on an old issue or PR with locked comments), stale-issue

Comments

@phlogistonjohn

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am unable to fully clean up podman's (rootless) container storage using podman commands. In an effort to clean my home directory I tried to get podman to "clean up after itself", but a number of fairly large directories are left behind.

I'm willing to be told this is PEBKAC and that I missed a command, but I couldn't find one in the docs that jumped out at me. I assumed that podman system prune -a would be the ultimate clean-up command, yet 18G of data are still left behind in ~/.local/share/containers/storage.

Steps to reproduce the issue:

  1. Delete all running containers.
  2. Remove all old containers: podman container prune
  3. Remove most old storage: podman image prune
  4. Force remove some aliased images: podman image rm --force ...
  5. Try to remove more: podman system prune -a (see the consolidated shell session below)
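
For reference, here is the same cleanup sequence as a single shell session (a sketch; <image> stands in for the aliased image names, which are not listed in the report):

$ podman container prune
$ podman image prune
$ podman image rm --force <image> ...
$ podman system prune -a
$ du -sh ~/.local/share/containers/storage    # still reports ~18G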

Describe the results you received:
18G still used in ~/.local/share/containers/storage

Describe the results you expected:
Storage usage in the megabytes or below range.

Additional information you deem important (e.g. issue happens only occasionally):

du info and paths:

[jmulliga@popcorn ~/.local/share/containers/storage]$ sudo du -sh .
18G     .
[jmulliga@popcorn ~/.local/share/containers/storage]$ sudo du -sh vfs/dir/*
209M    vfs/dir/042f0e57cc358272ea11fb0d0f3caef848ee0bce9c3cea8f63269b80612da061
511M    vfs/dir/06507e0668ac7f050b4913208e2e0f2edca47a9489d4e02a78b68f1aaf664a78
511M    vfs/dir/0aafc6c55e06fb285d23d3df950c6a73c496fa803affb9d596083fa5a7aff88c
728M    vfs/dir/0d2713c3b786ff706733d8ea434594daa78737339e7fd138ca39c9c3ea80c366
150M    vfs/dir/15a244bb49eca26e25b5ffdb47b98c95eb6b2285913af8970f784502a93ed1d1
366M    vfs/dir/1a11e1453f3b00267b3db7110d69e4e875af7982efaa11a1a4f2e3d51ebba600
378M    vfs/dir/1cddfee79778381fee0cf250d168a766eb34b09e888abd765f698562d20d6941
324M    vfs/dir/28d6ab72867c6b41ec2241d255548cb1de326c1271055a68305cad218619ceea
461M    vfs/dir/2efe3d19536c9fa50ad12ab3a955e9894537a7599cb24d9e40d8dfdfc0dcf31d
517M    vfs/dir/328661f5ed5f53ed42342ef659a9c8412cde4ba475337549a58ae35fe182da73
155M    vfs/dir/42442b8f704229719018208e1209d59e8a1a0be56447589137795354324bf426
347M    vfs/dir/4478698b1e6885a99c86c55df8b3d8d9453b82a05becebfcf96dbf08e7cf561d
834M    vfs/dir/49366c0c9555a454a315ec4099727367cfcb02c2ebc51ec68e676381fa25e067
461M    vfs/dir/52b87403beb8b40af5b7293be169411ccc33fa636bc15d57cbad790504d2df43
238M    vfs/dir/57449bbe19ba1375b80cf4163251805f074229a6f2f26a6114185ef21229085f
761M    vfs/dir/5bda29dd8a0ae426fe1ac7b6227da0c9dd62bc6d71b5bd5389583fa0b773ae51
86M     vfs/dir/6063da5b498749d56c29c2bc0cc32b59454695b92a3766b036e27281511c3222
508M    vfs/dir/66921a25400861ca0b7c0dd1c5af0791bc811adc01c6a8f1aad6f2480b31d6d1
259M    vfs/dir/6c813be9b028415af6c54f27751b3af132a00a6a5add41e84ff8ced79d5a1499
511M    vfs/dir/730487ea007c1a0a651797fe3286e5ea762fa4db959f98f7058bb0e8063cf9ae
854M    vfs/dir/784a1a8d959e4bf2c80880c611a02f07136a0b60559ec56a11f44040f58a1460
581M    vfs/dir/7f74d141890e3f316cea7937bdf8581d9ac5dbbc1a57b7d174a98155fc0e0993
499M    vfs/dir/88e5a31ddaa5ddb5807a6365aa7876f3454b5f3cde6c37f3fe21973948b89630
128M    vfs/dir/8abae375a87f3385ee37b721e1076588403d3d730601eab1a425cab8094f73ee
727M    vfs/dir/8eadbdee0fb8cbdb48dba647703fb72cfe17c2d038b2c34cd92afeeea9c09283
508M    vfs/dir/96b67bd92d34ca03f64a186afe5c8fe2532a1f760f4d31986333045064f7a5ed
260M    vfs/dir/9a57692d1163a66e581bf8cbba7b697d4b51d2984facc48b4f0dd201cdb21938
362M    vfs/dir/9f7136a981c01276def81543093970c451fee09356aeb7ceee34e9cb8317b6f4
679M    vfs/dir/a1f54eff57f492124d9344d67776cf6351291eca315aad75eaca85c5cef37a87
378M    vfs/dir/a332da330995c7227dee07d33b62b0e7406c54e2ff71812c56cc1c6ff0e01fd8
328M    vfs/dir/ab6fad4ca0b902f1d4fb3d94e5d6fbba5bf9fd0922c97e4a63f7de7583679416
600M    vfs/dir/b3dd53d1377eee9c903feb5f651f763a067b9639edd1927ebf3572ab2bd2db73
326M    vfs/dir/be6479440c7e45990b8ee75105bc13a6a3a668cbc24249f72ce1f66a9cebe542
464M    vfs/dir/bf591d6d02a98c32d93a2bbdf923f322eb1c76a084b34ded04fa44fe2da8c02e
363M    vfs/dir/ccc27d324d4f59ee12143f0879a54a97bb77806818c6ed0e63e93ca920bad0c5
314M    vfs/dir/ccdb99cf27958e3489adea4060b630bb6f948b6807aa84a37b744c9f131de41c
92M     vfs/dir/d309c3b4543571f11e3787e8930ee4269eba201937e0b879ae5664e4298baf46
420M    vfs/dir/d83327b2e0431f627f28000477e11714b0305d26939b89fd3398de330b412177
505M    vfs/dir/d88e6eb48c802daed1b03df0712c90e792b5825c505b62f1fd444b7ee630c788
127M    vfs/dir/e70f0111d161001e0b708e21bb601aae74e4f7cf6a4fb5aeb7888233c9ac33c7
355M    vfs/dir/edcd13d9660aefcfaa844abcf5ae8355d7e01d0afa6e880a016e7de1c9fdffd6
348M    vfs/dir/edd4ffa57844615cb5ae2f4fb3919e931f467e5b95d078fa0e55b1e8ce665df0
452M    vfs/dir/f0194d0adcbc0dfd6bbe4e674eca9b98d1dff5b090aced01dfbb85f99d88fa1b
210M    vfs/dir/f2d1e4f0dc6a1fe8e9d2ac2f91449fafe22a5c55e7cbeed9e8887fc5405bd1a1
76M     vfs/dir/f737a0f5ebff523d44c123d3a67d0653a3f98d78b8f1e2fd9780d557f4e2db04

Output of podman version:

Version:            1.4.4
RemoteAPI Version:  1
Go Version:         go1.11.11
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.11.11
  podman version: 1.4.4
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.4-4.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.0.0-dev, commit: 130ae2bc326106e335a44fac012005a8654a76cc'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 7546503168
  MemTotal: 32940535808
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 82f4855a8421018c9f4d74fbcf2da7f8ad1e11fa
      spec: 1.0.1-dev
  SwapFree: 16308228096
  SwapTotal: 16542330880
  arch: amd64
  cpus: 8
  hostname: popcorn
  kernel: 5.1.21-200.fc29.x86_64
  os: linux
  rootless: true
  uptime: 51h 54m 36.77s (Approximately 2.12 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/jmulliga/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions:
  - vfs.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /home/jmulliga/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/jmulliga/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):
fedora 29, physical host

@openshift-ci-robot added the kind/bug label Aug 12, 2019
@mheon
Member

mheon commented Aug 12, 2019

Our first recommendation in these cases is usually to avoid using VFS, and instead use fuse-overlayfs. This definitely does look like a bug, though... Do you have any lingering images in podman images? Any lingering containers in podman ps -a?

@phlogistonjohn
Author

No, nothing lingering there:

[jmulliga@popcorn ~/.local/share/containers/storage]$ podman images -a
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
[jmulliga@popcorn ~/.local/share/containers/storage]$ podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES

Thanks for the hint about which backend is preferred. IIRC a while back I was running into errors and switched to vfs to work around them. Unfortunately, I no longer remember the exact error and didn't record it. Since my plan is to eventually move my container storage to another physical disk, I will try fuse-overlayfs when I do so, but only after I've tried to gather sufficient info here.

@rhatdan
Member

rhatdan commented Aug 13, 2019

Could these be images built with buildah or podman build?

@mheon
Member

mheon commented Aug 13, 2019

podman images should still show the images - but temporary containers from buildah could be it?

@phlogistonjohn Can you install buildah (if you haven't already) and try buildah list --all?

@phlogistonjohn
Author

[jmulliga@popcorn /home/bulk]$ buildah list --all
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
[jmulliga@popcorn /home/bulk]$ 

@kunalkushwaha
Collaborator

@phlogistonjohn any idea which version of podman was initially installed when you started working with containers on this machine?

I had also faced this issue, but somehow couldn't look into it. I do remember it happened after I upgraded my podman version.

@phlogistonjohn
Author

Yeah, it certainly could be an upgrade thing. There was at least one other time when I hit issues with podman after an update of the package but unfortunately I don't know what version.

@ak-1

ak-1 commented Sep 20, 2019

Same issue here. Lots of directories under ~/.local/share/containers/storage/vfs/dir/ that I cannot get rid of and I am running out of disk space. I also tried various prune commands like above (with only 2 images and no running containers) without effect.

I just installed podman/buildah today, pulled one base image and iterated through various buildah commit/podman run cycles (all as non-root). So it should not be upgrade related either. All these directories contain various states of the root file system of the image I was working on.

I am using podman/buildah under Ubuntu 19.04.

Is there some workaround? Can I just remove these directories if no container is running or is there some manual way to check which of them are still used?

@mheon
Member

mheon commented Sep 20, 2019

Our strong recommendation continues to be to use fuse-overlayfs (rootless) or kernel overlayfs (root) instead of VFS.

As for the VFS issue... @nalind PTAL
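
For anyone following along, switching a rootless setup from vfs to fuse-overlayfs is a storage.conf change. A minimal sketch, assuming fuse-overlayfs is installed at /usr/bin/fuse-overlayfs and that ~/.config/containers/storage.conf is the per-user config file; the runroot/graphroot values below are taken from the podman info output earlier in this issue, and any existing vfs content still has to be removed or re-pulled afterwards:

# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
runroot = "/run/user/1000"
graphroot = "/home/jmulliga/.local/share/containers/storage"

[storage.options]
# tell the overlay driver to use the FUSE implementation for rootless mounts
mount_program = "/usr/bin/fuse-overlayfs"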

@rhatdan
Member

rhatdan commented Sep 21, 2019

Try removing the files inside
podman unshare

I am sure they are owned by different UIDs in your user namespace.
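
Concretely, something along these lines (a sketch; it assumes the leftovers are the vfs layer directories listed in the original report and that nothing in the storage directory needs to be kept):

$ podman unshare rm -rf ~/.local/share/containers/storage/vfs/dir/*
$ du -sh ~/.local/share/containers/storage

The glob is expanded by your shell, but rm runs inside the user namespace, where the files owned by subordinate UIDs become removable.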

@ak-1

ak-1 commented Sep 25, 2019

I removed all files and installed fuse-overlayfs. This works fine, thanks!

@hdonnay

hdonnay commented Oct 4, 2019

This is happening on Fedora Silverblue 30.

Is there any way to clean up these files, short of nuking the entire directory?

Edit:

Running podman volume prune inside of a podman unshare shell seems to have knocked it down.

@XVilka

XVilka commented Nov 1, 2019

Same on Fedora 31; podman volume prune didn't help, though.

@rhatdan
Member

rhatdan commented Nov 1, 2019

@XVilka are you seeing these leaks with fuse-overlayfs?

@XVilka

XVilka commented Nov 1, 2019

@rhatdan yes, still reproducible with fuse-overlayfs too.

@github-actions

github-actions bot commented Dec 2, 2019

This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.

@giuseppe
Member

giuseppe commented Dec 2, 2019

@rhatdan yes, still reproducible with fuse-overlayfs too.

can you show what directories are left?

I've just tried to run 1000 containers, but once I do a podman rm -fa; podman rmi -fa I see only:

$ cd; find .local/share/containers/
.local/share/containers/
.local/share/containers/storage
.local/share/containers/storage/overlay-containers
.local/share/containers/storage/overlay-containers/containers.lock
.local/share/containers/storage/overlay-containers/containers.json
.local/share/containers/storage/overlay-images
.local/share/containers/storage/overlay-images/images.json
.local/share/containers/storage/overlay-images/images.lock
.local/share/containers/storage/tmp
.local/share/containers/storage/overlay
.local/share/containers/storage/overlay/l
.local/share/containers/storage/mounts
.local/share/containers/storage/libpod
.local/share/containers/storage/libpod/bolt_state.db
.local/share/containers/storage/storage.lock
.local/share/containers/storage/overlay-layers
.local/share/containers/storage/overlay-layers/layers.json
.local/share/containers/storage/overlay-layers/layers.lock
.local/share/containers/cache
.local/share/containers/cache/blob-info-cache-v1.boltdb
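
A rough script version of that check, for anyone who wants to reproduce it (a sketch; the alpine image and the container count are assumptions, not what was used above):

$ podman pull alpine
$ for i in $(seq 50); do podman run -d --name leak-test-$i alpine sleep 3600; done
$ podman rm -fa && podman rmi -fa
$ find ~/.local/share/containers/ | wc -l    # only a handful of bookkeeping files should remain
$ du -sh ~/.local/share/containers/storage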

@matacino

matacino commented Dec 13, 2019

Hi there,

is it possible that the reason the container directories are not cleaned up is wrong/missing permissions on a subdirectory?

As the podman-using user I tried du -hs ~/.local/share/containers/storage/*
which told me it cannot access directories like, for example, /home/myMyselAndI/.local/share/containers/storage/vfs/dir/6d98fbb11a04ed60ee823aa1133a0481822810a007aec44ca35af593490a6e4e/var/cache/apt/archives/partial because of missing permissions…

The reason for this can be seen with ls:

/home/myMyselAndI/.local/share/containers/storage/vfs/dir/2a567f2ad7fb6e902011911383f2a792de3ae39830b58084353d947981da4b0b/var/cache/apt/archives/:
total 4,0K
-rw-r----- 1 myMyselAndI   myMyselAndI    0 Nov 23 01:00 lock
drwx------ 2 100099 myMyselAndI 4,0K Nov 23 01:00 partial

/home/myMyselAndI/.local/share/containers/storage/vfs/dir/51f4237d89596ad592cdc809ea897fc002ce3ae38e903e623cd2ae0835cc87db/var/cache/apt/archives/:
total 4,0K
-rw-r----- 1 myMyselAndI   myMyselAndI    0 Nov 23 01:00 lock
drwx------ 2 100099 myMyselAndI 4,0K Nov 23 01:00 partial

/home/myMyselAndI/.local/share/containers/storage/vfs/dir/c203a5e0f15b2b1bc9f87453a1473ad7481a724c5e5414c2551f7c3cbf3de3e5/var/cache/apt/archives/:
total 4,0K
-rw-r----- 1 myMyselAndI   myMyselAndI    0 Nov 23 01:00 lock
drwx------ 2 100099 myMyselAndI 4,0K Nov 23 01:01 partial
…

When I build an image for uploading to a GitLab container registry, some of these directories are actually used and some are updated.
That is fine, but I cannot find a way to clear this apparent image cache.
I could do a buildah unshare and then remove them manually, but isn't there something like
buildah cache clear -a??

@rhatdan
Member

rhatdan commented Dec 13, 2019

You need to enter the user namespace:
podman unshare du -hs /home/USERNAME/.local/share/containers/storage/*
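
For context, the 100099 owner in the ls listings above is a subordinate UID. In a rootless setup your own UID maps to UID 0 inside the container and the range from /etc/subuid covers the rest, so for example (the /etc/subuid entry below is hypothetical, a typical default allocation):

$ grep myMyselAndI /etc/subuid
myMyselAndI:100000:65536    # container UIDs 1-65536 map to host UIDs 100000-165535
# container UID 100 (here most likely Debian's _apt user, normally UID 100)
# maps to host UID 100000 + (100 - 1) = 100099

Outside the namespace those files belong to UID 100099 and cannot be read or removed by your user; podman unshare re-enters the mapping, so the same files appear owned by you and du/rm work normally.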

@matacino

@rhatdan Thank you for the answer! But after reading your article https://podman.io/blogs/2018/10/03/podman-remove-content-homedir.html I had already done what I wrote above:

I could do a buildah unshare and then remove them manually

I understood that as root (sudo) or inside buildah unshare I could handle those files the *nix way with rm …, du …, or whatever.

As I did all the container-building and registry-pushing as the user "myMyselAndI" I would imagine I am already inside the right user namespace.

But the question remains:
If the files in .local/share/containers/storage/vfs… are created while using buildah or podman commands (like buildah bud …), is there, or will there be, a podman or buildah command to handle/remove all the hidden stashes of the podman stack, something like dnf clean packages or even dnf clean all?

E.g. something looking like podman image-builder-cache clean all?

@rhatdan
Member

rhatdan commented Dec 13, 2019

We have added
podman system reset
in the upstream repo, which will clean up all container storage on your system.
When run as a user it basically does the equivalent of
podman unshare rm -rf ~/.local/share/containers ~/.config/containers
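
A hedged usage sketch (--force skips the interactive confirmation; double-check podman system reset --help on your build, since the command was new upstream at the time of this comment):

$ podman system reset --force
$ du -sh ~/.local/share/containers    # should be gone or down to a few KB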

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan closed this as completed Jan 13, 2020
@FSMaxB

FSMaxB commented Jun 18, 2020

Why was this closed? This issue still exists and still isn't fixed.

@rhatdan
Member

rhatdan commented Jun 19, 2020

Are you saying podman system reset does not work for you?

@FSMaxB

FSMaxB commented Jun 19, 2020

Nevermind. Seems like I need --rm and --rmi and that's why it was leaking.

@svdHero

svdHero commented Nov 25, 2020

I just had the same issue on Ubuntu 20.04 with Podman 2.1.1. A podman rmi -fa worked for me.

However, I don't fully understand. Could anybody explain to me the difference between
podman system prune -fa and podman rmi -fa?

Shouldn't they do the same thing (image-wise) due to the -a flag?

@zhangguanzhang
Collaborator

zhangguanzhang commented Nov 25, 2020

I just had the same issue on Ubuntu 20.04 with Podman 2.1.1. A podman rmi -fa worked for me.

However, I don't fully understand. Could anybody explain to me the difference between
podman system prune -fa and podman rmi -fa?

Shouldn't they do the same thing (image-wise) due to the -a flag?

seems related to #7990 @rhatdan

@rhatdan
Member

rhatdan commented Nov 25, 2020

Seems likely that we have a bug.

@FilBot3

FilBot3 commented Dec 14, 2020

On Fedora 33 with Podman:

podman version
Version: 2.2.1
API Version: 2.1.0
Go Version: go1.15.5
Built: Tue Dec 8 08:37:50 2020
OS/Arch: linux/amd64

and Buildah:

buildah version
Version: 1.18.0
Go Version: go1.15.2
Image Spec: 1.0.1-dev
Runtime Spec: 1.0.2-dev
CNI Spec: 0.4.0
libcni Version:
Image Version: 5.8.0
Git Commit:
Built: Wed Dec 31 18:00:00 1969
OS/Arch: linux/amd64

Running buildah rm --all did clear up my podman ps --all --storage listing and freed a lot of space for me. I believe a leftover build container was holding on to an image that I couldn't delete even after I had removed all my containers.

@rhatdan
Member

rhatdan commented Dec 14, 2020

You can now remove containers created by other engines using podman rm

$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
$ buildah from fedora
fedora-working-container
$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
4090977b76f7     *     79fd58dc7611 registry.fedoraproject.org/fe... fedora-working-container
$ podman rm fedora-working-container
fedora-working-container
$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
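
On newer podman releases the same build containers can also be listed from podman itself; a sketch, assuming a version where podman ps supports --external (which replaced the older --storage flag used earlier in this thread):

$ podman ps -a --external
$ podman rm fedora-working-container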

@svdHero

svdHero commented Dec 15, 2020

Does this also fix the problem with podman system prune -fa not cleaning up completely?

@rhatdan
Member

rhatdan commented Dec 15, 2020

I still do not believe that would clean containers/images in use by another container engine.

@svdHero

svdHero commented Dec 15, 2020

What do you mean by "another container engine"? Right now I am only running podman, no Docker or anything. So I should be fine, right?

@rhatdan
Member

rhatdan commented Dec 15, 2020

Sometimes podman, which uses buildah internally, can leave a buildah container behind when doing a podman build.

@simonsan

Same here. I was running in a WSL Ubuntu-20.04 distro; the image size blew up to 40 GB all of a sudden. I ran all the commands recommended here, but nothing worked, so I reinstalled the WSL image.

@rhatdan
Member

rhatdan commented Mar 23, 2021

We would need more information on what happened. If this happens again, could you gather podman info and the size of /var/lib/containers or ~/.local/share/containers?
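
A minimal way to capture that information (rootful and rootless paths respectively; skip whichever does not apply):

$ podman info --debug > podman-info.txt
$ sudo du -sh /var/lib/containers
$ du -sh ~/.local/share/containers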

@grzegorzk

grzegorzk commented Apr 3, 2021

Today I spotted the same behavior: podman ps -a and podman images show empty lists, yet 16 GB were taken up in my .local/share/containers/storage. However, podman system reset removed all the unused artifacts.

@rhatdan
Member

rhatdan commented Apr 5, 2021

Could you attach the output of
du -a -m ~/.local/share/containers/

to see if there is anything interesting under there?

@tarjei

tarjei commented May 6, 2021

Just thought I'd report that podman volume prune might be a good idea in cases where podman system prune does not give the desired result.
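
A quick sketch of that check (podman volume prune only touches volumes not referenced by any container, and it prompts for confirmation unless you pass --force):

$ podman volume ls       # see which volumes are still around
$ podman volume prune    # remove all unused volumes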

@iamyb

iamyb commented Dec 10, 2021

I had a similar issue on WSL2 with Ubuntu 18.04. The leakage still seemed to be there after removing all those files in .local/share/xxxx.
In the end I found a solution here and the problem was solved:

wsl --shutdown
diskpart
# the DiskPart prompt opens; run the remaining commands inside it
select vdisk file="C:\WSL-Distros\…\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit

@va1entin

Ran into this issue and this document helped clean it up: https://podman.io/blogs/2018/10/03/podman-remove-content-homedir.html

buildah unshare
cd /home/myuser
rm -rf .local/share/containers/

Neither the podman pruning commands nor podman system reset helped.

@pat-s

pat-s commented Nov 29, 2022

I also just faced this issue (podman 4.1.1, vfs); podman was consuming ~100 GB of old images in containers/storage/overlay.

podman system reset cleared the space.

@mailinglists35

mailinglists35 commented Jun 17, 2023

I've run all the suggested commands, including podman system reset; it did not delete the vnet38 network interface. I still see it, even though .local/share/containers shrank to 120 KiB.
Oracle Linux 9.2

@rhatdan
Member

rhatdan commented Jun 20, 2023

@Luap99 PTAL

@github-actions bot added the locked - please file new issue/PR label Sep 19, 2023
@github-actions bot locked as resolved and limited conversation to collaborators Sep 19, 2023