Failed to remove image, image is in use by container #7889
Is there a way to manually remove these dangling images?
Thanks for reaching out, @kafji! Can you do a
@vrothberg Yep, there are a lot of them.
Thanks for coming back so quickly!
Yes, you can do that. I am going to close the issue, but feel free to continue the conversation.
Appreciate the help. Thanks @vrothberg.
I also used
I have the same issue, but don't have buildah installed. Is there no podman build command to remove intermediate containers left over from a failed build (assuming --force-rm=false)? Perhaps there should be?
@rhatdan WDYT?
podman rm will remove these containers. But we don't have a flag. podman rm --external or podman containers prune --external would be the suggested commands to do this. I prefer the second.
Is it even possible to safely prune these containers? We don't really know anything about them, so we can't easily tell if they are in use or not...
I wonder why Buildah isn't cleaning up intermediate containers in a failed build. @nalind @TomSweeneyRedHat do you know?
The default for
I thought we changed this to default to true for Podman, so this should only happen if the user said to leave them around.
man podman build states that the default for --force-rm is false.
Sorry to be a necro, but... podman rmi -f says it has deleted the image, but all it has actually done is suppress the false "container in use" message; listing containers still shows the dangling container. I ran podman ps --all --storage and saw the dangling containers that were created a few days ago. I have to run buildah rm --all before the podman rmi -f command succeeds. So my question is: is there a comparable podman command that does the same thing as buildah rm --all?
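The workaround described above can be sketched as a short script. This is an illustration under assumptions, not an official recipe: it assumes podman (and optionally buildah) is installed, the image name is hypothetical, and on newer podman releases the --external flag replaces the older --storage flag.

```shell
# Sketch of the workaround: list external (buildah) containers, remove
# them, then retry the image removal. IMAGE is a hypothetical name.
IMAGE="localhost/example:latest"

if command -v podman >/dev/null 2>&1; then
  # Shows storage containers (e.g. buildah's) that podman itself does not manage.
  podman ps --all --external
  # Remove leftover buildah working containers, if buildah is available.
  if command -v buildah >/dev/null 2>&1; then
    buildah rm --all
  fi
  # The image removal should now succeed; ignore the error if the image is absent.
  podman rmi "$IMAGE" || true
else
  echo "podman not installed; skipping"
fi
```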
@bridgesense I have used podman image prune --all with success. It left a dangling container that reported it was being used, which I then removed with podman rm --force container_id.
The rm --all and prune --all will ONLY remove podman containers, not buildah containers. You can remove the Buildah container if you specify the container ID directly.
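To illustrate removing a Buildah container by its ID, here is a minimal sketch assuming buildah is installed; the ID used is simply whatever buildah containers reports first on your machine.

```shell
# List Buildah working containers, then remove the first one by its ID.
if command -v buildah >/dev/null 2>&1; then
  buildah containers        # columns include CONTAINER ID and CONTAINER NAME
  FIRST_ID=$(buildah containers --quiet | head -n 1)
  if [ -n "$FIRST_ID" ]; then
    buildah rm "$FIRST_ID"  # remove that single container by ID
  fi
else
  echo "buildah not installed; skipping"
fi
```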
This bug seems to still exist in 3.3.0? Interrupting a
Please open a new issue.
CLOUDBLD-10965 In exceptional cases [1], podman builds may leave behind intermediate buildah containers. These containers are not managed by podman, so our current pruner job ignores them. This can sometimes prevent the pruner from removing images as well - it only removes images that do not have associated containers. Add a separate job for pruning buildah containers. It is implemented as a custom script, because the `podman container prune` command does not support removing buildah containers. [1]: containers/podman#7889 Signed-off-by: Adam Cmiel <acmiel@redhat.com>
I am still getting this? Any pointers?
Does podman rmi --force work?
You could check to see if there are any
Please open a new issue with a reproducer.
This worked perfectly.
/kind bug
Description
Failed to remove image with the error "image is in use by container", but I have 0 containers running.
Steps to reproduce the issue:
Sorry, I don't have steps to reproduce.
Describe the results you received:
Describe the results you expected:
Image successfully removed.
Additional information you deem important (e.g. issue happens only occasionally):
I don't know what caused this to happen. My guess is that I canceled an image build (Ctrl+C) and then ran podman system prune.

Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical machine running Ubuntu 20.04. Podman binary from openSUSE Kubic.
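The suspected trigger described in this report can be sketched roughly as follows. This is an illustration, not a confirmed reproducer: the Containerfile contents are made up, and timeout --signal=INT stands in for pressing Ctrl+C mid-build.

```shell
# Rough reproduction sketch: interrupt a build mid-flight, then prune.
if command -v podman >/dev/null 2>&1; then
  workdir=$(mktemp -d)
  printf 'FROM alpine\nRUN sleep 60\n' > "$workdir/Containerfile"
  # Interrupt the build after ~2 seconds, simulating Ctrl+C.
  timeout --signal=INT 2 podman build -t demo "$workdir" || true
  podman system prune --force
  # Any leftover buildah containers would show up here.
  podman ps --all --external
else
  echo "podman not installed; skipping"
fi
```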