Interrupted build leaves hard-to-remove containers running #14523
Comments
We've seen quite a few issues with build leaving containers around, but I've never heard of it leaving running containers. That's very problematic for Podman, as we don't really have a way to stop them ourselves, not knowing the PID of the container. @nalind Interrupting the build should kill the build container, right?
A friendly reminder that this issue had no activity for 30 days.
@flouthoc still working on this one?
This seems to be similar to what I was running into here: #23683
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
In the vein of #11472
Steps to reproduce the issue:
1. `podman run -d busybox sleep 1d`
2. `printf '%s\n' 'FROM busybox' 'RUN cat /dev/zero > a' | podman build -f - /var/empty`, then interrupt the build with Ctrl-C (the `RUN` step never finishes on its own); a scripted form of these steps is sketched after this list
3. `podman image prune -af --external` (this doesn't do anything)
4. `podman container prune -f` (this doesn't do anything)
5. `podman system prune -af` (this doesn't do anything)
6. `watch du -hd1 ~/.local/share/containers/storage`
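For convenience, here is a minimal scripted form of the steps above. The `timeout -s INT` call is my stand-in for the manual Ctrl-C and is not part of the original report; the 10-second duration is arbitrary.

```shell
# Sketch of the repro: start an unrelated long-running container, then
# interrupt a build whose RUN step would otherwise run forever.
podman run -d busybox sleep 1d
printf '%s\n' 'FROM busybox' 'RUN cat /dev/zero > a' \
  | timeout -s INT 10 podman build -f - /var/empty

# Per the report, the prune commands are no-ops afterwards, and the
# leftover build container is only visible as a storage-only container:
podman ps --external
```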
Describe the results you received:
None of the prune commands remove anything. Disk usage continues rising as fast as the build container can write zeroes.
Describe the results you expected:
Interrupting `podman build` should not have left the build container alive! Failing that, I'd expect a more obviously reasonable way to clean up build containers than the handful I could find:

1. Run `buildah rm` directly. This is the most obviously reasonable way to recover from the situation. Unfortunately, CoreOS does not ship the buildah CLI, so this isn't an option there.
2. `podman ps --external` and `podman rm` the offending containers. My best attempt is `podman rm -f $(podman ps --external -qf status=unknown)`, which seems hideously obscure and potentially dangerous (see the sketch after this list).
3. `podman rmi -f` the build container's image. This is kind of a bad option, because that image can be an ancestor of other containers you don't want to be deleting (e.g. the sleep container in the above example).
4. `podman system reset`. This does make the problem go away, but has obvious consequences.
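As a convenience, option 2 can be split into a check-then-remove sequence. This is only a sketch of the reporter's own heuristic, not a supported cleanup path; the `status=unknown` filter is a guess at identifying leftover build containers and may match more than intended.

```shell
# First, inspect the storage-only (external) containers by eye:
podman ps --external

# Then remove the ones with unknown status. xargs -r skips the
# destructive step entirely if the filter matched nothing.
podman ps --external -qf status=unknown | xargs -r podman rm -f
```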
Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):