Storage utilisation doesn't seem to add-up... #13516
Comments
Thanks for opening the issue, @srcshelton! It seems very odd that the volumes aren't removed with `podman system prune`. @containers/podman-maintainers FYI
I'll see what I can do reproducer-wise, but it's probably going to be a tricky one to catch other than anecdotally...
Just from general use, tonight's output:
(78GB is obviously not 0% of 86%, and 4.7MB for 24 containers seems a little low? Also, the output is showing more Active Local Volumes than Total Local Volumes...)
… so
To calculate the percentage we need floating point numbers. The current code, however, cast the result of reclaimable/size to an int first. Casting to an int in Go just discards the decimal points, so the result was either 0 or 1; multiplied by 100, it would show up as 0% or 100%. To fix this we have to multiply by 100 first, before casting the result to an int. Also add a check for division by zero, which results in NaN, and use math.Round() to correctly round the number.
Ref containers#13516
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
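A minimal runnable Go sketch of the truncation bug and the fix described above (the variable names are illustrative, not the actual podman source):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Hypothetical numbers: 34 bytes reclaimable out of 100 bytes total.
	var reclaimable, size int64 = 34, 100

	// Buggy: the integer division runs first and truncates to 0 (or 1),
	// so the printed percentage is always 0% or 100%.
	wrong := int(reclaimable/size) * 100

	// Fixed: convert to float64, multiply by 100 *before* truncating,
	// guard against a zero size (float division by zero yields NaN),
	// and round rather than truncate.
	var right int
	if size > 0 {
		right = int(math.Round(float64(reclaimable) / float64(size) * 100))
	}

	fmt.Printf("buggy: %d%%, fixed: %d%%\n", wrong, right)
}
```

Running it prints `buggy: 0%, fixed: 34%`, matching the 0%-or-100% symptom reported above.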
#13575 fixes the percent calculation, but it does not fix the incorrect values for the volumes.
A friendly reminder that this issue had no activity for 30 days.
@rhatdan I think this is the issue that Sally was describing to us a few weeks back. It seems eerily similar, and concerning that the same thing is happening to two people. I will look at this.
Currently, podman system df incorrectly calculates the reclaimable storage for volumes, using a cumulative reclaimable variable that is incremented and placed into each report entry, causing values to rise above 100%. Switch this variable to be in the scope of the loop, so it resets per volume just like the size variable does.
Resolves containers#13516
Signed-off-by: Charlie Doern <cdoern@redhat.com>
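A minimal runnable Go sketch of the loop-scoping bug this patch describes (the volume/report types and field names are assumptions for illustration, not podman's actual data structures):

```go
package main

import "fmt"

// Illustrative types; the real podman report structures differ.
type volume struct {
	name string
	size int64
	used bool // whether any container still uses the volume
}

type report struct {
	name              string
	size, reclaimable int64
}

func main() {
	volumes := []volume{
		{"a", 100, true},
		{"b", 200, false},
		{"c", 300, false},
	}

	// Buggy: reclaimable is declared outside the loop, so it accumulates
	// across iterations and later entries report the running total.
	var reclaimable int64
	var buggy []report
	for _, v := range volumes {
		if !v.used {
			reclaimable += v.size
		}
		buggy = append(buggy, report{v.name, v.size, reclaimable})
	}

	// Fixed: scope the variable to the loop body, like size, so each
	// entry only reports its own reclaimable bytes.
	var fixed []report
	for _, v := range volumes {
		var reclaimable int64
		if !v.used {
			reclaimable = v.size
		}
		fixed = append(fixed, report{v.name, v.size, reclaimable})
	}

	fmt.Println(buggy)
	fmt.Println(fixed)
}
```

Here the buggy loop reports volume `c` as having 500 bytes reclaimable (the running total of `b` and `c`) instead of 300, which is how the per-volume percentages climbed past 100%.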
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Could anyone explain this output?
… as a side-note, I've often seen `podman system prune` returning what appear to be infeasibly high figures (… larger than the containing filesystem, in some cases) when reporting on storage savings - I always assumed this was double-counting shared layers, or similar...

The output above seems to indicate that `system prune` did not actually "Remove all unused pod, container, image and volume data" but only removed images and stopped containers, and we were then somehow left with a state where `podman` considered there to be three times as much reclaimable volume space as the amount of storage that existing volumes were actually consuming, and in any case reported this as `200%` (which doesn't appear to indicate "200% more on top", as the container reclaimable space of 3.8GB out of 3.8GB is listed as 100%. You'd assume that for volumes with 570MB reclaimable the percentage should likewise be 100%, so 1.585GB should be about 279%… although, from the final output, the true figure looks as if it should have been 34.7%?)

Could having two containers which both inherit the same volume from a (terminated) progenitor container cause some form of double-counting error?

Also, in this case all running containers are paused - is that perhaps causing the reclaimable Container storage to be misrepresented as 100%?
Steps to reproduce the issue:

1. `podman system prune -f`;
2. Observe amount of space reported to have been cleared;
3. Observe `podman system df` output.

Describe the results you received:
The prune operation did not reclaim all reclaimable space;
The sizes and percentages don't appear to be internally consistent, or map to real-world disk utilisation.
Describe the results you expected:

A `system prune` operation should surely reclaim the maximum amount of storage (handling image/container/volume dependencies as necessary)?

Percentages and utilisation figures should match up, and reports of storage space consumed and freed should match filesystem usage data.
Output of `podman version`:

Output of `podman info --debug`:

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes