Error: copying system image from manifest list: trying to reuse blob */diff: no such file or directory #21810
Comments
…fault folder to /home/podman/containers for "dir" mode (related to containers/podman#21810)
And now, when I tried to replicate the same directory structure on my ZFS-based installations, I encounter the issue when the folders are mounted to e.g. /home/podman/containers/{compose,config,storage,images,volumes,...}.
What the hell?!? Update: and of course, on another server the error does NOT occur with ZFS and the new Folder Structure.
And for headscale-ui yet another error:
This was tested on another VPS and the issue could be repeated. However, as soon as I remove the … So I can install either headscale-ui or redis, NOT both!
Renaming issue since it seems more general than home root folder data storage. Old title: Home folder ~/storage folder for graphRoot causes issues with some containers (e.g. redis:alpine)
@giuseppe PTAL
I've tried your script on Debian and I encounter a lot of errors, the last one being:
and it stops the execution. Can you simplify the reproducer to not require such a complex configuration? Is it enough to override some of the paths used by Podman? Can you reproduce it on Fedora?
I'll have to check what went wrong there. I'm using this script myself to remember what I did / what to do, as there were/are several steps required in order to get Podman Rootless working. It's NOT (or at least wasn't) plug & play.
The latest findings seem to indicate that it's not necessarily caused by the folder location. /home/podman/storage works in some cases, in some cases not. /home/podman/containers/storage works in some cases, in other cases not. But yeah ... it might be enough to change the storage / graphRoot path, as that seems to be where the problem occurs with overlay and overlay-images.
I don't know, I don't use Fedora (and IIRC Fedora causes more issues related to SELinux being enabled by default, which I have no experience with, since I last used Fedora maybe 15 years ago ...)
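For reference, a minimal sketch of what "overriding some of the paths" can look like for rootless Podman. The graphroot path below is only illustrative, taken from the directory layout mentioned in this thread; it is not what the setup script actually writes:

```bash
# Per-user storage configuration for rootless Podman.
# The graphroot below is an example path from this thread, not a default.
mkdir -p ~/.config/containers /home/podman/containers/storage

cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"
# Image/layer store (graphRoot); the rootless default would be
# ~/.local/share/containers/storage
graphroot = "/home/podman/containers/storage"
EOF
```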
OK, there was indeed a syntax error (a missing "]" in an "if" statement initiated by "[[") in … I also updated … Fixed in the commit …
I just re-ran …
Seems the path is quite long?
This part seems normal: /home/podman/containers/images/ ... maybe this makes the path too long? Not sure what the issue is.
OK, I acknowledge there were several errors, as you @giuseppe reported. Hopefully I fixed most of them now in my latest commits, but I ended up with a new one that I never saw before. Keep in mind this is on a Raspberry Pi 2 (armv7l), so it might also be related to the ARM 32-bit instruction set (even more limited than armhf ... yeah, confusing Raspberry Pi logic in the architecture naming).
And the key error is …
Any update?
Who owns … ? Can your user create that directory?
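A few commands that can answer those two questions for a rootless setup; the path is just the example used earlier in the thread:

```bash
# Ownership and permissions as seen from the host
ls -ld /home/podman/containers/storage

# Ownership as seen from inside the user namespace Podman actually uses
podman unshare ls -ld /home/podman/containers/storage

# Quick write test as the rootless user
touch /home/podman/containers/storage/.write-test && \
    rm /home/podman/containers/storage/.write-test
```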
I cannot say it is an issue during deployment. It was an error during SETUP, something that didn't occur in the past.
It's fair for you to say that it's difficult to replicate, as I'm "only" encountering it 50% of the time. So it's kinda hit-or-miss. I fixed some bugs in the scripts (thank you for reporting that), but it also doesn't always happen. However, I can say now that both ZFS+rbind ("zfs") and EXT4 ("dir") are affected. And not only on AMD64.
At least during …, the errors are due to me not specifying the image name:tag properly, or it's not available for armv7.
Tried to update Authentik on my remote VPS and, sure enough, when Redis attempted the upgrade, the issue appeared again ....
Did a …
It just seems to hate anything that has to do with Alpine images really. But it's not consistent:
Any idea on how to move on?
And is this expected behaviour? Trying to run …
When permissions are …
And for instance for the latest error message:
As root everything works correctly:
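One way to compare the two cases side by side, assuming redis:alpine as the failing image, is to capture a debug-level rootless pull and then repeat the same pull as root:

```bash
# Rootless pull with full debug output captured to a file
podman --log-level=debug pull redis:alpine 2>&1 | tee rootless-pull.log

# Same pull as root (uses the separate store under /var/lib/containers/storage)
sudo podman pull redis:alpine
```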
Is it possible some customization in … ? My …
Apparently all containers are failing now.
Latest …
A quick search about the error brings up #16882. However, I don't have such a .json file. Any idea? Otherwise, is it possible to somehow disable "reusing blob", since it keeps generating issues?
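As far as I know there is no switch to disable blob reuse as such, but the "trying to reuse blob" decision is driven by a blob-info cache, which is only a cache and gets rebuilt on the next pull, so deleting it is a common workaround to try. Its filename and location vary between Podman/containers-image versions (boltdb in older releases, sqlite in newer ones), so treat the paths below as assumptions to verify:

```bash
# Look for the blob-info cache used for "reuse blob" decisions
find ~/.local/share/containers /home/podman/containers \
    -name 'blob-info-cache-v1.*' 2>/dev/null

# Deleting it only drops cached digest/location hints; it is recreated
# on the next pull (check the path found above before removing)
rm -f ~/.local/share/containers/cache/blob-info-cache-v1.*
```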
Hit it again on yet another machine (EXT4). Might it be related to some Podman regressions after the upgrade to Podman 4.9.3? Or possibly something to do with Buildah, etc.?
@giuseppe Any idea?
podman_pull_certbot_podman_dns-cloudflare_latest.log
Now it's complaining about storage corruption, even though I ran a … These are some potential clues:
What other logs should I provide? It's still not working ...
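For storage/pull problems like this one, the pieces that usually help most are a debug-level pull log plus the storage view. A possible way to collect them (the image name is just the one used elsewhere in this thread):

```bash
podman info > podman-info.txt
podman system df -v > podman-system-df.txt 2>&1
podman --log-level=debug pull redis:alpine 2>&1 | tee podman_pull_debug.log
```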
I can't think of anything in particular that could cause that. My suggestion is to start from a default configuration and iteratively change each path until you find what causes the error.
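A rough sketch of that bisection approach, assuming the custom config lives in ~/.config/containers (back everything up first; `podman system reset` removes all images and containers):

```bash
# Start from pure defaults
mv ~/.config/containers ~/.config/containers.bak
podman system reset --force
podman pull redis:alpine        # should succeed with default paths

# Then restore one customization at a time (e.g. only graphroot first)
# and pull again after each change until the error reappears.
```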
Well, that's a bummer ... I have some indications that specifying the folder directly in storage.conf, instead of using a --rbind mount, seems to work better. But I also saw in the past that it stopped working after a while.
@giuseppe, @rhatdan: I don't know if you want to do something about this. As far as I'm concerned, removing / commenting the … I'm now running happily with up to 20-40 Containers and the Issue didn't reappear after one Month. On some Hosts where the Issue still appeared, I checked …

I could replicate the Issue on AMD64 ZFS, AMD64 EXT4 and ARM64/AARCH64 ZFS, so I would say that it's neither ARCH nor Filesystem dependent. After a few Containers are pulled / run, the Issue would show up. Not consistently (sometimes it shows after 2 Containers, sometimes after 5), but definitively before reaching any kind of "sensible" (~10?) number of Containers.

If you do NOT want to find the Root Cause of the Issue (which IMHO is fair, probably not the highest priority for you), then I propose you add to the Documentation, in BOLD, RED, and preceded by 3x "IMPORTANT" Words, that enabling …

I still cannot pinpoint exactly on which Images the issue shows up; it's possibly when it's trying to install a newer Image / an Update of a blob of an existing Image. The feeling still holds that it occurs more with …

I cannot, however, pinpoint the Issue more precisely; that's why I suggest you update the Documentation accordingly, if investigating the Root Cause is not a Priority for you. I'd be happy if this was at least a "Documented BUG", rather than an "Obscure Feature" 👍.

If somebody wants to replicate on their end, try maybe in a VM and enable … For ZFS it's … Then do a few pulls (suggested …).
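The exact option to enable got lost above, so as a generic reproduction attempt: set up a fresh rootless user (e.g. in a VM), apply the non-default storage configuration, and pull a handful of images in a row. The thread suggests Alpine-based images trip it more often; the image list below is arbitrary:

```bash
for img in redis:alpine redis:bookworm docker.io/library/alpine:latest \
           docker.io/library/busybox:latest docker.io/library/nginx:alpine; do
    echo "=== pulling $img ==="
    podman pull "$img" || echo ">>> pull of $img FAILED"
done
```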
Issue Description
I am facing a very weird issue.
My standard folder structure is like this (the idea behind splitting into several folders was to make it easier to handle e.g. ZFS based snapshots and backups):
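The tree itself is not shown here, but based on the paths mentioned elsewhere in the thread it presumably looks roughly like this (illustrative only):

```bash
mkdir -p /home/podman/containers/{compose,config,storage,images,volumes}
```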
The issue does NOT show up with ZFS. Everything seems to work fine there, with zdata/PODMAN/ mounted with --rbind to /home/podman/:
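If read correctly, the ZFS case boils down to something like the following, run as root (treating /zdata/PODMAN as the dataset mountpoint is an assumption):

```bash
mount --rbind /zdata/PODMAN /home/podman
```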
The issue appears only on EXT4.
Here, no mount --rbind is used, and the folders are just plain folders within the user home directory.
When trying to install some images (redis:alpine, redis:bookworm, possibly also headscale and headscale-ui, not sure), they usually fail with the following messages:
Debug level log: https://pastebin.com/F65rwZuU
I also tried
podman system reset
which mostly failed to delete the storage folder.
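For a rootless setup, the leftover files are usually owned by sub-UIDs, so removing the graph root from inside the user namespace tends to work where a plain rm does not (the path is the one used in this setup):

```bash
podman unshare rm -rf /home/podman/containers/storage
```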
Steps to reproduce the issue
a. This can be done mostly automatically using my helper script: https://github.com/luckylinux/podman-tools
b. ./setup_podman_debian.sh "podman" "dir" "/home/podman"
podman pull redis:alpine
Describe the results you received
Debug level log: https://pastebin.com/F65rwZuU
Describe the results you expected
Podman pulling redis:alpine image (and others) normally.
Putting the storage folder (graphRoot) inside e.g. a subfolder within the user home directory apparently works correctly:
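A quick way to confirm which graph root and driver Podman is actually using after such a change:

```bash
podman info --format '{{.Store.GraphRoot}}'
podman info --format '{{.Store.GraphDriverName}}'
```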
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
VPS on KVM AMD64.
Debian Bookworm 12 with Podman 4.9.3 pinned from Debian Testing/Trixie.
Additional information
I quickly tested in my local KVM (Proxmox VE) with ZFS-based storage for Podman:
podman pull redis:alpine
works correctly.