podman/buildah cannot see each other's local images #13970
Thanks for reaching out, @NetDwarf.
In order to share images, the tools need to use the same storage driver. May I ask why you're using
This was not intentional at all. I was just presented with the problem above and had to do some digging to find the culprit. It was actually a bit more complicated than that, but to keep the bug report somewhat relevant and concise I described only the resulting behavior. I also left out a part because I am missing some information, like previous storage-driver settings. It actually worked at the start while
(or similar; that's the command still in my shell buffer). I assume PEBKAC, but the resulting behavior still presents as a bug.
`ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve`

This error is usually caused by someone changing the storage driver after they have pulled an image. Podman records the storage driver in its internal database. If you want to change the storage driver, then you need to do a `podman system reset`.

This can also happen on older systems where the user ran a rootless container without fuse-overlayfs installed, which defaulted to vfs. Installing fuse-overlayfs later causes Podman to switch to overlay and can trigger this error.
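The check Podman performs boils down to comparing the driver recorded in its database at first use against the one selected now. A self-contained sketch (the variable names and values are illustrative, not Podman's actual code):

```shell
# Illustrative sketch of the mismatch Podman detects. These variables stand in
# for the driver stored in Podman's database and the one requested now.
recorded="vfs"      # driver written to the database on first use
selected="overlay"  # driver requested via config, flag, or default
if [ "$recorded" != "$selected" ]; then
  echo "user-selected graph driver \"$selected\" overwritten by \"$recorded\" from database"
fi
```

Because the recorded value wins, every later invocation keeps using vfs even when overlay was requested.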
Neither of you even read the issue.
I considered the issue done given the above answers. As mentioned above, the same driver must be used. If something is left unaddressed, please point it out; ideally without sarcasm.
I did not explicitly use a different driver, which led to the described behavior. The issue is either:
As the expected behavior is not necessarily obvious, you can pick one of the above. Given that the described behavior in this issue can happen without explicitly setting a storage driver (and is not worked around by
Thanks for elaborating! Replying in-line below.
Curious what @rhatdan thinks. Some users may desire running with multiple storage drivers simultaneously, where a warning would harm the experience.
The storage driver is a global setting. Supporting them simultaneously is out of scope.
That is indeed an interesting case:
Explanation: buildah and podman default to using
I concur.
I do not know how, or if, the storage drivers can change like that without a user or caller setting them explicitly, or it stemming from an upgrade.
Possibly this is the bug, as podman was set to vfs even though overlayfs was supported. I also did
Very likely that's the case. It's stored in
They use the very same code. I think the tools should probably only default to
But I may very well be missing some background. @rhatdan @giuseppe WDYT?
Hmm, it happened after removing that folder (correlation, not necessarily causation), and I also tried some combinations of removing that folder again and
The only way to fix this would be to write the storage.conf file to the user's homedir, if it did not exist, when you executed

```
$ podman --storage-driver=vfs pull alpine > /dev/null
```

with "vfs" as the storage driver. The problem with this is that storage.conf does not inherit. So once this file is written, any global settings from /usr/share/containers/storage.conf and /etc/containers/storage.conf are ignored. We did this back when we supported libpod.conf, and it ended up being an update headache. We could change storage.conf to inherit missing fields, like we do with containers.conf, but that would be a fairly big change. I am not sure the risk is worth the benefit for what may be a real corner case.
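For reference, a minimal per-user storage.conf pinning the driver might look like this (a sketch; the only line required for the purpose discussed here is `driver`):

```toml
# ~/.config/containers/storage.conf  (rootless per-user file)
# System-wide settings live in /etc/containers/storage.conf and
# /usr/share/containers/storage.conf. As noted above, once this per-user
# file exists it does NOT inherit missing fields from those global files.
[storage]
driver = "overlay"
```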
@rhatdan, I think we can behave slightly differently, see my comment above:
Yeah, I figured it is not easy to have a global setting or adaptive behavior. Would an error message in a very specific case be adequate instead? That is, when podman does not find the image with its storage driver, but an image with the same name exists under another storage driver. Something like:
@vrothberg I think this might be more difficult than you think. If I did a second

```
buildah --storage-driver=btrfs pull alpine
```

what is the default? I am not crazy about containers/storage guessing what has happened in the directory beforehand and guessing at the driver.
@rhatdan, I'd think overlayfs > btrfs > vfs. Isn't there a preference list in c/storage? |
As I understood it, the issue is that podman overwrites the storage driver depending on what it has in its database:
should we just make the error clearer and mention podman might not see images created from other tools? |
I like the idea. @rhatdan WDYT? |
SGTM |
This is a bit of a mixture now. The error message that giuseppe quoted is not the one you get when the image is in another format; it was an error message leading to the described behavior. This is the actual error message:
Among possibly other error handling options you can:
The first option is the worst in my opinion, as it would be shown whenever someone selects the wrong image, for example if you select
A friendly reminder that this issue had no activity for 30 days. |
@giuseppe could you change the error message so we can close this issue? |
Opened a PR: #14499
make the error clearer and state that images created by other tools might not be visible to Podman when it overrides the graph driver.

Closes: containers#13970

[NO NEW TESTS NEEDED]

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The error message the user is going to see is this:
So the fix does not change anything, as the new (and still cryptic) error message is in a different place. And I called attention to that just before the PR. I am underwhelmed, to say the least.
I just ran into this and found this issue after searching online. Here's one workaround:
At this point they are both configured to use the same storage driver. However, every time I run
I installed these tools through system packages and did not create any new configuration files or tweak any existing ones.
I guess I need to create a
You need to remove the Podman database to remove that error; a config file will not help. Podman has detected a change that would break all existing containers, pods, etc., and it is rejecting it until
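Putting the thread's advice together, a recovery sequence might look like the sketch below. The `podman system reset` step deletes all existing containers, images, pods, and volumes, so back up anything you need first; the `--format` templates on the verification commands are to the best of my knowledge but worth double-checking against your versions.

```shell
# Sketch of the recovery sequence for mismatched storage drivers (rootless).
# 1. Reset Podman's state, including its database -- this REMOVES all
#    containers, images, pods, and volumes:
#      podman system reset
# 2. Pin one driver for every c/storage consumer (podman, buildah, skopeo)
#    in the per-user config so the tools agree from now on:
conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/containers"
mkdir -p "$conf_dir"
cat > "$conf_dir/storage.conf" <<'EOF'
[storage]
driver = "overlay"
EOF
# 3. Verify both tools now report the same driver:
#      podman info --format '{{.Store.GraphDriverName}}'
#      buildah info --format '{{.store.GraphDriverName}}'
```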
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
`buildah` and `podman` can have different storage drivers on the same host system, and in that case they cannot see each other's images. This in itself is possibly expected behavior, but it presents as a bug to the user. (See the expected results section.)

Steps to reproduce the issue:

1. `buildah` and `podman` have different storage drivers (i.e. overlay and vfs)
2. `buildah commit $(buildah from scratch) foo`
3. `podman run --rm -it localhost/foo`
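The steps above can be sketched as a shell session; the drivers named in the comments are examples from this report, and the failure mode is paraphrased rather than quoted:

```shell
# buildah stores the committed image under its own driver (e.g. overlay):
buildah commit $(buildah from scratch) foo

# podman, using a different driver (e.g. vfs), looks in a separate image
# store, so it cannot find localhost/foo and the run fails (or podman
# attempts a registry pull) instead of using the buildah-built image:
podman run --rm -it localhost/foo
```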
Describe the results you received:
Describe the results you expected:

This is not cut and dry, as there are at least three conceivable results:

1. An error message telling the user to use `<x>` in order to use it
2. `buildah` and `podman` share the same storage-driver (setting)
3. `podman` uses a different storage-driver when it sees an image in another one

Additional information you deem important (e.g. issue happens only occasionally):
The obvious workaround is to set the driver, i.e.:

Use `podman run --storage-driver=<driver> ...`

Or `STORAGE_DRIVER=<driver> podman run ...`

Or set it permanently in `~/.config/containers/storage.conf` and execute `podman system reset`, but be aware that it removes all current containers and images.

Output of `podman version`:

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
No (not latest version)
Additional environment details (AWS, VirtualBox, physical, etc.):
This is on Debian/testing and in rootless mode, however I think this happens with other settings as well as long as the storage-drivers are different.