podman image digest showing incorrect value in some cases #3761
@vrothberg Poke - does this look like the c/storage digest mangling from pipes that you found?
@mheon, v1.4.4 is not affected by the pipes issue. We're actually checking digests and erroring out if they don't correspond to what the image claims. Maybe quay is using different mirrors which are not in sync? @dustymabe, is this reproducible in some way? If so, can you try a …
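The suggested command above was cut off; a minimal sketch of the kind of cross-check being asked for, assuming skopeo and jq are installed and using the image from the original report:

```console
# Compare what the registry currently serves for the tag (via skopeo)
# with what podman recorded locally for the same reference.
$ skopeo inspect docker://quay.io/fedora/fedora:30-x86_64 | jq -r '.Digest'
$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq -r '.[].Digest'
```

A mismatch narrows the problem down to either stale local metadata or an out-of-sync mirror.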
I was the other co-worker who replicated the problem on Silverblue 30. Although, trying to reproduce with the … My reproducer used the …
Seems to be, at least on the machines affected.
one last bit of data:
Thanks @dustymabe and @miabbott! Oh dear ... I could only guess now but will have a look with a fresh brain tomorrow. @mtrmac, have you seen something like that? @dustymabe, could you paste the entire output from …
I guess this is related to how https://github.com/containers/libpod/blob/09cedd152d5c5827520635b10498d15225999e19/libpod/image/image.go#L316 lies. AFAICS, once an image is pulled with one manifest, and later the same image (same “ID” ~ layers+config) is pulled with a different manifest (it does not matter whether it is from the same or from a different registry/repo), … We don’t currently even always record the digest used when pulling by tag into … IIRC CRI-O has the same problem. I don’t think it is something structurally baked in, we just never got around to fixing this. Could the above explain the behavior?
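As background for the explanation above (an illustration, not from the thread): the image digest is simply the sha256 of the raw manifest bytes, so the same layers and config wrapped in a different manifest legitimately produce a different digest. Assuming skopeo, jq, and coreutils:

```console
# For a plain (non-list) manifest, the registry-reported Digest is just the
# sha256 of the raw manifest bytes; manifest lists may need extra handling.
$ skopeo inspect --raw docker://quay.io/fedora/fedora:30-x86_64 | sha256sum
$ skopeo inspect docker://quay.io/fedora/fedora:30-x86_64 | jq -r '.Digest'
```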
That's a sharp observation and sounds plausible to me. Some layers were already present and hence skipped during the pull, which may be an indicator of an older image already being present. Seeing the full inspect output would be helpful. Let's follow @mtrmac's theory a bit. @dustymabe, @miabbott, can you please retry by first deleting all images (or all images listed in the …
I went overboard and nuked all my containers + images for you @vrothberg :) Looks like @mtrmac may be on to something...
Thanks @miabbott!
Here ya go:
So you're saying that if I had previously pulled a tagged container from a registry (any registry) and then subsequently pulled either an updated container (i.e. the tag had been updated) or pulled from a different registry, then the original digest could have stayed? I nuked all images on my system and then:
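The exact commands were lost above; a minimal reconstruction of the clean-slate retest being described, using the image from the original report:

```console
# Remove every local image, re-pull, and check which digest podman now reports.
$ sudo podman rmi --all
$ sudo podman pull quay.io/fedora/fedora:30-x86_64
$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
```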
So that is the right checksum (the one I would expect). So here is what I think may have happened??? When I originally hit this bug, it was with the … However, for …
Looks like I can still reproduce with the latest …
I uploaded a reproducer to my quay repository:
Note that I only managed to reproduce it with a schema 1 image; altering the …
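A sketch of one way such a schema 1 test image could be produced, assuming skopeo and push access to a scratch repository; the destination name is a placeholder:

```console
# Copy an existing image while converting its manifest to Docker schema 1.
$ skopeo copy --format v2s1 \
    docker://quay.io/fedora/fedora:30-x86_64 \
    docker://quay.io/<your-namespace>/schema1-test:latest
```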
So, we're always overwriting the digest.
Retried again with a clean storage where the initial digest is not overwritten.
@mtrmac, @nalind, we are already recording the digests in the image but are not exposing them in inspect here. So this part seems straightforward to fix. However, I am not sure how we should treat the main Digest: it is set once and never updated afterwards. Shall we change this behaviour? Once the RepoDigests are fixed, this might be enough?
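For context, a small sketch of how the two fields in question surface to users; `.Digest` and `.RepoDigests` are the inspect fields under discussion, and the image reference is only an example:

```console
# The single top-level Digest vs. the per-repository RepoDigests list.
$ podman image inspect --format '{{.Digest}}' quay.io/fedora/fedora:30-x86_64
$ podman image inspect \
    --format '{{range .RepoDigests}}{{.}}{{"\n"}}{{end}}' \
    quay.io/fedora/fedora:30-x86_64
```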
- No; names of the images don’t matter in the …
- Yes; images with the same content (but a different representation, e.g. differently-compressed or a different manifest) would exhibit this behavior.
- This. My best guess at what has happened is that you have somehow pulled (or locally built) an image from/for the …
Interesting; I can’t reproduce it (with …
I don’t think we are; or rather, we do record digests via `s.imageRef.transport.store.SetImageBigData(img.ID, …, s.manifest, manifest.Digest)`, but that does not include the repository name, so again …
IIRC the single … I think we want to …
That’s not going to fix previously-pulled images; hopefully they are going to be eventually replaced by updated versions, and eventually …
@mtrmac Are you going to make a PR to do this in containers/image?
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
@mtrmac, shall we create a dedicated issue for c/image?
#3761 (comment) says edit the consumers first, then add the producing side. Quickly re-reading that, I’m not quite sure that ordering is necessary, but adding extra names with digests can affect the current code manufacturing artificial …
Here is a case of an incorrect Digest value in OCP 4.2.12 with podman.
After podman pushed the image, it generated a new sha256 value 617ac31a8a7716639486a991b6173f13548d369a702f7774b216950bcbfcb26d on the registry (docker.io/library/registry:2) server, e.g. in the /docker/registry/v2/repositories/openshift-release-dev/ocp-release/_manifests/tags/4.2.12/index/sha256/617ac31a8a7716639486a991b6173f13548d369a702f7774b216950bcbfcb26d directory. But docker generates the correct Digest value:
`"Digest": "sha256:77ade34c373062c6a6c869e0e56ef93b2faaa373adadaac1430b29484a24d843",`
@YuLimin That’s unrelated to the issue discussed here, and completely expected behavior (image digests are created during …
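An illustrative way to observe that expected behavior; the registry names below are placeholders, and this assumes skopeo and jq are available:

```console
# The digest is computed over the manifest the destination registry ends up
# storing, so a push that re-compresses layers or regenerates the manifest
# yields a new digest.
$ skopeo inspect docker://<source-registry>/openshift-release-dev/ocp-release:4.2.12 | jq -r '.Digest'
$ skopeo inspect --tls-verify=false \
    docker://<local-registry>:5000/openshift-release-dev/ocp-release:4.2.12 | jq -r '.Digest'
```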
I am facing a similar issue with incorrect digests. On my Fedora Workstation (fc34), digests listed via … Is this expected behavior? Is it normal to have two …? Here is an example:
But interestingly, the sha256 on hub.docker.com DOES appear in the …
I'm experiencing this with simple popular images from hub.docker.com like hello-world, ubuntu, debian.
A friendly reminder that this issue had no activity for 30 days.
I'm in the same boat too.
Since @mtrmac says this is fixed in the main branch, I am going to close. Please reopen if you see this problem on podman 3.4 or later.
I am currently experiencing this on version 3.2.3 on RHEL 8.4. Are there any plans to backport this fix?
In #3761 (comment), I was only reporting that the … The original bug report is, per #3761 (comment), now behaving differently but not actually fixed. #3761 (comment) is what would need to happen to fix the original bug.
A friendly reminder that this issue had no activity for 30 days. |
I'm facing the same issue as stated HERE.
I built an image using …
On checking the digest of the image in the environment, I got the result below: …
After pushing the image to the local repository, I pulled the image again using …
I then pulled the same image using …
I'm not sure why the digests differ.
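The exact commands above were truncated; a rough, hypothetical reconstruction of the described flow, where the image name, registry, and flags are placeholders:

```console
# Build locally, record the digest, push to a local registry, then pull the
# image back and compare what podman reports before and after the round trip.
$ podman build -t localhost:5000/myimage:latest .
$ podman image inspect --format '{{.Digest}}' localhost:5000/myimage:latest
$ podman push --tls-verify=false localhost:5000/myimage:latest
$ podman rmi localhost:5000/myimage:latest
$ podman pull --tls-verify=false localhost:5000/myimage:latest
$ podman image inspect --format '{{.Digest}}' localhost:5000/myimage:latest
```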
@SarthakGhosh16 That seems reasonable at first glance; …
@mheon, please find the Bugzilla for this and link it here.
I'm facing the same issue currently with Podman v4.1.1: depending on which image I pulled first, Podman will only show that image's digest. I could reproduce it like this:

```console
# Pull nginx image from Docker Hub
❯ podman pull docker.io/library/nginx:1.23.1
...
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

❯ podman image list --digests
REPOSITORY                  TAG     DIGEST                                                                   IMAGE ID      CREATED      SIZE
docker.io/library/nginx     1.23.1  sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79  b692a91e4e15  2 weeks ago  146 MB

# Pull image from quay:
❯ podman pull quay.io/testing-farm/nginx:latest
...
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

# Both images appear to have the same digest
❯ podman image list --digests
REPOSITORY                  TAG     DIGEST                                                                   IMAGE ID      CREATED      SIZE
quay.io/testing-farm/nginx  latest  sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79  b692a91e4e15  2 weeks ago  146 MB
docker.io/library/nginx     1.23.1  sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79  b692a91e4e15  2 weeks ago  146 MB

# Clean up and try the other way around
❯ podman system prune -a
❯ podman pull quay.io/testing-farm/nginx:latest
...
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

# The image from quay actually has a different digest!
❯ podman image list --digests
REPOSITORY                  TAG     DIGEST                                                                   IMAGE ID      CREATED      SIZE
quay.io/testing-farm/nginx  latest  sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261  b692a91e4e15  2 weeks ago  146 MB

❯ podman pull docker.io/library/nginx:1.23.1
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

# Now the image from Docker Hub appears to have the digest from the quay image
❯ podman image list --digests
REPOSITORY                  TAG     DIGEST                                                                   IMAGE ID      CREATED      SIZE
docker.io/library/nginx     1.23.1  sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261  b692a91e4e15  2 weeks ago  146 MB
quay.io/testing-farm/nginx  latest  sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261  b692a91e4e15  2 weeks ago  146 MB
```

The problem is: I would like to rely on the output to later pull the image by its digest, like:

```console
❯ podman pull quay.io/testing-farm/nginx@sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261
```

However, I currently can't rely on the digest in the output, as depending on the pull order I might try the wrong one:

```console
❯ podman pull quay.io/testing-farm/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79
Trying to pull quay.io/testing-farm/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79...
Error: initializing source docker://quay.io/testing-farm/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79: reading manifest sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79 in quay.io/testing-farm/nginx: manifest unknown: manifest unknown
```

Which of course doesn't work.
This message gave me a glimpse of hope. So I tried … Is there any way to get the hash of what … EDIT: adding …
Yes. Or use the built-in signature support: …
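One way to answer the question above, assuming skopeo and jq are available: ask the registry for the digest directly instead of trusting the locally stored value, then pull by that digest.

```console
# Ask the registry which digest the tag currently points at...
$ skopeo inspect docker://quay.io/testing-farm/nginx:latest | jq -r '.Digest'
# ...then pull explicitly by that digest (value below is a placeholder).
$ podman pull quay.io/testing-farm/nginx@sha256:<digest-from-above>
```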
Any update on this issue?
@mtrmac @vrothberg any update?
Unfortunately not.
Why hasn't it been fixed?
Lack of time and priority. Are you interested in looking into fixing it?
/kind bug
Description
podman inspect sometimes shows a different digest than the correct one.
Steps to reproduce the issue:

1. Look at the `30-x86_64` tag from the fedora/fedora repo on quay.io and notice the current tag is at `sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b`.
2. `sudo podman pull quay.io/fedora/fedora:30-x86_64`
3. `sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'`
Describe the results you received:
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
This is the weird part. I've noticed this is not consistent. For example:
- One machine (`podman-1.4.4-4.fc29.x86_64`) DOES have the problem
- One machine (`podman-1.4.4-4.fc30.x86_64`) DOES NOT have the problem
- Another machine (`podman-1.4.4-4.fc30.x86_64`) DOES have the problem

Output of `podman version`:

Output of `podman info --debug`: