
podman image digest showing incorrect value in some cases #3761

Open
dustymabe opened this issue Aug 8, 2019 · 71 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

@dustymabe
Contributor

/kind bug

Description

podman inspect sometimes shows a different digest than the correct one.

Steps to reproduce the issue:

  1. Look at the 30-x86_64 tag from the fedora/fedora repo on quay.io and notice that the tag currently points at sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b.

  2. sudo podman pull quay.io/fedora/fedora:30-x86_64

  3. sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'

Describe the results you received:

$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
"sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"

Describe the results you expected:

$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
"sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b"

Additional information you deem important (e.g. issue happens only occasionally):

This is the weird part. I've noticed this is not consistent. For example:

  • My home Fedora 29 Workstation machine (podman-1.4.4-4.fc29.x86_64) DOES have the problem
  • My work laptop, a Fedora 29 Silverblue machine (podman-1.4.4-4.fc29.x86_64), DOES NOT have the problem
  • My spare laptop, a Fedora 30 Silverblue machine (podman-1.4.4-4.fc30.x86_64), DOES NOT have the problem
  • My coworker's laptop, a Fedora 30 Silverblue machine (podman-1.4.4-4.fc30.x86_64), DOES have the problem

Output of podman version:

$ sudo podman version
Version:            1.4.4
RemoteAPI Version:  1
Go Version:         go1.11.11
OS/Arch:            linux/amd64

Output of podman info --debug:

$ sudo podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.11.11
  podman version: 1.4.4
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.4-4.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.0.0-dev, commit: 130ae2bc326106e335a44fac012005a8654a76cc'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 6809538560
  MemTotal: 20999897088
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 82f4855a8421018c9f4d74fbcf2da7f8ad1e11fa
      spec: 1.0.1-dev
  SwapFree: 4262981632
  SwapTotal: 4294963200
  arch: amd64
  cpus: 8
  hostname: media
  kernel: 5.1.21-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 52h 8m 44.94s (Approximately 2.17 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mountopt=nodev,metacopy=on
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  ImageStore:
    number: 11
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
@openshift-ci-robot added the kind/bug label on Aug 8, 2019
@mheon
Member

mheon commented Aug 8, 2019

@vrothberg Poke - does this look like the c/storage digest mangling from pipes that you found?

@vrothberg
Member

@mheon, v1.4.4 is not affected by the pipes issue. We actually check digests and error out if they don't correspond to what the image claims. Maybe quay is using different mirrors which are not in sync?

@dustymabe is this reproducible in some way? If so, can you try a podman pull quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130 on the "unhealthy" machines? And after that do a podman pull quay.io/fedora/fedora:30-x86_64 and see which digest we're getting? I'm trying to check if it's us or not.

@miabbott
Contributor

miabbott commented Aug 8, 2019

I was the other co-worker who replicated the problem on Silverblue 30, although trying to reproduce it with the fedora images on quay has been unsuccessful.

My reproducer used the coreos-assembler:latest image:

$ rpm -q podman
podman-1.4.4-4.fc30.x86_64

$ podman version
Version:            1.4.4
RemoteAPI Version:  1
Go Version:         go1.12.7
OS/Arch:            linux/amd64

$ sudo podman pull quay.io/coreos-assembler/coreos-assembler:latest
Trying to pull quay.io/coreos-assembler/coreos-assembler:latest...Getting image source signatures
Copying blob 8962e0aa3b30 done                   
Copying blob ee7278430ac0 done                   
Copying blob 2389ea3c2b72 done
Copying blob 07e90dbef7f3 done
Copying blob 319c67bf57ef done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob 69e87d4a8da8 done
Copying blob a343c633a1c0 done       
Copying blob ba98169a6017 done
Copying blob c23463c17ef0 done                                  
Copying blob 18a42a950c32 done                                                                                                                                                                                                                                                              
Copying blob 158573027f99 done                                    
Copying blob 60af1d2f690b done                                                                                                                                                                                                                                                              
Copying blob d0fe950e58bb done                                    
Copying blob a3ed95caeb02 skipped: already exists             
Copying blob 36adc34a4dfe skipped: already exists
Copying blob 9ca01cf9bdc2 done                                 
Copying blob e9c35b24d915 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob a3ed95caeb02 skipped: already exists
Writing manifest to image destination
Storing signatures                               
6149d96cfec3fdc2a94e83240e9e37776f3ce0e9c704e828b38b0aa40b5498bc

$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler:latest | jq '.[]["Digest"]'     
"sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8"

$ skopeo inspect docker://quay.io/coreos-assembler/coreos-assembler:latest | jq '.["Digest"]'   
"sha256:42339020ff0d64cb8340485cb9e0615cd4915adfbfbe1ec25989bebfdbadd0c0"

$ sudo podman pull quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8
Trying to pull quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8...ERRO[0000] Error pulling image ref //quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8: Error initializing source docker://quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8: Error reading manifest sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8 in quay.io/coreos-assembler/coreos-assembler: manifest unknown: manifest unknown
Failed
Error: error pulling image "quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8": unable to pull quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8: unable to pull image: Error initializing source docker://quay.io/coreos-assembler/coreos-assembler@sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8: Error reading manifest sha256:8af5b6ba875c2cb95bb7c6b95bb936b803b2db8a607159d10ba69ee37f7fedb8 in quay.io/coreos-assembler/coreos-assembler: manifest unknown: manifest unknown

$ sudo podman pull quay.io/coreos-assembler/coreos-assembler@sha256:42339020ff0d64cb8340485cb9e0615cd4915adfbfbe1ec25989bebfdbadd0c0
Trying to pull quay.io/coreos-assembler/coreos-assembler@sha256:42339020ff0d64cb8340485cb9e0615cd4915adfbfbe1ec25989bebfdbadd0c0...Getting image source signatures
Copying blob 2389ea3c2b72 skipped: already exists
Copying blob 69e87d4a8da8 skipped: already exists
Copying blob 8962e0aa3b30 skipped: already exists
Copying blob ee7278430ac0 skipped: already exists
Copying blob 319c67bf57ef skipped: already exists
Copying blob 07e90dbef7f3 skipped: already exists
Copying blob c23463c17ef0 skipped: already exists
Copying blob ba98169a6017 skipped: already exists
Copying blob a343c633a1c0 skipped: already exists
Copying blob 18a42a950c32 skipped: already exists
Copying blob 158573027f99 skipped: already exists
Copying blob 60af1d2f690b skipped: already exists
Copying blob d0fe950e58bb skipped: already exists
Copying blob 9ca01cf9bdc2 skipped: already exists
Copying blob e9c35b24d915 skipped: already exists
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob a3ed95caeb02 done
Copying blob 36adc34a4dfe done
Copying blob a3ed95caeb02 done
Writing manifest to image destination
Storing signatures
6149d96cfec3fdc2a94e83240e9e37776f3ce0e9c704e828b38b0aa40b5498bc

$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler@sha256:42339020ff0d64cb8340485cb9e0615cd4915adfbfbe1ec25989bebfdbadd0c0 | jq '.[]["Id"]'
"6149d96cfec3fdc2a94e83240e9e37776f3ce0e9c704e828b38b0aa40b5498bc"

$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler:latest | jq '.[]["Id"]'
"6149d96cfec3fdc2a94e83240e9e37776f3ce0e9c704e828b38b0aa40b5498bc"

@dustymabe
Contributor Author

@dustymabe is this reproducible in some way? If so, can you try a podman pull quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130 on the "unhealthy" machines? And after that do a podman pull quay.io/fedora/fedora:30-x86_64 and see which digest we're getting? I'm trying to check if it's us or not.

Seems to be, at least on the affected machines:

[dustymabe@media ~]$ sudo podman images | grep x86_64
quay.io/fedora/fedora                       30-x86_64   17ebc5dcdd31   3 months ago    312 MB
[dustymabe@media ~]$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
"sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo podman pull quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130
Trying to pull quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130...ERRO[0000] Error pulling image ref //quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130: Error initializing source docker://quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130: Error reading manifest sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130 in quay.io/fedora/fedora: manifest unknown: manifest unknown 
Failed
Error: error pulling image "quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130": unable to pull quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130: unable to pull image: Error initializing source docker://quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130: Error reading manifest sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130 in quay.io/fedora/fedora: manifest unknown: manifest unknown
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo podman pull quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b
Trying to pull quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b...Getting image source signatures
Copying blob 149da9c683f1 done
Writing manifest to image destination
Storing signatures
17ebc5dcdd3105c78dd255acad37ad021958a8863715123b39b89b55da75cb98
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo podman images | grep x86_64
quay.io/fedora/fedora                       30-x86_64                17ebc5dcdd31   3 months ago    312 MB
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
"sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"

@dustymabe
Contributor Author

one last bit of data:

$ sudo podman inspect quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b | jq '.[]["Digest"]'
"sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"

@vrothberg
Member

Thanks @dustymabe and @miabbott!

Oh dear ... I could only guess now but will have a look with a fresh brain tomorrow. @mtrmac, have you seen something like that?

@dustymabe, could you paste the entire output from sudo podman inspect quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b?

@mtrmac
Collaborator

mtrmac commented Aug 8, 2019

I guess this is related to how https://github.com/containers/libpod/blob/09cedd152d5c5827520635b10498d15225999e19/libpod/image/image.go#L316 lies. AFAICS, once an image is pulled with one manifest, and later the same image (same “ID” ~ layers+config) is pulled with a different manifest (does not matter whether it is from the same or from a different registry/repo), RepoDigests will report all of those locations using the first digest ever encountered. (Apparently it will do it even for explicit pulls by digest, discarding the true, and actually known and recorded, value!)

We don’t currently even always record the digest used when pulling by tag into Names; if we did, fixing RepoDigests should be fairly easy.

IIRC CRI-O has the same problem. I don’t think it is something structurally baked in, we just never got around to fixing this.


Could the above explain the behavior?
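
Below is a minimal, self-contained sketch (not the actual libpod code) of the guessing behaviour described above, assuming the image record holds a list of names plus a single recorded manifest digest; the function and variable names are illustrative only:

package main

import (
    "fmt"
    "strings"
)

// repoDigests mimics reporting "repo@digest" for every name the image is known by,
// always reusing the first digest ever recorded for the image.
func repoDigests(names []string, firstRecordedDigest string) []string {
    var out []string
    for _, name := range names {
        repo := name
        // strip a trailing ":tag" (but keep a ":port" in the registry host)
        if i := strings.LastIndex(name, ":"); i > strings.LastIndex(name, "/") {
            repo = name[:i]
        }
        out = append(out, repo+"@"+firstRecordedDigest)
    }
    return out
}

func main() {
    names := []string{
        "quay.io/fedora/fedora:30-x86_64",
        "registry.fedoraproject.org/fedora:30",
    }
    // Both names are reported with the one recorded digest, even if the second
    // pull actually served a different manifest.
    fmt.Println(repoDigests(names, "sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"))
}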

@vrothberg
Member

vrothberg commented Aug 9, 2019

Could the above explain the behavior?

That's a sharp observation and sounds plausible to me. Some layers were already present and hence skipped during pull, which may be an indicator that an older image was already present. Seeing the full inspect output would be helpful.

Let's follow @mtrmac's theory a bit. @dustymabe, @miabbott, can you please retry by first deleting all images (or all images listed in the RepoDigests array in the inspect output) and then pull again? The expectation then is that the correct digest will be set in the inspect output.

@miabbott
Contributor

miabbott commented Aug 9, 2019

I went overboard and nuked all my containers + images for you @vrothberg :)

Looks like @mtrmac may be on to something...

$ sudo podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES

$ sudo podman images -a
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE


$ sudo podman pull quay.io/coreos-assembler/coreos-assembler:latest
Trying to pull quay.io/coreos-assembler/coreos-assembler:latest...Getting image source signatures
Copying blob d94a8977810d done
Copying blob 325b83b06e86 done
Copying blob a8143a39da88 done
Copying blob 106be0da1a86 done
Copying blob 07e90dbef7f3 done
Copying blob ac8dc1a14d31 done
Copying blob a3ed95caeb02 done
Copying blob 2ae8c46ce848 done
Copying blob f29f2511c1bd done
Copying blob 9e899cf16088 done
Copying blob 157152515a42 done
Copying blob eddd61b0b7d9 done
Copying blob b0b196a02428 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob c3c860175cfa done
Copying blob 36adc34a4dfe done
Copying blob 8d987c764c06 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob a3ed95caeb02 skipped: already exists
Copying blob d5e7cda47d3c done
Writing manifest to image destination
Storing signatures
d2ea57bd6dbaeedca3c715d93554ea8437df068c9a06298d612ee80f83068dcb

$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler:latest | jq '.[]["Digest"]'
"sha256:7344b2012da0733f6faa9981d3144df0f1017ec1bab7943911910ddd20f5d9ed"

$ skopeo inspect docker://quay.io/coreos-assembler/coreos-assembler:latest | jq '.["Digest"]'
"sha256:7344b2012da0733f6faa9981d3144df0f1017ec1bab7943911910ddd20f5d9ed"

$ sudo podman pull quay.io/fedora/fedora:30-x86_64
Trying to pull quay.io/fedora/fedora:30-x86_64...Getting image source signatures
Copying blob 149da9c683f1 done
Writing manifest to image destination
Storing signatures
17ebc5dcdd3105c78dd255acad37ad021958a8863715123b39b89b55da75cb98

$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
"sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b"

$ skopeo inspect docker://quay.io/fedora/fedora:30-x86_64 | jq '.["Digest"]'
"sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b"

@vrothberg
Member

Thanks @miabbott!

@dustymabe
Contributor Author

@dustymabe, could you paste the entire output from sudo podman inspect quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b?

Here ya go:

$ sudo podman inspect quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b
[
    {
        "Id": "17ebc5dcdd3105c78dd255acad37ad021958a8863715123b39b89b55da75cb98",
        "Digest": "sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130",
        "RepoTags": [
            "quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b",
            "quay.io/fedora/fedora:30-x86_64"
        ],
        "RepoDigests": [
            "quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130",
            "quay.io/fedora/fedora@sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"
        ],
        "Parent": "",
        "Comment": "Created by Image Factory",
        "Created": "2019-04-27T12:36:28Z",
        "Config": {
            "Env": [
                "DISTTAG=f30container",
                "FGC=f30"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Labels": {
                "license": "MIT",
                "name": "fedora",
                "vendor": "Fedora Project",
                "version": "30"
            }
        },
        "Version": "1.10.1",
        "Author": "",
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 311619110,
        "VirtualSize": 311619110,
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "MergedDir": "/var/lib/containers/storage/overlay/386083e14cf1e8d428ced60b52bdc0a13e5fba3e235d792d215666509da1c3fe/merged",
                "UpperDir": "/var/lib/containers/storage/overlay/386083e14cf1e8d428ced60b52bdc0a13e5fba3e235d792d215666509da1c3fe/diff",
                "WorkDir": "/var/lib/containers/storage/overlay/386083e14cf1e8d428ced60b52bdc0a13e5fba3e235d792d215666509da1c3fe/work"
            }
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                ""
            ]
        },
        "Labels": {
            "license": "MIT",
            "name": "fedora",
            "vendor": "Fedora Project",
            "version": "30"
        },
        "Annotations": {},
        "ManifestType": "application/vnd.docker.distribution.manifest.v1+prettyjws",
        "User": "",
        "History": [
            {
                "created": "2019-04-27T12:36:28Z",
                "comment": "Created by Image Factory"
            }
        ]
    }
]

@dustymabe
Contributor Author

Could the above explain the behavior?

So you're saying that if I had previously pulled a tagged container from a registry (any registry) and then subsequently pulled either an updated container (i.e. the tag had been updated) or pulled from a different registry then the original digest could have stayed?

I nuked all images on my system and then:

[dustymabe@media ~]$ sudo podman inspect quay.io/fedora/fedora:30-x86_64
Error: error getting image "quay.io/fedora/fedora:30-x86_64": unable to find 'quay.io/fedora/fedora:30-x86_64' in local storage: no such image
[dustymabe@media ~]$ sudo podman pull quay.io/fedora/fedora:30-x86_64
Trying to pull quay.io/fedora/fedora:30-x86_64...Getting image source signatures
Copying blob 149da9c683f1 done
Writing manifest to image destination
Storing signatures
17ebc5dcdd3105c78dd255acad37ad021958a8863715123b39b89b55da75cb98
[dustymabe@media ~]$ sudo podman inspect quay.io/fedora/fedora:30-x86_64 | jq '.[]["Digest"]'
"sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b"

So that is the right checksum (the one I would expect).

So here is what I think may have happened??? When I originally hit this bug it was with the quay.io/coreos-assembler/coreos-assembler:latest container. I pull it periodically. So maybe the digest was wrong because it gets written over all the time?

However, for quay.io/fedora/fedora:30-x86_64 I was just searching for a container from quay which would be an easy reproducer. I chose that one because it hadn't been updated for 3 months and would likely stay on that sha256 long enough for others to reproduce here in this issue. However, on my system I also pull from the fedora project (registry.fedoraproject.org/fedora:30), and I do that often. So maybe I already had the content of quay.io/fedora/fedora:30-x86_64 cached on my system somewhere, because I've definitely had that container in the past.

@dustymabe
Contributor Author

Looks like I can still reproduce with the latest coreos-assembler image:

[dustymabe@media ~]$ sudo podman image prune
[dustymabe@media ~]$ sudo podman images
REPOSITORY                          TAG   IMAGE ID       CREATED      SIZE
registry.fedoraproject.org/fedora   30    1cabdcebde84   8 days ago   253 MB
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo podman pull quay.io/coreos-assembler/coreos-assembler:latest 
Trying to pull quay.io/coreos-assembler/coreos-assembler:latest...Getting image source signatures
Copying blob 07e90dbef7f3 skipped: already exists
Copying blob 325b83b06e86 done
Copying blob ac8dc1a14d31 done
Copying blob 106be0da1a86 done
Copying blob 9e899cf16088 done
Copying blob d94a8977810d done
Copying blob a8143a39da88 done
Copying blob 157152515a42 done
Copying blob a3ed95caeb02 done
Copying blob 2ae8c46ce848 done
Copying blob f29f2511c1bd done
Copying blob b0b196a02428 done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob eddd61b0b7d9 done
Copying blob c3c860175cfa done
Copying blob 36adc34a4dfe done
Copying blob a3ed95caeb02 skipped: already exists
Copying blob a3ed95caeb02 skipped: already exists
Copying blob d5e7cda47d3c done
Copying blob 8d987c764c06 done
Writing manifest to image destination
Storing signatures
d2ea57bd6dbaeedca3c715d93554ea8437df068c9a06298d612ee80f83068dcb
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler:latest | jq '.[]["Digest"]'
"sha256:e4092afc3560bbc5b07c809082a4e526bfa1b6d545c1d276d3298f3b6b416754"
[dustymabe@media ~]$ 
[dustymabe@media ~]$ sudo skopeo inspect docker://quay.io/coreos-assembler/coreos-assembler:latest | jq '.["Digest"]'
"sha256:7344b2012da0733f6faa9981d3144df0f1017ec1bab7943911910ddd20f5d9ed"

full podman inspect:

[dustymabe@media ~]$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler:latest
[
    {
        "Id": "d2ea57bd6dbaeedca3c715d93554ea8437df068c9a06298d612ee80f83068dcb",
        "Digest": "sha256:e4092afc3560bbc5b07c809082a4e526bfa1b6d545c1d276d3298f3b6b416754",
        "RepoTags": [
            "quay.io/coreos-assembler/coreos-assembler:latest"
        ],
        "RepoDigests": [
            "quay.io/coreos-assembler/coreos-assembler@sha256:e4092afc3560bbc5b07c809082a4e526bfa1b6d545c1d276d3298f3b6b416754"
        ],
        "Parent": "",
        "Comment": "Created by Image Factory",
        "Created": "2019-08-09T00:28:54.315543298Z",
        "Config": {
            "User": "builder",
            "Env": [
                "DISTTAG=f30container",
                "FGC=f30",
                "container=oci",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Entrypoint": [
                "/usr/bin/dumb-init",
                "/usr/bin/coreos-assembler"
            ],
            "WorkingDir": "/srv/",
            "Labels": {
                "license": "MIT",
                "name": "fedora",
                "vendor": "Fedora Project",
                "version": "30"
            }
        },
        "Version": "18.02.0-ce",
        "Author": "",
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 3352713860,
        "VirtualSize": 3352713860,
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/containers/storage/overlay/bdd17292b5f75fa7a1a1e6e80f71df39b1d16b7f557c03b4ac67c505aa701b2c/diff:/var/lib/containers/storage/overlay/da8acba41244ec7f2f340149cf8904f16d852424610f730258268345201c7241/diff:/var/lib/containers/storage/overlay/e27ff928c1484b3652d382f7d903726ec45b9cdcdb083614bceb1ce52f41656a/diff:/var/lib/containers/storage/overlay/4531cf56c7d04d0c3548dd462100e081fe1559e4475b9e3e8d27fc4d57ea2747/diff:/var/lib/containers/storage/overlay/8bd797198b49d497495c5d0e76f2b64da6a4ce69d5cbfb38d5457717c0d66edb/diff:/var/lib/containers/storage/overlay/bda201175a9dc8dd5e3c339e3d3993d3b1272b783d67224e0660015953c343b0/diff:/var/lib/containers/storage/overlay/8e112a01ae401754a57e29c1c4123f1f04af023fc4063052c4c88a1a6e70eefb/diff:/var/lib/containers/storage/overlay/4d3c794af3c56fc610d4134c17afff2100b6c862c608fa4d3bc6f70d7753b5d7/diff:/var/lib/containers/storage/overlay/4efcb9f50266cd46643c958ccfc7a859af0bc5696f9b495b221177bffb2d6221/diff:/var/lib/containers/storage/overlay/9d9cfd2501f57e143624c3f86bb4f2136a57c4edfe9705d376a945aefb916640/diff:/var/lib/containers/storage/overlay/c38400054d42198b2f123f662440b01aa1c9741c84bf35e142f7339313881056/diff:/var/lib/containers/storage/overlay/776977e4e231c366a9a3916aaeb8804b1ba88
7b649bc8aca31cb368bb192a0ed/diff:/var/lib/containers/storage/overlay/8a743ef04daf9262db0451d0e69ec17779b1c86e2fa798536298d49ac3a05551/diff:/var/lib/containers/storage/overlay/6361bab5dcba0bdff6bc76e8edd0a5c8c717b960eaeb8028b3b9efebb7d80a76/diff:/var/lib/containers/storage/overlay/8e33d64ed70c1741ee4f91f87f0cd26994db1b8d384e7527c170df9d20262acc/diff",
                "MergedDir": "/var/lib/containers/storage/overlay/3357a97f158a081d9c2c148703ac0836bcd6a1052f6c58462a4206d7137b4d6b/merged",
                "UpperDir": "/var/lib/containers/storage/overlay/3357a97f158a081d9c2c148703ac0836bcd6a1052f6c58462a4206d7137b4d6b/diff",
                "WorkDir": "/var/lib/containers/storage/overlay/3357a97f158a081d9c2c148703ac0836bcd6a1052f6c58462a4206d7137b4d6b/work"
            }
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                "",
                ""
            ]
        },
        "Labels": {
            "license": "MIT",
            "name": "fedora",
            "vendor": "Fedora Project",
            "version": "30"
        },
        "Annotations": {},
        "ManifestType": "application/vnd.docker.distribution.manifest.v1+prettyjws",
        "User": "builder",
        "History": [
            {
                "created": "2019-08-01T07:48:33Z",
                "comment": "Created by Image Factory"
            },
            {
                "created": "2019-08-09T00:21:19.783683681Z",
                "created_by": "/bin/sh -c #(nop) WORKDIR /root/containerbuild"
            },
            {
                "created": "2019-08-09T00:21:19.860127891Z",
                "created_by": "/bin/sh -c #(nop) COPY file:6d4b454b6f05589f9e6ee3b5a4522574389917ecf16c4f5fb5a8f0e8714b0024 in /root/containerbuild/src/ "
            },
            {
                "created": "2019-08-09T00:21:20.258810452Z",
                "created_by": "/bin/sh -c #(nop) COPY multi:2d65ff2d95100a463f808d7d9e679e57dbce07246130297370285e2c4c7bf478 in /root/containerbuild/ "
            },
            {
                "created": "2019-08-09T00:21:21.058789871Z",
                "created_by": "/bin/sh -c ./build.sh configure_yum_repos"
            },
            {
                "created": "2019-08-09T00:26:23.28003627Z",
                "created_by": "/bin/sh -c ./build.sh install_rpms"
            },
            {
                "created": "2019-08-09T00:26:26.16190362Z",
                "created_by": "/bin/sh -c #(nop) COPY dir:8e47808f43e4e8fe26832799aec28f2b07240841f76aa7a7cf205548cddf6142 in /root/containerbuild/ "
            },
            {
                "created": "2019-08-09T00:26:27.828845947Z",
                "created_by": "/bin/sh -c ./build.sh write_archive_info"
            },
            {
                "created": "2019-08-09T00:26:28.205328675Z",
                "created_by": "/bin/sh -c ./build.sh install_anaconda",
                "empty_layer": true
            },
            {
                "created": "2019-08-09T00:28:42.230948809Z",
                "created_by": "/bin/sh -c ./build.sh make_and_makeinstall"
            },
            {
                "created": "2019-08-09T00:28:45.048350216Z",
                "created_by": "/bin/sh -c ./build.sh configure_user"
            },
            {
                "created": "2019-08-09T00:28:50.73407124Z",
                "created_by": "/bin/sh -c make check"
            },
            {
                "created": "2019-08-09T00:28:51.624669044Z",
                "created_by": "/bin/sh -c make unittest"
            },
            {
                "created": "2019-08-09T00:28:52.894661043Z",
                "created_by": "/bin/sh -c make clean"
            },
            {
                "created": "2019-08-09T00:28:52.960903509Z",
                "created_by": "/bin/sh -c #(nop) WORKDIR /srv/",
                "empty_layer": true
            },
            {
                "created": "2019-08-09T00:28:53.326800954Z",
                "created_by": "/bin/sh -c chown builder: /srv"
            },
            {
                "created": "2019-08-09T00:28:53.83354153Z",
                "created_by": "/bin/sh -c rm -rf /root/containerbuild"
            },
            {
                "created": "2019-08-09T00:28:54.185066647Z",
                "created_by": "/bin/sh -c chmod g=u /etc/passwd"
            },
            {
                "created": "2019-08-09T00:28:54.249535605Z",
                "created_by": "/bin/sh -c #(nop)  USER builder",
                "empty_layer": true
            },
            {
                "created": "2019-08-09T00:28:54.315543298Z",
                "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/usr/bin/dumb-init\" \"/usr/bin/coreos-assembler\"]",
                "empty_layer": true
            }
        ]
    }
]


full skopeo inspect output:

{
    "Name": "quay.io/coreos-assembler/coreos-assembler",
    "Tag": "latest",
    "Digest": "sha256:7344b2012da0733f6faa9981d3144df0f1017ec1bab7943911910ddd20f5d9ed",
    "RepoTags": [
        "v0.3.1",
        "v0.4.0",
        "rhcos-4.1",
        "v0.5.0",
        "rhcos-4.2",
        "master",
        "latest"
    ],
    "Created": "2019-08-09T00:28:54.315543298Z",
    "DockerVersion": "18.02.0-ce",
    "Labels": {
        "license": "MIT",
        "name": "fedora",
        "vendor": "Fedora Project",
        "version": "30"
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:07e90dbef7f366cec5bdd008b3b4e700dea36d939657cb8283c58008768684b3",
        "sha256:325b83b06e862042b689382a70da44c8a23386916a7737f4be70006835b579de",
        "sha256:106be0da1a865d66a35639b48e8a4f6f257be83bcc04781aec5cc8a27e82baac",
        "sha256:a8143a39da88250b3babf5aeb1fda29ba6fbe8b318f455dad04c6774004d1d2e",
        "sha256:d94a8977810d1cb0dd93cd792e98b7036aca580bbe1468c3642906dcbaa28647",
        "sha256:ac8dc1a14d31ed3e64fa178f3716e3818a4f14b8b70c9f1ca8f09c6fcea20fb7",
        "sha256:9e899cf1608887d1b2a36d722b68218c0fe626d60021736a4d5629a3637a6cc2",
        "sha256:2ae8c46ce848459356f6cdd0fb8d4d51667cf558c39f41cbecee916dcc929195",
        "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4",
        "sha256:f29f2511c1bd8a94f2b5a66783e57133848925f3ac6f79ba7bad890e94616cd6",
        "sha256:157152515a42ce59806346b96958dd5fec26f6c293b86e539c4ac5186ca7626c",
        "sha256:eddd61b0b7d960d99b93464c979a2cafb56f9d220ca67fbae8926013ed07a14c",
        "sha256:b0b196a0242874d7f80bedeb5db211c164c6fd9799cf31936ce1d1f3dde75b8a",
        "sha256:c3c860175cfa3f0b7c987b7041c9ec175e0438f44eb94d81183bb7cd582d21f1",
        "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4",
        "sha256:36adc34a4dfefa82d558f339a6d59a06528a89491a6d16778656f62cb7cd5a5e",
        "sha256:d5e7cda47d3cd7fc9632705d1cf4a0914897028e987308dec0e5b2c35aa9ba4f",
        "sha256:8d987c764c06ec84b3e49af033eca3da1a51d19c770658824477d418fed390dd",
        "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4",
        "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
    ]
}

@vrothberg
Member

I uploaded a reproducer to my quay repository:

[~]$ skopeo inspect docker://quay.io/vrothberg/test:digestA | grep Digest
    "Digest": "sha256:b26451bc53d98fe3741785afbcd909b11824568a6c944b11f080861a0e7e92fb",

[~]$ skopeo inspect docker://quay.io/vrothberg/test:digestB | grep Digest
    "Digest": "sha256:1fd6b1057ad544277f82e5aac8f47dabe4d6571eb9c5fe5d8dbd1de2fffda73f",

[~]$ podman pull docker://quay.io/vrothberg/test:digestA
Trying to pull docker://quay.io/vrothberg/test:digestA...Getting image source signatures
Copying blob a3ed95caeb02 done
Copying blob 2bb885776435 done
Writing manifest to image destination
Storing signatures
b7b28af77ffec6054d13378df4fdf02725830086c7444d9c278af25312aa39b9

[~]$ podman pull docker://quay.io/vrothberg/test:digestB
Trying to pull docker://quay.io/vrothberg/test:digestB...Getting image source signatures
Copying blob 2bb885776435 skipped: already exists
Copying blob a3ed95caeb02 done
Writing manifest to image destination
Storing signatures
b7b28af77ffec6054d13378df4fdf02725830086c7444d9c278af25312aa39b9


[~]$ podman inspect b7b28af77ffec6054d13378df4fdf02725830086c7444d9c278af25312aa39b9 | head -n12
[
    {
        "Id": "b7b28af77ffec6054d13378df4fdf02725830086c7444d9c278af25312aa39b9",
        "Digest": "sha256:1fd6b1057ad544277f82e5aac8f47dabe4d6571eb9c5fe5d8dbd1de2fffda73f",
        "RepoTags": [
            "quay.io/vrothberg/test:digestB",
            "quay.io/vrothberg/test:digestA"
        ],
        "RepoDigests": [
            "quay.io/vrothberg/test@sha256:1fd6b1057ad544277f82e5aac8f47dabe4d6571eb9c5fe5d8dbd1de2fffda73f",
            "quay.io/vrothberg/test@sha256:1fd6b1057ad544277f82e5aac8f47dabe4d6571eb9c5fe5d8dbd1de2fffda73f"
        ],

Note that I only managed to reproduce it with a schema 1 image; altering the name was sufficient.

@vrothberg
Member

So, we're always overwriting the digest.

@vrothberg
Member

So, we're always overwriting the digest.

Retried again with a clean storage where the initial digest is not overwritten.

@vrothberg
Member

@mtrmac, @nalind, we are recording the digests already in the image but are not exposing them in inspect here. So this part seems straightforward to fix. However, I am not sure how we should treat the main Digest. It is set once and not updated afterwards. Shall we change this behaviour? Once the RepoDigests are fixed, this might be enough?

@mtrmac
Collaborator

mtrmac commented Aug 10, 2019

So you're saying that if I had previously pulled a tagged container from a registry (any registry) and then subsequently pulled either an updated container (i.e. the tag had been updated)

No; names of the images don’t matter in the RepoDigests hypothesis.

or pulled from a different registry then the original digest could have stayed?

Yes; images with the same content (but a different representation, e.g. differently-compressed or with a different manifest) would exhibit this behavior.

However for quay.io/fedora/fedora:30-x86_64 I was just searching for a container from quay which would be an easy reproducer. I chose that one because it hadn't been updated for 3 months and would likely stay on that sha256 long enough for others to reproduce here in this issue. However, on my system I also pull from the fedora project: registry.fedoraproject.org/fedora:30 and I do that often. So maybe I had already the content of quay.io/fedora/fedora:30-x86_64 cached on my system somewhere because I've definitely had that container in the past.

This. My best guess at what has happened is that you have somehow pulled (or locally built) an image from/for the registry.fp.org registry, using schema2, and that image was copied/synced to quay.io (converting it to schema1 in the process) — or maybe vice versa, or the same build system pushed the same build result to two different registries; and that resulted in two different representations of the same image, and you have pulled both.

@mtrmac
Collaborator

mtrmac commented Aug 10, 2019

Looks like I can still reproduce with the latest coreos-assembler image:

[dustymabe@media ~]$ sudo podman image prune
[dustymabe@media ~]$ sudo podman pull quay.io/coreos-assembler/coreos-assembler:latest 
…
[dustymabe@media ~]$ sudo podman inspect quay.io/coreos-assembler/coreos-assembler:latest | jq '.[]["Digest"]'
"sha256:e4092afc3560bbc5b07c809082a4e526bfa1b6d545c1d276d3298f3b6b416754"
[dustymabe@media ~]$ sudo skopeo inspect docker://quay.io/coreos-assembler/coreos-assembler:latest | jq '.["Digest"]'
"sha256:7344b2012da0733f6faa9981d3144df0f1017ec1bab7943911910ddd20f5d9ed"

Interesting; I can’t reproduce it (with :latest now pointing at @sha256:4c882fb6bf63d9779c4025c711302803f9a85920ed5aea53df9b7833a52e89e7), and an actual pull of quay.io/coreos-assembler/coreos-assembler@sha256:7344b2012da0733f6faa9981d3144df0f1017ec1bab7943911910ddd20f5d9ed fails with manifest unknown for me.

@mtrmac
Collaborator

mtrmac commented Aug 10, 2019

@mtrmac, @nalind, we are recording the digests already in the image but are not exposing them in inspect here.

I don’t think we are; or rather, we do record digests via

s.imageRef.transport.store.SetImageBigData(img.ID, …, s.manifest, manifest.Digest)

but that does not include the repository name, so again RepoDigests would have to guess about the association.

However, I am not sure how we should treat the main Digest. It is set once and not being updated afterwards. Shall we change this behaviour? Once the RepoDigests are fixed, this might be enough?

IIRC the single Digest field is already obsoleted by the Digests array.


I think we want to

  • Teach podman and CRI-O RepoDigests to read values from Names (probably using repo@digest values as is, and keeping ~the current guessing code for repo:tag values if there is no corresponding repo@digest). That will still return incorrect data for pull repo:tag pulling digestA and pull repo@digestB pulling digestB, or indeed for the :digestA/:digestB reproducer as long as the digests are not in Names. Most important is that repo@digest values in Names are returned unmodified / uncorrupted.
  • Then modify c/image to always include a repo@digest value in Names.

That’s not going to fix previously-pulled images; hopefully they are going to be eventually replaced by updated versions, and eventually RepoDigests is going to be 100% accurate.
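
For illustration, a rough sketch of the first bullet above (illustrative names only, not the actual podman or CRI-O code), assuming Names can contain both repo:tag and repo@digest entries: digested names are passed through unmodified, and a recorded digest is only guessed for repositories that have no digested name.

package main

import (
    "fmt"
    "strings"
)

func buildRepoDigests(names []string, recordedDigest string) []string {
    digested := map[string]bool{} // repositories that already appear as repo@digest
    var out []string
    for _, n := range names {
        if i := strings.Index(n, "@sha256:"); i >= 0 {
            out = append(out, n) // use the stored repo@digest value as-is
            digested[n[:i]] = true
        }
    }
    for _, n := range names {
        if strings.Contains(n, "@sha256:") {
            continue // already handled above
        }
        repo := n
        if i := strings.LastIndex(n, ":"); i > strings.LastIndex(n, "/") {
            repo = n[:i] // strip the tag
        }
        if !digested[repo] {
            // fall back to the current guessing behaviour for tag-only names
            out = append(out, repo+"@"+recordedDigest)
        }
    }
    return out
}

func main() {
    names := []string{
        "quay.io/fedora/fedora:30-x86_64",
        "quay.io/fedora/fedora@sha256:5bc93c7ca1c526b2a73f7c97eae15638f40dfef4b44d528f4d0374302fcb9f2b",
    }
    // The digested name is reported verbatim; no guessed digest is invented for the tag.
    fmt.Println(buildRepoDigests(names, "sha256:9a1bdd9da2e0723b1a4f6f016311c3228c0fadb867ef6fdd9496ab665c1d5130"))
}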

@rhatdan
Member

rhatdan commented Aug 12, 2019

@mtrmac Are you going to make a PR to do this in containers/image?

@github-actions

github-actions bot commented Nov 2, 2019

This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.

@vrothberg
Member

@mtrmac, shall we create a dedicated issue for c/image?

@mtrmac
Collaborator

mtrmac commented Nov 6, 2019

#3761 (comment) says edit the consumers first, then add the producing side. Quickly re-reading that, I’m not quite sure that ordering is necessary, but adding extra names with digests can affect the current code manufacturing artificial RepoDigests values, and reviewing that code to make sure it won’t break anything would probably be about as difficult as updating it, so the ordering still seems a good guess to me.

@YuLimin

YuLimin commented Jan 16, 2020

I guess this is related to how

https://github.com/containers/libpod/blob/09cedd152d5c5827520635b10498d15225999e19/libpod/image/image.go#L316
lies. AFAICS, once an image is pulled with one manifest, and later the same image (same “ID” ~ layers+config) is pulled with a different manifest (does not matter whether it is from the same or from a different registry/repo), RepoDigests will report all of those locations using the first digest ever encountered. (Apparently it will do it even for explicit pulls by digest, discarding the true, and actually known and recorded, value!)

We don’t currently even always record the digest used when pulling by tag into Names; if we did, fixing RepoDigests should be fairly easy.

IIRC CRI-O has the same problem. I don’t think it is something structurally baked in, we just never got around to fixing this.

Could the above explain the behavior?

Here is a case of an incorrect Digest value in OCP 4.2.12 with podman:

podman pull quay.io/openshift-release-dev/ocp-release:4.2.12
podman tag quay.io/openshift-release-dev/ocp-release:4.2.12 reg.self/openshift-release-dev/ocp-release:4.2.12
podman push reg.self/openshift-release-dev/ocp-release:4.2.12

After podman pushed the image, it generated a new sha256 value 617ac31a8a7716639486a991b6173f13548d369a702f7774b216950bcbfcb26d in the registry (docker.io/library/registry:2) server directory, e.g. in the /docker/registry/v2/repositories/openshift-release-dev/ocp-release/_manifests/tags/4.2.12/index/sha256/617ac31a8a7716639486a991b6173f13548d369a702f7774b216950bcbfcb26d directory.

But docker generates the correct Digest value:

docker pull quay.io/openshift-release-dev/ocp-release:4.2.12
docker tag quay.io/openshift-release-dev/ocp-release:4.2.12 reg.self/openshift-release-dev/ocp-release:4.2.12
docker push reg.self/openshift-release-dev/ocp-release:4.2.12

"Digest": "sha256:77ade34c373062c6a6c869e0e56ef93b2faaa373adadaac1430b29484a24d843",
docker will generated the correct sha256 Digest.
in the /docker/registry/v2/repositories/openshift-release-dev/ocp-release/_manifests/tags/4.2.12/index/sha256/77ade34c373062c6a6c869e0e56ef93b2faaa373adadaac1430b29484a24d843

@mtrmac
Collaborator

mtrmac commented Jan 16, 2020

@YuLimin That’s unrelated to the issue discussed here, and completely expected behavior (image digests are created during push, and depend on the way the image is compressed, i.e. on the particular version of the implementation and the Go standard library; there is no guarantee at all, ever, that pull+push will reproduce the same digest). Use skopeo copy instead of pull+push to copy images if you want to preserve the original representation.
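
To make the "digests are created during push" point concrete, here is a minimal sketch assuming a schema2/OCI image, whose digest is simply the sha256 of the raw manifest bytes (the manifest.json file name is hypothetical, e.g. a manifest saved via skopeo inspect --raw):

package main

import (
    "crypto/sha256"
    "fmt"
    "os"
)

func main() {
    // manifest.json: a hypothetical local copy of the raw image manifest
    data, err := os.ReadFile("manifest.json")
    if err != nil {
        panic(err)
    }
    // The registry digest is the sha256 of these exact bytes, so re-compressing
    // layers or re-serializing the manifest during push produces a new digest.
    fmt.Printf("sha256:%x\n", sha256.Sum256(data))
}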

@sidkshatriya

I am facing a similar issue with incorrect digests. On my Fedora workstation (fc34), digests listed via podman images --digests do not match up with the digests shown on hub.docker.com. However, when I inspect the image explicitly (using podman inspect) I get 2 RepoDigests in the json: one of the digests is the one that is shown on hub.docker.com, and the other one is the digest that is shown when I do podman images --digests.

Is this expected behavior? Is it normal to have two RepoDigests? Shouldn't the digest shown on hub.docker.com match up with podman images --digests?

Here is an example:

$ podman images --digests
REPOSITORY                     TAG         DIGEST                                                                   IMAGE ID      CREATED       SIZE
...etc...
docker.io/library/debian       latest      sha256:dcb20da8d9d73c9dab5059668852555c171d40cdec297da845da9c929b70e0b1  7a4951775d15  4 weeks ago   119 MB
...etc...
$ podman inspect 7a4951775d15
[                  
    {                        
        "Id": "7a4951775d157843b47250a2a5cc7b561d2abe0b29ae6f19737a04635302eacf",
        "Digest": "sha256:dcb20da8d9d73c9dab5059668852555c171d40cdec297da845da9c929b70e0b1",
        "RepoTags": [
            "docker.io/library/debian:latest"
        ],             
        "RepoDigests": [  
            "docker.io/library/debian@sha256:5625c115ad881f19967a9b66416f8d40710bb307ad607d037f8ad8289260f75f",
            "docker.io/library/debian@sha256:dcb20da8d9d73c9dab5059668852555c171d40cdec297da845da9c929b70e0b1"
        ],          
...

Summary
So here sha256:dcb20da8d9d73c9dab5059668852555c171d40cdec297da845da9c929b70e0b1 shown via podman images --digests does NOT match https://hub.docker.com/layers/debian/library/debian/latest/images/sha256-5625c115ad881f19967a9b66416f8d40710bb307ad607d037f8ad8289260f75f?context=explore

But interestingly the sha256 on hub.docker.com DOES appear in the RepoDigests array in the json (as the first entry).

$ podman --version
podman version 3.2.2

I'm experiencing this with simple popular images from hub.docker.com like hello-world, ubuntu, debian.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@seperman

I'm in the same boat too.

@rhatdan
Member

rhatdan commented Sep 16, 2021

Since @mtrmac says this is fixed in the main branch, I am going to close. Please reopen if you see this problem on podman 3.4 or later.

@rhatdan closed this as completed Sep 16, 2021
@arjuhe

arjuhe commented Oct 5, 2021

I am currently experiencing this on version 3.2.3 on rhel 8.4. Are there any plans to back-port this fix?

@mtrmac
Collaborator

mtrmac commented Oct 5, 2021

In #3761 (comment), I was only reporting that the podman save + podman load reproducer from #3761 (comment) seems to work as expected.

The original bug report is, per #3761 (comment), now behaving differently but not actually fixed. #3761 (comment) is what would need to happen to fix the original bug.

@mtrmac reopened this Oct 5, 2021
@github-actions

A friendly reminder that this issue had no activity for 30 days.

@SarthakGhosh16

SarthakGhosh16 commented Mar 18, 2022

I'm facing the same issue as stated HERE.

podman --version
podman version 3.4.2

I built an image using podman on an env

podman build -t fci-operator:0.0.1-001 .
Successfully tagged fci-operator:0.0.1-001

On checking the digest of the image on the env, I got the result below:

"RepoDigests": [
            "fci-operator@sha256:d514bdd3ca978166dbf1913bf1b2dd0becf106a26bd17276297eade31e495f56"
        ]

After pushing the image to the local repository, I pulled it again using podman, checked the digest, and got this:

podman push fci-operator:0.0.1-001

podman pull fci-operator:0.0.1-001
"RepoDigests": [
            "fci-operator@sha256:b9da1a3ef32058e1451e9686de930048701ac1c06b2ef288185b54248ede9e7c",
            "fci-operator@sha256:d514bdd3ca978166dbf1913bf1b2dd0becf106a26bd17276297eade31e495f56"
        ]

I then pulled the same image using docker and got this digest:

"RepoDigests": [
            "fci-operator@sha256:b9da1a3ef32058e1451e9686de930048701ac1c06b2ef288185b54248ede9e7c"
        ]

I'm not sure why the digests differ.

cat /etc/os-release 
NAME="Red Hat Enterprise Linux"
VERSION="8.5 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.5"

@mtrmac
Collaborator

mtrmac commented Mar 19, 2022

@SarthakGhosh16 That seems reasonable at a first glance; podman build creates an uncompressed representation of the image, podman push writes a compressed one. The two are going to have different digests.
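
A small stand-alone illustration of that point (a sketch only, independent of podman internals): the same layer bytes hash differently uncompressed versus gzipped, so the manifest written by podman push references different blobs than the locally built image, and the resulting image digest differs as well.

package main

import (
    "bytes"
    "compress/gzip"
    "crypto/sha256"
    "fmt"
)

func main() {
    layer := []byte("example layer contents")

    var gz bytes.Buffer
    w := gzip.NewWriter(&gz)
    w.Write(layer)
    w.Close()

    // Uncompressed and gzipped forms of the same content hash differently.
    fmt.Printf("uncompressed: sha256:%x\n", sha256.Sum256(layer))
    fmt.Printf("gzipped:      sha256:%x\n", sha256.Sum256(gz.Bytes()))
}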

@umohnani8
Member

@mheon please find the bugzilla for this and link it here.

@AndreasSko

AndreasSko commented Aug 18, 2022

I'm currently facing the same issue with Podman v4.1.1: depending on which image I pulled first, Podman only shows that image's digest for both. I could reproduce it like this:

# Pull nginx image from Docker Hub
❯ podman pull docker.io/library/nginx:1.23.1
...
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

❯ podman image list --digests
REPOSITORY               TAG         DIGEST                                                                   IMAGE ID      CREATED      SIZE
docker.io/library/nginx  1.23.1      sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79  b692a91e4e15  2 weeks ago  146 MB

# Pull image from quay:
❯ podman pull quay.io/testing-farm/nginx:latest
...
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

# Both images appear to have the same digest
❯ podman image list --digests
REPOSITORY                  TAG         DIGEST                                                                   IMAGE ID      CREATED      SIZE
quay.io/testing-farm/nginx  latest      sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79  b692a91e4e15  2 weeks ago  146 MB
docker.io/library/nginx     1.23.1      sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79  b692a91e4e15  2 weeks ago  146 MB

# Clean up and try the other way around
❯ podman system prune -a

❯ podman pull quay.io/testing-farm/nginx:latest
...
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

# The image from quay actually has a different digest!
❯ podman image list --digests
REPOSITORY                  TAG         DIGEST                                                                   IMAGE ID      CREATED      SIZE
quay.io/testing-farm/nginx  latest      sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261  b692a91e4e15  2 weeks ago  146 MB

❯ podman pull docker.io/library/nginx:1.23.1
Storing signatures
b692a91e4e1582db97076184dae0b2f4a7a86b68c4fe6f91affa50ae06369bf5

# Now the image from Docker Hub appears to have the digest from the quay image
❯ podman image list --digests
REPOSITORY                  TAG         DIGEST                                                                   IMAGE ID      CREATED      SIZE
docker.io/library/nginx     1.23.1      sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261  b692a91e4e15  2 weeks ago  146 MB
quay.io/testing-farm/nginx  latest      sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261  b692a91e4e15  2 weeks ago  146 MB

The problem is: I would like to rely on the output to later pull the image by its digest, like:

❯ podman pull quay.io/testing-farm/nginx@sha256:f26fbadb0acab4a21ecb4e337a326907e61fbec36c9a9b52e725669d99ed1261

However, I currently can't rely on the output of the digest, as depending on the pull order I might try the wrong one:

❯ podman pull quay.io/testing-farm/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79
Trying to pull quay.io/testing-farm/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79...
Error: initializing source docker://quay.io/testing-farm/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79: reading manifest sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79 in quay.io/testing-farm/nginx: manifest unknown: manifest unknown

Which of course doesn't work.

@faern

faern commented Oct 21, 2022

@SarthakGhosh16 That seems reasonable at a first glance; podman build creates an uncompressed representation of the image, podman push writes a compressed one. The two are going to have different digests.

This message gave me a glimpse of hope. So I tried podman build --disable-compression=false, but I still get a different hash compared to the one showing up on ghcr.io.

Is there any way to get the hash of what podman pushes, ugly or not? Because I need to build an image locally, push it to ghcr.io, and then be able to obtain the digest locally so I can sign it and give it to people. The people pulling down the image should not need to trust ghcr.io, but rather only need to trust me and my digest that the image is authentic.

EDIT: adding --digestfile=save-here-plix to podman push will store the digest of what was pushed in save-here-plix 🥳

@mtrmac
Collaborator

mtrmac commented Oct 21, 2022

Yes. Or use the built-in signature support: podman push --sign-by…, podman push --sigstore-sign-by=….

@rchaudha

Any update on this issue?

@rhatdan
Member

rhatdan commented Jan 18, 2023

@mtrmac @vrothberg any update?

@vrothberg
Member

Unfortunately not

@Anniywell

Why hasn't it been fixed?

@rhatdan
Member

rhatdan commented Mar 8, 2023

Lack of time and priority. Are you interested in looking into fixing it?
