Podman's Docker-compatible REST API and previously pulled images. #14291

Closed
petersenna opened this issue May 19, 2022 · 2 comments · Fixed by #14294
Labels: kind/bug, locked - please file new issue/PR

Comments


petersenna commented May 19, 2022

/kind bug
Description
Since Podman 4, when using the Docker-compatible REST API, trying to start a container from a previously pulled image fails because the image does not exist on Docker Hub. More precisely, I get a 404 for docker.io/library/NAME:TAG.

The issue can be worked around by setting compat_api_enforce_docker_hub to false. I can also report that Docker is perfectly happy to start containers from the same API request.
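
For reference, a minimal containers.conf sketch of that workaround (assuming the option belongs in the [engine] section):

# /etc/containers/containers.conf
[engine]
compat_api_enforce_docker_hub = false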

This was originally discussed here

Steps to reproduce the issue:

  1. Log in to a private registry
  2. Pull a container image
  3. Log out of the registry
  4. Try to start the container using the Docker-compatible REST API (a command-level sketch follows)
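
A minimal command-level sketch of these steps (REGISTRY, PATH, NAME, and TAG are the placeholders from the logs below; the create request is shown with a JSON body, equivalent to the query-string form used by the client in the logs):

# 1-3: pull the image while logged in, then log out
podman login REGISTRY
podman pull REGISTRY/PATH/NAME:TAG
podman logout REGISTRY

# 4: expose the Docker-compatible API and try to create a container
podman system service -t 0 tcp:0.0.0.0:2376 &
curl -i -X POST 'http://127.0.0.1:2376/containers/create?name=test' \
     -H 'Content-Type: application/json' \
     -d '{"Image": "NAME:TAG"}'
# With podman 4.0.2 this fails with:
#   404 No such image: docker.io/library/NAME:TAG: image not known
# even though NAME:TAG is present in local containers storage.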

Describe the results you received:
404 on docker.io/library/NAME:TAG

Please note that I removed the environment-variable sections of the requests.

# podman --version
podman version 4.0.2

# podman --log-level=debug system service -t 0 tcp:0.0.0.0:2376
INFO[0000] podman filtering at log level debug
DEBU[0000] Called service.PersistentPreRunE(podman --log-level=debug system service -t 0 tcp:0.0.0.0:2376)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/etc/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is being used
DEBU[0000] Cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 25
DEBU[0000] registered SIGHUP watcher for config
DEBU[0000] CORS Headers were not set
INFO[0000] API service listening on "[::]:2376"
DEBU[0000] waiting for SIGHUP to reload configuration
DEBU[0000] API service(s) shutting down, idle for 0s
DEBU[0000] API service shutdown request ignored as timeout Duration is UnlimitedService
DEBU[0020] IdleTracker:new 0m+0h/0t connection(s)        X-Reference-Id=0xc000011318
DEBU[0020] IdleTracker:new 0m+0h/1t connection(s)        X-Reference-Id=0xc000011320
DEBU[0020] IdleTracker:active 0m+0h/2t connection(s)     X-Reference-Id=0xc000011320
DEBU[0020] IdleTracker:active 1m+0h/2t connection(s)     X-Reference-Id=0xc000011318
DEBU[0020] Looking up image "NAME:TAG" in local containers storage
DEBU[0020] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0020] Trying "NAME:TAG" ...
DEBU[0020] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0020] Looking up image "NAME:TAG" in local containers storage
DEBU[0020] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0020] Trying "NAME:TAG" ...
DEBU[0020] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0020] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0020] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0020] Trying "localhost/NAME:TAG" ...
DEBU[0020] Trying "registry.fedoraproject.org/NAME:TAG" ...
DEBU[0020] Trying "localhost/NAME:TAG" ...
DEBU[0020] Trying "registry.access.redhat.com/NAME:TAG" ...
DEBU[0020] Trying "registry.fedoraproject.org/NAME:TAG" ...
DEBU[0020] Trying "registry.centos.org/NAME:TAG" ...
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Trying "registry.access.redhat.com/NAME:TAG" ...
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Trying "registry.centos.org/NAME:TAG" ...
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Trying "REGISTRY/PATH/NAME:TAG" ...
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@5fb70826285b943f31f0c4b0dc0247d29177941c6e01a99b188bdda633244892"
DEBU[0020] Found image "NAME:TAG" as "REGISTRY/PATH/NAME:TAG" in local containers storage
DEBU[0020] Trying "REGISTRY/PATH/NAME:TAG" ...
DEBU[0020] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@5fb70826285b943f31f0c4b0dc0247d29177941c6e01a99b188bdda633244892"
DEBU[0020] Found image "NAME:TAG" as "REGISTRY/PATH/NAME:TAG" in local containers storage
DEBU[0020] Found image "NAME:TAG" as "REGISTRY/PATH/NAME:TAG" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@5fb70826285b943f31f0c4b0dc0247d29177941c6e01a99b188bdda633244892)
DEBU[0020] Looking up image "docker.io/library/NAME:TAG" in local containers storage
DEBU[0020] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Found image "NAME:TAG" as "REGISTRY/PATH/NAME:TAG" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@5fb70826285b943f31f0c4b0dc0247d29177941c6e01a99b188bdda633244892)
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Looking up image "docker.io/library/NAME:TAG" in local containers storage
DEBU[0020] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
INFO[0020] Request Failed(Not Found): No such image: docker.io/library/NAME:TAG: image not known
DEBU[0020] Trying "docker.io/library/NAME:TAG" ...
INFO[0020] Request Failed(Not Found): No such image: docker.io/library/NAME:TAG: image not known
34.74.239.56 - - [19/May/2022:11:31:19 +0000] "POST /containers/create?Image=NAME%3ATAG&Tty=true&Cmd=%5B%22%2Fopt%2Fbin%2Fentry_point.sh%22%5D&HostConfig=%7B%22Privileged%22%3Atrue%2C%22Binds%22%3A%5B%22%2Fdev%2Fshm%3A%2Fdev%2Fshm%22%2C%22%2Fdata%2Fvideo%3A%2Fvideo%22%2C%22%2Fdata%2Fbrowsers%3A%2Fbrowsers%22%5D%7D&name=overconfident-wish-2&Env=REMOVED_BY_PETER HTTP/1.1" 404 133 "" ""

34.74.239.56 - - [19/May/2022:11:31:19 +0000] "POST /containers/create?Image=NAME%3ATAG&Tty=true&Cmd=%5B%22%2Fopt%2Fbin%2Fentry_point.sh%22%5D&HostConfig=%7B%22Privileged%22%3Atrue%2C%22Binds%22%3A%5B%22%2Fdev%2Fshm%3A%2Fdev%2Fshm%22%2C%22%2Fdata%2Fvideo%3A%2Fvideo%22%2C%22%2Fdata%2Fbrowsers%3A%2Fbrowsers%22%5D%2C%22PortBindings%22%3A%7B%225900%2Ftcp%22%3A%5B%7B%22HostPort%22%3A%225900%22%7D%5D%2C%229229%2Ftcp%22%3A%5B%7B%22HostPort%22%3A%229229%22%7D%5D%7D%7D&name=overconfident-wish-1&Env=REMOVED_BY_PETER HTTP/1.1" 404 133 "" ""
DEBU[0020] IdleTracker:closed 2m+0h/2t connection(s)     X-Reference-Id=0xc000011320
DEBU[0020] IdleTracker:closed 1m+0h/2t connection(s)     X-Reference-Id=0xc000011318
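
Decoded for readability, the first request above carries these parameters (the client passes everything in the query string):

POST /containers/create
  name       = overconfident-wish-2
  Image      = NAME:TAG
  Tty        = true
  Cmd        = ["/opt/bin/entry_point.sh"]
  HostConfig = {"Privileged": true,
                "Binds": ["/dev/shm:/dev/shm", "/data/video:/video", "/data/browsers:/browsers"]}
  Env        = REMOVED_BY_PETER

The second request differs only in the container name (overconfident-wish-1) and an additional HostConfig.PortBindings entry mapping 5900/tcp and 9229/tcp to the same host ports.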

Describe the results you expected:
The container should be created, since the image is found in local containers storage.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 4.0.2

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-1.module_el8.6.0+2877+8e437bf5.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: edfc4e28654b9f8e3597bb8f87c6af099a50261f'
  cpus: 8
  distribution:
    distribution: '"spearlineos"'
    version: "8.6"
  eventLogger: file
  hostname: peter-from-image1
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-372.9.1.el8.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 4423311360
  memTotal: 8340979712
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.0.3-1.module_el8.6.0+2877+8e437bf5.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.3
      spec: 1.0.2-dev
      go: go1.17.7
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-2.module_el8.6.0+2877+8e437bf5.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 2209345536
  swapTotal: 2209345536
  uptime: 1h 18m 47.63s (Approximately 0.04 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1652194338
  BuiltTime: Tue May 10 14:52:18 2022
  GitCommit: ""
  GoVersion: go1.17.7
  OsArch: linux/amd64
  Version: 4.0.2

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.0.2-5.module_el8.6.0+2877+8e437bf5.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

No

Additional environment details (AWS, VirtualBox, physical, etc.):
GCP

@vrothberg (Member)

Thanks for opening the issue, @petersenna.

I can reproduce locally and will prepare a fix.

@vrothberg (Member)

I opened #14294 to fix the bug.

vrothberg added a commit to vrothberg/libpod that referenced this issue May 23, 2022
Fix a bug in the resolution of images in the Docker compat API.
When looking up an image by a short name, the name may match
an image that does not live on Docker Hub.  The resolved name
should be used for normalization instead of the input name to
make sure that `busybox` can resolve to `registry.com/busybox`
if present in the local storage.

Fixes: containers#14291
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
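
In other words (an illustrative sketch of the behavior change, not the patch itself; registry.com is a hypothetical registry):

podman pull registry.com/busybox     # image lives outside Docker Hub
# Compat API lookup of the short name "busybox":
#   before the fix: the input name was normalized to
#     docker.io/library/busybox and failed with "image not known"
#   after the fix:  local resolution runs first, "busybox" matches
#     registry.com/busybox, and that resolved name is used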
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue May 24, 2022
cdoern pushed a commit to cdoern/podman that referenced this issue May 27, 2022
@github-actions bot added the locked - please file new issue/PR label, locked this issue as resolved, and limited conversation to collaborators on Sep 20, 2023.