
Error: copying system image from manifest list: trying to reuse blob */diff: no such file or directory #21810

Open
luckylinux opened this issue Feb 25, 2024 · 29 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@luckylinux

luckylinux commented Feb 25, 2024

Issue Description

I am facing a very weird issue.

My standard folder structure looks like this (the idea behind splitting it into several folders was to make it easier to handle e.g. ZFS-based snapshots and backups):

/home/podman/bin
/home/podman/build
/home/podman/cache
/home/podman/certificates
/home/podman/compose
/home/podman/config
/home/podman/containers
/home/podman/data
/home/podman/images
/home/podman/local
/home/podman/log
/home/podman/root
/home/podman/run
/home/podman/secrets
/home/podman/storage
/home/podman/tmp
/home/podman/volumes

The issue does NOT show up with ZFS. Everything seems to work fine there, with zdata/PODMAN/ mounted with --rbind to /home/podman/:

zdata/PODMAN on /zdata/PODMAN type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/BUILD on /zdata/PODMAN/BUILD type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/CACHE on /zdata/PODMAN/CACHE type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/DATA on /zdata/PODMAN/DATA type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/LOG on /zdata/PODMAN/LOG type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/STORAGE on /zdata/PODMAN/STORAGE type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/IMAGES on /zdata/PODMAN/IMAGES type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/CONFIG on /zdata/PODMAN/CONFIG type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/COMPOSE on /zdata/PODMAN/COMPOSE type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/CERTIFICATES on /zdata/PODMAN/CERTIFICATES type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/VOLUMES on /zdata/PODMAN/VOLUMES type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/SECRETS on /zdata/PODMAN/SECRETS type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/LOCAL on /zdata/PODMAN/LOCAL type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/ROOT on /zdata/PODMAN/ROOT type zfs (rw,noatime,xattr,noacl,casesensitive)
zdata/PODMAN/CONFIG on /home/podman/.config/containers type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/STORAGE on /home/podman/storage type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/BUILD on /home/podman/build type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/CERTIFICATES on /home/podman/certificates type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/COMPOSE on /home/podman/compose type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/CONFIG on /home/podman/config type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/LOG on /home/podman/log type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/ROOT on /home/podman/root type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/DATA on /home/podman/data type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/IMAGES on /home/podman/images type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/VOLUMES on /home/podman/volumes type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/CACHE on /home/podman/cache type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/LOCAL on /home/podman/local type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
zdata/PODMAN/SECRETS on /home/podman/secrets type zfs (rw,noatime,xattr,noacl,casesensitive,x-systemd.automount)
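For illustration, one such mapping expressed as an fstab entry might look roughly like this (a sketch; the actual entries/units on my systems are generated by my setup scripts and are not shown here):

# /etc/fstab (sketch): bind one ZFS dataset into the podman home
/zdata/PODMAN/STORAGE  /home/podman/storage  none  rbind,x-systemd.automount  0  0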

The issue appears only on EXT4.
There, no mount --rbind is used; the directories are just plain folders within the user home directory, with the same layout as listed above.

When trying to pull some images (redis:alpine, redis:bookworm, possibly also headscale and headscale-ui, I am not sure), they usually fail with the following messages:

podman@<SERVERNAME>:~$ podman pull redis:alpine
✔ docker.io/library/redis:alpine
Trying to pull docker.io/library/redis:alpine...
Getting image source signatures
Copying blob 2afe905a8615 skipped: already exists  
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 128c39a261ff skipped: already exists  
Copying blob f27432e97d04 skipped: already exists  
Copying blob 33486cc813b5 skipped: already exists  
Copying blob d29554ca490b skipped: already exists  
Copying blob 4f4fb700ef54 done   | 
Copying blob 5cb59ee00f00 done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": creating read-only layer with ID "1234f0cb2a613f6c85750af636ba97719670085a675695486429aef9c7530373": Stat /home/podman/storage/overlay/f5e4e1e17cc9ce821cce6f8fea6123439c6c36904d180a84075cffc3dc85483c/diff: no such file or directory

Debug level log: https://pastebin.com/F65rwZuU
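For diagnosis, a few checks of what is actually on disk for the layer named in the error (assuming the usual containers/storage layout: each overlay layer lives in <graphRoot>/overlay/<id>/ with a diff/ subdirectory, and layer metadata sits in overlay-layers/layers.json):

ls -ld /home/podman/storage/overlay/f5e4e1e17cc9*         # does the layer directory exist at all?
ls /home/podman/storage/overlay/f5e4e1e17cc9*/            # expected contents include: diff/ link lower work
grep -o 'f5e4e1e17cc9[0-9a-f]*' /home/podman/storage/overlay-layers/layers.json   # is the ID still recorded?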

I also tried podman system reset, which partly failed to delete the storage folder:

podman@<SERVERNAME>:~/compose/authentik$ podman system reset
WARNING! This will remove:
        - all containers
        - all pods
        - all images
        - all networks
        - all build cache
        - all machines
        - all volumes
        - the graphRoot directory: "/home/podman/storage"
        - the runRoot directory: "/run/user/1001"
Are you sure you want to continue? [y/N] Y

ERRO[0175] unlinkat /home/podman/storage/overlay/e8378ab0015b2be72773b263dc7148dcc79b2cd263832b3e5c4929d04034f641/merged: device or resource busy 
 A "/home/podman/.config/containers/storage.conf" config file exists.
Remove this file if you did not modify the configuration.
ERRO[0175] failed to remove runtime root dir /run/user/1001, since it is the same as XDG_RUNTIME_DIR 
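When reset leaves busy overlay mounts behind like this, a manual cleanup along these lines may help (a sketch; it assumes fuse-overlayfs mounts, as used here, and a rootless user namespace):

# unmount any leftover fuse-overlayfs mounts under the graphRoot
grep fuse.fuse-overlayfs /proc/self/mounts | awk '{print $2}' | xargs -r -n1 fusermount3 -u
# remove the remains from inside the rootless user namespace,
# so that files owned by sub-UIDs (here 165536+) can be deleted
podman unshare rm -rf /home/podman/storage/overlay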

Steps to reproduce the issue

  1. Install Podman on an ext4 file system with the directory structure listed above
    a. This can be done mostly automatically using my helper script: https://github.com/luckylinux/podman-tools
    b. ./setup_podman_debian.sh "podman" "dir" "/home/podman"
  2. Run podman pull redis:alpine (a rough manual equivalent of step 1 is sketched below)
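For reference, a rough manual equivalent of what the script configures (a sketch; the assumption is that the decisive setting is graphroot, and the overlay options are copied from the podman info output further down):

mkdir -p /home/podman/storage /home/podman/tmp ~/.config/containers
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"
graphroot = "/home/podman/storage"
runroot = "/run/user/1001"

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
mountopt = "nodev,metacopy=on"
EOF
podman pull redis:alpine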

Describe the results you received

Same pull output and debug-level log (https://pastebin.com/F65rwZuU) as shown above.

Describe the results you expected

Podman pulling the redis:alpine image (and others) normally.

Putting the graphRoot storage folder inside a subfolder of the user home directory apparently works correctly (a way to verify the active graphRoot is shown after this list):

  • /home/podman/containers/storage
  • /home/podman/.local/share/containers/storage
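One way to confirm which graphRoot is actually active after changing storage.conf:

podman info --format '{{.Store.GraphRoot}}'
# with the workaround layout this should print e.g. /home/podman/containers/storage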

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.5
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.10+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 72.8
    systemPercent: 17.26
    userPercent: 9.94
  cpus: 1
  databaseBackend: sqlite
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 2048
  hostname: ra
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.1.0-18-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 1767870464
  memTotal: 2012446720
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-5_amd64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-3_amd64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: crun_1.14.1-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.1
      commit: de537a7965bfbe9992e2cfae0baeb56a08128171
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20230309.7c7625d-1_amd64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 0h 0m 19.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs_1.13-1_amd64
      Version: |-
        fusermount3 version: 3.14.0
        fuse-overlayfs: version 1.13-dev
        FUSE library version 3.14.0
        using FUSE kernel interface version 7.31
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /home/podman/storage
  graphRootAllocated: 18969468928
  graphRootUsed: 4682727424
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /home/podman/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1001
  transientStore: false
  volumePath: /home/podman/storage/volumes
version:
  APIVersion: 4.9.3
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.21.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.3

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

VPS on KVM AMD64.

Debian Bookworm 12 with Podman 4.9.3 pinned from Debian Testing/Trixie.

Additional information

I quickly tested on my local KVM host (Proxmox VE) with ZFS-based storage for Podman: podman pull redis:alpine works correctly there.

@luckylinux luckylinux added the kind/bug Categorizes issue or PR as related to a bug. label Feb 25, 2024
luckylinux pushed a commit to luckylinux/podman-tools that referenced this issue Feb 25, 2024
…fault folder to /home/podman/containers for "dir" mode (related to containers/podman#21810)
@luckylinux
Author

luckylinux commented Feb 25, 2024

And now, when I tried to replicate the same directory structure on my ZFS-based installations, I encountered the issue when the folders are mounted at e.g. /home/podman/containers/{compose,config,storage,images,volumes,...}.

podman@Rock5B-02:~$ podman pull redis:alpine
✔ docker.io/library/redis:alpine
Trying to pull docker.io/library/redis:alpine...
Getting image source signatures
Copying blob bca4290a9639 skipped: already exists  
Copying blob ac791973e295 skipped: already exists  
Copying blob 2bf8baaf8aab skipped: already exists  
Copying blob dbbc6ec9f2f5 skipped: already exists  
Copying blob e214c39e91b1 skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 2768c9034bfe done   | 
Error: copying system image from manifest list: trying to reuse blob sha256:e17209f7d6fed36b5abfc43936503a79fc8acd176364e78a25825d521db008c7 at destination: reading layer "a045e7de0e0a748501984bba6610ba2efc026d6c38e8cadbe27ac12f553b4931" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/storage/overlay/a045e7de0e0a748501984bba6610ba2efc026d6c38e8cadbe27ac12f553b4931/diff: no such file or directory

What the hell ????

Update: And of course on another server the error does NOT occur with ZFS and the new Folder Structure.

@luckylinux
Author

luckylinux commented Feb 25, 2024

And for headscale-ui yet another error:

podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.9.3
** excluding:  set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=headscale', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
recreating: ...
** excluding:  set()
podman stop -t 10 headscale-ui
Error: no container with name or ID "headscale-ui" found: no such container
exit code: 125
podman stop -t 10 headscale
headscale
exit code: 0
podman rm headscale-ui
Error: no container with ID or name "headscale-ui" found: no such container
exit code: 1
podman rm headscale
headscale
exit code: 0
recreating: done


['podman', 'network', 'exists', 'traefik']
podman run --name=headscale -d --label traefik.enable=true --label traefik.http.routers.headscale-rtr.rule=PathPrefix(`/`) && Host(`headscale.MYDOMAIN.TLD.TLD`) --label traefik.http.services.headscale-svc.loadbalancer.server.port=8080 --label io.podman.compose.config-hash=66e38f57d69b8f9dc2b542d2f4b011a84313ba4d23cd3f7900da773273fcd581 --label io.podman.compose.project=headscale --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@headscale.service --label com.docker.compose.project=headscale --label com.docker.compose.project.working_dir=/home/podman/containers/compose/headscale --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=headscale -v /home/podman/containers/config/headscale:/etc/headscale -v /home/podman/containers/data/headscale:/var/lib/headscale -v /home/podman/containers/run/headscale:/var/run/headscale --net traefik --network-alias headscale --hostname headscale.MYDOMAIN.TLD.TLD --pull always --restart unless-stopped headscale/headscale:latest-alpine headscale serve
Trying to pull docker.io/headscale/headscale:latest-alpine...
Getting image source signatures
Copying blob 3e1c99d88867 skipped: already exists  
Copying blob ca7dd9ec2225 skipped: already exists  
Copying config ff699c4c37 done   | 
Writing manifest to image destination
1a5a5ec9637b5343729d34cf5783983a1d33c1264a6013e616c8f0c9e0e6ff13
exit code: 0
['podman', 'network', 'exists', 'traefik']
podman run --name=headscale-ui -d --requires=headscale --label traefik.enable=true --label traefik.http.routers.headscale-ui-rtr.rule=PathPrefix(`/web`) && Host(`headscale.MYDOMAIN.TLD.TLD`) --label traefik.http.routers.headscale-rtr.middlewares=auth-headscale-ui --label traefik.http.services.headscale-ui-svc.loadbalancer.server.port=80 --label traefik.http.middlewares.auth-headscale-ui.basicauth.usersfile=/config/users --label io.podman.compose.config-hash=66e38f57d69b8f9dc2b542d2f4b011a84313ba4d23cd3f7900da773273fcd581 --label io.podman.compose.project=headscale --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@headscale.service --label com.docker.compose.project=headscale --label com.docker.compose.project.working_dir=/home/podman/containers/compose/headscale --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=headscale-ui --net traefik --network-alias headscale-ui --hostname headscale.MYDOMAIN.TLD.TLD --pull always --restart unless-stopped ghcr.io/gurucomputing/headscale-ui:latest
Trying to pull ghcr.io/gurucomputing/headscale-ui:latest...
Getting image source signatures
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 7144c1805df2 skipped: already exists  
Copying blob 855024e2e210 skipped: already exists  
Copying blob 9ebf39365dcc skipped: already exists  
Copying blob 1b11e799bcea done   | 
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob f1e553f18f88 done   | 
Copying blob e770faca4e36 done   | 
Copying blob d89adfd8f9bd done   | 
Copying blob a0747d8072e8 done   | 
Copying blob 88402d33207c done   | 
Copying blob 119faa53dd31 done   | 
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob c84b632129c2 done   | 
Error: copying system image from manifest list: trying to reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 at destination: reading layer "1234f0cb2a613f6c85750af636ba97719670085a675695486429aef9c7530373" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/f5e4e1e17cc9ce821cce6f8fea6123439c6c36904d180a84075cffc3dc85483c/diff: no such file or directory


exit code: 125
podman start headscale-ui
Error: no container with name or ID "headscale-ui" found: no such container
exit code: 125
****

This was tested on another VPS and the issue could be reproduced.

However, as soon as I remove the redis image, I can install headscale-ui.

So I can install either headscale-ui or redis, NOT both!
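Worth noting: both failing pulls trip over the same blob, sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1, so the conflict appears to involve a layer the two images share. One way to see how that digest is recorded locally (the layers.json path assumes the layout used here):

grep -o '4f4fb700ef54[0-9a-f]*' /home/podman/containers/storage/overlay-layers/layers.json | sort | uniq -c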

@luckylinux
Author

luckylinux commented Feb 29, 2024

Renaming the issue since it seems more general than home root folder data storage.

Old title: Home folder ~/storage folder for graphRoot causes issues with some containers (e.g. redis:alpine)
New title: Error: copying system image from manifest list: trying to reuse blob */diff: no such file or directory

@luckylinux luckylinux changed the title Home folder ~/storage folder for graphRoot causes issues with **some** containers (e.g. redis:alpine) Error: copying system image from manifest list: trying to reuse blob */diff: no such file or directory Feb 29, 2024
@rhatdan
Member

rhatdan commented Feb 29, 2024

@giuseppe PTAL

@giuseppe
Member

giuseppe commented Mar 1, 2024

I've tried your script on Debian and I encountered a lot of errors, the last one being:

./setup_tools_autoupdate_service.sh: line 23: syntax error in conditional expression

and it stops execution.

Can you simplify the reproducer to not require such a complex configuration?

Is it enough to override some of the paths used by Podman?

Can you reproduce it on Fedora?

@luckylinux
Author

luckylinux commented Mar 1, 2024

I've tried your script on Debian and I encountered a lot of errors, the last one being:

./setup_tools_autoupdate_service.sh: line 23: syntax error in conditional expression

I'll have to check what went wrong there. I'm using this script myself to remember what I did / what to do, as there were (and are) several steps required to get Podman rootless working. It's NOT (or at least was not) plug & play.

Can you simplify the reproducer to not require such a complex configuration?

Well again, this is not required per se; it was just what I used, minus the syntax error.

Is it enough to override some of the paths used by Podman?

The latest findings seem to indicate that it's not necessarily caused by the folder location: /home/podman/storage works in some cases and not in others, and the same goes for /home/podman/containers/storage.

But yeah ... it might be enough to change the storage / graphRoot path, as that seems to be where the problem occurs with overlay and overlay-images.

Can you reproduce it on Fedora?

I don't know; I don't use Fedora (and IIRC Fedora causes more issues related to SELinux being enabled by default, with which I have no experience, since I last used Fedora maybe 15 years ago ...)

@luckylinux
Author

OK, there was indeed a syntax error (a missing "]" in an "if" statement opened with "[[") at line 23 of ./setup_tools_autoupdate_service.sh.

I also updated setup_podman_debian.sh to invoke some functions AFTER the user name is defined based on command line arguments.

Fixed in commit e8c169e, which I just pushed.

@luckylinux
Author

I just re-ran podman-compose up -d (the log below is from the zerotier & zero-ui project).

podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.9.3
** excluding:  set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=zerotier', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
['podman', 'network', 'exists', 'zerotier_default']
podman run --name=zerotier-controller -d --label io.podman.compose.config-hash=64cb0833af58c2859dbfbc305d17bf09f42f3487a9780b73e8ea1ec3a50de667 --label io.podman.compose.project=zerotier --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@zerotier.service --label com.docker.compose.project=zerotier --label com.docker.compose.project.working_dir=/home/podman/containers/compose/zerotier --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=zerotier -e ZT_OVERRIDE_LOCAL_CONF=true -e ZT_ALLOW_MANAGEMENT_FROM=0.0.0.0/0 -v /home/podman/containers/data/zerotier-controller:/var/lib/zerotier-one --net zerotier_default --network-alias zerotier --expose 9993/tcp -p 9993:9993/udp --restart unless-stopped zyclonite/zerotier:latest
Error: creating container storage: the container name "zerotier-controller" is already in use by 59c03ef741df6ed4f420a0ce2a5fe107fb393b852270ca13f8c36b995b46c454. You have to remove that container to be able to reuse that name: that name is already in use, or use --replace to instruct Podman to do so.
exit code: 125
podman start zerotier-controller
exit code: 0
['podman', 'network', 'exists', 'zerotier_default']
podman run --name=zerotier-ui -d --label traefik.enable=true --label traefik.http.routers.zerotier-ui-rtr.rule=PathPrefix(`/web`) && Host(`zerotier.MYDOMAIN.TLD`) --label traefik.http.routers.zerotier-ui-rtr.middlewares=zerotier-ui --label traefik.http.services.zerotier-ui-svc.loadbalancer.server.port=4000 --label io.podman.compose.config-hash=64cb0833af58c2859dbfbc305d17bf09f42f3487a9780b73e8ea1ec3a50de667 --label io.podman.compose.project=zerotier --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@zerotier.service --label com.docker.compose.project=zerotier --label com.docker.compose.project.working_dir=/home/podman/containers/compose/zerotier --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=zero-ui -e ZU_CONTROLLER_ENDPOINT=http://zerotier-controller:9993/ -e ZU_SECURE_HEADERS=true -e ZU_DEFAULT_USERNAME=admin -e ZU_DEFAULT_PASSWORD=zero-ui -v /home/podman/containers/data/zerotier-controller:/var/lib/zerotier-one -v /home/podman/containers/data/zerotier-ui:/app/backend/data --net zerotier_default --network-alias zero-ui --expose 4000 --restart unless-stopped dec0dos/zero-ui:latest
✔ docker.io/dec0dos/zero-ui:latest
Trying to pull docker.io/dec0dos/zero-ui:latest...
Getting image source signatures
Copying blob 6c7c9abe4a1e skipped: already exists  
Copying blob 96526aa774ef skipped: already exists  
Copying blob 824de1d006d4 skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 76c34934b331 skipped: already exists  
Copying blob fdef87f136ff skipped: already exists  
Copying blob 6d9f518b8a5b skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 078acbcb3c68 done   | 
Copying blob 133eb9bb812b done   | 
Copying blob ea0d0d2be7d6 done   | 
Copying blob cab857d07113 done   | 
Copying blob e154d4509c74 done   | 
Error: copying system image from manifest list: trying to reuse blob sha256:4ee50a7ce84e2979ecdd656fd0d5ccc6a1e1ab0f9377120b30f50a9395f71f93 at destination: reading layer "fc6cc8571524708ce6078f609eb407f5ddd20be0ea58ef7fd96f3906f3566e66" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/2c2f56a504856860ddea25ec3c23af6e1e3a6a7d92ac8d08e2d7d9e09c579e5c/diff: no such file or directory


exit code: 125
podman start zerotier-ui
Error: no container with name or ID "zerotier-ui" found: no such container
exit code: 125

Seems the path is quite long?

/home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/2c2f56a504856860ddea25ec3c23af6e1e3a6a7d92ac8d08e2d7d9e09c579e5c/diff -> 152 characters

This part seems normal: /home/podman/containers/images/

But the remainder looks like two absolute paths joined together: overlay/link followed by the full graphRoot path /home/podman/containers/storage/overlay/<id>/diff. So it may be less a length problem (152 characters is far below any filesystem path limit) and more a symlink or path-join problem between the image store and the graphRoot.

Not sure what the issue is.
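A sanity scan for inconsistencies in the two stores (assuming the usual containers/storage overlay layout, i.e. <store>/overlay/<layer-id>/diff plus a short-name symlink farm in <store>/overlay/l):

for store in /home/podman/containers/storage /home/podman/containers/images; do
  echo "== $store =="
  for d in "$store"/overlay/*/; do
    case "$d" in */overlay/l/) continue;; esac   # skip the symlink farm itself
    [ -e "${d}diff" ] || echo "missing diff: $d"
  done
  find "$store/overlay/l" -xtype l 2>/dev/null   # dangling short-name symlinks
done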

podman@ServerB:~/containers/compose/zerotier$ podman run --name=zerotier-ui --log-level=debug dec0dos/zero-ui:latest
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --name=zerotier-ui --log-level=debug dec0dos/zero-ui:latest) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
INFO[0000] Using sqlite as database backend             
DEBU[0000] systemd-logind: Unknown object '/'.          
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/podman/containers/storage 
DEBU[0000] Using run root /run/user/1001                
DEBU[0000] Using static dir /home/podman/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/podman/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] overlay: imagestore=/home/podman/containers/storage 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 4              
DEBU[0000] Successfully loaded network traefik: &{traefik 8781be2fadcac3609b23efa839de1a015df56c8825cc04da26c4e8260b70670f bridge podman1 2024-02-25 15:28:04.904726588 +0100 CET [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[] map[] map[driver:host-local]} 
DEBU[0000] Successfully loaded network zerotier_default: &{zerotier_default a6e1e743bed72fe4279393729b8cc7a50296ef32ff16a3cefdfdc654e7d66b3d bridge podman2 2024-02-29 17:08:24.419599995 +0100 CET [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] [] false false true [] map[com.docker.compose.project:zerotier io.podman.compose.project:zerotier] map[] map[driver:host-local]} 
DEBU[0000] Successfully loaded 3 networks               
DEBU[0000] Pulling image dec0dos/zero-ui:latest (policy: missing) 
DEBU[0000] Looking up image "dec0dos/zero-ui:latest" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Loading registries configuration "/home/podman/.config/containers/registries.conf" 
DEBU[0000] Trying "localhost/dec0dos/zero-ui:latest" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]localhost/dec0dos/zero-ui:latest" does not resolve to an image ID 
DEBU[0000] Trying "registry.fedoraproject.org/dec0dos/zero-ui:latest" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]registry.fedoraproject.org/dec0dos/zero-ui:latest" does not resolve to an image ID 
DEBU[0000] Trying "registry.access.redhat.com/dec0dos/zero-ui:latest" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]registry.access.redhat.com/dec0dos/zero-ui:latest" does not resolve to an image ID 
DEBU[0000] Trying "docker.io/dec0dos/zero-ui:latest" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]docker.io/dec0dos/zero-ui:latest" does not resolve to an image ID 
DEBU[0000] Trying "quay.io/dec0dos/zero-ui:latest" ...  
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]quay.io/dec0dos/zero-ui:latest" does not resolve to an image ID 
DEBU[0000] Trying "docker.io/dec0dos/zero-ui:latest" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]docker.io/dec0dos/zero-ui:latest" does not resolve to an image ID 
DEBU[0000] Trying "dec0dos/zero-ui:latest" ...          
✔ docker.io/dec0dos/zero-ui:latest
DEBU[0001] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0001] Attempting to pull candidate docker.io/dec0dos/zero-ui:latest for dec0dos/zero-ui:latest 
DEBU[0001] parsed reference into "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]docker.io/dec0dos/zero-ui:latest" 
Trying to pull docker.io/dec0dos/zero-ui:latest...
DEBU[0001] Copying source image //dec0dos/zero-ui:latest to destination image [overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,metacopy=on]docker.io/dec0dos/zero-ui:latest 
DEBU[0001] Using registries.d directory /etc/containers/registries.d 
DEBU[0001] Trying to access "docker.io/dec0dos/zero-ui:latest" 
DEBU[0001] No credentials matching docker.io/dec0dos/zero-ui found in /run/user/1001/containers/auth.json 
DEBU[0001] No credentials matching docker.io/dec0dos/zero-ui found in /home/podman/.config/containers/auth.json 
DEBU[0001] No credentials matching docker.io/dec0dos/zero-ui found in /home/podman/.docker/config.json 
DEBU[0001] No credentials matching docker.io/dec0dos/zero-ui found in /home/podman/.dockercfg 
DEBU[0001] No credentials for docker.io/dec0dos/zero-ui found 
DEBU[0001]  No signature storage configuration found for docker.io/dec0dos/zero-ui:latest, using built-in default file:///home/podman/.local/share/containers/sigstore 
DEBU[0001] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io 
DEBU[0001] GET https://registry-1.docker.io/v2/         
DEBU[0002] Ping https://registry-1.docker.io/v2/ status 401 
DEBU[0002] GET https://auth.docker.io/token?scope=repository%3Adec0dos%2Fzero-ui%3Apull&service=registry.docker.io 
DEBU[0002] GET https://registry-1.docker.io/v2/dec0dos/zero-ui/manifests/latest 
DEBU[0002] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.list.v2+json" 
DEBU[0002] Using SQLite blob info cache at /home/podman/.local/share/containers/cache/blob-info-cache-v1.sqlite 
DEBU[0002] Source is a manifest list; copying (only) instance sha256:d2a5fae1daff1dd9861623967fa7221dccf4d376cb23596c6fd469f1d8cc2c2a for current system 
DEBU[0002] GET https://registry-1.docker.io/v2/dec0dos/zero-ui/manifests/sha256:d2a5fae1daff1dd9861623967fa7221dccf4d376cb23596c6fd469f1d8cc2c2a 
DEBU[0003] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.v2+json" 
DEBU[0003] IsRunningImageAllowed for image docker:docker.io/dec0dos/zero-ui:latest 
DEBU[0003]  Using default policy section                
DEBU[0003]  Requirement 0: allowed                      
DEBU[0003] Overall: allowed                             
DEBU[0003] Downloading /v2/dec0dos/zero-ui/blobs/sha256:aec81cb63f9cd23c0ae32b05b09fe4450da979ef9b9093b4894e245cee57a968 
DEBU[0003] GET https://registry-1.docker.io/v2/dec0dos/zero-ui/blobs/sha256:aec81cb63f9cd23c0ae32b05b09fe4450da979ef9b9093b4894e245cee57a968 
Getting image source signatures
DEBU[0003] Reading /home/podman/.local/share/containers/sigstore/dec0dos/zero-ui@sha256=d2a5fae1daff1dd9861623967fa7221dccf4d376cb23596c6fd469f1d8cc2c2a/signature-1 
DEBU[0003] Not looking for sigstore attachments: disabled by configuration 
DEBU[0003] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0003] ... will first try using the original manifest unmodified 
DEBU[0003] Checking if we can reuse blob sha256:6c7c9abe4a1e230d5cd151926793ca21eff1eba812bb9ad02b09db0aeebbc287: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:6c7c9abe4a1e230d5cd151926793ca21eff1eba812bb9ad02b09db0aeebbc287 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:96526aa774ef0126ad0fe9e9a95764c5fc37f409ab9e97021e7b4775d82bf6fa: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:96526aa774ef0126ad0fe9e9a95764c5fc37f409ab9e97021e7b4775d82bf6fa (already present): 
DEBU[0003] Checking if we can reuse blob sha256:824de1d006d492e037a03312c272427b62e171607bc6fb0e7db991b2eda190b7: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:824de1d006d492e037a03312c272427b62e171607bc6fb0e7db991b2eda190b7 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:76c34934b3319c116bffd4685ee2bcb032298ccce64b89b7081ec18ffdbd6175: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:76c34934b3319c116bffd4685ee2bcb032298ccce64b89b7081ec18ffdbd6175 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:fdef87f136ff3e0ba0d622cf3a573aebe6891d240d7dd55799ac0fd30a706db4: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:fdef87f136ff3e0ba0d622cf3a573aebe6891d240d7dd55799ac0fd30a706db4 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:6d9f518b8a5bd4cefa3974341d70e9328a5aff000a50d2513ce1d2b30bbb2469: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:6d9f518b8a5bd4cefa3974341d70e9328a5aff000a50d2513ce1d2b30bbb2469 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:4ee50a7ce84e2979ecdd656fd0d5ccc6a1e1ab0f9377120b30f50a9395f71f93: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:4ee50a7ce84e2979ecdd656fd0d5ccc6a1e1ab0f9377120b30f50a9395f71f93 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Checking if we can reuse blob sha256:078acbcb3c68949ae94e09ae28130b426354f0484366c99911b2cf4f1d6c82fd: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Checking if we can reuse blob sha256:133eb9bb812b922c25b073bb272fffd6fa6c4bdc4a8e4a13ed58cbe5d83944ea: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Checking if we can reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Skipping blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 (already present): 
DEBU[0003] Checking if we can reuse blob sha256:ea0d0d2be7d62c74e6b225f2ac83afd943d8c1c9a6cf41fe9aa342bc6d8e0012: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Checking if we can reuse blob sha256:cab857d07113e4ce79f705e565b9dab2b92237a7bf5506329db455a47b4f86ab: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Checking if we can reuse blob sha256:e154d4509c7422731e7381b0b8983b9392642f02b9f866e7ec19f2e9bb92ec5f: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Failed to retrieve partial blob: convert_images not configured 
DEBU[0003] Failed to retrieve partial blob: convert_images not configured 
Copying blob 6c7c9abe4a1e skipped: already exists  
Copying blob 96526aa774ef skipped: already exists  
Copying blob 824de1d006d4 skipped: already exists  
Copying blob 76c34934b331 skipped: already exists  
Copying blob fdef87f136ff skipped: already exists  
Copying blob 6d9f518b8a5b skipped: already exists  
Copying blob 4ee50a7ce84e skipped: already exists  
Copying blob 078acbcb3c68 [--------------------------------------] 0.0b / 102.9KiB (skipped: 0.0b = 0.00%)
Copying blob 133eb9bb812b [--------------------------------------] 0.0b / 959.6KiB (skipped: 0.0b = 0.00%)
Copying blob 4f4fb700ef54 skipped: already exists  
DEBU[0003] Failed to retrieve partial blob: convert_images not configured 
DEBU[0003] Failed to retrieve partial blob: convert_images not configured 
DEBU[0003] Failed to retrieve partial blob: convert_images not configured 
Copying blob 6c7c9abe4a1e skipped: already exists  
Copying blob 96526aa774ef skipped: already exists  
Copying blob 824de1d006d4 skipped: already exists  
Copying blob 76c34934b331 skipped: already exists  
Copying blob fdef87f136ff skipped: already exists  
Copying blob 6d9f518b8a5b skipped: already exists  
Copying blob 6c7c9abe4a1e skipped: already exists  
Copying blob 96526aa774ef skipped: already exists  
Copying blob 6c7c9abe4a1e skipped: already exists  
Copying blob 96526aa774ef skipped: already exists  
Copying blob 6c7c9abe4a1e skipped: already exists  
Copying blob 96526aa774ef skipped: already exists  
Copying blob 824de1d006d4 skipped: already exists  
Copying blob 76c34934b331 skipped: already exists  
Copying blob fdef87f136ff skipped: already exists  
Copying blob 6d9f518b8a5b skipped: already exists  
Copying blob 4ee50a7ce84e skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 078acbcb3c68 done   | 
Copying blob 133eb9bb812b done   | 
Copying blob ea0d0d2be7d6 done   | 
Copying blob cab857d07113 done   | 
Copying blob e154d4509c74 done   | 
DEBU[0005] Error pulling candidate docker.io/dec0dos/zero-ui:latest: copying system image from manifest list: trying to reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 at destination: reading layer "fc6cc8571524708ce6078f609eb407f5ddd20be0ea58ef7fd96f3906f3566e66" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/2c2f56a504856860ddea25ec3c23af6e1e3a6a7d92ac8d08e2d7d9e09c579e5c/diff: no such file or directory
 
Error: copying system image from manifest list: trying to reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 at destination: reading layer "fc6cc8571524708ce6078f609eb407f5ddd20be0ea58ef7fd96f3906f3566e66" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/2c2f56a504856860ddea25ec3c23af6e1e3a6a7d92ac8d08e2d7d9e09c579e5c/diff: no such file or directory


DEBU[0005] Shutting down engines 

@luckylinux
Author

OK, I acknowledge there were several errors, as you reported, @giuseppe.

Hopefully I have fixed most of them in my latest commits, but I ran into a new error that I had never seen before.

Keep in mind this is on a Raspberry Pi 2 (armv7l), so it might also be related to the 32-bit ARM instruction set (even more limited than armhf; yes, the Raspberry Pi architecture naming is confusing).
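(A quick way to confirm what kernel architecture and Debian userland ABI such a board actually runs:)

uname -m                   # kernel architecture, e.g. armv7l
dpkg --print-architecture  # Debian userland ABI, e.g. armhf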

root@RaspberryPI-Flashed:/tools_local/podman-tools # ./setup_podman_debian.sh "podman" "dir" "/home/podman/containers"
useradd: user 'podman' already exists
passwd: password changed.
New password: 
Retype new password: 
passwd: password updated successfully
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
sudo is already the newest version (1.9.13p3-1+deb12u1).
aptitude is already the newest version (0.8.13-5).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
podman is already the newest version (4.3.1+ds1-8).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
python3 is already the newest version (3.11.2-1).
python3-pip is already the newest version (23.0.1+dfsg-1+rpt1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Please add <systemd.unified_cgroup_hierarchy=1> to /etc/default/kernel-cmdline or /etc/default/grub
Press ENTER once ready
--2024-03-02 20:11:35--  https://src.fedoraproject.org/rpms/containers-common/raw/main/f/storage.conf
Resolving src.fedoraproject.org (src.fedoraproject.org)... 38.145.60.20, 38.145.60.21
Connecting to src.fedoraproject.org (src.fedoraproject.org)|38.145.60.20|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10899 (11K) [text/plain]
Saving to: ‘storage.conf’

storage.conf                                                100%[==========================================================================================================================================>]  10.64K  --.-KB/s    in 0.002s  

2024-03-02 20:11:36 (5.30 MB/s) - ‘storage.conf’ saved [10899/10899]

--2024-03-02 20:11:36--  https://src.fedoraproject.org/rpms/containers-common/raw/main/f/registries.conf
Resolving src.fedoraproject.org (src.fedoraproject.org)... 38.145.60.20, 38.145.60.21
Connecting to src.fedoraproject.org (src.fedoraproject.org)|38.145.60.20|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3895 (3.8K) [text/plain]
Saving to: ‘registries.conf’

registries.conf                                             100%[==========================================================================================================================================>]   3.80K  --.-KB/s    in 0s      

2024-03-02 20:11:37 (15.5 MB/s) - ‘registries.conf’ saved [3895/3895]

--2024-03-02 20:11:37--  https://src.fedoraproject.org/rpms/containers-common/raw/main/f/default-policy.json
Resolving src.fedoraproject.org (src.fedoraproject.org)... 38.145.60.21, 38.145.60.20
Connecting to src.fedoraproject.org (src.fedoraproject.org)|38.145.60.21|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 256 [application/json]
Saving to: ‘default-policy.json’

default-policy.json                                         100%[==========================================================================================================================================>]     256  --.-KB/s    in 0s      

2024-03-02 20:11:37 (1.05 MB/s) - ‘default-policy.json’ saved [256/256]

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
uidmap is already the newest version (1:4.13+dfsg1-1).
fuse-overlayfs is already the newest version (1.10-1).
slirp4netns is already the newest version (1.2.0-1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Created symlink /home/podman/.config/systemd/user/sockets.target.wants/podman.socket → /home/podman/.config/systemd/user/podman.socket.
Created symlink /home/podman/.config/systemd/user/default.target.wants/podman.service → /home/podman/.config/systemd/user/podman.service.
Created symlink /home/podman/.config/systemd/user/default.target.wants/podman-restart.service → /home/podman/.config/systemd/user/podman-restart.service.
Job for podman-restart.service failed because the control process exited with error code.
See "systemctl --user status podman-restart.service" and "journalctl --user -xeu podman-restart.service" for details.
Created symlink /home/podman/.config/systemd/user/default.target.wants/podman-auto-update.service → /home/podman/.config/systemd/user/podman-auto-update.service.
Job for podman-auto-update.service failed because the control process exited with error code.
See "systemctl --user status podman-auto-update.service" and "journalctl --user -xeu podman-auto-update.service" for details.
● podman.socket - Podman API Socket
     Loaded: loaded (/home/podman/.config/systemd/user/podman.socket; enabled; preset: enabled)
     Active: active (listening) since Sat 2024-03-02 20:12:07 CET; 8s ago
   Triggers: ● podman.service
       Docs: man:podman-system-service(1)
     Listen: /run/user/1001/podman/podman.sock (Stream)
     CGroup: /user.slice/user-1001.slice/user@1001.service/app.slice/podman.socket

Mar 02 20:12:07 RaspberryPI-Flashed systemd[2472]: Listening on podman.socket - Podman API Socket.

× podman.service - Podman API Service
     Loaded: loaded (/home/podman/.config/systemd/user/podman.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Sat 2024-03-02 20:12:11 CET; 3s ago
   Duration: 2.303s
TriggeredBy: ● podman.socket
       Docs: man:podman-system-service(1)
   Main PID: 6985 (code=exited, status=125)
        CPU: 550ms

Mar 02 20:12:09 RaspberryPI-Flashed systemd[2472]: Starting podman.service - Podman API Service...
Mar 02 20:12:09 RaspberryPI-Flashed systemd[2472]: Started podman.service - Podman API Service.
Mar 02 20:12:09 RaspberryPI-Flashed podman[6985]: time="2024-03-02T20:12:09+01:00" level=info msg="/usr/bin/podman filtering at log level info"
Mar 02 20:12:09 RaspberryPI-Flashed podman[6985]: Error: failed to get new shm lock manager: failed to create 2048 locks in /libpod_rootless_lock_1001: permission denied
Mar 02 20:12:11 RaspberryPI-Flashed systemd[2472]: podman.service: Main process exited, code=exited, status=125/n/a
Mar 02 20:12:11 RaspberryPI-Flashed systemd[2472]: podman.service: Failed with result 'exit-code'.

× podman-restart.service - Podman Start All Containers With Restart Policy Set To Always
     Loaded: loaded (/home/podman/.config/systemd/user/podman-restart.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Sat 2024-03-02 20:12:12 CET; 3s ago
       Docs: man:podman-start(1)
   Main PID: 7038 (code=exited, status=125)
        CPU: 329ms

Mar 02 20:12:12 RaspberryPI-Flashed systemd[2472]: Starting podman-restart.service - Podman Start All Containers With Restart Policy Set To Always...
Mar 02 20:12:12 RaspberryPI-Flashed podman[7038]: time="2024-03-02T20:12:12+01:00" level=info msg="/usr/bin/podman filtering at log level info"
Mar 02 20:12:12 RaspberryPI-Flashed podman[7038]: Error: failed to get new shm lock manager: failed to create 2048 locks in /libpod_rootless_lock_1001: permission denied
Mar 02 20:12:12 RaspberryPI-Flashed systemd[2472]: podman-restart.service: Main process exited, code=exited, status=125/n/a
Mar 02 20:12:12 RaspberryPI-Flashed systemd[2472]: podman-restart.service: Failed with result 'exit-code'.
Mar 02 20:12:12 RaspberryPI-Flashed systemd[2472]: Failed to start podman-restart.service - Podman Start All Containers With Restart Policy Set To Always.

× podman-auto-update.service - Podman auto-update service
     Loaded: loaded (/home/podman/.config/systemd/user/podman-auto-update.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Sat 2024-03-02 20:12:15 CET; 259ms ago
       Docs: man:podman-auto-update(1)
    Process: 7083 ExecStart=/usr/bin/podman auto-update (code=exited, status=125)
   Main PID: 7083 (code=exited, status=125)
        CPU: 327ms

Mar 02 20:12:14 RaspberryPI-Flashed systemd[2472]: Starting podman-auto-update.service - Podman auto-update service...
Mar 02 20:12:15 RaspberryPI-Flashed podman[7083]: Error: failed to get new shm lock manager: failed to create 2048 locks in /libpod_rootless_lock_1001: permission denied
Mar 02 20:12:15 RaspberryPI-Flashed systemd[2472]: podman-auto-update.service: Main process exited, code=exited, status=125/n/a

And the key error is:
Error: failed to get new shm lock manager: failed to create 2048 locks in /libpod_rootless_lock_1001: permission denied
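A first check worth doing (the lock file is SHM-backed and lives in /dev/shm; a stale or wrongly-owned file left over from a previous run could explain the permission denied):

ls -l /dev/shm/libpod_rootless_lock_1001
# if it exists but is owned by root or another user, removing it lets
# podman recreate it on the next start:
sudo rm /dev/shm/libpod_rootless_lock_1001
systemctl --user restart podman.socket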

@luckylinux
Author

Any update?

@giuseppe
Member

Who owns /dev/shm/libpod_rootless_lock_1001?

Can your user create that directory?

@luckylinux
Author

Who owns /dev/shm/libpod_rootless_lock_1001?

Can your user create that directory?

I cannot say it is an issue during deployment; it was an error during SETUP, something that did not occur in the past.

podman@RaspberryPI-ChargerDevelopment:~ $ ls -l /dev/shm/libpod_rootless_lock_1001 
-rw------- 1 podman podman 49444 Mar 10 18:27 /dev/shm/libpod_rootless_lock_1001

It's fair for you to say that it's difficult to replicate, as I'm "only" encountering it 50% of the time. So it's kinda hit-or-miss.

I fixed some bugs in the scripts (thank you for reporting them), but the error also does not always happen.

However, I can say now that both ZFS+rbind ("zfs") and EXT4 ("dir") are affected. And not only on AMD64.

@luckylinux
Author

luckylinux commented Mar 11, 2024

At least during podman pull it kinda seems to work on this device. At least there is no error with these "diff" files.

The errors below are due to me not specifying the image name:tag properly, or to the image not being available for armv7.

podman@RaspberryPI-ChargerDevelopment:~/containers/compose $ podman pull redis
✔ docker.io/library/redis:latest
Trying to pull docker.io/library/redis:latest...
Getting image source signatures
Copying blob edf2e68d37a1 done   | 
Copying blob 7a5e2a926145 done   | 
Copying blob 25dcb96de5ba done   | 
Copying blob 993aa13e29cb done   | 
Copying blob 6f0d4e077042 done   | 
Copying blob 05e5e23af732 done   | 
Copying blob 4f4fb700ef54 done   | 
Copying blob 9562ff57e339 done   | 
Copying config fda91ddf78 done   | 
Writing manifest to image destination
fda91ddf785d2f668c018ae56cc6371842688d384b8041134ddb14e85688a077
podman@RaspberryPI-ChargerDevelopment:~/containers/compose $ podman pull headscale
✔ docker.io/library/headscale:latest
Trying to pull docker.io/library/headscale:latest...
Error: initializing source docker://headscale:latest: reading manifest latest in docker.io/library/headscale: requested access to the resource is denied
podman@RaspberryPI-ChargerDevelopment:~/containers/compose $ podman pull headscale/headscale:latest-alpine
✔ docker.io/headscale/headscale:latest-alpine
Trying to pull docker.io/headscale/headscale:latest-alpine...
Error: choosing an image from manifest list docker://headscale/headscale:latest-alpine: no image found in manifest list for architecture arm, variant "v7", OS linux
podman@RaspberryPI-ChargerDevelopment:~/containers/compose $ podman pull ghcr.io/gurucomputing/headscale-ui:latest
Trying to pull ghcr.io/gurucomputing/headscale-ui:latest...
Getting image source signatures
Copying blob 7456de8d57e7 done   | 
Copying blob 855024e2e210 done   | 
Copying blob cf6a1cece104 done   | 
Copying blob fda0ff469afd done   | 
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 7144c1805df2 done   | 
Copying blob f1e553f18f88 done   | 
Copying blob 400590fde682 done   | 
Copying blob ee012e6d62cc done   | 
Copying blob 61be97478472 done   | 
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 88402d33207c done   | 
Copying blob 882322e53e92 done   | 
Copying blob c84b632129c2 done   | 
Copying blob 4f4fb700ef54 skipped: already exists  
Copying config 444ff6203f done   | 
Writing manifest to image destination
444ff6203fbe4566e817ec5ed90528d027f967d3bbe6bb9e03ebae7befb5e395
podman@RaspberryPI-ChargerDevelopment:~/containers/compose $ podman pull zerotier
✔ docker.io/library/zerotier:latest
Trying to pull docker.io/library/zerotier:latest...
Error: initializing source docker://zerotier:latest: reading manifest latest in docker.io/library/zerotier: requested access to the resource is denied
podman@RaspberryPI-ChargerDevelopment:~/containers/compose $ podman pull zyclonite/zerotier:latest
✔ docker.io/zyclonite/zerotier:latest
Trying to pull docker.io/zyclonite/zerotier:latest...
Getting image source signatures
Copying blob e792a58462fe done   | 
Copying blob 130ba56d3d90 done   | 
Copying config f0c7fa193e done   | 
Writing manifest to image destination
f0c7fa193ee3e71b167af23564dc19c57126959aa1b87346ddd018c1e2e85e79
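A way to check up front whether a tag offers an arm/v7 build at all, without pulling:

podman manifest inspect docker.io/headscale/headscale:latest-alpine | grep -A4 '"platform"'
# look for "architecture": "arm" together with "variant": "v7"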

@luckylinux
Author

Tried to update Authentik on my remote VPS and sure enough, when the Redis upgrade was attempted, the issue appeared again ...

@luckylinux
Author

Did a podman system reset. Now it's even worse: not even postgresql will install ...

podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.9.3
** excluding:  set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=authentik', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
['podman', 'network', 'exists', 'traefik']
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-server -d --label traefik.enable=true --label traefik.http.routers.authentik-rtr.rule=PathPrefix(`/`) && Host(`auth.MYDOMAIN.TLD`) --label traefik.http.services.authentik-svc.loadbalancer.server.port=9000 --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-server --env-file /home/podman/containers/compose/authentik/.env -e AUTHENTIK_REDIS__HOST=authentik-redis -e AUTHENTIK_POSTGRESQL__HOST=authentik-postgresql -e AUTHENTIK_POSTGRESQL__USER=authentik -e AUTHENTIK_POSTGRESQL__NAME=authentik -e AUTHENTIK_POSTGRESQL__PASSWORD=XXXXXXXXXXXXXX -e AUTHENTIK_ERROR_REPORTING__ENABLED=True -v /home/podman/containers/data/authentik/media:/media -v /home/podman/containers/data/authentik/custom-templates:/templates -v /home/podman/containers/data/authentik/assets:/media/custom --net traefik,authentik --network-alias authentik-server -p 9000:9000 --pull always --restart unless-stopped ghcr.io/goauthentik/server:latest server
Trying to pull ghcr.io/goauthentik/server:latest...
Getting image source signatures
Copying blob d9c631089dc7 done   | 
Copying blob e1caac4eb9d2 done   | 
Copying blob 51d1f07906b7 done   | 
Copying blob 2e3e9a37b01a done   | 
Copying blob 0a33514831e0 done   | 
Copying blob 0959bfd25ad8 done   | 
Copying blob a51f1fa60300 done   | 
Copying blob ff966a1a7474 done   | 
Copying blob 7991f6eaae37 done   | 
Copying blob e749560e2e95 done   | 
Copying blob 1970e732555b done   | 
Copying blob a364eb0a3306 done   | 
Copying blob e424a52cb810 done   | 
Copying blob 684d311352d5 done   | 
Copying blob b5d24fee7368 done   | 
Copying blob 456d5ce9fa45 done   | 
Copying blob e75b93abd18c done   | 
Copying blob 2dcddcf4f394 done   | 
Copying blob 9b04b7ca178f done   | 
Copying blob 25b9435a04c1 done   | 
Copying blob 7fe3c7fb343b done   | 
Copying config 10d11d26be done   | 
Writing manifest to image destination
74b11de12728274204fba47338c6e296e15364147219607dbd3994f4055e32a3
exit code: 0
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-worker -d --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-worker --env-file /home/podman/containers/compose/authentik/.env -e AUTHENTIK_REDIS__HOST=authentik-redis -e AUTHENTIK_POSTGRESQL__HOST=authentik-postgresql -e AUTHENTIK_POSTGRESQL__USER=authentik -e AUTHENTIK_POSTGRESQL__NAME=authentik -e AUTHENTIK_POSTGRESQL__PASSWORD=XXXXXXXXXXXXXX -e AUTHENTIK_SECRET_KEY_FILE -e AUTHENTIK_ERROR_REPORTING__ENABLED=True -v /run/user/1001/podman/podman.sock:/var/run/docker.sock -v /home/podman/containers/data/authentik/media:/media -v /home/podman/containers/data/authentik/certs:/certs -v /home/podman/containers/data/authentik/custom-templates:/templates --net authentik --network-alias authentik-worker -u root --pull always --restart unless-stopped ghcr.io/goauthentik/server:latest worker
Trying to pull ghcr.io/goauthentik/server:latest...
Getting image source signatures
Copying blob d9c631089dc7 skipped: already exists  
Copying blob e1caac4eb9d2 skipped: already exists  
Copying blob 51d1f07906b7 skipped: already exists  
Copying blob 2e3e9a37b01a skipped: already exists  
Copying blob 0a33514831e0 skipped: already exists  
Copying blob 0959bfd25ad8 skipped: already exists  
Copying blob a51f1fa60300 skipped: already exists  
Copying blob ff966a1a7474 skipped: already exists  
Copying blob 7991f6eaae37 skipped: already exists  
Copying blob e749560e2e95 skipped: already exists  
Copying blob 1970e732555b skipped: already exists  
Copying blob a364eb0a3306 skipped: already exists  
Copying blob e424a52cb810 skipped: already exists  
Copying blob 684d311352d5 skipped: already exists  
Copying blob b5d24fee7368 skipped: already exists  
Copying blob 456d5ce9fa45 skipped: already exists  
Copying blob e75b93abd18c skipped: already exists  
Copying blob 2dcddcf4f394 skipped: already exists  
Copying blob 9b04b7ca178f skipped: already exists  
Copying blob 25b9435a04c1 skipped: already exists  
Copying blob 7fe3c7fb343b skipped: already exists  
Copying config 10d11d26be done   | 
Writing manifest to image destination
2f6c84161fa605d4092958af6168f46b63cfa867be5623ab4cf38f977e4077c1
exit code: 0
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-postgresql -d --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-postgresql --env-file /home/podman/containers/compose/authentik/.env -e POSTGRES_PASSWORD=XXXXXXXXXXXXXXXXXXXXX -e POSTGRES_USER=authentik -e POSTGRES_DB=authentik -v /home/podman/containers/data/authentik/database:/var/lib/postgresql/data --net authentik --network-alias authentik-postgresql --pull always --restart unless-stopped postgres:12-alpine
Resolved "postgres" as an alias (/home/podman/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/postgres:12-alpine...
Getting image source signatures
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 6866d4a6f5d7 done   | 
Copying blob c769d8d77aec done   | 
Copying blob 05b0ca72b17d done   | 
Copying blob 7da40a38e5aa done   | 
Copying blob cd41d8466042 done   | 
Copying blob 2922012aa875 done   | 
Copying blob 11d974fc9638 done   | 
Copying blob 2fc9c6d608c1 done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:c769d8d77aec50e12b081f78b4bc613dbbe60bd253f9d58af69ef2382d04e1ca": creating read-only layer with ID "bbb952f8cd382c2a5f818abac5f09f3a4d663ec87aa6dceb3ea7f5d83397f155": Stat /home/podman/containers/storage/overlay/d4fc045c9e3a848011de66f34b81f052d4f2c15a17bb196d637e526349601820/diff: no such file or directory
exit code: 125
podman start authentik-postgresql
Error: no container with name or ID "authentik-postgresql" found: no such container
exit code: 125
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-redis -d --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-redis -v /home/podman/containers/data/authentik/redis:/data --net authentik --network-alias authentik-redis --pull always --restart unless-stopped --healthcheck-command /bin/sh -c 'redis-cli ping | grep PONG' --healthcheck-interval 30s --healthcheck-timeout 3s --healthcheck-start-period 20s --healthcheck-retries 5 redis:alpine --save 60 1 --loglevel warning
Resolved "redis" as an alias (/home/podman/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/redis:alpine...
Getting image source signatures
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 5c3180d10209 skipped: already exists  
Copying blob f76326fd8e6b skipped: already exists  
Copying blob 034c076ba1e7 skipped: already exists  
Copying blob dffcad17539b skipped: already exists  
Copying blob 4f4fb700ef54 done   | 
Copying blob cc6fccbbefa3 done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": creating read-only layer with ID "9b8f1b593fcef6a2d48862ebe1ea0304d109ee87156adf379d9112705fc459e1": Stat /home/podman/containers/storage/overlay/e0d1aee0d0652a2e19426c025923270e185039060122715af95bfcbe2ef2d47f/diff: no such file or directory
exit code: 125
podman start authentik-redis
Error: no container with name or ID "authentik-redis" found: no such container
exit code: 125
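
What is notable is that both failures point at layer directories that should already exist under the graphroot. A sanity check, using the path from the redis error above verbatim (the second path is an assumption based on my imagestore setting):

# does the layer's diff directory exist in either store?
ls -ld /home/podman/containers/storage/overlay/e0d1aee0d0652a2e19426c025923270e185039060122715af95bfcbe2ef2d47f/diff
ls -ld /home/podman/containers/images/overlay/e0d1aee0d0652a2e19426c025923270e185039060122715af95bfcbe2ef2d47f/diff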

@luckylinux

It really seems to hate anything that has to do with Alpine images, but it's not consistent (more on the failing blob after the first transcript below):

podman pull redis:alpine
Resolved "redis" as an alias (/home/podman/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/redis:alpine...
Getting image source signatures
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 5c3180d10209 skipped: already exists  
Copying blob f76326fd8e6b skipped: already exists  
Copying blob 034c076ba1e7 skipped: already exists  
Copying blob dffcad17539b skipped: already exists  
Copying blob cc6fccbbefa3 done   | 
Error: copying system image from manifest list: trying to reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 at destination: reading layer "d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff: no such file or directory
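
That blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 shows up in lots of unrelated images (as far as I know it is the well-known empty layer), which would explain why so many different pulls trip over the same digest. A hedged way to see which local layer records claim it (layers.json location assumed from the graphroot):

# find local layer records that reference the problematic blob
grep 4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 \
  /home/podman/containers/storage/overlay-layers/layers.json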


podman@ra:~/containers/compose/authentik$ podman --log-level=debug pull redis:alpine
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called pull.PersistentPreRunE(podman --log-level=debug pull redis:alpine) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
INFO[0000] Using sqlite as database backend             
DEBU[0000] systemd-logind: Unknown object '/'.          
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/podman/containers/storage 
DEBU[0000] Using run root /run/user/1001                
DEBU[0000] Using static dir /home/podman/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1001/libpod/tmp      
DEBU[0000] Using volume path /home/podman/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] overlay: imagestore=/home/podman/containers/storage 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 4              
DEBU[0000] Pulling image redis:alpine (policy: always)  
DEBU[0000] Looking up image "redis:alpine" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Loading registries configuration "/home/podman/.config/containers/registries.conf" 
DEBU[0000] Trying "docker.io/library/redis:alpine" ...  
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]docker.io/library/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "localhost/redis:alpine" ...          
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]localhost/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "registry.fedoraproject.org/redis:alpine" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]registry.fedoraproject.org/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "registry.access.redhat.com/redis:alpine" ... 
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]registry.access.redhat.com/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "docker.io/library/redis:alpine" ...  
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]docker.io/library/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "quay.io/redis:alpine" ...            
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]quay.io/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "docker.io/library/redis:alpine" ...  
DEBU[0000] reference "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]docker.io/library/redis:alpine" does not resolve to an image ID 
DEBU[0000] Trying "redis:alpine" ...                    
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Attempting to pull candidate docker.io/library/redis:alpine for redis:alpine 
DEBU[0000] parsed reference into "[overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]docker.io/library/redis:alpine" 
DEBU[0000] Resolved "redis" as an alias (/home/podman/.cache/containers/short-name-aliases.conf) 
Resolved "redis" as an alias (/home/podman/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/redis:alpine...
DEBU[0000] Copying source image //redis:alpine to destination image [overlay@/home/podman/containers/storage+/run/user/1001:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev]docker.io/library/redis:alpine 
DEBU[0000] Using registries.d directory /etc/containers/registries.d 
DEBU[0000] Trying to access "docker.io/library/redis:alpine" 
DEBU[0000] No credentials matching docker.io/library/redis found in /run/user/1001/containers/auth.json 
DEBU[0000] No credentials matching docker.io/library/redis found in /home/podman/.config/containers/auth.json 
DEBU[0000] No credentials matching docker.io/library/redis found in /home/podman/.docker/config.json 
DEBU[0000] No credentials matching docker.io/library/redis found in /home/podman/.dockercfg 
DEBU[0000] No credentials for docker.io/library/redis found 
DEBU[0000]  No signature storage configuration found for docker.io/library/redis:alpine, using built-in default file:///home/podman/.local/share/containers/sigstore 
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io 
DEBU[0000] GET https://registry-1.docker.io/v2/         
DEBU[0000] Ping https://registry-1.docker.io/v2/ status 401 
DEBU[0000] GET https://auth.docker.io/token?scope=repository%3Alibrary%2Fredis%3Apull&service=registry.docker.io 
DEBU[0000] GET https://registry-1.docker.io/v2/library/redis/manifests/alpine 
DEBU[0001] Content-Type from manifest GET is "application/vnd.oci.image.index.v1+json" 
DEBU[0001] Using SQLite blob info cache at /home/podman/.local/share/containers/cache/blob-info-cache-v1.sqlite 
DEBU[0001] Source is a manifest list; copying (only) instance sha256:3487aa5cf06dceb38202b06bba45b6e6d8a92288848698a6518eee5f63a293a3 for current system 
DEBU[0001] GET https://registry-1.docker.io/v2/library/redis/manifests/sha256:3487aa5cf06dceb38202b06bba45b6e6d8a92288848698a6518eee5f63a293a3 
DEBU[0001] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json" 
DEBU[0001] IsRunningImageAllowed for image docker:docker.io/library/redis:alpine 
DEBU[0001]  Using default policy section                
DEBU[0001]  Requirement 0: allowed                      
DEBU[0001] Overall: allowed                             
DEBU[0001] Downloading /v2/library/redis/blobs/sha256:435993df2c8d3a1508114cea2dd12ef4d6cbab5c7238bb8e587f20b18982c834 
DEBU[0001] GET https://registry-1.docker.io/v2/library/redis/blobs/sha256:435993df2c8d3a1508114cea2dd12ef4d6cbab5c7238bb8e587f20b18982c834 
Getting image source signatures
DEBU[0001] Reading /home/podman/.local/share/containers/sigstore/library/redis@sha256=3487aa5cf06dceb38202b06bba45b6e6d8a92288848698a6518eee5f63a293a3/signature-1 
DEBU[0001] Not looking for sigstore attachments: disabled by configuration 
DEBU[0001] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0001] ... will first try using the original manifest unmodified 
DEBU[0001] Checking if we can reuse blob sha256:5913474e0f39b23ca3d952a08c0008364c774a07984efaf8ad3a5ba8e04d31f6: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0001] Skipping blob sha256:5913474e0f39b23ca3d952a08c0008364c774a07984efaf8ad3a5ba8e04d31f6 (already present): 
DEBU[0001] Checking if we can reuse blob sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0001] Skipping blob sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 (already present): 
DEBU[0001] Checking if we can reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0001] Skipping blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 (already present): 
Copying blob 5913474e0f39 skipped: already exists  
DEBU[0001] Checking if we can reuse blob sha256:5c3180d102093de53ebc54b965de6754cbbb344a30e2bf2f5d17cbc3ac1d50b5: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0001] Skipping blob sha256:5c3180d102093de53ebc54b965de6754cbbb344a30e2bf2f5d17cbc3ac1d50b5 (already present): 
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
DEBU[0001] Checking if we can reuse blob sha256:f76326fd8e6b93c5ddd86c0795b0a04c186faf08ce032c102aa9b3671276019a: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 5c3180d10209 skipped: already exists  
DEBU[0001] Skipping blob sha256:f76326fd8e6b93c5ddd86c0795b0a04c186faf08ce032c102aa9b3671276019a (already present): 
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4f4fb700ef54 skipped: already exists  
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 5c3180d10209 skipped: already exists  
Copying blob f76326fd8e6b skipped: already exists  
Copying blob cc6fccbbefa3 done   | 
Copying blob 034c076ba1e7 skipped: already exists  
DEBU[0001] Error pulling candidate docker.io/library/redis:alpine: copying system image from manifest list: trying to reuse blob sha256:dffcad17539bc6497d8dd4bd24f6628013eb413b988050010925cb2ce4382291 at destination: reading layer "d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff: no such file or directory
 
Error: copying system image from manifest list: trying to reuse blob sha256:dffcad17539bc6497d8dd4bd24f6628013eb413b988050010925cb2ce4382291 at destination: reading layer "d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/images/overlay/link/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff: no such file or directory


DEBU[0001] Shutting down engines  
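
The readlink target in that failure looks like two roots concatenated: the image store's overlay link directory followed by the full graphroot path. A couple of hedged checks (the l directory is where the overlay driver keeps its short link symlinks; exact paths assumed from the debug output above):

# is the layer actually present under the graphroot?
ls -ld /home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff
# compare the link directories of the two stores
ls /home/podman/containers/storage/overlay/l | head
ls /home/podman/containers/images/overlay/l | head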

Any idea on how to move on?

@luckylinux

luckylinux commented Mar 28, 2024

And is this expected behaviour?

Trying to run rm -rf /home/podman/containers/storage/overlay/* as the rootless podman user results in:

rm: cannot remove '/home/podman/containers/storage/overlay/1253c5d3f417e128bb83a4aaffedc0e13d3c806e5c80df62a97aadeba9fba5df/diff/geoip': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/1253c5d3f417e128bb83a4aaffedc0e13d3c806e5c80df62a97aadeba9fba5df/diff/.pivot_root465541140': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/1253c5d3f417e128bb83a4aaffedc0e13d3c806e5c80df62a97aadeba9fba5df/diff/temp-storage-extract2971669986': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/19a321b8b7f230d1d092dbbb39e8a7490fe9f82a1b5fc131e81eef5478f031c4/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/19a321b8b7f230d1d092dbbb39e8a7490fe9f82a1b5fc131e81eef5478f031c4/diff/run': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/19a321b8b7f230d1d092dbbb39e8a7490fe9f82a1b5fc131e81eef5478f031c4/diff/templates': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/202615028f5dd8299dd5954b0716d1ad57fbb2f4d03b5060bb623a967f701f0e/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/202615028f5dd8299dd5954b0716d1ad57fbb2f4d03b5060bb623a967f701f0e/diff/.pivot_root3819785717': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/202615028f5dd8299dd5954b0716d1ad57fbb2f4d03b5060bb623a967f701f0e/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/202615028f5dd8299dd5954b0716d1ad57fbb2f4d03b5060bb623a967f701f0e/diff/temp-storage-extract177580789': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/2ed24bb3cbba59a50395c9a3c575045837fcd9fb1257a0b6571b279d818d1b0d/diff/manage.py': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/2ed24bb3cbba59a50395c9a3c575045837fcd9fb1257a0b6571b279d818d1b0d/diff/.pivot_root727267755': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/2ed24bb3cbba59a50395c9a3c575045837fcd9fb1257a0b6571b279d818d1b0d/diff/temp-storage-extract3424084412': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/346fd195d367036c67652367b701275617161c41150c4ae6bb6057dc8ddaff52/diff/.pivot_root803153566': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/346fd195d367036c67652367b701275617161c41150c4ae6bb6057dc8ddaff52/diff/web': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/346fd195d367036c67652367b701275617161c41150c4ae6bb6057dc8ddaff52/diff/temp-storage-extract339449774': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/385939d7b8b5389930d127e9c9701b60b2e9724c842f5135dcfa1f112093696f/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/385939d7b8b5389930d127e9c9701b60b2e9724c842f5135dcfa1f112093696f/diff/.pivot_root1654467450': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/385939d7b8b5389930d127e9c9701b60b2e9724c842f5135dcfa1f112093696f/diff/temp-storage-extract1549152121': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/3d8b71b2a24da25b50757e38edaa9ede56eee22c7a8ba4096714009cf001456e/diff/authentik': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/3d8b71b2a24da25b50757e38edaa9ede56eee22c7a8ba4096714009cf001456e/diff/temp-storage-extract4056052346': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/3d8b71b2a24da25b50757e38edaa9ede56eee22c7a8ba4096714009cf001456e/diff/.pivot_root2070125669': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/52c985e7ce100f4798e52e79135e319d88bf89c6512dab032ea4cd1667b4a35b/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/52c985e7ce100f4798e52e79135e319d88bf89c6512dab032ea4cd1667b4a35b/diff/run': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/52c985e7ce100f4798e52e79135e319d88bf89c6512dab032ea4cd1667b4a35b/diff/templates': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/576cc13c138fc4026f668d5baf904313b6dc3f2570d69050dd480cb38a4556c8/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/576cc13c138fc4026f668d5baf904313b6dc3f2570d69050dd480cb38a4556c8/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/576cc13c138fc4026f668d5baf904313b6dc3f2570d69050dd480cb38a4556c8/diff/var': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/576cc13c138fc4026f668d5baf904313b6dc3f2570d69050dd480cb38a4556c8/diff/temp-storage-extract286887090': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/576cc13c138fc4026f668d5baf904313b6dc3f2570d69050dd480cb38a4556c8/diff/.pivot_root673260650': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/5eb7704bf6f9f19555e14f3f0c14fdf54fe0237e7a0e723658126e48e2e21f36/diff/.pivot_root936647881': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/5eb7704bf6f9f19555e14f3f0c14fdf54fe0237e7a0e723658126e48e2e21f36/diff/website': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/5eb7704bf6f9f19555e14f3f0c14fdf54fe0237e7a0e723658126e48e2e21f36/diff/temp-storage-extract323249460': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff/.pivot_root1880800220': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff/temp-storage-extract2689432257': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff/data/.wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58/diff/data/.wh..wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/66e65e0e0f2e4f11b7fe3a23f2976b1001e26ab2257a1f88f2ede20b1b571c28/diff/temp-storage-extract836686369': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/66e65e0e0f2e4f11b7fe3a23f2976b1001e26ab2257a1f88f2ede20b1b571c28/diff/.pivot_root1611533294': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/66e65e0e0f2e4f11b7fe3a23f2976b1001e26ab2257a1f88f2ede20b1b571c28/diff/pyproject.toml': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/714c002f1d04de8013b803bf0eb4bdf3056e3e2c2827c8e8f2c78cfa3e6525e1/diff/.pivot_root432206161': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/714c002f1d04de8013b803bf0eb4bdf3056e3e2c2827c8e8f2c78cfa3e6525e1/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/714c002f1d04de8013b803bf0eb4bdf3056e3e2c2827c8e8f2c78cfa3e6525e1/diff/temp-storage-extract3151536142': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/81823dc842d4a0457dc03734fbdef62bd18f0ee9a4ac7c1045328c3b3dff0803/diff/.pivot_root3393395054': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/81823dc842d4a0457dc03734fbdef62bd18f0ee9a4ac7c1045328c3b3dff0803/diff/temp-storage-extract2967939608': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/81823dc842d4a0457dc03734fbdef62bd18f0ee9a4ac7c1045328c3b3dff0803/diff/web': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/temp-storage-extract2485759760': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/authentik/.ssh/.wh..wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/ak-root/.wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/ak-root/.wh..wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/run': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/.pivot_root3083381766': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/var': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/media': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/blueprints': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/certs/.wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f/diff/certs/.wh..wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/86d576d5b09da084832f5d5b5fe0ac743be2f12d5e15e48fac1298ec10262593/diff/ak-root/venv': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/86d576d5b09da084832f5d5b5fe0ac743be2f12d5e15e48fac1298ec10262593/diff/.pivot_root4072260384': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/86d576d5b09da084832f5d5b5fe0ac743be2f12d5e15e48fac1298ec10262593/diff/temp-storage-extract4074264567': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/9e6ad22d9ecbd3732aa12d635c0cba4c2f815b85cb553c5a26575403f715e3cb/diff/temp-storage-extract2663166579': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/9e6ad22d9ecbd3732aa12d635c0cba4c2f815b85cb553c5a26575403f715e3cb/diff/var/cache/apt/archives/partial': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/9e6ad22d9ecbd3732aa12d635c0cba4c2f815b85cb553c5a26575403f715e3cb/diff/.pivot_root1485550204': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/9f57e56f367a3460acf57cee925eb7c363fa239a3de1301b23aaa2896231377b/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/9f57e56f367a3460acf57cee925eb7c363fa239a3de1301b23aaa2896231377b/diff/temp-storage-extract1119335126': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/9f57e56f367a3460acf57cee925eb7c363fa239a3de1301b23aaa2896231377b/diff/.pivot_root62576254': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a36fb9396bdd6a32a14cf2bd6e4c91a8b72790a1f3315962ae228d5d8f448b78/diff/.pivot_root3282802824': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a36fb9396bdd6a32a14cf2bd6e4c91a8b72790a1f3315962ae228d5d8f448b78/diff/lifecycle': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a36fb9396bdd6a32a14cf2bd6e4c91a8b72790a1f3315962ae228d5d8f448b78/diff/temp-storage-extract871721125': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/tmp': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/bin': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/run': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/lib': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/lib64': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/var': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/media': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/mnt': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/sbin': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/srv': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/sys': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/boot': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/opt': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/root': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/home': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/dev': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999/diff/proc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/bdb85da025f9b986fe7f158232d933c1bd6f09cddf62275906e219b0701499e3/diff/.pivot_root456993221': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/bdb85da025f9b986fe7f158232d933c1bd6f09cddf62275906e219b0701499e3/diff/blueprints': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/bdb85da025f9b986fe7f158232d933c1bd6f09cddf62275906e219b0701499e3/diff/temp-storage-extract1035591619': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/c217461a7c9a0ba108b860bb7b184520596ee2023a2a273e04ba365626578ccd/diff/locale': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/c217461a7c9a0ba108b860bb7b184520596ee2023a2a273e04ba365626578ccd/diff/temp-storage-extract928191482': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/c217461a7c9a0ba108b860bb7b184520596ee2023a2a273e04ba365626578ccd/diff/.pivot_root2594869195': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/tmp': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/bin': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/run': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/lib': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/lib64': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/var': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/media': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/mnt': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/sbin': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/srv': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/sys': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/boot': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/opt': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/root': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/home': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/dev': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15/diff/proc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d2609d9f5eb68c5edf648be9ad1910d1f4c519be01d047ab82a0437990046d07/diff/poetry.lock': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d2609d9f5eb68c5edf648be9ad1910d1f4c519be01d047ab82a0437990046d07/diff/.pivot_root3793293636': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d2609d9f5eb68c5edf648be9ad1910d1f4c519be01d047ab82a0437990046d07/diff/temp-storage-extract1882897520': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340/diff/.pivot_root2631411975': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340/diff/temp-storage-extract1167966554': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d6cd4e778275d980965c275db31ebebeec21bc72f7064cb8850d3afb96d7909a/diff/schemas': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d6cd4e778275d980965c275db31ebebeec21bc72f7064cb8850d3afb96d7909a/diff/temp-storage-extract2691504308': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d6cd4e778275d980965c275db31ebebeec21bc72f7064cb8850d3afb96d7909a/diff/.pivot_root3057627516': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/tmp': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/var': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/temp-storage-extract3759580359': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/root': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4/diff/.pivot_root795311556': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/dddb8027df185b61c86a5277b5fcb98ed71cf411c115c6a100e9b3d36665b27e/diff/tests': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/dddb8027df185b61c86a5277b5fcb98ed71cf411c115c6a100e9b3d36665b27e/diff/.pivot_root1639603899': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/dddb8027df185b61c86a5277b5fcb98ed71cf411c115c6a100e9b3d36665b27e/diff/temp-storage-extract2561516265': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/temp-storage-extract4189733855': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/var/lib/apt/lists/auxfiles/.wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/var/lib/apt/lists/auxfiles/.wh..wh..opq': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/root': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69/diff/.pivot_root3353632955': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/tmp': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/var': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/.pivot_root4032271261': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/root': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b/diff/temp-storage-extract1210109078': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/tmp': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/etc': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/usr': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/var/cache/apt/archives/partial': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/temp-storage-extract3997820512': Permission denied
rm: cannot remove '/home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/.pivot_root3665530699': Permission denied

even though the permissions are:

 ls -l /home/podman/containers/storage/overlay/
total 124
drwx------ 3 podman podman 4096 Mar 28 13:18 1253c5d3f417e128bb83a4aaffedc0e13d3c806e5c80df62a97aadeba9fba5df
drwx------ 3 podman podman 4096 Mar 28 13:18 19a321b8b7f230d1d092dbbb39e8a7490fe9f82a1b5fc131e81eef5478f031c4
drwx------ 3 podman podman 4096 Mar 28 13:18 202615028f5dd8299dd5954b0716d1ad57fbb2f4d03b5060bb623a967f701f0e
drwx------ 3 podman podman 4096 Mar 28 13:18 2ed24bb3cbba59a50395c9a3c575045837fcd9fb1257a0b6571b279d818d1b0d
drwx------ 3 podman podman 4096 Mar 28 13:18 346fd195d367036c67652367b701275617161c41150c4ae6bb6057dc8ddaff52
drwx------ 3 podman podman 4096 Mar 28 13:18 385939d7b8b5389930d127e9c9701b60b2e9724c842f5135dcfa1f112093696f
drwx------ 3 podman podman 4096 Mar 28 13:18 3d8b71b2a24da25b50757e38edaa9ede56eee22c7a8ba4096714009cf001456e
drwx------ 3 podman podman 4096 Mar 28 13:18 52c985e7ce100f4798e52e79135e319d88bf89c6512dab032ea4cd1667b4a35b
drwx------ 3 podman podman 4096 Mar 28 13:18 576cc13c138fc4026f668d5baf904313b6dc3f2570d69050dd480cb38a4556c8
drwx------ 3 podman podman 4096 Mar 28 13:18 5eb7704bf6f9f19555e14f3f0c14fdf54fe0237e7a0e723658126e48e2e21f36
drwx------ 3 podman podman 4096 Mar 28 13:18 63a48d98b53e7b6680d6b89e9ea439d9aaece043dfbb99c2d7b5c689dc2b5b58
drwx------ 3 podman podman 4096 Mar 28 13:18 66e65e0e0f2e4f11b7fe3a23f2976b1001e26ab2257a1f88f2ede20b1b571c28
drwx------ 3 podman podman 4096 Mar 28 13:18 714c002f1d04de8013b803bf0eb4bdf3056e3e2c2827c8e8f2c78cfa3e6525e1
drwx------ 3 podman podman 4096 Mar 28 13:18 81823dc842d4a0457dc03734fbdef62bd18f0ee9a4ac7c1045328c3b3dff0803
drwx------ 3 podman podman 4096 Mar 28 13:18 82ef9d49cae3cdde6bdba02bd226af7af4d451b090beb7d56ec101a1ef603a0f
drwx------ 3 podman podman 4096 Mar 28 13:18 86d576d5b09da084832f5d5b5fe0ac743be2f12d5e15e48fac1298ec10262593
drwx------ 3 podman podman 4096 Mar 28 13:18 9e6ad22d9ecbd3732aa12d635c0cba4c2f815b85cb553c5a26575403f715e3cb
drwx------ 3 podman podman 4096 Mar 28 13:18 9f57e56f367a3460acf57cee925eb7c363fa239a3de1301b23aaa2896231377b
drwx------ 3 podman podman 4096 Mar 28 13:18 a36fb9396bdd6a32a14cf2bd6e4c91a8b72790a1f3315962ae228d5d8f448b78
drwx------ 3 podman podman 4096 Mar 28 13:18 a483da8ab3e941547542718cacd3258c6c705a63e94183c837c9bc44eb608999
drwx------ 3 podman podman 4096 Mar 28 13:18 bdb85da025f9b986fe7f158232d933c1bd6f09cddf62275906e219b0701499e3
drwx------ 3 podman podman 4096 Mar 28 13:18 c217461a7c9a0ba108b860bb7b184520596ee2023a2a273e04ba365626578ccd
drwx------ 3 podman podman 4096 Mar 28 13:18 ceb365432eec83dafc777cac5ee87737b093095035c89dd2eae01970c57b1d15
drwx------ 3 podman podman 4096 Mar 28 13:18 d2609d9f5eb68c5edf648be9ad1910d1f4c519be01d047ab82a0437990046d07
drwx------ 3 podman podman 4096 Mar 28 13:18 d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340
drwx------ 3 podman podman 4096 Mar 28 13:18 d6cd4e778275d980965c275db31ebebeec21bc72f7064cb8850d3afb96d7909a
drwx------ 3 podman podman 4096 Mar 28 13:18 d71152d108e017455f7bb3c8bd8a291e3af195116a63399a3118b664858546b4
drwx------ 3 podman podman 4096 Mar 28 13:18 dddb8027df185b61c86a5277b5fcb98ed71cf411c115c6a100e9b3d36665b27e
drwx------ 3 podman podman 4096 Mar 28 13:18 e19f8026a0c6e59ff68c4742cf14b6cc64e15504e189571783536a24dd72dc69
drwx------ 3 podman podman 4096 Mar 28 13:18 e520365da9c46f020c84b151bc5c4c4be195d45304dd9f063af5abd1d024a08b
drwx------ 3 podman podman 4096 Mar 28 13:18 f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509

And, for instance, for the directory in the last error message:
ls -la /home/podman/containers/storage/overlay/f26181193e161765f80aeb238a47aacf6ab95f128dcf99d466dc3cb6a2645509/diff/

total 24
dr-xr-xr-x 6 podman podman 4096 Mar 28 13:13 .
drwx------ 3 podman podman 4096 Mar 28 13:18 ..
drwxr-xr-x 2 podman podman 4096 Mar 28 13:18 etc
c--------- 1 podman podman 0, 0 Mar 28 13:13 .pivot_root3665530699
c--------- 1 podman podman 0, 0 Mar 28 13:13 temp-storage-extract3997820512
drwxrwxrwt 2 podman podman 4096 Feb 13 09:39 tmp
drwxr-xr-x 2 podman podman 4096 Mar 28 13:18 usr
drwxr-xr-x 3 podman podman 4096 Mar 28 13:18 var

As root everything works correctly:

root@MYHOST:~# rm -rf /home/podman/containers/storage/overlay/*
root@MYHOST:~# 
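
For the record, the usual rootless way to clean this up (without resorting to root) is to do it from inside the user namespace, where the subuid-mapped files appear as owned by the current user:

# remove the layers from within the rootless user namespace,
# then reset so the storage metadata stays consistent
podman unshare rm -rf /home/podman/containers/storage/overlay/*
podman system reset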

@luckylinux

Is it possible that some customization in storage.conf is required for the overlay driver?
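
One hedged guess before the full file: the separate imagestore looks like the suspect, since every failing path mixes the two stores. A minimal sketch of the same setup with the image store merged back into the graphroot (an assumption, not a confirmed fix; I would back everything up and run podman system reset after changing it):

# minimal storage.conf sketch without the separate image store (assumption)
[storage]
driver = "overlay"
runroot = "/run/user/1001"
graphroot = "/home/podman/containers/storage"
# imagestore = "/home/podman/containers/images"   # suspect line, disabled

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
mountopt = "nodev"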

My storage.conf currently looks like this:

# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
#  Note: The storage.conf file overrides other storage.conf files based on this precedence:
#      /usr/containers/storage.conf
#      /etc/containers/storage.conf
#      $HOME/.config/containers/storage.conf
#      $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/user/1001"

# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure  the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
graphroot = "/home/podman/containers/storage"

# Optional alternate location of image store if a location separate from the
# container store is required. If set, it must be different than graphroot.
imagestore = "/home/podman/containers/images"

# Volume Path
#volumepath = "/home/podman/containers/volumes"

# Storage path for rootless users
#
rootless_storage_path = "/home/podman/containers/storage"

# Transient store mode makes all container metadata be saved in temporary storage
# (i.e. runroot above). This is faster, but doesn't persist across reboots.
# Additional garbage collection must also be performed at boot-time, so this
# option should remain disabled in most configurations.
# transient_store = true

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Allows specification of how storage is populated when pulling images. This
# option can speed the pulling process of images compressed with format
# zstd:chunked. Containers/storage looks for files within images that are being
# pulled from a container registry that were previously pulled to the host.  It
# can copy or create a hard link to the existing file when it finds them,
# eliminating the need to pull them from the container registry. These options
# can deduplicate pulling of content, disk storage of content and can allow the
# kernel to use less memory when running containers.

# containers/storage supports three keys
#   * enable_partial_images="true" | "false"
#     Tells containers/storage to look for files previously pulled in storage
#     rather than always pulling them from the container registry.
#   * use_hard_links = "false" | "true"
#     Tells containers/storage to use hard links rather than create new files in
#     the image, if an identical file already existed in storage.
#   * ostree_repos = ""
#     Tells containers/storage where an ostree repository exists that might have
#     previously pulled content which can be used when attempting to avoid
#     pulling content from the container registry
pull_options = {enable_partial_images = "true", use_hard_links = "false", ostree_repos=""}

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = "0:1668442479:65536"
# remap-gids = "0:1668442479:65536"

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps. This setting overrides the
# Remap-UIDs/GIDs setting.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid files.  These ranges will be
# partitioned among containers configured to automatically create a user
# namespace. Containers configured to automatically create a user namespace can
# still overlap with containers having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non-privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image, even those with multiple UIDs. Note that multiple UIDs
# will be squashed down to the default UID in the container. These images will
# have no separation between the users in the container. Only supported for
# the overlay and vfs drivers.
#ignore_chown_errors = "false"

# Inodes is used to set the maximum number of inodes for the container image.
# inodes = ""

# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
#mountopt = "nodev,metacopy=on"
mountopt = "nodev"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Set to use composefs to mount data layers with overlay.
# use_composefs = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with network-based home directories.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing container storage
#     with other users.  For instance, have a store owned by root but shared
#     with rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask flag is an experimental feature; it could change in the
#  future.  When "force_mask" is set, the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather than the
#  "force_mask" permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which the pool needs to be
# grown. This is specified as a percentage of the pool size. So a value of 20
# means that when the threshold is hit, the pool will be grown by 20% of the
# existing pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold as a percentage
# of the pool size. For example, if the threshold is 60, then when the pool is
# 60% full, the threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you set up devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes the device even if it already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the minimum free space, as a percentage, required
# in a thin pool for new device creation to succeed. Valid values are from
# 0% - 99%. A value of 0% disables the check.
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note that this does not
# free up the disk space; use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete the device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to clean up the next time the driver is used. Deferred deletion permanently
# deletes the device, and all data stored on the device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when an ENOSPC (no space) error is returned by the
# underlying storage device.
# xfs_nospace_max_retries = "0"

@luckylinux (Author)

Apparently all containers are failing now.
What the hell is going on here?

podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.9.3
** excluding:  set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=authentik', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
['podman', 'network', 'exists', 'traefik']
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-server -d --label traefik.enable=true --label traefik.http.routers.authentik-rtr.rule=PathPrefix(`/`) && Host(`auth.MYDOMAIN.TLD`) --label traefik.http.services.authentik-svc.loadbalancer.server.port=9000 --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-server --env-file /home/podman/containers/compose/authentik/.env -e AUTHENTIK_REDIS__HOST=authentik-redis -e AUTHENTIK_POSTGRESQL__HOST=authentik-postgresql -e AUTHENTIK_POSTGRESQL__USER=authentik -e AUTHENTIK_POSTGRESQL__NAME=authentik -e AUTHENTIK_POSTGRESQL__PASSWORD=XXXXXXXXXXXXXXXXXXXXXXX -e AUTHENTIK_ERROR_REPORTING__ENABLED=True -v /home/podman/containers/data/authentik/media:/media -v /home/podman/containers/data/authentik/custom-templates:/templates -v /home/podman/containers/data/authentik/assets:/media/custom --net traefik,authentik --network-alias authentik-server -p 9000:9000 --pull always --restart unless-stopped ghcr.io/goauthentik/server:latest server
Trying to pull ghcr.io/goauthentik/server:latest...
Getting image source signatures
Copying blob d9c631089dc7 skipped: already exists  
Copying blob e1caac4eb9d2 skipped: already exists  
Copying blob 51d1f07906b7 skipped: already exists  
Copying blob 2e3e9a37b01a skipped: already exists  
Copying blob 0a33514831e0 skipped: already exists  
Copying blob 0959bfd25ad8 skipped: already exists  
Copying blob a51f1fa60300 skipped: already exists  
Copying blob ff966a1a7474 skipped: already exists  
Copying blob 7991f6eaae37 skipped: already exists  
Copying blob e749560e2e95 skipped: already exists  
Copying blob 1970e732555b skipped: already exists  
Copying blob a364eb0a3306 skipped: already exists  
Copying blob e424a52cb810 skipped: already exists  
Copying blob 684d311352d5 skipped: already exists  
Copying blob b5d24fee7368 skipped: already exists  
Copying blob 456d5ce9fa45 skipped: already exists  
Copying blob e75b93abd18c skipped: already exists  
Copying blob 2dcddcf4f394 skipped: already exists  
Copying blob 9b04b7ca178f skipped: already exists  
Copying blob 25b9435a04c1 skipped: already exists  
Copying blob 7fe3c7fb343b skipped: already exists  
Copying config 10d11d26be done   | 
Writing manifest to image destination
Error: creating container storage: creating read-write layer with ID "d6a14fcd80c5329019d475c311b65dc477b06c011b114bfe7eeb457e43f817ab": Stat /home/podman/containers/storage/overlay/1253c5d3f417e128bb83a4aaffedc0e13d3c806e5c80df62a97aadeba9fba5df/diff: no such file or directory
exit code: 125
podman start authentik-server
Error: no container with name or ID "authentik-server" found: no such container
exit code: 125
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-worker -d --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-worker --env-file /home/podman/containers/compose/authentik/.env -e AUTHENTIK_REDIS__HOST=authentik-redis -e AUTHENTIK_POSTGRESQL__HOST=authentik-postgresql -e AUTHENTIK_POSTGRESQL__USER=authentik -e AUTHENTIK_POSTGRESQL__NAME=authentik -e AUTHENTIK_POSTGRESQL__PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -e AUTHENTIK_SECRET_KEY_FILE -e AUTHENTIK_ERROR_REPORTING__ENABLED=True -v /run/user/1001/podman/podman.sock:/var/run/docker.sock -v /home/podman/containers/data/authentik/media:/media -v /home/podman/containers/data/authentik/certs:/certs -v /home/podman/containers/data/authentik/custom-templates:/templates --net authentik --network-alias authentik-worker -u root --pull always --restart unless-stopped ghcr.io/goauthentik/server:latest worker
Trying to pull ghcr.io/goauthentik/server:latest...
Getting image source signatures
Copying blob d9c631089dc7 skipped: already exists  
Copying blob e1caac4eb9d2 skipped: already exists  
Copying blob 51d1f07906b7 skipped: already exists  
Copying blob 2e3e9a37b01a skipped: already exists  
Copying blob 0a33514831e0 skipped: already exists  
Copying blob 0959bfd25ad8 skipped: already exists  
Copying blob a51f1fa60300 skipped: already exists  
Copying blob ff966a1a7474 skipped: already exists  
Copying blob 7991f6eaae37 skipped: already exists  
Copying blob e749560e2e95 skipped: already exists  
Copying blob 1970e732555b skipped: already exists  
Copying blob a364eb0a3306 skipped: already exists  
Copying blob e424a52cb810 skipped: already exists  
Copying blob 684d311352d5 skipped: already exists  
Copying blob b5d24fee7368 skipped: already exists  
Copying blob 456d5ce9fa45 skipped: already exists  
Copying blob e75b93abd18c skipped: already exists  
Copying blob 2dcddcf4f394 skipped: already exists  
Copying blob 9b04b7ca178f skipped: already exists  
Copying blob 25b9435a04c1 skipped: already exists  
Copying blob 7fe3c7fb343b skipped: already exists  
Copying config 10d11d26be done   | 
Writing manifest to image destination
Error: creating container storage: creating read-write layer with ID "efa3cffb573d1360344283c3b2333bf3b0a1db9a99db2c4a2e460c5eefc59bd3": Stat /home/podman/containers/storage/overlay/1253c5d3f417e128bb83a4aaffedc0e13d3c806e5c80df62a97aadeba9fba5df/diff: no such file or directory
exit code: 125
podman start authentik-worker
Error: no container with name or ID "authentik-worker" found: no such container
exit code: 125
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-postgresql -d --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-postgresql --env-file /home/podman/containers/compose/authentik/.env -e POSTGRES_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXX-e POSTGRES_USER=authentik -e POSTGRES_DB=authentik -v /home/podman/containers/data/authentik/database:/var/lib/postgresql/data --net authentik --network-alias authentik-postgresql --pull always --restart unless-stopped postgres:12-alpine
Resolved "postgres" as an alias (/home/podman/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/postgres:12-alpine...
Getting image source signatures
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 6866d4a6f5d7 done   | 
Copying blob c769d8d77aec done   | 
Copying blob 05b0ca72b17d done   | 
Copying blob 7da40a38e5aa done   | 
Copying blob cd41d8466042 done   | 
Copying blob 2922012aa875 done   | 
Copying blob 11d974fc9638 done   | 
Copying blob 2fc9c6d608c1 done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:c769d8d77aec50e12b081f78b4bc613dbbe60bd253f9d58af69ef2382d04e1ca": creating read-only layer with ID "bbb952f8cd382c2a5f818abac5f09f3a4d663ec87aa6dceb3ea7f5d83397f155": Stat /home/podman/containers/storage/overlay/d4fc045c9e3a848011de66f34b81f052d4f2c15a17bb196d637e526349601820/diff: no such file or directory
exit code: 125
podman start authentik-postgresql
Error: no container with name or ID "authentik-postgresql" found: no such container
exit code: 125
['podman', 'network', 'exists', 'authentik']
podman run --name=authentik-redis -d --label io.podman.compose.config-hash=a81c517f81fd01258e9414e3d9ed39799ee8c83333367253471a37a0b5e7d110 --label io.podman.compose.project=authentik --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@authentik.service --label com.docker.compose.project=authentik --label com.docker.compose.project.working_dir=/home/podman/containers/compose/authentik --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=authentik-redis -v /home/podman/containers/data/authentik/redis:/data --net authentik --network-alias authentik-redis --pull always --restart unless-stopped --healthcheck-command /bin/sh -c 'redis-cli ping | grep PONG' --healthcheck-interval 30s --healthcheck-timeout 3s --healthcheck-start-period 20s --healthcheck-retries 5 redis:alpine --save 60 1 --loglevel warning
Resolved "redis" as an alias (/home/podman/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/redis:alpine...
Getting image source signatures
Copying blob 5913474e0f39 skipped: already exists  
Copying blob 4abcf2066143 skipped: already exists  
Copying blob 5c3180d10209 skipped: already exists  
Copying blob f76326fd8e6b skipped: already exists  
Copying blob 034c076ba1e7 skipped: already exists  
Copying blob dffcad17539b skipped: already exists  
Copying blob cc6fccbbefa3 done   | 
Error: copying system image from manifest list: trying to reuse blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 at destination: reading layer "d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340" for blob "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1": 1 error occurred:
	* creating file-getter: readlink /home/podman/containers/storage/overlay/d60de4f697721993bfa886ca9182526beeadbd536be1360525558b0acfdb8340/diff: no such file or directory


exit code: 125
podman start authentik-redis
Error: no container with name or ID "authentik-redis" found: no such container
exit code: 125

@luckylinux (Author)

Latest podman info in case people are wondering ...

host:
  arch: amd64
  buildahVersion: 1.33.5
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.10+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 99.12
    systemPercent: 0.35
    userPercent: 0.54
  cpus: 1
  databaseBackend: sqlite
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 2048
  hostname: ra
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.1.0-18-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 883855360
  memTotal: 2012446720
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-5_amd64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-3_amd64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: crun_1.14.4-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20230309.7c7625d-1_amd64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 1023143936
  swapTotal: 1023406080
  uptime: 19h 48m 37.00s (Approximately 0.79 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs_1.13-1_amd64
      Version: |-
        fusermount3 version: 3.14.0
        fuse-overlayfs: version 1.13-dev
        FUSE library version 3.14.0
        using FUSE kernel interface version 7.31
    overlay.mountopt: nodev
  graphRoot: /home/podman/containers/storage
  graphRootAllocated: 18969468928
  graphRootUsed: 5155635200
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/1001
  transientStore: false
  volumePath: /home/podman/containers/storage/volumes
version:
  APIVersion: 4.9.3
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.21.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.3


@luckylinux (Author) commented Mar 29, 2024

A quick search for the error brings up #16882.

However, I don't have such a .json file.

Any idea?

Otherwise, is it possible to somehow disable the "reuse blob" step, since it keeps generating issues?

@luckylinux (Author)

Hit it again on yet another machine (EXT4).

Might this be related to some Podman regressions after the upgrade to Podman 4.9.3?

Or possibly something to do with Buildah, etc.?

@luckylinux (Author)

@giuseppe Any idea ?

@luckylinux (Author)

podman_pull_certbot_podman_dns-cloudflare_latest.log

Now it's complaining about storage corruption.

This is even though I ran podman system reset and did rm -rf /home/podman/storage/* as root, since some overlay files refused to be removed. I rebooted, then ran that command as the podman user, which failed.
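For reference, the sequence described above amounts to roughly this (a sketch; paths from my setup):

podman system reset                  # wipe all podman storage
sudo rm -rf /home/podman/storage/*   # as root: some overlay files refused removal otherwise
sudo reboot
# after the reboot, the same removal as the podman user failed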

These are some potential clues
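(These were captured with a debug-level pull, roughly the following command; the image name is assumed from the attached log file name:)

podman --log-level=debug pull docker.io/certbot/dns-cloudflare:latest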

DEBU[0002] Detected compression format gzip             
DEBU[0002] Using original blob without modification     
DEBU[0002] Checking if we can reuse blob sha256:affc648adbf9e9a671939d1df66191d3ce366c769d9f4ac8e87797278ef9b65f: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0002] Failed to retrieve partial blob: convert_images not configured 
DEBU[0002] Downloading /v2/certbot/dns-cloudflare/blobs/sha256:affc648adbf9e9a671939d1df66191d3ce366c769d9f4ac8e87797278ef9b65f 
DEBU[0002] GET https://registry-1.docker.io/v2/certbot/dns-cloudflare/blobs/sha256:affc648adbf9e9a671939d1df66191d3ce366c769d9f4ac8e87797278ef9b65f 
WARN[0002] Can't read link "/home/podman/images/overlay/l/WHGMGX4YYAW52YXKIY6BPIDUCP" because it does not exist. A storage corruption might have occurred, attempting to recreate the missing symlinks. It might be best wipe the storage to avoid further errors due to storage corruption. 
DEBU[0002] Ignoring global metacopy option, the mount program doesn't support it 
...
...
DEBU[0003] Checking if we can reuse blob sha256:2cbac784a86e9a303ab0232d347dde1e12f0ec02738190e1c8b4beebcd518e67: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0003] Failed to retrieve partial blob: convert_images not configured 

@luckylinux (Author)

What other logs should I provide? It's still not working ...

@giuseppe (Member)

I can't think of anything in particular that could cause that.

My suggestion is to start from a default configuration and iteratively change each path until you find what causes the error.
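For example, a sketch of that bisection (assuming the rootless config lives in its usual location):

# fall back to the built-in defaults
mv ~/.config/containers/storage.conf ~/.config/containers/storage.conf.bak
podman system reset

# confirm podman now uses the default rootless paths
podman info --format '{{.Store.GraphRoot}}'

# then restore one customized path at a time and re-test pulls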

@luckylinux (Author)

Well, that's a bummer ...

I have some indications that specifying the folder directly in storage.conf, instead of using a --rbind mount, seems to work better.

But I also saw in the past that it stopped working again after a while.
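Concretely, the two setups being compared look roughly like this (a sketch using the ZFS paths from this thread):

# variant A: bind-mount the dataset onto the path configured in storage.conf
#            (graphroot = "/home/podman/containers/storage")
mount --rbind /zdata/PODMAN/STORAGE /home/podman/containers/storage

# variant B: no bind mount; storage.conf points straight at the dataset
#            (graphroot = "/zdata/PODMAN/STORAGE")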

@luckylinux (Author)

@giuseppe

I had a read of #20203.

It seems that DISABLING the imagestore option (commenting out the line), and possibly also additionalimagestores, reduces the occurrence of this issue.

I'm not sure whether that's enough to eliminate it completely, since it's so intermittent that it's difficult to diagnose ...

@luckylinux (Author) commented May 18, 2024

@giuseppe , @rhatdan: I don't know if you want to do something about this.

As far as I'm concerned, removing / commenting out the imagestore (and possibly also additionalimagestores) options in storage.conf essentially solved the issue.

I'm now running happily with up to 20-40 containers, and the issue hasn't reappeared after one month.

On some hosts where the issue still appeared, I checked storage.conf and, behold ... the imagestore option was still active. I removed the option, did a podman system reset, and it ran happily ever after.
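In storage.conf terms, the working configuration is simply this (a sketch of my setup, with the offending option commented out):

[storage]
driver = "overlay"
graphroot = "/home/podman/containers/storage"
# imagestore = "/home/podman/containers/images"

[storage.options]
additionalimagestores = [
]

followed by a podman system reset so the store is recreated cleanly.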

I could replicate the issue on AMD64 ZFS, AMD64 EXT4 and ARM64/AARCH64 ZFS, so I would say it is neither architecture- nor filesystem-dependent. After a few containers are pulled / run, the issue shows up: not consistently (sometimes after 2 containers, sometimes after 5), but definitely before reaching any kind of "sensible" (~ 10 ?) number of containers.

If you do NOT want to find the root cause of the issue (which IMHO is fair; it's probably not the highest priority for you), then I propose you add a warning to the documentation, in BOLD, RED, and preceded by "IMPORTANT" three times, that enabling imagestore (and possibly also additionalimagestores) can lead to this issue, then reference the error (e.g. "Error: copying system image from manifest list: trying to reuse blob */diff: no such file or directory") and maybe link to this issue and possibly #20203.

I still cannot pinpoint exactly which images the issue shows up on; possibly it happens when pulling a newer image / an update to a blob of an existing image.

The feeling still holds that it occurs more with alpine and redis (and redis is based on alpine, at least the one I was installing) or other similar / derived images.

I cannot, however, pinpoint the issue more precisely; that's why I suggest you update the documentation accordingly, if investigating the root cause is not a priority for you.

I'd be happy if this were at least a "documented BUG" rather than an "obscure feature" 👍.

If somebody wants to replicate this on their end, try it in a VM with imagestore enabled (if I use /home/podman/containers/storage for graphRoot, I'd use /home/podman/containers/images for imagestore).

For ZFS it's /zdata/PODMAN/STORAGE for graphRoot and /zdata/PODMAN/IMAGES for imagestore.

Then do a few pulls (alpine and redis suggested), do your "normal stuff" (I usually install e.g. traefik), install a few other containers, and see if you can replicate it; a sketch follows below.
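A hypothetical reproduction script along those lines (image choice based on my observations above; run with imagestore enabled):

for image in docker.io/library/alpine:latest docker.io/library/redis:alpine docker.io/library/traefik:latest; do
    podman pull $image
done
# re-pull and run, watching for the blob-reuse error
podman run --rm --pull always docker.io/library/redis:alpine redis-cli --version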
