
Rootless containers cannot be started: RLIMIT_NPROC error #1507

Closed
hrismarin opened this issue Jun 5, 2023 · 20 comments

@hrismarin

Describe the bug

I am running several WordPress containers behind nginx reverse-proxy in a KVM/QEMU VM. I am also running a pod, which includes mariadb and a Document/File Sharing platform on a bare metal machine.

Recently, when FCOS was updated to 38.20230527.1.1 next, the containers were not automatically started by their systemd services after a reboot. When I try to run them manually I get the following error:

Error: unable to start container "CONTAINER ID": crun: setrlimit RLIMIT_NPROC: Operation not permitted: OCI permission denied

When I rolled back to the previous deployment, the containers started as usual.

Reproduction steps

  1. Install or use already installed and working FCOS 38.20230514.1.0 next.
  2. Create and run a container. podman container create and/or podman container run commands must be executed without the --rm option.
  3. Make sure the application in the container is up and running.
  4. (Optional) Stop the container. The container must be stopped, not removed.
  5. Update to FCOS 38.20230527.1.1 next.
  6. After the update and reboot, start the previously created container.

Expected behavior

The container and application are up and running.

Actual behavior

podman container start CONTAINER NAME failed with

Error: unable to start container "CONTAINER ID": crun: setrlimit RLIMIT_NPROC: Operation not permitted: OCI permission denied

System details

Systems 1 & 2 (Identical, Bare Metal)
Architecture: x86_64
Motherboard: Gigabyte Technology G41M-ES2L
CPU: Pentium(R) Dual-Core CPU E6300 @ 2.80GHz
RAM: 4 GB

System 3 (KVM/QEMU)
Architecture: x86_64
Host: CentOS Stream 8
vCPUs: 2
vRAM: 4 GB

System 4 (KVM/QEMU)
Architecture: x86_64
Host: Fedora Rawhide
vCPUs: 2
vRAM: 4 GB

Butane or Ignition config

### System 1 ###
variant: fcos
version: 1.5.0

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIk2z5yEL8TH9NrMuPNHC1ra4MT0fuJX31az6RYbj595

boot_device:
  mirror:
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc

storage:

  disks:

    - device: /dev/sda
#    - device: /dev/disk/by-id/coreos-boot-disk
#      wipe_table: false
#      partitions:
#        # Delete the BIOS-BOOT partition when BIOS booting
#        - number: 1
#          wipe_partition_entry: true
#          should_exist: false
#        # Delete the EFI-SYSTEM partition when BIOS booting
#        - number: 2
#          wipe_partition_entry: true
#          should_exist: false
      partitions:
        # Override size of root partition on first disk, via the label
        # generated for boot_device.mirror
        - label: root-1
#        - number: 4
#          label: root
          resize: true
          size_mib: 10240 

        # Add a new var partition filling the remainder of the disk
        - label: var-1
#        - label: var
          size_mib: 0

    - device: /dev/sdb
      partitions:
        # Similarly for second disk
        - label: root-2
          resize: true
          size_mib: 10240

        - label: var-2
          size_mib: 0

    - device: /dev/sdc
      partitions:
        # Similarly for the third disk
        - label: root-3
          resize: true
          size_mib: 10240

        - label: var-3
          size_mib: 0

    # Backup disk
    - device: /dev/sdd
      wipe_table: true
      partitions:
        - label: backup
#          resize: true
#          size_mib: 0 


  raid:
    - name: md-var
      level: raid5
      devices:
        - /dev/disk/by-partlabel/var-1
        - /dev/disk/by-partlabel/var-2
        - /dev/disk/by-partlabel/var-3

  filesystems:

    - device: /dev/md/md-root
##    - device: /dev/disk/by-partlabel/root
#      path: /
#      label: root 
      wipe_filesystem: true
      format: ext4 
##      with_mount_unit: true

    - device: /dev/md/md-var
#    - device: /dev/disk/by-partlabel/var
      path: /var
      label: var
      wipe_filesystem: true
      # We can select the filesystem we'd like
      format: ext4 
      # Ask Butane to generate a mount unit for us so that this filesystem
      # gets mounted in the real root
      with_mount_unit: true

    - device: /dev/disk/by-partlabel/backup
      path: /var/backup
      label: backup
      wipe_filesystem: true
      format: ext4
      with_mount_unit: true


  files:

    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          server.cells.lan

    - path: /var/home/core/.bashrc
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.bashrc

    - path: /var/home/core/.git-prompt.sh
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.git-prompt.sh

    - path: /var/home/core/.git-completion.bash
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.git-completion.bash

    - path: /var/home/core/.bashrc.d/history.bash
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.bashrc.d/history.bash

  directories:
    - path: /var/home/core/.bashrc.d
      user:
        id: 1000
      group:
        id: 1000

    - path: /var/backup
      user:
        id: 1000
      group:
        id: 1000
        
        
### System 2 ###
variant: fcos
version: 1.5.0

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIk2z5yEL8TH9NrMuPNHC1ra4MT0fuJX31az6RYbj595

storage:

  files:

    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          fcos-test.lan

    - path: /var/home/core/.bashrc
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.bashrc

    - path: /var/home/core/.git-prompt.sh
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.git-prompt.sh

    - path: /var/home/core/.git-completion.bash
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.git-completion.bash

    - path: /var/home/core/.bashrc.d/history.bash
      overwrite: true
      user:
        id: 1000
      group:
        id: 1000
      contents:
        source: http://192.168.30.20:8000/.bashrc.d/history.bash

  directories:
    - path: /var/home/core/.bashrc.d
      user:
        id: 1000
      group:
        id: 1000
        
        
### System 3 ###
variant: fcos
version: 1.5.0

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIk2z5yEL8TH9NrMuPNHC1ra4MT0fuJX31az6RYbj595


storage:

  files:

    - path: /etc/NetworkManager/system-connections/external.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=External Connection
          type=ethernet
          interface-name=enp1s0
          [ipv4]
          method=manual
          address1=192.168.30.32/24,192.168.30.1
          dns=192.168.30.1;

    - path: /etc/NetworkManager/system-connections/internal.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=Internal Connection
          type=ethernet
          interface-name=enp2s0
          [ipv4]
          method=manual
          address1=192.168.122.32/24,192.168.122.1
          dns=192.168.122.1;


    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
         philipova.com 

  
  # end files


  disks:

    # The link to the block device the OS was booted from.
  - device: /dev/disk/by-id/coreos-boot-disk
    # We do not want to wipe the partition table since this is the primary
    # device.
    wipe_table: false

    partitions:

    - number: 4
      label: root
      # Allocate 8 GiB to the rootfs.
      size_mib: 8192
      resize: true

      # We assign a descriptive label to the partition. This is important
      # for referring to it in a device-agnostic way in other parts of the
      # configuration.
    - label: containers
      size_mib: 0

    - label: rootless_containers
      size_mib: 0

    - label: mysql
      size_mib: 0

  filesystems:

    - path: /var/lib/containers
      device: /dev/disk/by-partlabel/containers
      # We can select the filesystem we'd like.
      format: xfs
      # Ask Butane to generate a mount unit for us so that this filesystem
      # gets mounted in the real root.
      with_mount_unit: true

    - path: /var/home/core/.local/share/containers
      device: /dev/disk/by-partlabel/rootless_containers
      format: xfs
      with_mount_unit: true

    - path: /var/lib/mysql
      device: /dev/disk/by-partlabel/mysql
      format: xfs
      with_mount_unit: true

# end storage

systemd:

  units:

    - name: serial-getty@ttyS0.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure`
          ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM


### System 4 ###
variant: fcos
version: 1.5.0

passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIk2z5yEL8TH9NrMuPNHC1ra4MT0fuJX31az6RYbj595


storage:

  files:

    - path: /etc/NetworkManager/system-connections/external.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=External Connection
          type=ethernet
          interface-name=enp1s0
          [ipv4]
          method=manual
          address1=192.168.30.42/24,192.168.30.1
          dns=192.168.30.1;

    - path: /etc/NetworkManager/system-connections/internal.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=Internal Connection
          type=ethernet
          interface-name=enp2s0
          [ipv4]
          method=manual
          address1=192.168.122.42/24,192.168.122.1
          dns=192.168.122.1;

systemd:

  units:

    - name: serial-getty@ttyS0.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure`
          ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM

Additional information

Creating, running, starting, and stopping a new container from an existing image works as expected on the updated 38.20230527.1.1 next system.

Replacing the kernel with the older 6.2.13-300.fc38.x86_64 version and also overriding crun with the older 1.8.4-1.fc38.x86_64 version on the 38.20230527.1.1 next system did not solve the issue.

Systems 1 and 3 already had FCOS 38.20230514.1.0 next installed and working.

On systems 2 and 4 I repeated the reproduction steps intentionally, just in case.

@dustymabe
Member

This could be containers/podman#18696 (fixed by containers/podman#18721 which isn't in any release yet).

@Cydox

Cydox commented Jun 5, 2023

This could be containers/podman#18696 (fixed by containers/podman#18721 which isn't in any release yet).

I think so too.

@hrismarin : To confirm, you can run ulimit -u (as a regular non-root user) before and after the update. If the number is lower after the update, this is the same issue.
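The same check can be scripted; a minimal Python sketch that reads what ulimit -u (soft) and ulimit -Hu (hard) report:

```python
import resource

# RLIMIT_NPROC is the per-user process limit; the soft value is what
# `ulimit -u` prints, the hard value is what `ulimit -Hu` prints.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"soft nproc limit (ulimit -u):  {soft}")
print(f"hard nproc limit (ulimit -Hu): {hard}")
```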

@dustymabe dustymabe added the status/pending-upstream-release Fixed upstream. Waiting on an upstream component source code release. label Jun 5, 2023
@hrismarin
Author

hrismarin commented Jun 5, 2023

15282 vs 15257, so it should be the same issue.

@dustymabe dustymabe changed the title Since FCOS 38.20230527.1.1 next update, pods or containers cannot be started in rootless mode rootless containers cannot be started: RLIMIT_NPROC error Jun 5, 2023
@dustymabe dustymabe changed the title rootless containers cannot be started: RLIMIT_NPROC error Rootless containers cannot be started: RLIMIT_NPROC error Jun 5, 2023
@dustymabe
Member

What I don't quite fully understand is that the podman version didn't change between 38.20230514.1.0 and 38.20230527.1.1, but the fix is in podman?

Was it a different piece of software that caused another change that affected containers?

@dustymabe
Member

Here should be the full list of changes:

Added:

    ipcalc-1.0.2-2.fc38.x86_64
    passt-0^20230509.g96f8d55-1.fc38.x86_64
    passt-selinux-0^20230509.g96f8d55-1.fc38.noarch 

Upgraded:

    afterburn 5.4.0-2.fc38.x86_64 → 5.4.2-1.fc38.x86_64
    afterburn-dracut 5.4.0-2.fc38.x86_64 → 5.4.2-1.fc38.x86_64
    amd-gpu-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    atheros-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    bind-libs 32:9.18.14-1.fc38.x86_64 → 32:9.18.15-1.fc38.x86_64
    bind-license 32:9.18.14-1.fc38.noarch → 32:9.18.15-1.fc38.noarch
    bind-utils 32:9.18.14-1.fc38.x86_64 → 32:9.18.15-1.fc38.x86_64
    brcmfmac-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    c-ares 1.19.0-1.fc38.x86_64 → 1.19.1-1.fc38.x86_64
    container-selinux 2:2.211.1-1.fc38.noarch → 2:2.215.0-2.fc38.noarch
    crun 1.8.4-1.fc38.x86_64 → 1.8.5-1.fc38.x86_64
    ethtool 2:6.2-1.fc38.x86_64 → 2:6.3-1.fc38.x86_64
    fuse-overlayfs 1.10-3.fc38.x86_64 → 1.12-1.fc38.x86_64
    glib2 2.76.2-1.fc38.x86_64 → 2.76.3-1.fc38.x86_64
    intel-gpu-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    iptables-legacy 1.8.9-2.fc38.x86_64 → 1.8.9-4.fc38.x86_64
    iptables-legacy-libs 1.8.9-2.fc38.x86_64 → 1.8.9-4.fc38.x86_64
    iptables-libs 1.8.9-2.fc38.x86_64 → 1.8.9-4.fc38.x86_64
    iptables-nft 1.8.9-2.fc38.x86_64 → 1.8.9-4.fc38.x86_64
    iptables-services 1.8.9-2.fc38.noarch → 1.8.9-4.fc38.noarch
    iptables-utils 1.8.9-2.fc38.x86_64 → 1.8.9-4.fc38.x86_64
    irqbalance 2:1.9.1-2.fc38.x86_64 → 2:1.9.2-1.fc38.x86_64
    libgcc 13.1.1-1.fc38.x86_64 → 13.1.1-2.fc38.x86_64
    libipa_hbac 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    libmodulemd 2.14.0-5.fc38.x86_64 → 2.15.0-2.fc38.x86_64
    libsss_certmap 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    libsss_idmap 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    libsss_nss_idmap 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    libsss_sudo 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    libstdc++ 13.1.1-1.fc38.x86_64 → 13.1.1-2.fc38.x86_64
    linux-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    linux-firmware-whence 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    microcode_ctl 2:2.1-54.fc38.x86_64 → 2:2.1-55.fc38.x86_64
    mt7xxx-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    nvidia-gpu-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    ostree 2023.1-2.fc38.x86_64 → 2023.3-1.fc38.x86_64
    ostree-libs 2023.1-2.fc38.x86_64 → 2023.3-1.fc38.x86_64
    realtek-firmware 20230404-149.fc38.noarch → 20230515-150.fc38.noarch
    rpm-ostree 2023.3-1.fc38.x86_64 → 2023.4-2.fc38.x86_64
    rpm-ostree-libs 2023.3-1.fc38.x86_64 → 2023.4-2.fc38.x86_64
    rpm-sequoia 1.4.0-2.fc38.x86_64 → 1.4.0-3.fc38.x86_64
    sssd-ad 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-client 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-common 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-common-pac 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-ipa 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-krb5 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-krb5-common 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-ldap 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    sssd-nfs-idmap 2.8.2-4.fc38.x86_64 → 2.9.0-1.fc38.x86_64
    vim-data 2:9.0.1486-1.fc38.noarch → 2:9.0.1575-1.fc38.noarch
    vim-minimal 2:9.0.1486-1.fc38.x86_64 → 2:9.0.1575-1.fc38.x86_64 


@Cydox

Cydox commented Jun 5, 2023

What I don't quite fully understand is that the podman version didn't change between 38.20230514.1.0 and 38.20230527.1.1, but the fix is in podman?

podman isn't really broken in either version. It's just that a container created on 38.20230514.1.0 cannot be started on 38.20230527.1.1.

Upon starting the container, the runtime sets the container's resource limits (number of processes, open files, etc.). A regular user (as is the case here) is only allowed to set a resource limit up to the current hard limit at most. Trying to set a soft limit higher than the hard limit, or to raise the hard limit itself, results in the error seen here. https://linux.die.net/man/2/setrlimit

The problem with podman prior to the fix is that the default resource limits for a container were set at container-creation time instead of when it is started. So if for any reason the resource limits for an unprivileged user decrease between the container's creation and its launch, this error results: podman (actually the runtime) still tries to apply the higher resource limit that was saved in the container's config at creation time, and this fails because the current hard limit is lower.
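The two ways setrlimit(2) can refuse here can be sketched with Python's resource module (a minimal illustration, not crun's actual code): once the hard limit has dropped, re-applying the higher limit saved at creation time fails.

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

# Simulate the hard limit dropping between container creation and start.
# Lowering limits is always permitted for an unprivileged process.
new_hard = 4096 if hard == resource.RLIM_INFINITY else min(hard, 4096)
resource.setrlimit(resource.RLIMIT_NPROC, (new_hard, new_hard))

# A soft limit above the hard limit is rejected (EINVAL from the kernel,
# surfaced as ValueError by Python).
try:
    resource.setrlimit(resource.RLIMIT_NPROC, (new_hard + 1, new_hard))
except ValueError as err:
    print("soft > hard rejected:", err)

# Raising the hard limit back requires CAP_SYS_RESOURCE; without it the
# kernel returns EPERM -- the "Operation not permitted" crun reports.
# (Python's resource module surfaces this as ValueError as well.)
try:
    resource.setrlimit(resource.RLIMIT_NPROC, (new_hard, new_hard + 1))
    print("hard limit raised (process has CAP_SYS_RESOURCE)")
except ValueError as err:
    print("cannot raise hard limit:", err)
```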

Was it a different piece of software that caused another change that affected containers?

I don't know which package caused the decrease in the resource limits. Maybe linux-firmware? The limit is also affected by the amount of memory in the system, so people can potentially run into this issue without doing any updates.

Once the fix is in an upstream release and it makes it into coreos, people can unfortunately still run into this issue, because containers created with a podman version without the fix will keep the resource limit saved at creation time. So even after the upstream fix is in coreos, a future coreos update or a memory configuration change can still trigger this issue on older containers.

@hrismarin
Author

Maybe linux-firmware?

Unfortunately

LocalOverrides: linux-firmware linux-firmware-whence 20230515-150.fc38 -> 20230404-149.fc38

did not resolve the issue.

@hrismarin
Author

Thank you @travier!

The only reason for me to start previously created containers, so far, is that they are automatically started by their systemd services. This is perhaps not the most proper and recommended way. I have already tried and implemented some workarounds.

I'm just trying to participate as much as I can to prevent similar issues in the future... if that's even possible.

@dustymabe
Member

Going off of #1507 (comment), containers/podman@249f046 is in v4.6.0+.

@dustymabe dustymabe added status/pending-testing-release Fixed upstream. Waiting on a testing release. status/pending-next-release Fixed upstream. Waiting on a next release. and removed status/pending-upstream-release Fixed upstream. Waiting on an upstream component source code release. labels Aug 11, 2023
@dustymabe
Member

The fix for this went into next stream release 38.20230806.1.0. Please try out the new release and report issues.

@dustymabe
Member

The fix for this went into testing stream release 38.20230806.2.0. Please try out the new release and report issues.

@dustymabe dustymabe added status/pending-stable-release Fixed upstream and in testing. Waiting on stable release. and removed status/pending-testing-release Fixed upstream. Waiting on a testing release. status/pending-next-release Fixed upstream. Waiting on a next release. labels Aug 11, 2023
@hrismarin
Author

hrismarin commented Aug 18, 2023

Since I no longer run containers without the --rm option (more or less because of this issue), I reproduced all the steps.
Unfortunately, the same error is returned.

[core@localhost ~]$ podman container list --all 
CONTAINER ID  IMAGE                                 COMMAND               CREATED            STATUS                     PORTS                   NAMES
d07b8a318528  quay.io/fedora/httpd-24-micro:latest  /usr/bin/run-http...  About an hour ago  Exited (0) 40 minutes ago  0.0.0.0:8080->8080/tcp  RLIMIT_NPROC
[core@localhost ~]$ podman container start RLIMIT_NPROC 
Error: unable to start container "d07b8a31852802f1151aa1b25ab91006ab0c0f08cd913b41c588cd6ddd432e3b": crun: setrlimit `RLIMIT_NPROC`: Operation not permitted: OCI permission denied
[core@localhost ~]$ podman --version 
podman version 4.6.0
[core@localhost ~]$ rpm-ostree status 
State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: active; periodically polling for updates (last checked Fri 2023-08-18 13:19:41 UTC)
Deployments:
● fedora:fedora/x86_64/coreos/next
                  Version: 38.20230806.1.0 (2023-08-07T18:56:40Z)
                   Commit: ec10f2df99e1bfd4621022f5d11950cea5395c867ce3e9a4eb2e1f5aee4cf0e5
             GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464

  fedora:fedora/x86_64/coreos/next
                  Version: 38.20230514.1.0 (2023-05-15T11:40:36Z)
                   Commit: a3d4c8820c43eeb733062d4605e74891074674cd87b175c4f6ef84b0e2e1b98f
             GPGSignature: Valid signature by 6A51BBABBA3D5467B6171221809A8D7CEB10B464

However, in the following scenario:

  1. Create a container with podman version 4.6.0
  2. Lower the maximum number of processes
  3. Reboot
  4. Start the container

The container is running, so it looks like the fix in podman is working.

Please let me know if you need more details or additional tests. I would be glad to do so.

@travier
Member

travier commented Aug 19, 2023

Containers that were created by older (unfixed) podman versions cannot be "fixed" after the fact. The fix only applies to new containers created by podman 4.6.

@hrismarin
Author

Yes, it was already assumed by @Cydox:

Once the fix is in an upstream release and it makes it into coreos, people can still run into this issue unfortunately, because containers that were created with a version of podman without the fix will still have the resource limit set from creation time. So a future coreos update or memory configuration change even after the upstream fix is in coreos can still trigger this issue on older containers.

I just wanted to check.

@travier
Member

travier commented Aug 21, 2023

Workarounds

The issue is fixed in podman 4.6, but containers created before this release cannot be fixed without being re-created or without raising the nproc limit.

Save and restore containers

You can export and import your existing containers. See for example:

Set a higher ulimit for your user

Write the following file, replacing username with your user name and 150000 with a value larger than the output of ulimit -u:

$ cat /etc/security/limits.d/50-podman-ulimits.conf
username hard nproc 150000

Reboot to apply the configuration change.
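A suitable replacement value can also be computed programmatically. A hypothetical Python sketch (the file path and the 150000 floor come from the example above; the 2x headroom factor is an assumption):

```python
import os
import pwd
import resource

# Current user name; fall back to $USER if the uid has no passwd entry.
try:
    user = pwd.getpwuid(os.getuid()).pw_name
except KeyError:
    user = os.environ.get("USER", "core")

# Pick a value comfortably above the current hard nproc limit; 150000 is
# the floor used in the example above, not a magic constant.
_, hard = resource.getrlimit(resource.RLIMIT_NPROC)
value = 150000 if hard == resource.RLIM_INFINITY else max(2 * hard, 150000)

# Line to place in /etc/security/limits.d/50-podman-ulimits.conf
print(f"{user} hard nproc {value}")
```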

@dustymabe
Member

The fix for this went into stable stream release 38.20230806.3.0.

@dustymabe dustymabe removed the status/pending-stable-release Fixed upstream and in testing. Waiting on stable release. label Aug 22, 2023
@tidux

tidux commented Aug 29, 2023

This is still broken in the latest Kinoite/Podman.

$ rpm-ostree status
State: idle
Deployments:
● fedora:fedora/38/x86_64/kinoite
                  Version: 38.20230829.0 (2023-08-29T00:54:20Z)
<snip>
$ podman --version
podman version 4.6.1
$ toolbox enter
Error: failed to start container fedora-toolbox-38

@Cydox

Cydox commented Aug 29, 2023

@tidux What podman version was the container created with? It is only fixed for containers that were created with 4.6 or later.

@tidux

tidux commented Aug 29, 2023

I ended up deleting the containers with podman rm -a and starting over. It did have the blank entrypoint problem, so I think it was created with 4.5.
