
Can't start container: error reading container (probably exited) json message: EOF #1430

Closed
jlebon opened this issue Sep 8, 2018 · 21 comments

jlebon commented Sep 8, 2018

/kind bug

Description

After finally upgrading to 0.8.4 from 0.7.4 (which I had been holding off due to #1283), I'm now hitting:

$ sudo podman start pet
unable to start container "pet": error reading container (probably exited) json message: EOF

Steps to reproduce the issue:

  1. I'm not sure how to reproduce this beyond it potentially being a compat issue. I can create and run new containers fine; it seems to be specific to starting a previously created container.
$ ros db diff 7167d79cc60336d68efcd22a7efb5ca763bfa08368722a3efdf6e119f36b87bc 48a190a11b072807d3ea259c8524c50a761546f5b9fc1a603f3ecc864304f25f | grep podman
  podman 0.7.4-4.git80612fb.fc28 -> 0.8.4-2.git9f9b8cf.fc28

Output of podman info:

host:
  Conmon:
    package: podman-0.8.4-2.git9f9b8cf.fc28.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: fd2097f7ef3d0aacd9ed450ddb36fe544a242de4-dirty'
  MemFree: 8360644608
  MemTotal: 12491980800
  OCIRuntime:
    package: runc-1.0.0-51.dev.gitfdd8055.fc28.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
  SwapFree: 8321495040
  SwapTotal: 8321495040
  arch: amd64
  cpus: 4
  hostname: lux
  kernel: 4.17.19-200.fc28.x86_64
  os: linux
  uptime: 17m 50.58s
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 2
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mountopt=nodev
  - overlay.override_kernel_check=true
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 3
  RunRoot: /var/run/containers/storage

Additional environment details (AWS, VirtualBox, physical, etc.):

Fedora Silverblue laptop at version 28.20180907.0.

mheon commented Sep 8, 2018

Can you rerun with --log-level=debug and grab the output of that, and also see if there's anything from conmon in the output of journalctl -b 0?
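For anyone following along, the two diagnostics asked for here could be sketched as below. The container name `pet` comes from the report above; substitute your own. The commands are printed rather than executed, since they require root and a podman install:

```shell
#!/bin/sh
# Diagnostic sketch: "pet" is the container name from the report above.
# The commands are printed, not run, since they need root and podman.
ctr=pet

# 1. Re-run the failing start with debug-level logging:
start_cmd="sudo podman --log-level=debug start $ctr"

# 2. Scan the current boot's journal for conmon messages:
journal_cmd="journalctl -b 0 | grep -i conmon"

printf '%s\n%s\n' "$start_cmd" "$journal_cmd"
```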

@TomSweeneyRedHat

Also, I think this is an issue that crops up when container-selinux, containernetworking-cni, and/or runc isn't up to date with the latest from the testing repos. You might try:

dnf -y update runc container-selinux containernetworking-cni --enablerepo=updates-testing

mheon commented Sep 8, 2018

My smell test says it's probably an outdated runc, and I recall we had issues with an outdated version in Fedora proper a few weeks back - it's possible that we didn't bump the dependency version and just pushed a more recent package out.

@mheon mheon added the bug label Sep 9, 2018
@vrothberg
I share @TomSweeneyRedHat's and @mheon's hunch. I think we can safely close this issue, but please feel free to re-open if it occurs again.

jlebon commented Oct 4, 2018

I decided to try out the latest podman again and I'm still hitting this:

$ rpm -q podman runc
podman-0.9.1-3.gitaba58d1.fc28.x86_64
runc-1.0.0-51.dev.gitfdd8055.fc28.x86_64
$ sudo podman --log-level=debug start pet
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: override_kernelcheck=true           
DEBU[0000] overlay test mount with multiple lowers succeeded 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] mounted container "cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294" at "/var/lib/containers/storage/overlay/9a24c651b6bdcaa0ad7b8eed1e440204d8d5094059c7118ba0fb106029c65706/merged" 
DEBU[0000] Created root filesystem for container cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 at /var/lib/containers/storage/overlay/9a24c651b6bdcaa0ad7b8eed1e440204d8d5094059c7118ba0fb106029c65706/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
WARN[0000] failed to parse language "en_CA.UTF-8": language: tag is not well-formed 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] added hook /usr/share/containers/oci/hooks.d/oci-systemd-hook.json 
DEBU[0000] added hook /usr/share/containers/oci/hooks.d/oci-umount.json 
DEBU[0000] hook oci-systemd-hook.json did not match     
DEBU[0000] hook oci-umount.json matched; adding to stages [prestart] 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]@75aeb7f897fdff7569c8bf1bc33c32823eb6c5baad9ac7dfa501ce284d795116" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]@75aeb7f897fdff7569c8bf1bc33c32823eb6c5baad9ac7dfa501ce284d795116" 
DEBU[0000] exporting opaque data as blob "sha256:75aeb7f897fdff7569c8bf1bc33c32823eb6c5baad9ac7dfa501ce284d795116" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]@75aeb7f897fdff7569c8bf1bc33c32823eb6c5baad9ac7dfa501ce284d795116" 
DEBU[0000] exporting opaque data as blob "sha256:75aeb7f897fdff7569c8bf1bc33c32823eb6c5baad9ac7dfa501ce284d795116" 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]@75aeb7f897fdff7569c8bf1bc33c32823eb6c5baad9ac7dfa501ce284d795116" 
DEBU[0000] Setting CGroups for container cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 to libpod_parent:libpod:cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 
DEBU[0000] Created OCI spec for container cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 at /var/lib/containers/storage/overlay-containers/cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294/userdata/config.json 
DEBU[0000] /usr/libexec/podman/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/libexec/podman/conmon    args=[-s -c cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 -u cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294/userdata -p /var/run/containers/storage/overlay-containers/cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294/userdata/pidfile -l /var/lib/containers/storage/overlay-containers/cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -t --log-level debug --syslog]
INFO[0000] Running conmon under slice /libpod_parent and unitName libpod-conmon-cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294.scope 
WARN[0000] Failed to add conmon to systemd sandbox cgroup: Invalid unit name '/libpod_parent' 
DEBU[0000] Cleaning up container cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "cc34689c671a758f220ebe0add9f4a8d1a46fe3ee8752ef03c10d4f4229db294" 
ERRO[0000] unable to start container "pet": error reading container (probably exited) json message: EOF 

Relevant journal entries:

Oct 04 12:37:11 lux conmon[4211]: conmon cc34689c671a758f220e <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.2YULQZ}
Oct 04 12:37:11 lux systemd[1]: libcontainer-4212-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Oct 04 12:37:11 lux systemd[1]: libcontainer-4212-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Oct 04 12:37:11 lux systemd[1]: Created slice libcontainer_4212_systemd_test_default.slice.
Oct 04 12:37:11 lux systemd[1]: Removed slice libcontainer_4212_systemd_test_default.slice.
Oct 04 12:37:11 lux systemd[1]: libcontainer_4212_systemd_test_default.slice: Delegate=yes set, but has no effect for unit type
Oct 04 12:37:11 lux systemd[1]: libcontainer_4212_systemd_test_default.slice: Delegate=yes set, but has no effect for unit type
Oct 04 12:37:11 lux systemd[1]: Created slice libcontainer_4212_systemd_test_default.slice.
Oct 04 12:37:11 lux systemd[1]: Removed slice libcontainer_4212_systemd_test_default.slice.
Oct 04 12:37:11 lux conmon[4211]: conmon cc34689c671a758f220e <ninfo>: about to accept from console_socket_fd: 9
Oct 04 12:37:11 lux conmon[4211]: conmon cc34689c671a758f220e <ninfo>: about to recvfd from connfd: 15
Oct 04 12:37:11 lux conmon[4211]: conmon cc34689c671a758f220e <ninfo>: console = {.name = '(null)'; .fd = 0}
Oct 04 12:37:11 lux conmon[4211]: conmon cc34689c671a758f220e <error>: Failed to get console terminal settings Inappropriate ioctl for device

@mheon mheon reopened this Oct 4, 2018
mheon commented Oct 4, 2018

It's the conmon console issue again. We really need to track down exactly where this is coming from.

jlebon commented Oct 4, 2018

Huh, so now even rolling back to 0.7.4-4 I'm still hitting that issue. Creating a new container and then stopping and restarting that one works fine though.

mheon commented Oct 4, 2018

I'm going to say this is either conmon or container-selinux

mheon commented Oct 4, 2018

And since we bundle conmon into the Podman RPM - almost certainly container-selinux

rhatdan commented Oct 4, 2018

Could you check to see if there are any AVC messages:

ausearch -m avc -ts recent

Also, see if it works in permissive mode?
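A sketch of that SELinux triage, assuming the audit tools (`ausearch` from the audit package) are installed. The commands are printed rather than executed, since they need root, and `pet` is the container name from earlier in the thread:

```shell
#!/bin/sh
# SELinux triage sketch. Printed rather than executed (needs root);
# "pet" is the container from earlier in this thread.

# Recent AVC denials from the audit log:
avc_check='sudo ausearch -m avc -ts recent'

# Temporarily go permissive, retry the start, then re-enforce:
permissive_retry='sudo setenforce 0; sudo podman start pet; sudo setenforce 1'

printf '%s\n%s\n' "$avc_check" "$permissive_retry"
```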

aengelke commented Oct 7, 2018

I'm hitting the same issue and am getting the same error in permissive mode. There are no AVC denials related to podman.

jlebon commented Oct 11, 2018

I had already nuked the container, unfortunately, so thankfully someone else is hitting this now. I don't recall any AVC denials either.

rhatdan commented Dec 22, 2018

Any updates on this bug? I am going to assume it is fixed.

@rhatdan rhatdan closed this as completed Dec 22, 2018
ghost commented Feb 1, 2019

I'm hitting this issue on the Podman package on Arch Linux. I'm not using SELinux, in case that helps.

ERRO[0002] error reading container (probably exited) json message: EOF

journalctl -b 0 shows a similar issue as posted above:

Feb 01 10:15:40 host conmon[16076]: conmon 65ad1c8d8b74c76b8351 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.R6IUWZ}
Feb 01 10:15:40 host conmon[16076]: conmon 65ad1c8d8b74c76b8351 <ninfo>: about to accept from console_socket_fd: 10
Feb 01 10:15:40 host conmon[16076]: conmon 65ad1c8d8b74c76b8351 <ninfo>: about to recvfd from connfd: 13
Feb 01 10:15:40 host conmon[16076]: conmon 65ad1c8d8b74c76b8351 <ninfo>: console = {.name = '(null)'; .fd = 0}
Feb 01 10:15:40 host conmon[16076]: conmon 65ad1c8d8b74c76b8351 <error>: Failed to get console terminal settings Inappropriate ioctl for device

This seems to happen when I append the :ro flag to a mount, which I took from a Docker volume mount. I'm not sure if this is even supported by Podman, but this is a rather strange message if it isn't.

I'm running this from a bash script, which is when I get this error, but when I run my command separately in a shell, I get a more detailed error:

container create failed: container_linux.go:337: starting container process caused "process_linux.go:403: container init caused \"rootfs_linux.go:58: mounting \\\"/home/user/test\\\" to rootfs \\\"/home/user/.local/share/containers/storage/vfs/dir/a534323b93b21fdf164de702a405d62e344d68dbbd70b0c8202c1af54e7774d2\\\" at \\\"/home/user/.local/share/containers/storage/vfs/dir/a534323b93b21fdf164de702a405d62e344d68dbbd70b0c8202c1af54e7774d2/home/composer/test\\\" caused \\\"operation not permitted\\\"\""
: internal libpod error

Here is the command I'm running separately that causes the problem (fish shell):

podman run --rm -v (pwd):/var/app -w /var/app -v /home/user/test:/home/composer/test:ro inventis/composer:7.3-alpine -v

The bash equivalent is probably something like:

podman run --rm -v $(pwd):/var/app -w /var/app -v /home/user/test:/home/composer/test:ro inventis/composer:7.3-alpine -v

If I remove the :ro from the volume mount, everything works as expected.
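Side by side, the failing and working invocations from this report differ only in the `:ro` suffix on the second bind mount (paths and image exactly as given above):

```shell
#!/bin/sh
# The failing vs. working runs from this report; the only difference
# is the ":ro" (read-only) suffix on the second -v bind mount.
# Single quotes keep $(pwd) literal here -- it expands when actually run.
fails='podman run --rm -v $(pwd):/var/app -w /var/app -v /home/user/test:/home/composer/test:ro inventis/composer:7.3-alpine -v'
works='podman run --rm -v $(pwd):/var/app -w /var/app -v /home/user/test:/home/composer/test inventis/composer:7.3-alpine -v'
printf 'fails: %s\nworks: %s\n' "$fails" "$works"
```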

rhatdan commented Feb 1, 2019

Is this a new issue? Are you running rootless?

mheon commented Feb 1, 2019

Definitely looks like rootless from the paths.

@tominventisbe This is actually an entirely separate bug. What you're seeing here is conmon, which we use to start containers and monitor their exit status, eating the error code that occurred when starting the container and instead throwing out garbage errors about how it can't properly join the container's TTY (not surprising when the container wasn't successfully created). The real bug is the 'container create failed' issue you saw later. We're looking into a partial rework of Conmon to ensure we don't ever drop the actual error, and always report something sensible.

You're seeing an error mounting /home/user/test into the container's root filesystem - that would be the bind mount.

@giuseppe Didn't we just solve an issue with read-only rootless containers? I want to say the :ro on the bind mount would make this the same issue

giuseppe commented Feb 1, 2019

It looks exactly like the issue we fixed some time ago; it works for me with podman from master. @tominventisbe, could you try it?

ghost commented Feb 4, 2019

I've built from latest master, but unfortunately this gives me a similar error:

container create failed: container_linux.go:337: starting container process caused "process_linux.go:403: container init caused \"rootfs_linux.go:58: mounting \\\"/home/user/test\\\" to rootfs \\\"/home/user/.local/share/containers/storage/vfs/dir/c290f2480e0b1aae971a969371a4301169b6faf0078bdea55408788a8726f6ea\\\" at \\\"/home/user/.local/share/containers/storage/vfs/dir/c290f2480e0b1aae971a969371a4301169b6faf0078bdea55408788a8726f6ea/home/composer/test\\\" caused \\\"operation not permitted\\\"\""
: internal libpod error

The commit I built was d5593b8e718a1ca86380faa072c654f791b18bbc (version 1.0.1-dev is reported by podman --version).

The issue is fixed in the same way if I omit the :ro from the mount.

I only built this repository and let the rest of the dependencies be installed automatically by the build script (I still have Podman installed on my system, so direct and indirect dependencies are probably being fetched from there). Is there perhaps another repository I also have to build manually?

ibotty commented May 22, 2019

I am also hitting that issue (or a similar one). The container has been created with toolbox.

This is happening with podman-1.3.1-1.git7210727.fc30.x86_64 and container-selinux-2.101-1.gitb0061dc.fc30.noarch on a silverblue system.

$ toolbox -v enter
toolbox: resolved absolute path for /usr/bin/toolbox to /usr/bin/toolbox
toolbox: TOOLBOX_PATH is /usr/bin/toolbox
toolbox: Fedora generational core is f30
toolbox: base image is fedora-toolbox:30
toolbox: customized user-specific image is fedora-toolbox-tf:30
toolbox: container is fedora-toolbox-tf-30
toolbox: checking if container fedora-toolbox-tf-30 exists
toolbox: container fedora-toolbox-tf-30 was created from image localhost/fedora-toolbox-tf:30
toolbox: checking if image localhost/fedora-toolbox-tf:30 has volumes for host bind mounts
toolbox: trying to start container fedora-toolbox-tf-30
Error: unable to start container "fedora-toolbox-tf-30": error reading container (probably exited) json message: EOF
toolbox: failed to start container fedora-toolbox-tf-30
$ podman start --log-level info fedora-toolbox-tf-30
INFO[0000] running as rootless                          
WARN[0000] User mount overriding libpod mount at "/dev/shm" 
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied 
Error: unable to start container "fedora-toolbox-tf-30": error reading container (probably exited) json message: EOF
$ podman start --log-level debug fedora-toolbox-tf-30
INFO[0000] running as rootless                          
DEBU[0000] Initializing boltdb state at /var/home/tf/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/tf/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/1614                     
DEBU[0000] Using static dir /var/home/tf/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1614/libpod/tmp      
DEBU[0000] Using volume path /var/home/tf/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend file              
DEBU[0000] overlay: mount_data=lowerdir=/var/home/tf/.local/share/containers/storage/overlay/l/3AHEXBUA4IWSVR5NJ4ULJWTEEC:/var/home/tf/.local/share/containers/storage/overlay/l/BPQJTTZP2NQP7QLXS6Z5UGUMFF:/var/home/tf/.local/share/containers/storage/overlay/l/XPTY3T7VNWB5X3CCJPO64LZBPG,upperdir=/var/home/tf/.local/share/containers/storage/overlay/d5094d953dca7f97c9d9fabfd63ab02649329d9d297b2e78ad362b60422a5bdf/diff,workdir=/var/home/tf/.local/share/containers/storage/overlay/d5094d953dca7f97c9d9fabfd63ab02649329d9d297b2e78ad362b60422a5bdf/work,context="system_u:object_r:container_file_t:s0:c178,c253" 
DEBU[0000] mounted container "8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0" at "/var/home/tf/.local/share/containers/storage/overlay/d5094d953dca7f97c9d9fabfd63ab02649329d9d297b2e78ad362b60422a5bdf/merged" 
DEBU[0000] Created root filesystem for container 8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0 at /var/home/tf/.local/share/containers/storage/overlay/d5094d953dca7f97c9d9fabfd63ab02649329d9d297b2e78ad362b60422a5bdf/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
WARN[0000] User mount overriding libpod mount at "/dev/shm" 
DEBU[0000] set root propagation to "rslave"             
DEBU[0000] Created OCI spec for container 8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0 at /var/home/tf/.local/share/containers/storage/overlay-containers/8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0/userdata/config.json 
DEBU[0000] /usr/libexec/podman/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/libexec/podman/conmon    args=[-c 8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0 -u 8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0 -r /usr/bin/runc -b /var/home/tf/.local/share/containers/storage/overlay-containers/8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0/userdata -p /tmp/1614/overlay-containers/8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0/userdata/pidfile -l /var/home/tf/.local/share/containers/storage/overlay-containers/8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0/userdata/ctr.log --exit-dir /run/user/1614/libpod/tmp/exits --conmon-pidfile /var/home/tf/.local/share/containers/storage/overlay-containers/8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/tf/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1614 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1614/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0 --socket-dir-path /run/user/1614/libpod/tmp/socket --log-level debug --syslog]
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied 
DEBU[0000] Cleaning up container 8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "8453b796e4db456819dedb124c43305d99c27d7a48e0fd259c1a7c8fa9c50fd0" 
ERRO[0000] unable to start container "fedora-toolbox-tf-30": error reading container (probably exited) json message: EOF 

@giuseppe
@ibotty, you are seeing a regression in 1.3.1: #3174

ibotty commented May 22, 2019

Thank you! You are right. Downgrading (rolling back) to podman-1.2.0-2.git3bd528e.fc30.x86_64 fixed the problem. I would not have expected that, because the error message looked so different.
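For reference, the rollback here is an rpm-ostree deployment rollback (this is a Silverblue system); on a dnf-managed Fedora the rough equivalent would be a package downgrade. The commands are printed, not executed, since both modify the system, and the exact dnf spelling is an assumption on my part:

```shell
#!/bin/sh
# Downgrade sketch -- printed, not executed, since both modify the OS.
# "rpm-ostree rollback" boots back into the previous deployment;
# the dnf line (an assumed equivalent) pins podman to the prior build
# named in this comment.
ostree_rollback='sudo rpm-ostree rollback'
dnf_downgrade='sudo dnf downgrade podman-1.2.0-2.git3bd528e.fc30'
printf '%s\n%s\n' "$ostree_rollback" "$dnf_downgrade"
```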

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023
8 participants