Describe the bug
Somehow I get the error:
Error: container "nextcloud_[…]" is mounted and cannot be removed without using force: container state improper
…when trying to remove containers.
Reproduction steps
$ podman-compose -p nextcloud down
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
** excluding: set()
podman stop -t 10 nextcloud_caddy_1
Error: no container with name or ID "nextcloud_caddy_1" found: no such container
exit code: 125
podman stop -t 10 nextcloud_cron_1
Error: no container with name or ID "nextcloud_cron_1" found: no such container
exit code: 125
podman stop -t 10 nextcloud_nc_1
Error: no container with name or ID "nextcloud_nc_1" found: no such container
exit code: 125
podman stop -t 10 nextcloud_db_1
Error: no container with name or ID "nextcloud_db_1" found: no such container
exit code: 125
podman stop -t 10 nextcloud_redis_1
Error: no container with name or ID "nextcloud_redis_1" found: no such container
exit code: 125
podman rm nextcloud_caddy_1
Error: no container with ID or name "nextcloud_caddy_1" found: no such container
exit code: 1
podman rm nextcloud_cron_1
Error: no container with ID or name "nextcloud_cron_1" found: no such container
exit code: 1
podman rm nextcloud_nc_1
Error: container "nextcloud_nc_1" is mounted and cannot be removed without using force: container state improper
exit code: 2
podman rm nextcloud_db_1
Error: container "nextcloud_db_1" is mounted and cannot be removed without using force: container state improper
exit code: 2
podman rm nextcloud_redis_1
Error: container "nextcloud_redis_1" is mounted and cannot be removed without using force: container state improper
exit code: 2
$ podman rm nextcloud_db_1
Error: container "nextcloud_db_1" is mounted and cannot be removed without using force: container state improper
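The error text itself mentions force, so presumably something like the following is the intended escape hatch. These are untested guesses based on the error message, not a confirmed fix:

```shell
# Untested guesses based on the error text, not a confirmed fix.
# Skip gracefully where podman is not installed:
command -v podman >/dev/null 2>&1 || { echo "podman not available"; exit 0; }

# The message says the container "cannot be removed without using force",
# so force removal should be the intended escape hatch
# (|| true because the containers may already be half-gone):
podman rm --force nextcloud_nc_1 || true
podman rm --force nextcloud_db_1 || true
podman rm --force nextcloud_redis_1 || true

# Last resort, destructive (wipes ALL containers, pods, images, volumes):
# podman system reset
```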
When I then try to start the containers, I get this error:
$ podman-compose --in-pod=0 -p nextcloud up -d
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
** excluding: set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=nextcloud', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
podman pod create --name=pod_nextcloud --infra=false --share=
Error: adding pod to state: name "pod_nextcloud" is in use: pod already exists
exit code: 125
[…]
Error: creating container storage: the container name "nextcloud_redis_1" is already in use by 4dbd88724af1ee89d859c6b2dfebb89f95cf6358503e09a8763009877a4830cb. You have to remove that container to be able to reuse that name: that name is already in use
exit code: 125
podman start nextcloud_redis_1
Error: no container with name or ID "nextcloud_redis_1" found: no such container
exit code: 125
[…]
Error: creating container storage: the container name "nextcloud_db_1" is already in use by 2760fa4a652ba952ef5270d256c658dd3f4455d96fe7554abdb13bbfbdbd6c19. You have to remove that container to be able to reuse that name: that name is already in use
exit code: 125
podman start nextcloud_db_1
Error: no container with name or ID "nextcloud_db_1" found: no such container
exit code: 125
[…]
After that come dependency errors for the services that depend on the containers mentioned above.
The strange thing is that none of these containers appear to be running:
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ podman pod ls
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
562b052bdab9 pod_nextcloud Created 4 hours ago 0
The container that is said to be using the name does not exist either:
$ podman inspect 4dbd88724af1ee89d859c6b2dfebb89f95cf6358503e09a8763009877a4830cb
[]
Error: no such object: "4dbd88724af1ee89d859c6b2dfebb89f95cf6358503e09a8763009877a4830cb"
I can remove the pod, but it does not help:
$ podman pod rm pod_nextcloud
562b052bdab9c31692403405579935979a7026f1945bbc0eb0f1594a2b80b546
$ podman inspect nextcloud_db_1
[]
Error: no such object: "nextcloud_db_1"
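My best guess (unverified) is that the containers exist only in container storage and not in podman's own database, in which case they would be hidden from plain podman ps but visible as "external" containers:

```shell
# Unverified theory: the containers exist only in container storage, not in
# podman's database, so plain `podman ps -a` cannot see them.
command -v podman >/dev/null 2>&1 || { echo "podman not available"; exit 0; }

# --external also lists storage-only containers:
podman ps --all --external

# If the conflicting ID shows up there, removing it by ID might work
# (4dbd8872… is the ID from the "name is already in use" error above):
podman rm --force 4dbd88724af1ee89d859c6b2dfebb89f95cf6358503e09a8763009877a4830cb || true
```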
Expected behavior
I should somehow be able to force-remove or otherwise repair this. I have no idea what an "improper" container state is, nor how to fix it.
Actual behavior
I cannot stop or start the containers.
System details
Bare Metal
OS: Linux Fedora CoreOS v38.20230819.3.0
podman version: 4.6.1
podman-compose version: 1.0.6
podman info output
$ podman-compose version
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.6.1
podman-compose version 1.0.6
podman --version
podman version 4.6.1
exit code: 0

$ podman version
Client:       Podman Engine
Version:      4.6.1
API Version:  4.6.1
Go Version:   go1.20.7
Built:        Fri Aug 11 00:07:53 2023
OS/Arch:      linux/amd64

$ podman info
host:
  arch: amd64
  buildahVersion: 1.31.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 98.31
    systemPercent: 0.82
    userPercent: 0.86
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  freeLocks: 615
  hostname: minipure
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 6.4.7-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 60165713920
  memTotal: 67283185664
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.8.6-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.6
      commit: 73f759f4a39769f60990e7d225f561b4f4f06bcf
      rundir: /run/user/1002/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20230625.g32660ce-1.fc38.x86_64
    version: |
      pasta 0^20230625.g32660ce-1.fc38.x86_64
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    path: /run/user/1002/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
    serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 4h 27m 21.00s (Approximately 0.17 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/****/.config/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 4
    stopped: 2
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/****/.local/share/containers/storage
  graphRootAllocated: 999650168832
  graphRootUsed: 46548176896
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 12
  runRoot: /run/user/1002/containers
  transientStore: false
  volumePath: /var/home/****/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.1
  Built: 1691705273
  BuiltTime: Fri Aug 11 00:07:53 2023
  GitCommit: ""
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.1

$ rpm -q podman
podman-4.6.1-1.fc38.x86_64
Butane or Ignition config
No response
Additional information
Compose file version '3.7' – the exact same YAML started without any problems before.
I tested the echo example here and it worked; I have no idea what's wrong.
Cross-posted as containers/podman-compose#767 and containers/podman#19913