
podman run says the container name is already in use but podman ps --all does not show any container with that name #2553

Closed
Zokormazo opened this issue Mar 6, 2019 · 31 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@Zokormazo

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I have this bug after a power outage.

podman run --name nextcloud fedora
error creating container storage: the container name "nextcloud" is already in use by "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1". You have to remove that container to be able to reuse that name.: that name is already in use

podman ps --all | grep nextcloud has no output

Steps to reproduce the issue:

Dunno how to reproduce it; it appeared after a power outage and its abrupt shutdown

Output of podman version:

host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 49780a1cf10d572edc4e1ea3b8a8429ce391d47d'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 374931456
  MemTotal: 8241008640
  OCIRuntime:
    package: runc-1.0.0-67.dev.git12f6a99.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: d164d9b08bf7fc96a931403507dd16bced11b865
      spec: 1.0.1-dev
  SwapFree: 8262250496
  SwapTotal: 8380215296
  arch: amd64
  cpus: 4
  hostname: asheville.intranet.zokormazo.info
  kernel: 4.20.6-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 12h 27m 2.91s (Approximately 0.50 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 8
  RunRoot: /var/run/containers/storage

Output of podman info --debug:

debug:
  compiler: gc
  git commit: '"49780a1cf10d572edc4e1ea3b8a8429ce391d47d"'
  go version: go1.11.4
  podman version: 1.0.0
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 49780a1cf10d572edc4e1ea3b8a8429ce391d47d'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 374919168
  MemTotal: 8241008640
  OCIRuntime:
    package: runc-1.0.0-67.dev.git12f6a99.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: d164d9b08bf7fc96a931403507dd16bced11b865
      spec: 1.0.1-dev
  SwapFree: 8262250496
  SwapTotal: 8380215296
  arch: amd64
  cpus: 4
  hostname: asheville.intranet.zokormazo.info
  kernel: 4.20.6-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 12h 27m 32.11s (Approximately 0.50 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 8
  RunRoot: /var/run/containers/storage

Additional environment details (AWS, VirtualBox, physical, etc.):
Bare metal f29

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Mar 6, 2019
@Zokormazo
Author

Some more info:

My containers.json on /var/lib/containers/storage/overlay-containers has a reference to this container:

  {
    "id": "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1",
    "names": [
      "nextcloud"
    ],
    "image": "dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4",
    "layer": "5078a913609383e102745769c42090cb62c878780002adf133dfadf3ca9b0e55",
    "metadata": "{\"image-name\":\"docker.io/library/nextcloud:14.0.3\",\"image-id\":\"dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4\",\"name\":\"nextcloud\",\"created-at\":1544648833,\"mountlabel\":\"system_u:object_r:container_file_t:s0:c151,c959\"}",
    "created": "2018-12-12T21:07:13.804209323Z"
  }

But podman doesn't know about it. podman prune does not help either.
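For anyone comparing like this, a quick way to see what c/storage thinks exists, independent of Podman's own database, is to read containers.json directly. A sketch only: the path is the rootful default, and python3 is assumed to be available:

```shell
# List the short ID and names of every storage-level container, including
# orphans that `podman ps --all` cannot see (rootful default path).
STORAGE_JSON=/var/lib/containers/storage/overlay-containers/containers.json
python3 - "$STORAGE_JSON" <<'EOF'
import json, sys
with open(sys.argv[1]) as f:
    for c in json.load(f) or []:
        print(c["id"][:12], ",".join(c.get("names", [])))
EOF
```

Any entry printed here that does not show up in podman ps --all is a candidate orphan.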

@denisbrodbeck

@Zokormazo I'm no podman dev, but maybe try adding sudo to your command: sudo podman ps --all

I had to sudo podman run -p 5432:5432 ... because podman 1.0 needed elevated permission for port bindings (fixed in v1.1). Got confused afterwards with podman ps --all output being empty. But running sudo podman ps --all did the trick.

@mheon
Member

mheon commented Mar 6, 2019 via email

@Zokormazo
Author

Zokormazo commented Mar 6, 2019

@Zokormazo I'm no podman dev, but maybe try adding sudo to your command: sudo podman ps --all

All my commands were used as root.

That container is probably a relic from a partially failed container delete, or was made by Buildah or CRI-O. You should be able to force its removal, even if we don't see it, with podman rm -f

[root@asheville ~]# podman rm -f nextcloud
unable to find container nextcloud: no container with name or ID nextcloud found: no such container
[root@asheville ~]# podman rm -f 31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1
unable to find container 31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1: no container with name or ID 31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1 found: no such container

Can't remove it with rm -f

@mheon
Member

mheon commented Mar 6, 2019

Oh, you're on 1.0 - damn. We added that to rm -f in 1.1

If you have Buildah installed, it should be able to remove the container in the meantime - it operates at a lower level than us, and as such can see these containers.

@Zokormazo
Author

https://paste.fedoraproject.org/paste/qIQ9gu0DF6ZtN8fEwG5pYg

Cleaned with podman 1.1.0

@altmind

altmind commented Jun 20, 2019

Having the same issue on CentOS 7.6 with podman.x86_64 1.2-2.git3bd528e.el7:
podman run --rm --name container-registry registry does not remove the overlay filesystem on stop. Trying podman rm -f container-registry gives the error "/var/lib/containers/storage/overlay/.../merged: device or resource busy"

[root@... lib]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@...  lib]# uname -a
Linux ... 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14 21:24:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

systemd service:

ExecStartPre=-/usr/bin/podman rm -f container-registry
ExecStart=/usr/bin/podman run --conmon-pidfile=/run/container-registry.pid --rm -p 5000:5000 --name container-registry -v /etc/letsencrypt:/certs -v /opt/docker:/var/lib/registry registry:2.6.2

Podman fails at "podman run":

/usr/bin/podman run --conmon-pidfile=/run/container-registry.pid --rm -p 5000:5000 --name container-registry -v /etc/letsencrypt:/certs -v /opt/docker:/var/lib/registry  registry:2.6.2

Error: error creating container storage: the container name "container-registry" is already in use by "8dc7a522698ced3d2a0c63a3023ed4ee879d68db4a913829c098d4a260f86cf7". You have to remove that container to be able to reuse that name.: that name is already in use
# podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
# podman rm -f "container-registry"
ERRO[0000] Failed to remove container "container-registry" from storage: remove /var/lib/containers/storage/overlay/fb732b3ba3f35ae237eb334bee09676408a068554a5fed33eff03d46c9ac7655/merged: device or resource busy
[root@... lib]# lsof |grep overlay
# empty
[root@... lib]# mount |grep overlay
# dmesg |grep -i "overlay\|podman\|docker"
[   18.250705] TECH PREVIEW: Overlay filesystem may not be fully supported.
[root@... lib]# cat /var/lib/containers/storage/overlay-containers/containers.json

[{"id":"8dc7a522698ced3d2a0c63a3023ed4ee879d68db4a913829c098d4a260f86cf7","names":["container-registry"],"image":"d5ef411ad932291d7733fe7188a1515b1db7bd6e69222a13929cdc5315d21fb0","layer":"fb732b3ba3f35ae237eb334bee09676408a068554a5fed33eff03d46c9ac7655","metadata":"{\"image-name\":\"docker.io/library/registry:2.6.2\",\"image-id\":\"d5ef411ad932291d7733fe7188a1515b1db7bd6e69222a13929cdc5315d21fb0\",\"name\":\"container-registry\",\"created-at\":1560743703}","created":"2019-06-17T03:55:03.766579368Z","flags":{"MountLabel":"","ProcessLabel":""}}]

May be related to moby/moby#34198

@altmind

altmind commented Jun 20, 2019

Well, it seems this was fixed in Moby around 2018. For Podman, it can be fixed by using "slave" mount propagation:
-v /etc/letsencrypt:/certs -v /opt/docker:/var/lib/registry -> -v /etc/letsencrypt:/certs:slave -v /opt/docker:/var/lib/registry:slave
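Applied to the unit earlier in the thread, the change is just the :slave suffix on each bind mount. A sketch of the corrected unit lines, assuming the same paths and registry image as the original service:

```ini
# ":slave" mount propagation keeps host-side mount events from pinning the
# container's overlay mount (the "device or resource busy" failure on rm).
ExecStartPre=-/usr/bin/podman rm -f container-registry
ExecStart=/usr/bin/podman run --conmon-pidfile=/run/container-registry.pid --rm \
    -p 5000:5000 --name container-registry \
    -v /etc/letsencrypt:/certs:slave \
    -v /opt/docker:/var/lib/registry:slave \
    registry:2.6.2
```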

@davidlt

davidlt commented Jul 10, 2019

I have the same issue on Fedora 31 with podman-1.4.4-1.fc30.x86_64. There are no references to this container in containers.json, so I'm not sure how to clean it up manually.

@edsantiago
Member

I saw this yesterday as well (podman-1.4.4-3.fc30, as nonroot) but cannot reproduce it. The VM is still up, with one "container name already in use" stuck. I can provide login access on request.

@mheon
Member

mheon commented Jul 10, 2019 via email

@edsantiago
Member

That did it. Since this seems to be a common problem, should the podman-run message perhaps be amended to include this hint?

Error: error creating container storage: the container name "foo" is already in use by "00fbb9ad28dd0cb32811e87fe789cbed612206a97395420365e3238e9afd2e1e". You have to remove that container to be able to reuse that name.: that name is already in use (hint: if "podman rm foo" doesn't clear things up, try "podman rm --storage foo")

@mheon
Member

mheon commented Jul 10, 2019

The only issue with recommending it unconditionally is that it will quite happily destroy containers from Buildah/CRI-O as well.

The overall recommendation works something like this: check CRI-O and Buildah to see if the container is running there. If it is, remove it through crictl or buildah. If it's not there, it's probably an orphan container - hit it with --storage.
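That triage can be sketched as a small script. This is only an illustration of the order of checks described above; the `triage` function name is made up, and crictl/buildah may or may not be installed:

```shell
#!/bin/sh
# Triage sketch for the "name already in use" error. Order matters:
# only fall through to --storage once CRI-O and Buildah are ruled out.
triage() {
    name="$1"
    if command -v crictl >/dev/null 2>&1 && crictl ps -a 2>/dev/null | grep -qw "$name"; then
        echo "CRI-O owns '$name': remove it with crictl"
    elif command -v buildah >/dev/null 2>&1 && buildah containers --all 2>/dev/null | grep -qw "$name"; then
        echo "Buildah owns '$name': remove it with 'buildah rm $name'"
    else
        echo "likely an orphan: try 'podman rm --storage $name'"
    fi
}
triage "${1:-nextcloud}"   # hypothetical conflicting container name
```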

@spaghetti-

spaghetti- commented Aug 18, 2019

podman rm --storage <id> doesn't seem to work for me with the zfs driver though:

# podman ps -a
CONTAINER ID  IMAGE                            COMMAND  CREATED         STATUS                       PORTS  NAMES
6b265ecd8ed3  docker.io/library/alpine:latest  sh       21 minutes ago  Exited (0) 21 minutes ago           suspicious_banzai
45de4c6bf843  docker.io/library/alpine:latest  sh       27 minutes ago  Exited (130) 24 minutes ago         optimistic_cerf
96aaa668db27  docker.io/library/alpine:latest  sh       39 minutes ago  Exited (0) 37 minutes ago           magical_hopper
c95e5272d83f  docker.io/library/alpine:latest  sh       41 minutes ago  Exited (0) 41 minutes ago           vigorous_khorana
9645695533c7  docker.io/library/alpine:latest  sh       42 minutes ago  Exited (130) 41 minutes ago         crazy_mccarthy
15684becc00a  docker.io/library/alpine:latest  bash     42 minutes ago  Created                             dreamy_kalam
 # podman run --rm --name=prometheus --net=bridge --network container-net -v "/var/container-data/prometheus/data:/prometheus" -v "/var/container-data/prometheus/conf/prometheus.yml:/etc/prometheus/prometheus.yml" -p "10.10.0.1:9090:9090" prom/prometheus
Error: error creating container storage: the container name "prometheus" is already in use by "dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5". You have to remove that container to be able to reuse that name.: that name is already in use
# podman rm -f prometheus
Error: no container with name or ID prometheus found: no such container

# podman rm -f dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
Error: no container with name or ID dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5 found: no such container

# podman rm --storage dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
Error: error removing storage for container "dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5": exit status 1: "/usr/sbin/zfs zfs destroy -r tank/containers/60024e34b354c0274536c32b941f7826742c0579d541de3b5ab30323f2e4c0af" => cannot open 'tank/containers/60024e34b354c0274536c32b941f7826742c0579d541de3b5ab30323f2e4c0af': dataset does not exist

@spaghetti-

Managed to reproduce the issue accidentally by trying to Ctrl-C a container twice.

^CERRO[0014] Error removing container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32: error removing container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32 root filesystem: signal: interrupt: "/usr/sbin/zfs zfs destroy -r tank/containers/4834b4aa97d1a48a27f44c718241c2d786349eee9ab66c3d515339402e2ed1c9" =>
ERRO[0014] Error forwarding signal 2 to container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32: container has already been removed

And I fixed it with an ugly hack:

# zfs create tank/containers/4834b4aa97d1a48a27f44c718241c2d786349eee9ab66c3d515339402e2ed1c9
# podman rm --storage nginx

@mheon
Member

mheon commented Aug 19, 2019 via email

@mheon
Member

mheon commented Aug 19, 2019 via email

@rhatdan
Member

rhatdan commented Aug 19, 2019

Also, it might be better to have this discussion in containers/storage, since Podman just uses that library for management of its container images and graph drivers.

@EmilienM
Contributor

EmilienM commented Sep 3, 2019

Note: related to #3906

@BBBosp

BBBosp commented May 12, 2021

I'm having this same issue now, and neither podman rm -f nor podman rm --storage resolves it.
podman version
Version: 2.2.1
API Version: 2
Go Version: go1.14.12
Built: Sun Feb 21 22:51:35 2021
OS/Arch: linux/amd64

@rhatdan
Member

rhatdan commented May 13, 2021

Does buildah containers show anything?

@BBBosp

BBBosp commented May 13, 2021

Both buildah containers and buildah containers --all are empty.

@rhatdan
Member

rhatdan commented May 13, 2021

@mheon Ideas, could this be an out of sync libpod database?

@rhatdan
Member

rhatdan commented May 13, 2021

@BBBosp if you have removed all containers, you could remove the bolt_state.db

rm /home/dwalsh/.local/share/containers/storage/libpod/bolt_state.db

This will remove the database but leave your images. The next run of podman will recreate the database.

@mheon
Member

mheon commented May 13, 2021

@rhatdan I doubt it - transactions should ensure we never do a partial write to the DB.

Do you have any pods with that name?

@BBBosp

BBBosp commented May 13, 2021

@rhatdan I still have many running pods and containers, I'd rather not have to shut them all down to fix this if there is a better way.
@mheon I do not have any pods under that name either, though this container was originally assigned to a pod that no longer exists.

@mheon
Member

mheon commented May 13, 2021

Please open a fresh issue with the full issue template filled out - this is too in-depth to discuss here.

@gowri777


Thanks for sharing the great ideas; I could clear one such issue.

@carrete

carrete commented May 28, 2023

I just encountered this problem, but it seems like the recommended solution is obsolete?

$ podman version
Client:       Podman Engine
Version:      4.5.1
API Version:  4.5.1
Go Version:   go1.20.4
Git Commit:   9eef30051c83f62816a1772a743e5f1271b196d7
Built:        Fri May 26 11:10:12 2023
OS/Arch:      darwin/amd64

Server:       Podman Engine
Version:      4.5.0
API Version:  4.5.0
Go Version:   go1.20.2
Built:        Fri Apr 14 11:42:22 2023
OS/Arch:      linux/amd64

$ podman rm --storage
Error: unknown flag: --storage
See 'podman rm --help'

@rhatdan
Member

rhatdan commented May 30, 2023

Try
podman rm --force CONTAINERID

@carrete

carrete commented May 31, 2023

Thanks, @rhatdan. Same error. I tried a handful of other things too, like using docker. Always the same error. I have a script that ran podman in a loop and piped its output to another program. I updated the script so that it only calls podman once, before the start of the loop. This appears to have solved my problem. While podman is called in quick succession elsewhere in the script, it appears that only the loop and/or the pipe were problematic.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 30, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 30, 2023