
Add support for Podman containers #2424

Closed
ephracis opened this issue Mar 11, 2020 · 11 comments · Fixed by #3021

Comments

@ephracis

ephracis commented Mar 11, 2020

Hi,

I am running cAdvisor inside a container on RHEL 8.1. I can get it to run, but the metrics it exposes don't contain the names of my containers.

Note that since I'm on RHEL/CentOS 8 I do not use Docker; instead I use Podman to run my containers.

I run cAdvisor like this:

podman run -d --name cadvisor \
  --volume /:/rootfs:ro \
  --volume /var/run:/var/run:rw \
  --volume /sys:/sys:ro \
  --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume /dev/disk/:/dev/disk:ro \
  --volume /var/lib/containers:/var/lib/containers:ro \
  --privileged \
  -p 8080:8080 \
  gcr.io/google-containers/cadvisor:v0.35.0

Then I started a few other containers. When I go to localhost:8080/metrics I see the containers, but they are identified only by the cgroup path /machine.slice/libpod-<ID>.scope; there is no label with their names:

# ... snip ...

# HELP container_start_time_seconds Start time of the container since unix epoch in seconds.
# TYPE container_start_time_seconds gauge
container_start_time_seconds{id="/"} 1.583939163e+09
container_start_time_seconds{id="/machine.slice"} 1.58393933e+09
container_start_time_seconds{id="/machine.slice/libpod-580f2573ab58f885b7e72d1059fe840b2264e9c408fad78cb35891843d624f59.scope"} 1.58393933e+09
container_start_time_seconds{id="/machine.slice/libpod-760a5366a67c41f59eecdd50538bfbeb26477dfa942f448c7b89d88e86ba3e7e.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/libpod-818886cdcc9c5cfbeeb5ffda05ae5caaeb865e78395ac6920c88c01b0163242f.scope"} 1.58393933e+09
container_start_time_seconds{id="/machine.slice/libpod-9d0ac2a5e622ede98affa5be6083bbfe87f09ba8bf54b642d18b54f3387af963.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/libpod-conmon-580f2573ab58f885b7e72d1059fe840b2264e9c408fad78cb35891843d624f59.scope"} 1.58393933e+09
container_start_time_seconds{id="/machine.slice/libpod-conmon-760a5366a67c41f59eecdd50538bfbeb26477dfa942f448c7b89d88e86ba3e7e.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/libpod-conmon-818886cdcc9c5cfbeeb5ffda05ae5caaeb865e78395ac6920c88c01b0163242f.scope"} 1.58393933e+09
container_start_time_seconds{id="/machine.slice/libpod-conmon-9d0ac2a5e622ede98affa5be6083bbfe87f09ba8bf54b642d18b54f3387af963.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/libpod-conmon-d00efd0d860eb02fa83ba63ae2be68d5b2bc48ccbb12b5619e234dc2be5eff65.scope"} 1.583939959e+09
container_start_time_seconds{id="/machine.slice/libpod-conmon-d4b955b7aa4bd5d66e5c8f58151b2b507fcf73cd397f18ac5f41d252b32619b0.scope"} 1.583939966e+09
container_start_time_seconds{id="/machine.slice/libpod-d00efd0d860eb02fa83ba63ae2be68d5b2bc48ccbb12b5619e234dc2be5eff65.scope"} 1.583939959e+09
container_start_time_seconds{id="/machine.slice/libpod-d4b955b7aa4bd5d66e5c8f58151b2b507fcf73cd397f18ac5f41d252b32619b0.scope"} 1.583939966e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice/libpod-0e4d4a8e740a3b4ac7c6c265e30d42e26ff16f4ca95cd95e0b48b76f76bd5eb8.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice/libpod-16499218d8bbf8fb7ea5017ed2b794527122d2bfdb02cbab1a3078929ff42b31.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice/libpod-99160a1f4156192bfda3016e60d3048ae500f898be11f29971e47d5b489e4fe1.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice/libpod-conmon-0e4d4a8e740a3b4ac7c6c265e30d42e26ff16f4ca95cd95e0b48b76f76bd5eb8.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice/libpod-conmon-16499218d8bbf8fb7ea5017ed2b794527122d2bfdb02cbab1a3078929ff42b31.scope"} 1.583939482e+09
container_start_time_seconds{id="/machine.slice/machine-libpod_pod_3b0f9cb743496747fbfc7fe5b1c380caeba7267548c0639bed3784495dba1a1b.slice/libpod-conmon-99160a1f4156192bfda3016e60d3048ae500f898be11f29971e47d5b489e4fe1.scope"} 1.583939482e+09
container_start_time_seconds{id="/system.slice"} 1.583939301e+09
container_start_time_seconds{id="/system.slice/NetworkManager-wait-online.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/NetworkManager.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/auditd.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/ca_api.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cadvisor-haproxy.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cadvisor-pod.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cadvisor.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cert-provisioner.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/chronyd.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cloud-config.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cloud-final.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cloud-init-local.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/cloud-init.service"} 1.583939482e+09
container_start_time_seconds{id="/system.slice/crond.service"} 1.583939482e+09

# ... snip ...
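Until there is native support, the 64-character container ID embedded in those cgroup paths can at least be extracted and resolved by hand. A rough sketch (the sed pattern is my own, and the podman inspect step assumes Podman is available on the host):

```shell
# A cAdvisor id label looks like /machine.slice/libpod-<64-hex-ID>.scope.
# Extract the container ID from such a path:
cgroup_id='/machine.slice/libpod-580f2573ab58f885b7e72d1059fe840b2264e9c408fad78cb35891843d624f59.scope'
ctr_id=$(printf '%s\n' "$cgroup_id" | sed -n 's#.*/libpod-\([0-9a-f]\{64\}\)\.scope$#\1#p')
echo "$ctr_id"

# With Podman on the host, the ID can then be resolved to a name:
#   podman inspect --format '{{.Name}}' "$ctr_id"
```

Note that the pattern deliberately skips the libpod-conmon-<ID>.scope entries, which belong to the conmon monitor process rather than the container itself (the "conmon-" prefix is not 64 hex characters, so the match fails).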

Either cAdvisor only supports getting names for Docker containers, in which case this is a feature request to make cAdvisor work with any OCI container.

Or there is a bug in the version I am using (0.35.0) and cAdvisor should be able to get the names of OCI containers, in which case this is a bug report.

Or I am doing something wrong in my setup of cAdvisor, in which case I believe the documentation should mention this to help users like me who want to use cAdvisor to monitor Podman containers.

Thanks!

@dashpole
Collaborator

> Either cAdvisor only supports getting names for Docker containers, in which case I ask for a feature request making cAdvisor work with any OCI container.

cAdvisor only supports getting names from a specific set of container runtimes. Each container runtime integrates with cAdvisor individually. There is a directory for each supported container runtime here: https://github.com/google/cadvisor/tree/master/container

@ephracis
Author

Sorry, I might be a little confused about this low-level container stuff and how it all relates, but would this mean that Podman should be treated as an additional container runtime, i.e. a new directory? So it's yet another piece of code needed, next to Docker, CRI-O and containerd?

I thought the whole point of OCI and tools like runc was that projects like cAdvisor wouldn't have to deal with each individual tool used to create the containers.

Maybe I was a bit too optimistic. 🤷‍♂

@dashpole
Collaborator

From what I understand, OCI just refers to the spec of how to define containers. cAdvisor interacts with cgroups, which AFAIK do not have a notion of container name.

@ephracis ephracis changed the title cAdvisor metrics does not contain name of my containers Add support for Podman containers Apr 26, 2020
@GadskyPapa

Please add support for podman!

@towe75

towe75 commented Jan 31, 2021

I've put together a simple podman integration based on the docker compatibility API, see PR #2794
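For context, Podman (2.0+) can expose a Docker-compatible REST API over a unix socket, which is the mechanism a compatibility-based integration like this can build on. A minimal sketch of locating and querying that socket — the paths below are Podman's documented defaults, not something taken from the PR:

```shell
# Podman's API socket lives at a different path for root vs. rootless:
uid=$(id -u)
if [ "$uid" -eq 0 ]; then
  sock=/run/podman/podman.sock              # rootful default
else
  sock=/run/user/$uid/podman/podman.sock    # rootless default
fi
echo "$sock"

# Enable the socket (add --user for rootless), then query it like a
# Docker daemon with any Docker API client:
#   systemctl enable --now podman.socket
#   curl --unix-socket "$sock" http://d/containers/json
```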

@Crapshit

Please add Podman support

@bck01215

Please add support for Podman.

@korczis

korczis commented Jun 16, 2023

Any updates here? Is this dead? Can I somehow help?

@Creatone
Collaborator

@korczis #3021 is a work in progress.

@kdryetyln

Hi everyone, any updates on Podman support? I'm trying to use cAdvisor with Podman, but I get an error when I try to reach the /docker page.
Error:
failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

This is the command I use to run it:

podman run -d --rm --name=monitoring_cadvisor \
  --volume=/:/rootfs:ro \
  -v /opt/visoft/.local/share/containers \
  --pid=host \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/opt/visoft/.local/share/containers/:/var/lib/containers:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --volume=/etc/machine-id:/etc/machine-id:ro \
  --privileged \
  --device=/dev/kmsg \
  --publish=8180:8080 \
  gcr.io/cadvisor/cadvisor:v0.47.2 \
  --podman="unix:///run/user/600/podman/podman.sock"

PS: My Podman runs rootless.

Could you give me an update on this issue?

Thank you

@Crapshit

> Hi everyone, any updates on Podman support? I'm trying to use cAdvisor with Podman, but I get an error when I try to reach the /docker page. [...] Could you give me an update on this issue?

#3021 from @Creatone is merged into master, but no new cAdvisor release has been cut yet.
We need to wait for a new release from the master branch.


9 participants