Some metrics are missing. #3
Hi @reefland, I'm using them with kube-prometheus-stack and they work well. Thank you for the feedback!
I'm using kube-prometheus-stack as well. k3s comes with containerd, but it's a limited build. I install an external containerd/runc from Ubuntu 20.04.4 LTS:
To get k3s to use a different containerd, you just add a parameter pointing it to the alternate socket.
(The built-in containerd does not support the ZFS snapshotter, only overlayfs, so I can't even test it.)
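For reference, pointing k3s at an external containerd is done with the `container-runtime-endpoint` setting, shown here as a sketch in `/etc/rancher/k3s/config.yaml` (the socket path is an assumption matching a stock Ubuntu containerd install):

```yaml
# /etc/rancher/k3s/config.yaml
# Tell k3s to use the host's containerd instead of its embedded one.
# Socket path assumes the Ubuntu containerd package defaults.
container-runtime-endpoint: /run/containerd/containerd.sock
```

The same option can also be passed as `--container-runtime-endpoint` on the `k3s server` command line.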
Just had a quick look this morning and I think I see it. @reefland, can you try replacing the following?
This returns empty set:
This returns data:
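For context, the difference usually comes down to the `image` label filter in the dashboard queries; a sketch of the two cases (the metric is the one discussed in this thread, the exact dashboard query may differ):

```promql
# Empty result when cAdvisor does not attach an image label to the series:
sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (namespace)

# Returns data once the image filter is removed:
sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)
```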
Can you check whether you drop any labels in your prometheus/kube-prometheus-stack configuration/values?
I'm using the chart defaults and haven't done any relabeling or label drops (not sure how to even do that yet), so that should all be default settings. My Prometheus settings for the Helm chart:
Grafana/Alertmanager settings left out for brevity. As K3s does not deploy everything as a pod, I have some setup in the
This issue is related to k3s; I still need to reproduce it. (Sorry @reefland, btw.)
@dotdc, should I open a new ticket? Because I have it installed.
Anyway, I'm going to investigate it further.
I've created a new ticket.
I upgraded to kube-prometheus-stack-37.2.0 and pretty much every workaround I did for my original issue no longer works. I tried your unedited versions, same issue. I get an empty query result just trying to look at the raw metric.
Hi @reefland,
Anyway, something seems to be blocking your access to the cAdvisor metrics. Check the ServiceMonitors, the ServiceMonitor selectors, access to the Kubernetes API server... Let me know.
All my targets are up. None are reporting an error.
Can you try to deploy with an empty list?
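If the setting in question is the cAdvisor relabeling list, the override in the kube-prometheus-stack values would look something like this sketch (the key names are an assumption based on the chart's values layout):

```yaml
# values.yaml override for kube-prometheus-stack (assumed key names)
kubelet:
  serviceMonitor:
    # Empty out the default relabelings to rule out dropped metrics/labels
    cAdvisorMetricRelabelings: []
    metricRelabelings: []
```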
Yeah, that gets me working again. I'll go through them one at a time and see which one breaks it.
I was able to add each of these back with no impact that I could find to any of my dashboards:
For the last two, I'm trying to figure out the PromQL to use in Prometheus to review the impacted metrics:
Would that be something like
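One way to list which series a drop rule like that would touch is to count series per metric name; the regex and label matcher here are illustrative assumptions standing in for the chart's actual rule:

```promql
# Count series per metric name that a drop rule targeting empty
# container labels would match (matchers are illustrative):
count by (__name__) ({__name__=~"container_.+", container=""})
```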
I did the same tests this afternoon and had the same results. I'm opening an issue to discuss these two rules because, in my opinion, they are far too restrictive to be enabled by default.
Issue opened: prometheus-community/helm-charts#2279
CPU by node should be derived from the node_exporter CPU metrics.
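A sketch of a node-level CPU utilization query built on node_exporter instead of cAdvisor (label names follow node_exporter conventions; adjust the `instance` grouping to your setup):

```promql
# Per-node CPU utilization (%) from node_exporter:
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
```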
Use the node_exporter CPU metrics to get system level data. Fixes: dotdc#3 Signed-off-by: SuperQ <superq@gmail.com>
I have the same issue with the bitnami/kube-prometheus Helm chart, which installs Prometheus.
Seems like an issue with docker, cri-docker, and cAdvisor.
The workaround is found here.
Beautiful dashboards. Some of the panels show no data, and I've seen this before (Kubernetes Lens). Reviewing the JSON query, it references labels that are not included in the cAdvisor metrics I have. For example, in your Global dashboard:
When I look at CPU Utilization by namespace and inspect the JSON query, it is based on `container_cpu_usage_seconds_total`. When I look in my Prometheus, the series do not have `image=` labels; here is a random one from the top of the query result.

I'm using K3s based on Kubernetes 1.23 on bare metal with containerd, no Docker runtime. I have no idea if this comes from containerd, the kubelet, or cAdvisor, or is just expected as part of life when you don't use the Docker runtime.
If you have any suggestions, be much appreciated.