Enhance kubeletstatsreceiver to scrape non-standard endpoints #26719
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
@asweet-confluent sounds like a reasonable idea to me. Can you provide in this issue the metrics we'd be collecting? Are there any important differences between those endpoints and the stats/summary data we collect today?
I updated the issue description with the raw metric names. As noted in the K8S docs, the cadvisor metrics are emitted by cAdvisor itself rather than generated by the kubelet's summary API.
I think the cadvisor metrics come directly from cAdvisor itself, so that makes sense.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or, if you are unsure which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
@asweet-confluent Can you please reopen the issue? I think the missing metrics are really necessary in order to make the receiver complete.
Agreed - can this get re-opened @asweet-confluent?
Aren't some of the metrics listed in the issue description already provided? If I remember correctly, the kubelet's stats/summary endpoint (which this receiver already scrapes) covers part of them. We can consider getting additional metrics from other endpoints, but I believe we should be selective about metrics that are actually important. Once we have this specific list of metrics we could gradually start discussing them as part of open-telemetry/semantic-conventions#1032 as well. On a slightly different note, there were several discussions around these endpoints over the past years, so we would need to verify we are aligned with the most recent update. Some refs:
/cc @dashpole
I don't think we should support scraping prometheus endpoints in the kubelet stats receiver. You can see the proposal behind the CRI-direct feature here: https://github.com/kubernetes/enhancements/tree/6f648005d3b10d9c24984d139f96077f720726f7/keps/sig-node/2371-cri-pod-container-stats. That would be a good option to consider after it graduates to beta.
Can you elaborate on why? I like the CRI-direct approach for sure - but I see that as unrelated to fully supporting kubelet stats. The kubelet stats approach is generic and easier to implement on the operator side (fewer permissions and less volume-mount configuration).
The prometheus receiver already supports the endpoints in question. Given how large the Prometheus ecosystem is, it doesn't seem sustainable to have specific receivers to translate from prometheus conventions to OTel conventions for each source of Prometheus metrics.
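For concreteness, here is a minimal sketch of what scraping one of these endpoints with the existing prometheus receiver could look like. The kubelet port, the `K8S_NODE_IP` environment variable, and the service-account token/CA paths are assumptions for illustration, not something prescribed in this thread:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # One job per non-standard endpoint; repeat for /metrics/resource and /metrics/probes.
        - job_name: kubelet-cadvisor
          metrics_path: /metrics/cadvisor
          scheme: https
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          authorization:
            credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          static_configs:
            # Assumes K8S_NODE_IP is injected into the collector pod via the Downward API.
            - targets: ["${env:K8S_NODE_IP}:10250"]
```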
Given that - I might argue that the kubelet receiver should then be deprecated. I think it's worse to have half a solution than no dedicated solution at all. I do like the idea of using the kubelet receiver because it's simpler to configure and standardizes the exported metric names into something OTel-specific, though.
Hi everyone. Receivers such as kubeletstats are valuable to us precisely because they produce OTel-native metrics. I agree with @diranged on the fact that the receiver currently feels like half a solution.
My question from #26719 (comment) is still valid here: I think we are still missing a well-defined proposal which lists the specific metrics that are not provided by the kubeletstats receiver today. In addition, I think I'd agree with what @dashpole mentioned at #26719 (comment). Last but not least, standardizing a Prometheus-based input on top of the prometheus receiver sounds like a good example for open-telemetry/opentelemetry-collector#8372.
Sorry for my late reply. I agree with you that we should start by listing which metrics we miss with the current kubeletstats receiver. As for the way it should be implemented, multiple scrapers on the same receiver would make sense. Our objective is to use OpenTelemetry as much as possible within our observability pipeline to avoid any conversion issues. Scraping the Prometheus endpoint and having the collector do the conversion to OTLP can be cumbersome. In the current situation, we have some metrics coming via the kubeletstats receiver and others via Prometheus scrape jobs.
That would make sense. You could also create a standalone issue to propose this new batch of metrics and link back to open-telemetry/semantic-conventions#1032 (we can use that issue as a meta issue). TBH though, regarding the implementation we would need to think through the details thoroughly. As I mentioned already, maybe the work for supporting templates in open-telemetry/opentelemetry-collector#8372 can help here.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or, if you are unsure which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
receiver/kubeletstats
Is your feature request related to a problem? Please describe.
`kubeletstatsreceiver` scrapes the kubelet's `/metrics` Prometheus endpoint, but the kubelet also exports other metrics at non-standard endpoints:
- `/metrics/cadvisor`
- `/metrics/resource`
- `/metrics/probes`
Describe the solution you'd like
`kubeletstatsreceiver` should be enhanced to scrape those other endpoints. This is what's done by Datadog's kubelet integration - see the config here.

I've compiled a list of metrics from the source code as well as direct queries to the endpoints. Note that this may not be an exhaustive list:
Metric List
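As an illustration only, one possible shape for the requested configuration; the `extra_metrics_endpoints` option below is hypothetical and does not exist in the receiver today:

```yaml
receivers:
  kubeletstats:
    auth_type: serviceAccount
    endpoint: ${env:K8S_NODE_NAME}:10250
    collection_interval: 20s
    # Hypothetical option sketched for this feature request; not part of the current receiver.
    extra_metrics_endpoints:
      - /metrics/cadvisor
      - /metrics/resource
      - /metrics/probes
```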
Describe alternatives you've considered
As a workaround, you can configure Prometheus scrape jobs to hit those endpoints. This is not ideal because `kubeletstatsreceiver` renames the default metric attributes, e.g. `namespace` becomes `k8s.namespace.name`. Mixing `kubeletstatsreceiver` and Prometheus scrape jobs would create disjointed label sets unless you add a separate processing step that renames them.
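A minimal sketch of that renaming step, assuming the transform processor sits behind a Prometheus scrape job like the one sketched earlier in the thread; only the `namespace` label mentioned above is shown, and other labels (e.g. `pod`) would need similar statements:

```yaml
processors:
  transform/kubelet_prom_labels:
    metric_statements:
      - context: datapoint
        statements:
          # Copy the Prometheus-style label to the kubeletstats-style attribute, then drop the original.
          - set(attributes["k8s.namespace.name"], attributes["namespace"]) where attributes["namespace"] != nil
          - delete_key(attributes, "namespace")
```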
Additional context

No response