Custom collector for multiple metrics #49
As you stated above, a clean solution is not possible today, i.e. it would require some changes within the library itself. In the meantime, would sharing some state between the custom metrics be an option? That shared state would be updated with a new aggregate by a single custom metric per scrape. Consecutive custom metrics within the same scrape wouldn't need to aggregate themselves, but would instead access the shared state.
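Roughly what I have in mind, as a minimal sketch (all of the type, field, and method names below are made up for illustration):

```rust
use std::sync::{Arc, Mutex};

// Hypothetical aggregate that is expensive to compute, refreshed once per scrape.
#[derive(Default, Clone)]
struct RuntimeSnapshot {
    total_polls: u64,
    busy_seconds: f64,
}

// Shared handle that every custom metric holds a clone of.
#[derive(Default, Clone)]
struct SharedSnapshot(Arc<Mutex<RuntimeSnapshot>>);

impl SharedSnapshot {
    // Called by the first custom metric encountered in a scrape: do the
    // expensive aggregation once and cache the result.
    fn refresh(&self, fresh: RuntimeSnapshot) {
        *self.0.lock().unwrap() = fresh;
    }

    // Called by the remaining custom metrics in the same scrape: just read
    // the cached aggregate instead of recomputing it.
    fn read(&self) -> RuntimeSnapshot {
        self.0.lock().unwrap().clone()
    }
}
```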
Thanks for the reply! Yep, I thought as much, no worries - I'll have a think about potential APIs.
Yeah, I've got a solution where each custom metric shares a reference to some shared state, but there are a few things about it that feel suboptimal.
We could give each Prometheus scrape an ID and then make that ID available to each custom metric during encoding. I think something worth exploring is the ability to register a custom collector that is invoked once per scrape.
Can you say more about this? What work is involved?
I've now changed the implementation, so that comment isn't quite correct, although my current implementation still feels a bit suboptimal. Previously I'd thrown together a rough workaround; I do something a bit different now - I've pushed it up to https://github.com/sd2k/tokio-metrics-prometheus in case it's of any use. I'd like to get it up to crates.io soon but want to figure out #57 first!
I'm going to describe some fundamental assumptions of Prometheus, because it's not totally clear to me whether these crates abide by those requirements. But I'm no expert and it's entirely possible that the issues you're talking about have nothing to do with this stuff. If that's true, or if I'm telling you stuff you already know, my apologies!

Prometheus operates in a "pull" model whereby Prometheus servers scrape targets on a regular interval. A scrape is an HTTP GET which should yield the current state of all metrics (timeseries) known to the process. But reading the current state of a metric should, with some exceptions, not be an expensive operation. The expectation is that each timeseries value is maintained as a simple primitive value, which is cheap to both read and write. (The exception is "func" metrics, like gauge funcs, which are implemented as functions that get called during scrapes and return values. These can be expensive, but shouldn't be; scrapes are expected to be fast.) So...
When Prometheus performs a scrape, the expectation is that the HTTP handler is doing a bunch of relatively cheap, likely atomic, reads of simple primitive values. Which is to say that any "calculation" is expected to be done ahead of time.

edit: Another way of saying all of this is that a scrape should not trigger any significant computation. Is that not the case here?
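To illustrate, a typical setup looks roughly like this (a minimal sketch in plain Rust, not tied to any particular client library; the metric name is made up):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

fn main() {
    // The metric value lives in a simple primitive that the application
    // updates at the moment events happen.
    let in_flight = Arc::new(AtomicU64::new(0));

    // Application code, when the event occurs:
    in_flight.fetch_add(1, Ordering::Relaxed);

    // The scrape handler then only performs a cheap atomic read; any
    // "calculation" has already been done ahead of time.
    let exposition = format!("requests_in_flight {}\n", in_flight.load(Ordering::Relaxed));
    print!("{exposition}");
}
```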
A scrape will indeed invoke encode on a set of metrics, but encode shouldn't know anything about scrapes. Rather, encode should operate on an immutable snapshot, i.e. a copy, of a metric value, which scrapes should capture.
Indeed, we're in agreement on most things here I think! In my specific case I'm trying to instrument an external crate, so I don't have the ability to increment counters or anything at the moment things actually happen. Instead, the tokio_metrics::RuntimeMonitor::intervals method is the only way to get hold of the state I need, and it comes in the form of an iterator which I need to advance at the start of each scrape to get the current value.

The quoted part of your comment describes a (poor) workaround I had to use due to the lack of such an API 🙂 I've since switched to a much less expensive method, but that still requires sharing state between multiple custom metrics, which isn't a super clean implementation. The aim of this issue is to end up with a convenient API that allows efficient scraping of multiple metrics that represent some state that's out of my control, without resorting to complex state sharing. What I'd like is to be able to do that expensive work once per scrape and then report values for several metrics from it.
That would allow me to do something like:
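Something roughly like the following, where `register_collector` and `next_runtime_interval` don't exist and are just stand-ins for the kind of hook I mean:

```rust
use prometheus_client::metrics::gauge::Gauge;
use prometheus_client::registry::Registry;

fn register_runtime_metrics(registry: &mut Registry) {
    let total_polls: Gauge = Gauge::default();
    let total_park_count: Gauge = Gauge::default();

    registry.register("tokio_total_polls", "Total task polls", total_polls.clone());
    registry.register("tokio_total_park_count", "Total worker parks", total_park_count.clone());

    // Hypothetical: a hook invoked once at the start of every scrape, which
    // can update several already-registered metrics from a single pass over
    // the runtime stats.
    registry.register_collector(move || {
        let interval = next_runtime_interval(); // made-up aggregation helper
        total_polls.set(interval.total_polls as i64);
        total_park_count.set(interval.total_park_count as i64);
    });
}
```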
(Note: I'm not proposing that this is an API that would actually work, but hopefully it conveys my meaning.)
The Collectors example in the Golang client docs explains this in better detail than I possibly could, too.
Check.
Ah! So this shouldn't be EncodeMetrics, but rather CollectMetrics. I think the disconnect here may be that client_rust doesn't currently provide a well-defined abstraction layer between collection and encoding:

- A registry is something that holds long-lived mutable metric values which can be mutated and collected.
- A collector is typically a trait implemented by a registry which yields a "snapshot" of each metric that can be encoded.
- An encoder is something that encodes those fixed snapshots for scraping.

tl;dr: collecting != encoding
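In rough Rust terms, the separation I have in mind looks something like this (purely illustrative; these traits are not an existing client_rust API):

```rust
/// An immutable snapshot of a single metric at one point in time.
struct MetricSnapshot {
    name: String,
    help: String,
    value: f64,
}

/// Collecting: produce snapshots of the current metric values. Any
/// per-scrape work (like aggregating runtime stats) belongs here.
trait Collect {
    fn collect(&self) -> Vec<MetricSnapshot>;
}

/// Encoding: turn already-collected snapshots into the exposition format.
/// No metric values are computed at this stage.
trait Encode {
    fn encode(&self, snapshots: &[MetricSnapshot]) -> String;
}
```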
I think not scraping but collecting? Which I think could be solved by defining a new collector trait?
Yep, I think that's accurate! (I did find it strange that I was implementing something called `encode` in order to do what is really collection work.)
The `Collector` abstraction allows users to provide additional metrics and their description on each scrape. See also:
- https://pkg.go.dev/github.com/prometheus/client_golang/prometheus#hdr-Custom_Collectors_and_constant_Metrics
- prometheus#49
- prometheus#29
Cross-referencing the proposal here: #82
Hi! I've been looking at implementing a Prometheus collector for the recently announced tokio-metrics crate. Every scrape, I'd like to gather runtime metrics for the current Tokio runtime. The problem is that doing so requires a non-trivial amount of up-front work to aggregate all of the stats across the N workers in the runtime, which I'd rather not do during every metric's encode function (following the custom metric example). Instead, I think it'd be ideal if there were a way to do something similar to the client_python Custom Collector example, which allows custom collectors to record values for multiple metrics at each scrape - that'd avoid me having to duplicate work (non-atomically) on every scrape. Do you think such an API would be possible?
Alternatively if you know of another pattern to get around this, I'd love to hear it!
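For reference, the runtime stats come out of tokio-metrics roughly like this (a minimal sketch; the exact field names are from memory and the runtime metrics also require the tokio_unstable cfg, so treat the details as approximate):

```rust
use tokio_metrics::RuntimeMonitor;

#[tokio::main]
async fn main() {
    let handle = tokio::runtime::Handle::current();
    let monitor = RuntimeMonitor::new(&handle);

    // `intervals()` yields one aggregated snapshot per call to `next()`,
    // covering the period since the previous snapshot. Advancing it is the
    // per-scrape work: it walks all of the runtime's workers and sums their stats.
    let mut intervals = monitor.intervals();
    let snapshot = intervals.next().expect("intervals never ends");

    // A single snapshot feeds many metrics, which is why doing this inside
    // each individual metric's encode would duplicate the work.
    println!("workers: {}", snapshot.workers_count);
    println!("total parks: {}", snapshot.total_park_count);
}
```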