Description
Hi! I've been looking at implementing a Prometheus collector for the recently announced tokio-metrics crate. On every scrape, I'd like to gather runtime metrics for the current Tokio runtime. The problem is that doing so requires a non-trivial amount of up-front work to aggregate all of the stats across the N workers in the runtime, which I'd rather not repeat in every metric's encode function (following the custom metric example).
Instead, I think it'd be ideal if there were a way to do something similar to the client_python Custom Collector example, which allows a custom collector to record values for multiple metrics at scrape time - that would save me from duplicating the work (non-atomically) for every metric on every scrape. Do you think such an API would be possible?
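To make that concrete, here's a rough sketch of the shape I have in mind. The `Collector` trait and `MetricSample` type are hypothetical placeholders (not anything that exists in this crate today), and the tokio-metrics field names are from memory, so please treat this as pseudocode rather than a working example:

```rust
// Rough sketch only: `Collector` and `MetricSample` are hypothetical, and the
// tokio-metrics field names are from memory. The point is the shape: one
// `collect` call per scrape that can feed several metrics at once.
// (tokio-metrics' RuntimeMonitor also needs the runtime built with
// `--cfg tokio_unstable`, if I remember correctly.)
use tokio_metrics::RuntimeMonitor;

/// Hypothetical: a metric name/value pair produced at scrape time.
struct MetricSample {
    name: &'static str,
    value: f64,
}

/// Hypothetical: called once per scrape, free to record values for any
/// number of metrics (similar to client_python's custom collectors).
trait Collector {
    fn collect(&self) -> Vec<MetricSample>;
}

struct TokioRuntimeCollector {
    monitor: RuntimeMonitor,
}

impl Collector for TokioRuntimeCollector {
    fn collect(&self) -> Vec<MetricSample> {
        // Aggregate across all runtime workers exactly once per scrape,
        // then fan the result out into as many metrics as needed.
        let metrics = self
            .monitor
            .intervals()
            .next()
            .expect("intervals() yields metrics since the last interval");

        vec![
            MetricSample {
                name: "tokio_workers_count",
                value: metrics.workers_count as f64,
            },
            MetricSample {
                name: "tokio_total_park_count",
                value: metrics.total_park_count as f64,
            },
            // ...and so on for the rest of the runtime stats.
        ]
    }
}
```

The key property is that the per-worker aggregation happens once inside `collect`, no matter how many metrics it ends up feeding.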
Alternatively, if you know of another pattern to get around this, I'd love to hear it!