What happened?

The `PeriodicReader` internally uses a `sync::Mutex` to synchronize access to the `PeriodicReaderInner` type:

https://github.com/open-telemetry/opentelemetry-rust/blob/d19187db46ec445143ecdc0271794f71dce3055d/opentelemetry-sdk/src/metrics/periodic_reader.rs#L201C1-L205C2

This can cause deadlocks if the mutex is held across an `.await` point (see https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#which-kind-of-mutex-should-you-use). At least one user has reported such a deadlock on the #otel-rust Slack channel.
API Version

0.21

SDK Version

0.21.2

What Exporters are you seeing the problem on?

N/A

Relevant log output

```shell
$ RUST_LOG=debug cargo run -- --poll-interval 30s
[2024-01-25T19:25:41Z INFO nomad_otel_metrics_scraper] Polling http://localhost:4646/ every 30s
{
  "resourceMetrics": {
    "resource": {
      "attributes": [
        { "key": "telemetry.sdk.version", "value": { "stringValue": "0.21.2" } },
        { "key": "telemetry.sdk.language", "value": { "stringValue": "rust" } },
        { "key": "telemetry.sdk.name", "value": { "stringValue": "opentelemetry" } },
        { "key": "service.name", "value": { "stringValue": "unknown_service" } }
      ]
    },
    "scopeMetrics": []
  }
}
^C[2024-01-25T19:25:43Z INFO nomad_otel_metrics_scraper] Provider, as we know it. MeterProvider {
    pipes: Pipelines([Pipeline]),
    meters: Mutex {
        data: {
            InstrumentationLibrary { name: "nomad_metrics", version: None, schema_url: None, attributes: [] }:
                Meter { scope: InstrumentationLibrary { name: "nomad_metrics", version: None, schema_url: None, attributes: [] } },
        },
        poisoned: false, ..
    },
    is_shutdown: false,
}
[2024-01-25T19:25:43Z INFO nomad_otel_metrics_scraper] Flushing metrics.
```