feat: support backoff/retry in OTLP #3126
Codecov Report

```
@@ Coverage Diff @@
##            main   #3126     +/-   ##
=======================================
+ Coverage   79.6%   80.8%    +1.2%
=======================================
  Files        124     128       +4
  Lines      23174   23090      -84
=======================================
+ Hits       18456   18676     +220
+ Misses      4718    4414     -304
```
Pull Request Overview
This PR implements retry logic with exponential backoff and jitter for OTLP exporters to handle transient failures gracefully, addressing issue #3081. The implementation supports both HTTP and gRPC protocols with protocol-specific error classification and server-provided throttling hints.
- Adds a new `retry` module to `opentelemetry-sdk` with configurable retry policies and exponential backoff
- Implements protocol-specific error classification in `opentelemetry-otlp` for HTTP and gRPC responses
- Integrates retry functionality into all OTLP exporters (traces, metrics, logs) for both HTTP and gRPC transports
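The retry loop these bullets describe can be sketched with std only. The names `RetryPolicy`, `RetryErrorType`, `backoff_delay`, and `retry_with_backoff` mirror identifiers visible in the diff, but this is an illustrative reconstruction, not the crate's actual code; jitter and the actual sleeping are elided so the sketch stays deterministic:

```rust
use std::time::Duration;

// Hypothetical policy shape, mirroring the fields quoted in the review.
#[derive(Debug, Clone)]
pub struct RetryPolicy {
    pub max_retries: u32,
    pub initial_delay_ms: u64,
    pub max_delay_ms: u64,
    pub jitter_ms: u64,
}

// Classification outcome: retry / can't retry / throttle (with server hint).
#[derive(Debug, PartialEq)]
pub enum RetryErrorType {
    NonRetryable,
    Retryable,
    Throttled(Duration),
}

// Exponential backoff: initial_delay * 2^attempt, capped at max_delay.
// (Jitter would be added on top; omitted here for determinism.)
pub fn backoff_delay(policy: &RetryPolicy, attempt: u32) -> Duration {
    let exp = policy
        .initial_delay_ms
        .saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(exp.min(policy.max_delay_ms))
}

// Generic retry loop over a fallible operation, driven by a classifier.
pub fn retry_with_backoff<T, E, Op, Cl>(
    policy: &RetryPolicy,
    mut op: Op,
    classify: Cl,
) -> Result<T, E>
where
    Op: FnMut() -> Result<T, E>,
    Cl: Fn(&E) -> RetryErrorType,
{
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => match classify(&e) {
                RetryErrorType::Retryable if attempt < policy.max_retries => {
                    let _delay = backoff_delay(policy, attempt);
                    // A real implementation sleeps for `_delay` (plus jitter)
                    // via the runtime abstraction before the next attempt.
                    attempt += 1;
                }
                RetryErrorType::Throttled(server_delay)
                    if attempt < policy.max_retries =>
                {
                    let _delay = server_delay; // server hint overrides backoff
                    attempt += 1;
                }
                _ => return Err(e),
            },
        }
    }
}
```

Note the retry budget is `max_retries` on top of the initial attempt, so a permanently failing operation is invoked `max_retries + 1` times in total.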
Reviewed Changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| opentelemetry-sdk/src/retry.rs | Core retry module with exponential backoff, jitter, and error classification |
| opentelemetry-otlp/src/retry_classification.rs | Protocol-specific error classification for HTTP and gRPC responses |
| opentelemetry-otlp/src/exporter/tonic/*.rs | gRPC exporter integration with retry functionality |
| opentelemetry-otlp/src/exporter/http/*.rs | HTTP exporter integration with retry functionality |
| opentelemetry-otlp/Cargo.toml | Feature flags and dependencies for retry support |
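For the HTTP side of `retry_classification.rs`, one plausible shape follows the OTLP/HTTP spec's guidance that 429, 502, 503, and 504 are retryable and that 429/503 responses may carry a `Retry-After` throttling hint. The function name and signature here are assumptions for illustration, not the file's actual API:

```rust
use std::time::Duration;

// Same three-way model as the SDK retry module.
#[derive(Debug, PartialEq)]
enum RetryErrorType {
    NonRetryable,
    Retryable,
    Throttled(Duration),
}

// Classify an HTTP response status, honoring a parsed Retry-After hint
// (in seconds) when the server provides one on a throttling status.
fn classify_http(status: u16, retry_after_secs: Option<u64>) -> RetryErrorType {
    match status {
        429 | 503 => match retry_after_secs {
            Some(secs) => RetryErrorType::Throttled(Duration::from_secs(secs)),
            None => RetryErrorType::Retryable,
        },
        502 | 504 => RetryErrorType::Retryable,
        // Everything else (including 4xx client errors) is not retried.
        _ => RetryErrorType::NonRetryable,
    }
}
```

The gRPC path would do the analogous mapping over gRPC status codes (e.g. treating `UNAVAILABLE` as retryable), which is why the classification lives in `opentelemetry-otlp` rather than the SDK.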
bantonsson
left a comment
I think the HTTP exporters look good now. Love all those red lines.
bantonsson
left a comment
👍🏼 for the HTTP code. I can't see a clear way to reuse more of the Tonic code.
Sorry for the delay. I would like to review during this week; assigning it to myself.
```rust
RetryErrorType::Retryable if attempt < policy.max_retries => {
    attempt += 1;
    // Use exponential backoff with jitter
    otel_warn!(name: "OtlpRetry", message = format!("Retrying operation {:?} due to retryable error: {:?}", operation_name, err));
```
This should be info level, as we are not yet giving up and losing data.
ack
```rust
match error_type {
    RetryErrorType::NonRetryable => {
        otel_warn!(name: "OtlpRetry", message = format!("Operation {:?} failed with non-retryable error: {:?}", operation_name, err));
```
Let's stick with structured logging as much as possible instead of stringifying, i.e. Operation and Error should be their own fields.
```rust
match error_type {
    RetryErrorType::NonRetryable => {
        otel_warn!(name: "OtlpRetry", message = format!("Operation {:?} failed with non-retryable error: {:?}", operation_name, err));
```
`name: "OtlpRetry"` is not the best usage of event names, as it is reused many times in this file alone, each time with a different event. We need a dedicated event name for each distinct type of log, and we should ensure the schema (fields, etc.) is the same for a given event name.
```rust
RetryErrorType::Throttled(server_delay) if attempt < policy.max_retries => {
    attempt += 1;
    // Use server-specified delay (overrides exponential backoff)
    otel_warn!(name: "OtlpRetry", message = format!("Retrying operation {:?} after server-specified throttling delay: {:?}", operation_name, server_delay));
```
please downgrade to info level.
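The server-specified delay in the quoted branch has to come out of the response itself. A minimal sketch of that plumbing, assuming hypothetical helper names and handling only the delta-seconds form of `Retry-After` (the HTTP-date form is out of scope here):

```rust
use std::time::Duration;

// Hypothetical helper: parse the delta-seconds form of an HTTP
// `Retry-After` header value, e.g. "30" -> 30 seconds.
fn parse_retry_after(header: &str) -> Option<Duration> {
    header.trim().parse::<u64>().ok().map(Duration::from_secs)
}

// When the server supplies a throttling hint, it overrides the
// locally computed exponential backoff; otherwise fall back to it.
fn next_delay(server_hint: Option<Duration>, backoff: Duration) -> Duration {
    server_hint.unwrap_or(backoff)
}
```

gRPC conveys the same hint via `RetryInfo` in the status details rather than a header, but the override semantics are the same.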
```rust
    pub jitter_ms: u64,
}

/// A runtime stub for when experimental_async_runtime is not enabled.
```
Not sure I follow the use case for this. This PR already adds a runtime implementation for the dedicated thread case in the Sdk crate. Should the OTLP exporter be aware of anything more than letting the Sdk Runtime implementation do its own delay - either using asynchronous delay or blocking sleeps?
Yeah, this is vestigial and slipped through. We can remove it.
```rust
    endpoint: self.collector_endpoint.to_string(),
});

// Select runtime based on HTTP client feature - if we're using
```
This might need to be made more robust: the OTLP exporter having to pick the runtime feels flaky, since it won't know of all possible runtime implementations.
open to suggestions; there was a brief discussion around this.
```markdown
- Update `opentelemetry-proto` and `opentelemetry-http` dependency version to 0.31.0
- Add HTTP compression support with `gzip-http` and `zstd-http` feature flags
- Add retry with exponential backoff and throttling support for HTTP and gRPC exporters
```
Let's add some more details here so a user reading the changelog knows how to use this feature (given that it is experimental and opt-in).
sure
cijothomas
left a comment
I know the PR is merged, but left some comments. We can follow up separately to address them.
Fixes #3081, building on the work started by @AaronRM 🤝
Changes
A new `retry` module added to `opentelemetry-sdk`

Models the sorts of retry an operation may request (retry / can't retry / throttle), and provides a helper `retry_with_backoff` mechanism that can be used to wrap a retryable operation and retry it. The helper relies on `experimental_async_runtime` for its runtime abstraction, to provide the actual pausing. It also takes a lambda to classify the error, so the caller can inform the retry mechanism whether a retry is required.

A new `retry_classification` module added to `opentelemetry-otlp`

This takes the actual error responses that we get back over OTLP and maps them back to the retry model. Because this is OTLP-specific it belongs here rather than alongside the retry code.
Retry binding
... happens in each one of the concrete exporters to tie it all together.
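That binding can be sketched end to end; every name below is hypothetical (not the crate's API), but it shows the shape: a concrete exporter hands its send operation plus a classification lambda to a generic retry driver:

```rust
use std::time::Duration;

// Same three-way retry model as described above.
#[derive(Debug, PartialEq)]
enum RetryErrorType {
    NonRetryable,
    Retryable,
    Throttled(Duration),
}

// Illustrative retry driver: keep calling `send` until it succeeds,
// the classifier says the error is fatal, or the budget runs out.
fn run_with_retries<E>(
    max_retries: u32,
    mut send: impl FnMut() -> Result<(), E>,
    classify: impl Fn(&E) -> RetryErrorType,
) -> Result<(), E> {
    let mut attempt = 0;
    loop {
        match send() {
            Ok(()) => return Ok(()),
            Err(e) if attempt < max_retries => match classify(&e) {
                RetryErrorType::NonRetryable => return Err(e),
                // A real exporter sleeps here, using either the computed
                // backoff or the server-provided throttling delay.
                RetryErrorType::Retryable | RetryErrorType::Throttled(_) => {
                    attempt += 1;
                }
            },
            Err(e) => return Err(e),
        }
    }
}
```

In the PR, the exporter supplies its protocol's classifier (HTTP status codes or gRPC status codes) as the lambda, which is exactly what keeps the SDK's retry module protocol-agnostic.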
Also ...
Open Questions
Merge requirement checklist
- `CHANGELOG.md` files updated for non-trivial, user-facing changes