[exporterhelper] New exporter helper for custom requests #7874
Conversation
Codecov Report
Patch coverage:
Additional details and impacted files
@@ Coverage Diff @@
## main #7874 +/- ##
==========================================
- Coverage 90.25% 90.11% -0.14%
==========================================
Files 301 302 +1
Lines 15551 15709 +158
==========================================
+ Hits 14035 14156 +121
- Misses 1227 1258 +31
- Partials 289 295 +6
☔ View full report in Codecov by Sentry.
Force-pushed from 9d5bbe4 to 5fc2162 (Compare)
Looking forward! I see this as an important step for batching by request size.
@open-telemetry/collector-approvers, please review whenever you have a chance. I'm planning to put more PRs on top of it next week.
Apologies for the delay in reviewing this; I missed it.
// Request represents a single request that can be sent to the endpoint.
type Request interface {
Should we make this interface unimplementable and have some public base request struct to provide default implementations for future methods?
An alternative is to also have RequestItemer (or something like that), so we don't need an interface at all and can just use any, and ask users to provide funcs that work with the request. Another option is to ask the user to implement Send on that request as well, but right now ItemCount is on the request and Send is not, which is inconsistent.
An alternative is to also have RequestItemer (or something like that), so we don't need an interface at all and can just use any, and ask users to provide funcs that work with the request.
In that case, do you suggest we make this an optional interface? We could do that, but then the new itemized queue and batching exporter helpers won't work. We could probably allow that for clients that don't need them.
Another option is to ask the user to implement Send on that request as well, but right now ItemCount is on the request and Send is not, which is inconsistent.
What if we ask clients to implement Request with only one required Send function, and, if they want the itemized queue or batching helpers, they implement the optional interface RequestItemer? Then we could also introduce byte-sized queue or batching helpers for requests that implement RequestSizer.
type Request interface {
	// Send exports the request to the destination.
	Send(ctx context.Context) error
}
type RequestItemer interface {
	// ItemsCount returns the number of telemetry items in the request.
	ItemsCount() int
}
type RequestSizer interface {
	// BytesSize returns the size of the request payload in bytes.
	BytesSize() int
}
WDYT?
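For illustration, here is a minimal sketch of what a client-side implementation could look like under this proposal. The logsRequest type, its fields, and the main function are hypothetical; only the three interfaces are taken from the snippet above.

package main

import (
	"context"
	"fmt"
)

// The three interfaces from the proposal above, repeated so the sketch is
// self-contained.
type Request interface {
	Send(ctx context.Context) error
}
type RequestItemer interface {
	ItemsCount() int
}
type RequestSizer interface {
	BytesSize() int
}

// logsRequest is a hypothetical client-defined request wrapping a payload
// already serialized in the exporter's wire format, not pdata.
type logsRequest struct {
	payload []byte
	items   int
}

// Send implements the required Request interface.
func (r *logsRequest) Send(ctx context.Context) error {
	// A real exporter would POST r.payload to its backend here.
	fmt.Printf("sending %d items (%d bytes)\n", r.items, len(r.payload))
	return nil
}

// ItemsCount implements the optional RequestItemer interface, enabling
// item-based queue and batch sizing.
func (r *logsRequest) ItemsCount() int { return r.items }

// BytesSize implements the optional RequestSizer interface, enabling
// byte-based sizing.
func (r *logsRequest) BytesSize() int { return len(r.payload) }

func main() {
	var req Request = &logsRequest{payload: []byte(`{"resourceLogs":[]}`), items: 0}
	// A helper could detect the optional interfaces at runtime.
	if itemer, ok := req.(RequestItemer); ok {
		fmt.Println("items:", itemer.ItemsCount())
	}
	_ = req.Send(context.Background())
}

The point of keeping RequestItemer and RequestSizer optional is that a bare Request still works with the basic helper, while itemized or byte-sized queue and batching helpers would type-assert for the extra interfaces as shown in main.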
I submitted another PR with the suggested approach: #8178
Introduce a new exporter helper that operates over client-provided requests instead of pdata. It opens the door to moving batching into the exporter, where batches will be built from the client's data format instead of pdata. The batches can be properly sized by a custom request size, which can be different from OTLP. The same custom request sizing will be applied to the sending queue. It will also improve the performance of sending-queue retries for non-OTLP exporters, since they don't need to translate pdata on every retry. This is an experimental API; once stabilized, it's intended to replace the existing helpers.
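To make the sizing idea concrete, here is a toy sketch of a batcher that accumulates requests until a client-defined item budget is reached. The itemBatcher type, its fields, and the maxItems budget are hypothetical and not part of this PR; the Request/RequestItemer shape follows the review discussion above rather than the exact API in this change.

package main

import (
	"context"
)

// Request and RequestItemer mirror the interfaces discussed in this PR's
// review thread; the exact shape in the final API may differ.
type Request interface {
	Send(ctx context.Context) error
}
type RequestItemer interface {
	ItemsCount() int
}

// itemBatcher is a toy illustration of sizing batches by the client-defined
// item count instead of by OTLP/pdata size.
type itemBatcher struct {
	maxItems int
	items    int
	pending  []Request
}

// add queues a request and flushes once the configured item budget is reached.
func (b *itemBatcher) add(ctx context.Context, req Request) error {
	b.pending = append(b.pending, req)
	if itemer, ok := req.(RequestItemer); ok {
		b.items += itemer.ItemsCount()
	}
	if b.items >= b.maxItems {
		return b.flush(ctx)
	}
	return nil
}

// flush sends every pending request as-is; no pdata translation happens here,
// which is what makes retries cheap for non-OTLP exporters.
func (b *itemBatcher) flush(ctx context.Context) error {
	for _, req := range b.pending {
		if err := req.Send(ctx); err != nil {
			return err
		}
	}
	b.pending, b.items = nil, 0
	return nil
}

func main() {
	// In the exporter helper, the batcher would be fed from the sending queue;
	// here we only construct it.
	_ = &itemBatcher{maxItems: 1024}
}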
I would like to review this before it gets merged, especially if it conflicts with #7510.
Introduce a new exporter helper that operates over client-provided requests instead of pdata. The helper user now has to provide `Converter`, an interface with a function implementing translation of pdata Metrics/Traces/Logs into a user-defined `Request`. `Request` is an interface with only one required function, `Export`. It opens the door to moving batching into the exporter, where batches will be built from the client's data format instead of pdata. The batches can be properly sized by a custom request size, which can be different from OTLP. The same custom request sizing will be applied to the sending queue. It will also improve the performance of sending-queue retries for non-OTLP exporters, since they don't need to translate pdata on every retry. This is an implementation alternative to #7874, as suggested in #7874 (comment). Tracking Issue: #8122 --------- Co-authored-by: Alex Boten <alex@boten.ca>
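A rough sketch of what the converter-based approach could look like for a JSON logs exporter follows. The LogsConverter, jsonLogsRequest, and jsonLogsConverter identifiers are assumptions for illustration and may not match the names merged in #8178; only the Converter-plus-Request-with-Export shape comes from the description above.

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/collector/pdata/plog"
)

// Request has a single required function, per the description above.
type Request interface {
	Export(ctx context.Context) error
}

// LogsConverter is a sketch of the Converter idea for logs: it translates
// pdata into a user-defined Request exactly once.
type LogsConverter interface {
	RequestFromLogs(ctx context.Context, ld plog.Logs) (Request, error)
}

// jsonLogsRequest holds the payload in the exporter's own format, so retries
// don't have to re-translate pdata.
type jsonLogsRequest struct {
	body  []byte
	items int
}

func (r *jsonLogsRequest) Export(ctx context.Context) error {
	// A real exporter would send r.body to its backend here.
	fmt.Printf("exporting %d records (%d bytes)\n", r.items, len(r.body))
	return nil
}

// jsonLogsConverter implements LogsConverter using the pdata JSON marshaler.
type jsonLogsConverter struct {
	marshaler plog.JSONMarshaler
}

func (c *jsonLogsConverter) RequestFromLogs(_ context.Context, ld plog.Logs) (Request, error) {
	body, err := c.marshaler.MarshalLogs(ld)
	if err != nil {
		return nil, err
	}
	return &jsonLogsRequest{body: body, items: ld.LogRecordCount()}, nil
}

func main() {
	conv := &jsonLogsConverter{}
	req, err := conv.RequestFromLogs(context.Background(), plog.NewLogs())
	if err != nil {
		panic(err)
	}
	_ = req.Export(context.Background())
}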
Superseded by #8178
Introduce a new exporter helper that operates over client-provided requests instead of pdata. The helper user now has to provide:
It opens the door to moving batching into the exporter, where batches will be built from the client's data format instead of pdata. The batches can be properly sized by a custom request size, which can be different from OTLP. The same custom request sizing will be applied to the sending queue. It will also improve the performance of sending-queue retries for non-OTLP exporters, since they don't need to translate pdata on every retry.
This is an experimental API. Once stabilized, it's intended to replace the existing helpers.
Tracking Issue: #8122
Related issue: #4646