batchprocessor: send_batch_max_size_bytes limit #6046
Comments
👍 for this feature. We're currently doing some trial and error to figure out the right balance of …
@evandam I made some recommendations here: https://github.com/monitoringartist/opentelemetry-trace-pipeline-poisoning#mitigation-of-huge-4mb-trace
Nice link, thank you! It definitely still relies on some back-of-the-envelope math which is bound to be wrong sooner or later, and it would be great to have an easy way to do this at the exporter/collector level.
The size-based batching will only work if the processor is being used with the OTLP exporter; other exporters will have different batch sizes due to different encodings. I believe if we go with #4646, we should be able to provide this for any exporter.
For those willing to configure a different amount of memory to be allocated for each gRPC message in the downstream OTLP Collector's receiver config, there is also the …
No, it won't. The maximum receive message size is only for the gRPC server side. On the gRPC client side, the client's max receive message size must be provided in the call options when the client makes a call to the gRPC server. What is your OTel Collector use case where the exporter receives such large messages from the remote OTLP receiver, though? I cannot think of a scenario where this would even be the case.
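For reference, a minimal sketch of the server-side knob being discussed, assuming the downstream Collector's OTLP receiver exposes a `max_recv_msg_size_mib` option in its gRPC settings (the exact setting name referenced in the earlier comment was not preserved in this thread):

```yaml
# Sketch only: raise the limit on the downstream Collector that *receives* the data.
# Assumes the receiver's gRPC server settings expose max_recv_msg_size_mib (value in MiB);
# as noted above, this changes nothing on the exporting (client) side.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        max_recv_msg_size_mib: 8   # default is 4 MiB
```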
Did you manage to solve this issue?
I think this is the issue which would resolve this eventually.
Is there a way to dump / debug the spans causing that? Update: I have figured it out by configuring the otel collector so that it prints both the error message and all of the span details it sends.
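The exact configuration from that comment was not preserved here; a minimal sketch of one way to do it, assuming the logging exporter and the collector's own telemetry logs:

```yaml
# Sketch only: print full span details alongside exporter errors so oversized
# spans or batches can be identified. Assumes the logging exporter is available.
exporters:
  logging:
    verbosity: detailed      # dump every attribute of every span
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp]
  telemetry:
    logs:
      level: debug           # also surface exporter error messages
```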
In my case the culprit was the Python PyMongo instrumentation with enabled …
Is your feature request related to a problem? Please describe.
The Golang gRPC server has a default message size limit of 4 MB. The batch processor can generate bigger messages, so the receiver will reject them and the whole batch can be dropped: …
The current batchprocessor config options don't provide a way to prevent this situation, because they work only with span counts, not with the overall batch size in bytes.
`send_batch_max_size` is also a count of spans.

Describe the solution you'd like
A new config option `send_batch_max_size_bytes` (maybe there is a better name), defaulting to the gRPC 4 MB limit (4194304), which would ensure that a batch won't exceed this size.
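A hypothetical sketch of what the proposed option could look like; `send_batch_max_size_bytes` does not exist in the batch processor, and the name and default value here are taken only from this proposal:

```yaml
# Hypothetical: send_batch_max_size_bytes is the option proposed in this issue,
# not an existing batch processor setting.
processors:
  batch:
    send_batch_size: 8192
    send_batch_max_size: 10000
    send_batch_max_size_bytes: 4194304   # proposed byte cap matching the gRPC 4 MB default
```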
Describe alternatives you've considered
At the moment the user can customize `send_batch_size` / `send_batch_max_size` (as sketched below), but in theory a few traces with huge spans (e.g. Java backtraces with logs) can still exceed the default 4 MB gRPC message limit. Maybe the OTLP exporter could handle this message limitation.
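A minimal sketch of that workaround using only the options the batch processor already has; the numbers are illustrative guesses, not recommendations, and spans with very large payloads can still push the encoded batch past the byte limit:

```yaml
# Sketch only: keep batches small in span count so the encoded payload is likely
# to stay under the 4 MiB gRPC default. Values are illustrative.
processors:
  batch:
    timeout: 5s
    send_batch_size: 512       # target batch size, in spans
    send_batch_max_size: 1024  # hard upper bound, also in spans
```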