Collector randomly stops sending spans #31758
Comments
Pinging code owners for exporter/kafka: @pavolloffay @MovieStoreGuy. See Adding Labels via Comments if you do not have permissions to add labels yourself.
We are continuing to encounter this issue once or twice daily. It happens only in our highest-traffic environment. There is nothing in the logs (debug level). We analyzed the pprof profiles and goroutine graphs for a working and a broken collector: the broken collector is not running the sarama producer. I'm attaching the profile graphs here (for …).
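For anyone who wants to reproduce this kind of analysis, here is a minimal sketch of enabling the collector's pprof extension and pulling a goroutine dump from a suspect pod. The endpoint, pod name, and shell commands are illustrative assumptions, not taken from the setup above.

```yaml
# Sketch: enable the pprof extension so goroutine/CPU profiles can be pulled
# from a live collector pod. Endpoint and port are illustrative assumptions.
extensions:
  pprof:
    endpoint: 0.0.0.0:1777

service:
  extensions: [pprof]
  # ...existing receivers/processors/exporters/pipelines unchanged...

# From a shell (pod name is hypothetical):
#   kubectl port-forward pod/otel-collector-0 1777:1777
#   curl -o goroutines.txt "http://localhost:1777/debug/pprof/goroutine?debug=2"
#   grep -c sarama goroutines.txt   # a healthy Kafka exporter should show sarama goroutines
```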
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure which component this issue relates to, please ping the code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
We're encountering this same issue using otel/opentelemetry-collector-k8s:0.102.1. No errors are logged other than the send queue being full.
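Since both reports involve the send queue filling up silently, here is a sketch of the exporterhelper settings that govern that behavior on the kafka exporter. The broker address, queue size, and timeouts below are placeholders, not anyone's actual configuration.

```yaml
# Sketch: exporterhelper queue/retry/timeout settings on the kafka exporter.
# Broker address, queue_size, and timeout values are illustrative placeholders.
exporters:
  kafka:
    brokers: ["kafka:9092"]
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 5000      # otelcol_exporter_queue_size approaching this is what fires the "queue full" alert
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s
    timeout: 10s            # bounds how long a single produce attempt may block
```

Note that these settings only bound how much data is buffered and how quickly the failure becomes visible; they do not address whatever makes the producer stop.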
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
No response
What happened?
Description
The otel-collector randomly stops sending spans. We encountered this situation twice this week. It happens to just one of the collector pods; the rest work correctly. We are notified by an alert about the sending queue being full; after inspecting pod metrics, it turns out this is caused by otelcol_exporter_sent_spans dropping to 0.
There is nothing in the logs before the error about the sending queue being full.
Are there additional ways to diagnose the issue before resorting to pprof?
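For illustration, a sketch of Prometheus alerting rules for the two symptoms described above (sent spans dropping to 0 and the send queue filling up). The thresholds, durations, and label names are assumptions about the monitoring setup, not part of this report.

```yaml
# Sketch of Prometheus alerting rules for the symptoms above.
# Thresholds, "for" durations, and the "exporter" label are assumptions;
# otelcol_exporter_queue_capacity is assumed to be exported by this collector build.
groups:
  - name: otel-collector-export
    rules:
      - alert: ExporterStoppedSendingSpans
        expr: rate(otelcol_exporter_sent_spans[5m]) == 0
        for: 10m
        annotations:
          summary: "Exporter {{ $labels.exporter }} has stopped sending spans"
      - alert: ExporterSendQueueNearlyFull
        expr: otelcol_exporter_queue_size / otelcol_exporter_queue_capacity > 0.8
        for: 5m
        annotations:
          summary: "Send queue over 80% full on exporter {{ $labels.exporter }}"
```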
Steps to Reproduce
Expected Result
Actual Result
Collector version
0.95.0
Environment information
Environment
https://github.com/utilitywarehouse/opentelemetry-manifests
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response