OTEL pod crashed when kafka broker has connectivity issue #24029
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Here's another description of the issue, rephrasing what Rupesh wrote above. Steps to reproduce, assuming there's Kafka running on the host with the broker at the default `localhost:9092` address and the collector configured as follows:

```yaml
exporters:
  kafka:
  logging:

receivers:
  hostmetrics:
    scrapers:
      memory:

service:
  pipelines:
    metrics:
      exporters:
        - kafka
        - logging
      receivers:
        - hostmetrics
```

Scenario A: Start the collector when Kafka is up.
Scenario B: Start the collector when Kafka is down.
Actual behavior: the collector fails to start:

```
$ ./otelcol-sumo-0.80.0-sumo-0-linux_amd64 --config ./config.yaml
2023-07-07T10:44:35.305Z info service/telemetry.go:81 Setting up own telemetry...
2023-07-07T10:44:35.305Z info service/telemetry.go:104 Serving Prometheus metrics {"address": ":8888", "level": "Basic"}
Error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
2023/07/07 10:44:36 collector server run finished with error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
```

Expected behavior: the collector starts correctly and writes error logs to the console until the endpoint is available.
I disagree that the Kafka component should just warn if communicating with the brokers is an issue. The last thing I would want is data being silently discarded, but I don't know of a reasonable outcome that ensures Kafka errors are surfaced while also ensuring the data in transit makes it to the endpoint.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or ping the code owners directly if you are unsure which component this issue relates to. See Adding Labels via Comments if you do not have permissions to add labels yourself.
I agree that the collector should not crash when Kafka is down. For example, when the collector is configured with an invalid Zipkin endpoint, it still starts; when you try sending a trace, it fails to send after some retries, but it does not crash the collector, which sounds like acceptable behaviour.
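To make the comparison concrete, here is a minimal sketch of the kind of configuration this comment describes: a Zipkin exporter pointing at an endpoint where nothing is listening. The endpoint value and the choice of OTLP receiver are illustrative assumptions, not taken from the original comment.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  zipkin:
    # assumption: nothing is listening on this endpoint, so exports fail
    # after retries while the collector itself keeps running
    endpoint: "http://localhost:9411/api/v2/spans"

service:
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - zipkin
```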
This is the design principle we follow broadly in the collector: we should only fail to start if the problem is clearly permanent. Otherwise, we should run and retry as much as possible. While this can cause situations where errors go unnoticed, that should motivate us to improve the observability of the collector itself. We're close to adding a notion of component status, which will give us an obvious signal that something is wrong. Aside from that, custom metrics describing failed connection attempts, dropped data, etc. will be useful.
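On the observability point, one thing that already helps is turning up the collector's own telemetry, which exposes exporter failure counters (for example `otelcol_exporter_send_failed_metric_points`) on the Prometheus endpoint mentioned in the startup log above. A minimal sketch, assuming the `service.telemetry.metrics` settings available in collector versions of that era:

```yaml
service:
  telemetry:
    metrics:
      # same endpoint the startup log reports ("Serving Prometheus metrics")
      address: ":8888"
      # "detailed" surfaces more internal metrics than the default "basic"
      level: detailed
```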
@djaglowski Just realised there's an option in the kafka exporter to handle intermittent failures.
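The specific option isn't named above; presumably it is the exporterhelper retry/queue settings that the kafka exporter supports, plus the exporter's own `metadata.retry` settings, which control how hard it tries to reach the brokers when building its client. A hedged sketch of what enabling them could look like, with all values purely illustrative:

```yaml
exporters:
  kafka:
    brokers:
      - "localhost:9092"
    # retries when fetching broker metadata (including at startup)
    metadata:
      retry:
        max: 15
        backoff: 5s
    # exporterhelper settings for intermittent send failures
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      queue_size: 5000
```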
I agree, @djaglowski, that "we should only fail to start if the problem is clearly permanent", but we saw instances where the collector crashed completely during restarts because the Kafka broker was down.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Note: I have tried out Kafka using Docker with port 29092. Pod logs are as below:

```
2024-07-18 16:25:51 Error: cannot start pipelines: kafka: client has run out of available brokers to talk to: dial tcp 172.20.0.9:29092: connect: connection refused
```

Additional information: OTEL configuration
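The comment's actual OTEL configuration was not included above. Purely as a hypothetical illustration, a kafka exporter section matching the logged broker address might look like this (the address is inferred from the `dial tcp` error; everything else is assumed):

```yaml
exporters:
  kafka:
    # hypothetical: broker address taken from the "dial tcp 172.20.0.9:29092" error above
    brokers:
      - "172.20.0.9:29092"
```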
Can this be reopened?
Component(s)
exporter/kafka
Describe the issue you're reporting
During OTEL pod initialisation, the pod normally checks whether it can connect to the Kafka broker, and it starts running once the connection is established.
Sometimes, when the Kafka broker has an issue (for example due to network or storage problems) and OTEL is not able to connect to Kafka, the OTEL pod goes into a CrashLoopBackOff state, which impacts all log forwarding.
Ideally, an issue with one Kafka broker shouldn't take down the whole OTEL setup and stop log forwarding.
Can we remove this hard dependency on establishing the connection at startup and instead just throw a warning?
Here's the console output from the collector when the Kafka broker is down: