OTEL pod crashed when kafka broker has connectivity issue #24029

Closed
rupeshnemade opened this issue Jul 7, 2023 · 13 comments

@rupeshnemade

Component(s)

exporter/kafka

Describe the issue you're reporting

During initialization, the OTEL pod normally checks whether it can connect to the Kafka broker, and it starts running once the connection is established.
Sometimes, when a Kafka broker has an issue (for example, due to network or storage problems) and OTEL cannot connect to Kafka, the OTEL pod goes into a CrashLoopBackOff state, which breaks log forwarding entirely.
Ideally, an issue in one Kafka broker shouldn't take down the whole OTEL setup and stop log forwarding.

Can we remove this hard dependency on connection establishment at startup and instead just emit a warning?

Here's the console output from the collector when the Kafka broker is down:

$ ./otelcol-sumo-0.80.0-sumo-0-linux_amd64 --config ./config.yaml 
2023-07-07T10:44:35.305Z        info    service/telemetry.go:81 Setting up own telemetry...
2023-07-07T10:44:35.305Z        info    service/telemetry.go:104        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
Error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
2023/07/07 10:44:36 collector server run finished with error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
@rupeshnemade rupeshnemade added the needs triage label Jul 7, 2023
@github-actions
Contributor

github-actions bot commented Jul 7, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@JaredTan95 JaredTan95 added the enhancement label and removed the needs triage label Jul 10, 2023
@andrzej-stencel
Member

Here's another description of the issue, rephrasing what Rupesh wrote above.

Steps to reproduce, assuming Kafka is installed on the host with the broker at the default localhost:9092 endpoint, and the following Otelcol configuration:

exporters:
  kafka:
  logging:

receivers:
  hostmetrics:
    scrapers:
      memory:

service:
  pipelines:
    metrics:
      exporters:
      - kafka
      - logging
      receivers:
      - hostmetrics

Scenario A: Start collector when Kafka is up

  1. Make sure the Kafka service broker is running at localhost:9092.
  2. Start the collector with the above config.
  3. Observe that the collector starts correctly.
  4. Shut down the Kafka service broker.
  5. Observe that the collector continues to run, writing errors to logs about not being able to reach the service broker.
  6. Start the Kafka service broker back up.
  7. Observe that the collector picks up the connection to the service broker again and resumes sending data.

Scenario B: Start collector when Kafka is down

  1. Make sure the Kafka service broker is NOT running at localhost:9092.
  2. Start the collector with the above config.

Actual behavior:

The collector fails to start:

$ ./otelcol-sumo-0.80.0-sumo-0-linux_amd64 --config ./config.yaml 
2023-07-07T10:44:35.305Z        info    service/telemetry.go:81 Setting up own telemetry...
2023-07-07T10:44:35.305Z        info    service/telemetry.go:104        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
Error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused
2023/07/07 10:44:36 collector server run finished with error: failed to build pipelines: failed to create "kafka" exporter for data type "metrics": kafka: client has run out of available brokers to talk to: dial tcp 127.0.0.1:9092: connect: connection refused

Expected behavior:

The collector starts correctly and writes error logs to console until the endpoint is available.

@MovieStoreGuy
Contributor

I disagree that the Kafka component should merely warn if communicating with the brokers is an issue.

The last thing I would want is for data to be silently discarded, but I don't know of a reasonable outcome that ensures Kafka errors are surfaced while ensuring the data in transport makes it to the endpoint.

@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Sep 11, 2023
@EOjeah
Contributor

EOjeah commented Oct 6, 2023

I agree that the collector should not crash when Kafka is down.
It could work like some other exporters, such as zipkin: even if the zipkin host you specify in the exporters config is unreachable, the collector still starts and logs messages to the console when spans are attempted to be sent. It can optionally retry after some interval but eventually drop the data; alerts and monitoring dashboards can easily be built on the "Exporting failed" log messages, or on the metrics exposed by the collector itself, such as otelcol_exporter_send_failed_spans.

For example, when the collector is configured with an invalid zipkin endpoint, it starts, but when you try sending a trace the output looks like this:

opentelemetry-collector_1  | 2023-10-06T12:45:50.546Z   info    exporterhelper/queued_retry.go:426      Exporting failed. Will retry the request after interval.       {"kind": "exporter", "data_type": "traces", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://zipkins:9411/api/v2/spans\": dial tcp: lookup zipkins on 127.0.0.11:53: no such host", "interval": "30.173935436s"}
opentelemetry-collector_1  | 2023-10-06T12:46:20.731Z   info    exporterhelper/queued_retry.go:426      Exporting failed. Will retry the request after interval.       {"kind": "exporter", "data_type": "traces", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://zipkins:9411/api/v2/spans\": dial tcp: lookup zipkins on 127.0.0.11:53: no such host", "interval": "37.873234376s"}
opentelemetry-collector_1  | 2023-10-06T12:46:58.616Z   error   exporterhelper/queued_retry.go:175      Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "traces", "name": "zipkin", "error": "max elapsed time expired failed to push trace data via Zipkin exporter: Post \"http://zipkins:9411/api/v2/spans\": dial tcp: lookup zipkins on 127.0.0.11:53: no such host", "dropped_items": 1}
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).onTemporaryFailure
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/queued_retry.go:175
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/queued_retry.go:410
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/traces.go:137
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/queued_retry.go:205
opentelemetry-collector_1  | go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1
opentelemetry-collector_1  |    go.opentelemetry.io/collector@v0.69.0/exporter/exporterhelper/internal/bounded_memory_queue.go:61

After some retries, it fails to send but does not crash the collector, which sounds like acceptable behaviour.
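
As a minimal sketch, the equivalent behaviour can be tuned on the kafka exporter through the generic exporterhelper retry and queue settings (the broker address and the values below are illustrative assumptions, not recommendations):

exporters:
  kafka:
    brokers:
      - localhost:9092
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s  # once this elapses, the data is dropped and the send-failed metrics increase
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 1000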
@MovieStoreGuy what'd you think?

@djaglowski
Member

Ideally, an issue in one Kafka broker shouldn't take down the whole OTEL setup and stop log forwarding.

This is the design principle we follow broadly in the collector. We should only fail to start if the problem is clearly permanent. Otherwise, we should keep running and retry where possible.

While this can cause some situations where errors go unnoticed, that should motivate us to improve the observability of the collector itself. We're close to adding a notion of component status, which will give us an obvious signal that something is wrong. Aside from that, custom metrics describing failed connection attempts, dropped data, etc. will be useful.
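
Until the component status work lands, the collector's own self-metrics already surface exporter failures; a minimal sketch of the service telemetry settings (these mirror the defaults and are shown only for illustration):

service:
  telemetry:
    metrics:
      level: basic      # exporter counters such as otelcol_exporter_send_failed_spans are exposed at this level
      address: ":8888"  # Prometheus-format endpoint for the collector's own metrics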

@github-actions github-actions bot removed the Stale label Oct 7, 2023
@EOjeah
Contributor

EOjeah commented Oct 18, 2023

@djaglowski Just realised there's an option in the kafka exporter to handle intermittent metadata failures.
Setting metadata.full to false helps with the issue where the pod fails to start if brokers are unavailable 🤦. This way, it acts like zipkin/jaeger and will drop the traces (after some retries).
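
A minimal sketch of that workaround (the broker address is illustrative):

exporters:
  kafka:
    brokers:
      - localhost:9092
    metadata:
      full: false     # skip fetching the full topic metadata at startup, so the exporter can start while brokers are unreachable
      retry:
        max: 3        # metadata fetch retries before a send attempt fails
        backoff: 250ms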

@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Dec 18, 2023
@rupeshnemade
Author

I agree, @djaglowski, that 'We should only fail to start if the problem is clearly permanent', but we saw instances where the collector crashed completely during restarts because the Kafka broker was down.

@github-actions github-actions bot removed the Stale label Feb 15, 2024
@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Apr 16, 2024
@github-actions
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned Jun 15, 2024
@harshalschaudhari

Note: I have tried running Kafka in Docker on port 29092.
The Kafka container started first, and the otel-collector container ran after Kafka.

Pod logs are below:

2024-07-18 16:25:51 Error: cannot start pipelines: kafka: client has run out of available brokers to talk to: dial tcp 172.20.0.9:29092: connect: connection refused
2024-07-18 16:25:51 2024/07/18 10:55:51 collector server run finished with error: cannot start pipelines: kafka: client has run out of available brokers to talk to: dial tcp 172.20.0.9:29092: connect: connection refused

Additional information
I am able to connect to Kafka using Offset Explorer 3.0.

OTEL Configuration

exporters:
  kafka:
    brokers:
      - kafka:9092
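
For completeness, a sketch of the same config with the metadata.full workaround mentioned earlier applied (everything other than the brokers list is an assumption):

exporters:
  kafka:
    brokers:
      - kafka:9092
    metadata:
      full: false   # workaround discussed above: skip the full metadata fetch at startup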

@tamland

tamland commented Nov 19, 2024

Can this be reopened?
