
hono constantly fails to publish messages to the kafka broker without being able to recover #3544

Open
JeffreyThijs opened this issue Sep 20, 2023 · 4 comments

@JeffreyThijs
Contributor

Hi,

We recently stumbled upon an occurrence where Hono suddenly started failing to publish messages to our Kafka broker. The broker itself was working perfectly fine when we discovered the issue, but we cannot rule out that a brief disturbance of the broker in the meantime caused it. Either way, we would expect Hono to recover from such a situation, since dropping messages is really not desirable. However, only after restarting the pod did the adapter start working as before, and the issue resolved itself.

Sadly, we have not saved any logs of this issue and have not been able to reproduce it. However, while looking into the code I encountered the following, which might be the reason why the system was not able to recover:

The following condition determines whether the cached KafkaProducer will be closed and removed from the cache:

https://github.com/eclipse-hono/hono/blob/master/clients/kafka-common/src/main/java/org/eclipse/hono/client/kafka/producer/CachingKafkaProducerFactory.java#L259
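For context, the check at that location is roughly of the following form (a paraphrase only; the linked line is the authoritative version):

```java
// Paraphrased sketch of the check in CachingKafkaProducerFactory (see link above);
// the authoritative list of exceptions is in the linked source.
private static boolean isFatalError(final Throwable error) {
    return error instanceof org.apache.kafka.common.errors.ProducerFencedException
            || error instanceof org.apache.kafka.common.errors.OutOfOrderSequenceException
            || error instanceof org.apache.kafka.common.errors.AuthorizationException
            || error instanceof org.apache.kafka.common.errors.UnsupportedVersionException
            || error instanceof org.apache.kafka.common.errors.UnsupportedForMessageFormatException;
}
```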

So the cached KafkaProducer is only invalidated (which might be essential for recovering from failed publishes to the Kafka broker) if one of those errors is thrown, which raises some questions:

  • why these particular errors?
  • were they empirically determined?
  • are any of the other errors listed at https://kafka.apache.org/11/javadoc/org/apache/kafka/common/errors/package-summary.html not considered fatal?
  • why not instead specify the errors for which the producer should not be reset?
  • isn't this condition dangerous for newly defined errors?
  • isn't there a better metric for deciding whether to remove the KafkaProducer from the cache (e.g. the number of consecutive messages that could not be produced)? See the sketch after this list.
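To illustrate the last point, a purely hypothetical sketch (not existing Hono code; all names are made up for illustration) of a consecutive-failure based invalidation could look like this:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: invalidate the cached producer after a configurable number
// of consecutive send failures, independent of the concrete error type.
final class ProducerHealthTracker {

    private final int maxConsecutiveFailures;
    private final AtomicInteger consecutiveFailures = new AtomicInteger();

    ProducerHealthTracker(final int maxConsecutiveFailures) {
        this.maxConsecutiveFailures = maxConsecutiveFailures;
    }

    /**
     * To be called after each send attempt.
     *
     * @return true if the producer should be removed from the cache.
     */
    boolean onSendResult(final boolean succeeded) {
        if (succeeded) {
            consecutiveFailures.set(0);
            return false;
        }
        return consecutiveFailures.incrementAndGet() >= maxConsecutiveFailures;
    }
}
```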

On the other hand, maybe I am tunnel-visioned and this was caused by something else entirely?

Any comments are well appreciated!

@sophokles73
Contributor

@calohmn would you mind taking a look? I believe this falls into your area of expertise :-)

@calohmn
Contributor

calohmn commented Sep 29, 2023

I think the exceptions being checked in the isFatalError method got chosen because they are mentioned as fatal exceptions in the KafkaProducer javadoc.
Looking at the different kinds of Kafka exceptions, I don't currently see any other Kafka exception to include in this isFatalError method. Of course there could potentially also be other kinds of exceptions like IllegalStateException, IllegalArgumentException, NullPointerException (caused by a bug in the (Vertx-) Kafka client) or an OutOfMemoryError leading to a defunct state of the producer.
In that sense, we could consider treating any non-KafkaException as a fatal error as well, as a precautionary measure.
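A broadened check along those lines might look roughly like the following (just a sketch to illustrate the idea, not an actual change proposal; the class and method names are made up):

```java
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.errors.UnsupportedForMessageFormatException;
import org.apache.kafka.common.errors.UnsupportedVersionException;

final class FatalProducerErrors {

    private FatalProducerErrors() {
        // utility class
    }

    // Treat the exceptions documented as fatal for the KafkaProducer as fatal and,
    // as a precaution, also anything that is not a KafkaException at all
    // (e.g. IllegalStateException, NullPointerException, OutOfMemoryError).
    static boolean isFatal(final Throwable error) {
        return error instanceof ProducerFencedException
                || error instanceof OutOfOrderSequenceException
                || error instanceof AuthorizationException
                || error instanceof UnsupportedVersionException
                || error instanceof UnsupportedForMessageFormatException
                || !(error instanceof KafkaException);
    }
}
```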

It would be better to have an idea, though, of what went wrong here (also because I haven't seen any cases yet where a Hono Kafka producer stopped working and the pod had to be restarted).

@JeffreyThijs You also don't have any tracing data for this case? Have you set any Kafka producer config properties in your protocol adapter config?

@JeffreyThijs
Contributor Author

Sorry for the late reply.

Unfortunately, our logging stack was not in place when this problem occurred, so we don't have any tracing data from the incident. It might indeed be a good idea to also treat non-Kafka errors as fatal (or to explicitly list only the errors that should not be handled as fatal) in order to avoid continually discarding messages due to an unrecoverable error.

@sophokles73
Contributor

@JeffreyThijs is this still an issue?
