With a configuration such as the one sketched below, I would expect the connector to keep retrying for up to 6-8 minutes, given the configured number of retries and the backoff settings.
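For reference, here is a minimal sketch of the kind of configuration I mean; my exact values are not reproduced here, so the numbers below are illustrative placeholders. `max.retries` and `retry.backoff.ms` are the Elasticsearch sink's retry settings, and the connector's backoff roughly doubles between attempts:

```properties
# Illustrative Elasticsearch sink settings (placeholder values, not my exact config)
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=my-topic
connection.url=http://elasticsearch:9200

# Retry settings: failed indexing requests are retried with a backoff that
# roughly doubles on each attempt, so a handful of retries can span several minutes
max.retries=10
retry.backoff.ms=500
```

With values in this range, the worst-case cumulative backoff is on the order of minutes, which is why I expected the task to keep retrying rather than fail right away.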
What I am seeing, however, is that the connector fails immediately with an unrecoverable exception. I believe this might be a bug.
When I search the error logs for the io.confluent.connect.elasticsearch.RetryUtil package, I do not see any retries logged when this error happens.
If it helps, I have a DLQ configured that works as expected for other failures. When this problem occurs, the DLQ is not triggered and the task simply fails with the unrecoverable exception.
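The DLQ behaviour mentioned above comes from Kafka Connect's own error-handling properties; a hedged sketch of that part of the configuration (topic name and values are placeholders):

```properties
# Connect framework error handling (placeholder topic name and values)
# With errors.tolerance=all and a DLQ topic set, tolerated record-level failures
# are reported to the DLQ instead of failing the task; an exception the task
# raises as unrecoverable bypasses this, which matches what I am seeing.
errors.tolerance=all
errors.deadletterqueue.topic.name=dlq.elasticsearch-sink
errors.deadletterqueue.topic.replication.factor=3
errors.deadletterqueue.context.headers.enable=true
```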
yeikel changed the title from "[BUG] TOO_MANY_REQUESTS is not included in the retry mechanism" to "[BUG] TOO_MANY_REQUESTS does not seem to be included in the retry mechanism" on Dec 8, 2023, and then to "[BUG] TOO_MANY_REQUESTS error crashes the task with an unrecoverable exception without retries" on Dec 10, 2023.