We're using Confluent version 5.4.3 and observed that, during a task rebalance, the exception below was thrown for multiple partitions:
```
[ERROR] 2021-12-12 03:34:16,891 org.apache.kafka.connect.runtime.WorkerTask doRun - WorkerSinkTask{id=hdfs-connector-prod-399} Task threw an uncaught and unrecoverable exception
java.util.ConcurrentModificationException
	at java.util.HashMap$HashIterator.nextNode(Unknown Source)
	at java.util.HashMap$KeyIterator.next(Unknown Source)
	at io.confluent.connect.hdfs.TopicPartitionWriter.close(TopicPartitionWriter.java:463)
	at io.confluent.connect.hdfs.DataWriter.close(DataWriter.java:457)
	at io.confluent.connect.hdfs.HdfsSinkTask.close(HdfsSinkTask.java:161)
```
It looks like the exception is thrown in this for loop:
https://github.com/confluentinc/kafka-connect-hdfs/blob/5.4.3-post/src/main/java/io/confluent/connect/hdfs/TopicPartitionWriter.java#L463

The loop iterates over the `writers` map while also removing the `encodedPartition` key from that same map (`TopicPartitionWriter.java`, line 740 in commit 7550789):
kafka-connect-hdfs/src/main/java/io/confluent/connect/hdfs/TopicPartitionWriter.java

Could that be the reason for this exception?
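For what it's worth, here is a minimal sketch of the pattern described above, reduced to plain Java (the class and method names are hypothetical, not the actual connector code). Removing a key from a `HashMap` while iterating its key set trips the map's fail-fast modification check on the next `Iterator.next()` call, which matches the stack trace; removing through the iterator itself does not:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class WriterCloseSketch {

    // Sketch of the suspected bug: iterating the map's key set while
    // calling writers.remove(...) on the same map inside the loop body.
    static void closeUnsafe(Map<String, String> writers) {
        for (String encodedPartition : writers.keySet()) {
            // Structural modification of the map being iterated ->
            // java.util.ConcurrentModificationException on the next next() call.
            writers.remove(encodedPartition);
        }
    }

    // Safe variant: remove entries through the iterator instead.
    static void closeSafe(Map<String, String> writers) {
        Iterator<Map.Entry<String, String>> it = writers.entrySet().iterator();
        while (it.hasNext()) {
            it.next();
            it.remove(); // structural change via the iterator is allowed
        }
    }

    public static void main(String[] args) {
        Map<String, String> writers = new HashMap<>();
        writers.put("partition=0", "writer0");
        writers.put("partition=1", "writer1");

        boolean threw = false;
        try {
            closeUnsafe(new HashMap<>(writers));
        } catch (java.util.ConcurrentModificationException e) {
            threw = true;
        }
        System.out.println("unsafe variant threw CME: " + threw);

        closeSafe(writers);
        System.out.println("safe variant left entries: " + writers.size());
    }
}
```

With at least two entries in the map, the unsafe variant reliably throws, because the map's `modCount` no longer matches the iterator's expected count when `next()` is called again.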