[2024-05-30 10:25:31,403] ERROR [hdfs3_sink-test_v4|task-0] WorkerSinkTask{id=hdfs3_sink-test_v4-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:237) java.lang.NullPointerException: Cannot invoke "io.confluent.connect.hdfs3.DataWriter.open(java.util.Collection)" because "this.hdfsWriter" is null
#701 · Open · NhatDuy11 opened this issue on May 30, 2024 · 0 comments
Hi everyone, I am using the Confluent HDFS 3 Sink connector (Kafka Connect) to sink data from Kafka to HDFS. My sink config:
```json
{
  "name": "hdfs3_sink-test_v4",
  "config": {
    "key.converter.schemas.enabled": "true",
    "value.converter.schemas.enabled": "true",
    "schema.enable": "true",
    "value.converter.schema.registry.url": "http://kafka01:8081,http://kafka02:8081",
    "key.converter.schema.registry.url": "http://kafka01:8081,http://kafka02:8081",
    "name": "hdfs3_sink-test_v4",
    "connector.class": "io.confluent.connect.hdfs3.Hdfs3SinkConnector",
    "tasks.max": "1",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "errors.tolerance": "all",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "topics": "dwdb.GGADMIN.TEST_DECIMAL",
    "errors.deadletterqueue.topic.name": "fail_topic",
    "errors.deadletterqueue.topic.replication.factor": "3",
    "errors.deadletterqueue.context.headers.enable": "true",
    "hdfs.url": "",
    "hadoop.conf.dir": "/opt/cloudera/parcels/CDH/lib/hadoop",
    "logs.dir": "/opt/hdfs/log_table",
    "flush.size": "0",
    "enhanced.avro.schema.support": "true",
    "connect.meta.data": "true",
    "schema.compatibility": "NONE",
    "topics.dir": "/opt/hdfs/test_table",
    "store.url": "hdfs://*************:9000",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "timezone": "Asia/Ho_Chi_Minh",
    "storage.class": "io.confluent.connect.hdfs3.storage.HdfsStorage",
    "format.class": "io.confluent.connect.hdfs3.parquet.ParquetFormat",
    "confluent.topic.bootstrap.servers": "kafka01:9092, kafka02:9092, kafka03:9092",
    "confluent.topic.replication.factor": "3"
  }
}
```
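(Side note for anyone trying to reproduce this: below is a minimal sketch of checking the inner "config" object against the plugin through the Connect worker REST API before creating the connector. The worker address http://localhost:8083 and the file name hdfs3-sink-config.json are placeholders, not part of my actual setup.)

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch, assuming the Connect worker REST API listens on
// http://localhost:8083 (placeholder) and that hdfs3-sink-config.json contains
// only the inner "config" object shown above (it must include connector.class).
public class ValidateHdfs3SinkConfig {
    public static void main(String[] args) throws Exception {
        String worker = "http://localhost:8083";
        String plugin = "io.confluent.connect.hdfs3.Hdfs3SinkConnector";
        String configJson = Files.readString(Path.of("hdfs3-sink-config.json"));

        // PUT /connector-plugins/{plugin}/config/validate reports per-field
        // errors without actually creating the connector.
        HttpRequest validate = HttpRequest.newBuilder()
                .uri(URI.create(worker + "/connector-plugins/" + plugin + "/config/validate"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(configJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(validate, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```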
I am getting this error:
```
[2024-05-30 10:25:31,403] WARN [hdfs3_sink-test_v4|task-0] WorkerSinkTask{id=hdfs3_sink-test_v4-0} Offset commit failed during close (org.apache.kafka.connect.runtime.WorkerSinkTask:418)
[2024-05-30 10:25:31,403] ERROR [hdfs3_sink-test_v4|task-0] WorkerSinkTask{id=hdfs3_sink-test_v4-0} Commit of offsets threw an unexpected exception for sequence number 1: null (org.apache.kafka.connect.runtime.WorkerSinkTask:276)
java.lang.NullPointerException: Cannot invoke "io.confluent.connect.hdfs3.DataWriter.getCommittedOffsets()" because "this.hdfsWriter" is null
at io.confluent.connect.hdfs3.Hdfs3SinkTask.preCommit(Hdfs3SinkTask.java:123)
at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:415)
at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:663)
at org.apache.kafka.connect.runtime.WorkerSinkTask.closeAllPartitions(WorkerSinkTask.java:658)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:208)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:229)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:284)
at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
[2024-05-30 10:25:31,403] ERROR [hdfs3_sink-test_v4|task-0] WorkerSinkTask{id=hdfs3_sink-test_v4-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:237)
java.lang.NullPointerException: Cannot invoke "io.confluent.connect.hdfs3.DataWriter.open(java.util.Collection)" because "this.hdfsWriter" is null
Does anyone have any insight into this issue?
Thank you all for taking a look!
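In case it helps, here is a minimal sketch (worker address again a placeholder) of pulling the task status, which for a FAILED task includes a "trace" field with the first exception it threw, and of restarting task 0 through the Connect REST API once the root cause is fixed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch, assuming a Connect worker REST endpoint at http://localhost:8083
// (placeholder). GET /connectors/{name}/status shows connector and task states;
// POST /connectors/{name}/tasks/{id}/restart restarts a killed task.
public class RestartHdfs3SinkTask {
    public static void main(String[] args) throws Exception {
        String worker = "http://localhost:8083";
        String connector = "hdfs3_sink-test_v4";
        HttpClient client = HttpClient.newHttpClient();

        // Inspect the current state and the "trace" of the failed task.
        HttpRequest status = HttpRequest.newBuilder()
                .uri(URI.create(worker + "/connectors/" + connector + "/status"))
                .GET()
                .build();
        System.out.println(client.send(status, HttpResponse.BodyHandlers.ofString()).body());

        // Manually restart task 0 after fixing the underlying problem.
        HttpRequest restart = HttpRequest.newBuilder()
                .uri(URI.create(worker + "/connectors/" + connector + "/tasks/0/restart"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println(client.send(restart, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```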