rd_kafka_topic_destroy0 will crash ?? @ 0.8.6 #360
Comments
In gdb, can you do this for me please?
How many topics is your application using? Thanks
@edenhill
void rd_kafka_topic_destroy0 (rd_kafka_topic_t *rkt) {
}
So I think the way the topic's refcount is maintained here is wrong.
Q: How many topics is your application using?
Q: How long does the program typically run before crashing?
Q: Have you observed the program's memory usage? Is it running out of memory?
Do you understand me? I am not good at English. THANKS
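As an illustrative sketch only (not the actual librdkafka source, which uses atomic counters and locking), a refcounted destroy helper of this general shape decrements the topic's reference count and frees the object when the count reaches zero; if leaked references make the counter wrap to an invalid value, the assertion in the destroy path aborts, which matches the crash in the backtrace further down:

    /* Illustrative refcount sketch only -- not rd_kafka_topic_destroy0()
     * from librdkafka; it just shows the pattern being discussed. */
    #include <assert.h>
    #include <stdlib.h>

    struct topic {
            int refcnt;              /* number of outstanding references */
            /* ... topic state ... */
    };

    static void topic_destroy0 (struct topic *t) {
            assert(t->refcnt > 0);   /* a wrapped/underflowed count aborts here */
            if (--t->refcnt > 0)
                    return;          /* still referenced elsewhere */
            free(t);                 /* last reference released */
    }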
@edenhill
Can you elaborate more on what your code looks like?
@edenhill
int KP::create(string & broker)
{
}
int KP::produce(string & topic, int partition, string ctx)
{
}
I am sure that I never call topic_destroy myself. Is there some error?
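Only the signatures of these two functions are shown above. As a purely hypothetical reconstruction of the usage pattern being described (the rk_ member, the configuration calls, and all function bodies below are assumptions for illustration, not the reporter's actual code), a wrapper that creates a fresh topic handle on every produce() call and never destroys it would look like this:

    // Hypothetical sketch of the described usage pattern -- not the
    // reporter's actual code. The rk_ member and error handling are
    // assumptions for illustration.
    #include <string>
    #include <librdkafka/rdkafka.h>

    class KP {
    public:
            int create(std::string &broker);
            int produce(std::string &topic, int partition, std::string ctx);
    private:
            rd_kafka_t *rk_ = nullptr;   // assumed producer handle
    };

    int KP::create(std::string &broker) {
            char errstr[512];
            rd_kafka_conf_t *conf = rd_kafka_conf_new();
            rk_ = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
            if (!rk_)
                    return -1;
            return rd_kafka_brokers_add(rk_, broker.c_str()) > 0 ? 0 : -1;
    }

    int KP::produce(std::string &topic, int partition, std::string ctx) {
            // A new topic handle is created for every single message...
            rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk_, topic.c_str(), NULL);
            if (!rkt)
                    return -1;
            rd_kafka_produce(rkt, partition, RD_KAFKA_MSG_F_COPY,
                             (void *)ctx.data(), ctx.size(),
                             NULL, 0, NULL);
            // ...but rd_kafka_topic_destroy(rkt) is never called, so the
            // topic's refcount grows with every call and eventually wraps.
            return 0;
    }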
@edenhill KafkaProducer::create(string & broker) is only called one time, at the beginning.
May I have your Skype?
I am using KP like below:
// at the application beginning
KP kp;
// in a single standalone thread function
return 0;
Not calling destroy() eventually is definitely a problem: it means topic refcounts will keep leaking and eventually wrap around. Anyway, to fix your problem you need to call rd_kafka_topic_destroy(rkt) after you're done with the topic.
You mean I should create a topic, produce, then destroy the topic, in that sequence? But rdkafka maintains a topics array internally, and when the response arrives it destroys the topic entry in that array, so I am not sure whether it is okay for me to destroy it myself. By the way, I will produce at a rate of 25,000 msgs/s.
Yes, that is the sequence you need to use:
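A minimal sketch of that create/produce/destroy sequence using the public librdkafka C API (the produce_one() wrapper, the pre-created rk handle, and the error handling are illustrative assumptions, not code from this thread):

    #include <string.h>
    #include <librdkafka/rdkafka.h>

    /* Minimal sketch of the sequence: create the topic handle, produce,
     * serve callbacks, then drop the reference with topic_destroy(). */
    static int produce_one(rd_kafka_t *rk, const char *topic, const char *msg) {
            rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, topic, NULL);
            if (!rkt)
                    return -1;

            rd_kafka_produce(rkt, RD_KAFKA_PARTITION_UA, RD_KAFKA_MSG_F_COPY,
                             (void *)msg, strlen(msg), NULL, 0, NULL);

            rd_kafka_poll(rk, 0);            /* serve delivery reports */

            rd_kafka_topic_destroy(rkt);     /* release the topic reference */
            return 0;
    }

At a rate like 25,000 msgs/s, an application would more likely cache one rd_kafka_topic_t per topic name and destroy it only when it is finished with that topic; the essential point is simply that every rd_kafka_topic_new() must eventually be matched by an rd_kafka_topic_destroy().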
Oh, thanks for your suggestion.
I have downloaded the newest version, 0.8.6, from the tags.
It crashes after running for a long time; it sends data to thousands of topics.
The stack trace:
#0 0x00007f59acf2f885 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.49.tl1.x86_64 libgcc-4.4.6-3.el6.x86_64 libstdc++-4.4.6-3.el6.x86_64
(gdb) bt
#0 0x00007f59acf2f885 in raise () from /lib64/libc.so.6
#1 0x00007f59acf31065 in abort () from /lib64/libc.so.6
#2 0x00000000005a253b in rd_kafka_crash (file=, line=, function=, rk=0x1fe3f90, reason=) at rdkafka.c:1877
#3 0x00000000005b23d8 in rd_kafka_topic_destroy0 (rkt=0x7f562800fec0) at rdkafka_topic.c:410
#4 0x00000000005b3891 in rd_kafka_topic_metadata_update (rkb=0x1fe9d60, mdt=) at rdkafka_topic.c:1056
#5 0x00000000005b0bf7 in rd_kafka_metadata_handle (rkb=0x1fe9d60, err=0, reply=0x7f56f0000940, request=0x7f56f0000aa0, opaque=0x7f5694001440) at rdkafka_broker.c:976
#6 rd_kafka_broker_metadata_reply (rkb=0x1fe9d60, err=0, reply=0x7f56f0000940, request=0x7f56f0000aa0, opaque=0x7f5694001440) at rdkafka_broker.c:1027
#7 0x00000000005ac587 in rd_kafka_req_response (rkb=0x1fe9d60) at rdkafka_broker.c:1321
#8 rd_kafka_recv (rkb=0x1fe9d60) at rdkafka_broker.c:1513
#9 0x00000000005acf30 in rd_kafka_broker_io_serve (rkb=0x1fe9d60) at rdkafka_broker.c:2452
#10 0x00000000005af0da in rd_kafka_broker_ua_idle (arg=0x1fe9d60) at rdkafka_broker.c:2475
#11 rd_kafka_broker_thread_main (arg=0x1fe9d60) at rdkafka_broker.c:4150
#12 0x00007f59adc2a7f1 in start_thread () from /lib64/libpthread.so.0
#13 0x00007f59acfe2ccd in clone () from /lib64/libc.so.6
(gdb)