Gevent and KafkaClient.copy patch? #271
Comments
So I got around this by passing a batch of messages into kproducer.send_messages, effectively batching on my own. This is fine with me. I'll leave the ticket open since it's broken, but it's non-urgent for me at this point.
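The batching workaround described above can be sketched as follows. This is a hypothetical illustration, not the commenter's actual code: `FakeProducer` is a stand-in for kafka-python's `SimpleProducer` so the sketch is self-contained, and `send_batched` is an assumed helper name.

```python
def chunked(messages, batch_size):
    """Yield successive slices of at most batch_size messages."""
    for i in range(0, len(messages), batch_size):
        yield messages[i:i + batch_size]

class FakeProducer:
    """Stand-in that records what send_messages receives."""
    def __init__(self):
        self.calls = []

    def send_messages(self, topic, *msgs):
        self.calls.append((topic, msgs))

def send_batched(producer, topic, messages, batch_size=100):
    """Batch on the caller's side: one synchronous send_messages call
    per batch, avoiding the producer's async machinery entirely."""
    for batch in chunked(messages, batch_size):
        producer.send_messages(topic, *batch)

producer = FakeProducer()
send_batched(producer, "my-topic",
             [b"m%d" % i for i in range(250)], batch_size=100)
print(len(producer.calls))  # 250 messages in batches of 100 -> 3 calls
```

With a real kafka-python producer in place of the stand-in, each green thread can call `send_batched` synchronously, sidestepping the copy/async path that breaks under gevent monkey-patching.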
Yep, this is broken for me too. +1. I worked around it with async=False for now.
It'd be good if this were fixed to allow proper use in gevent-based applications.
In my opinion, gevent only works well (increases throughput) when there are lots of independent connections, since each green thread can then use its own connection. However, kafka-python does not maintain a connection pool per broker. So with current kafka-python, gevent can only help when publishing messages to different brokers (each broker has its own connection) at the same time. We'd be better off implementing a connection pool if we want to use gevent to increase throughput. Another option would be integrating with Tornado.
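The per-broker connection-pool idea above can be sketched minimally. This is not kafka-python's API: `ConnectionPool` and the `connect` factory are hypothetical names, and a real pool would open a socket to the broker instead of the placeholder object used here.

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool sketch. `connect` is a caller-supplied
    factory, e.g. one that opens a socket to a given broker."""
    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        # Blocks until some connection is free, so each green thread
        # ends up with exclusive use of one connection at a time.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Placeholder factory stands in for "open a connection to this broker".
pool = ConnectionPool(connect=lambda: object(), size=4)
conn = pool.acquire()
# ... send on conn ...
pool.release(conn)
```

With one such pool per broker, concurrent green threads would no longer contend for the single shared connection, which is the limitation the comment describes.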
Most of the inner workings of kafka-python have changed, so I'm going to close this and keep only a single 'work with kombu' issue open.
This references #145. I'm still seeing copy failing when monkey-patching and then constructing a multiprocess consumer. If this was punted on, that's cool; I can look at adding my own multiprocessing or threading pool to scale my upstream socket reads, but gevent is nicer for this pattern.
versions:
gevent==1.0.1
kafka-python==0.9.2
Works:
Fails: