
Sarama Snappy Compression #593

Closed

rphillips opened this issue Jan 12, 2016 · 6 comments

Comments

@rphillips

Does anyone have any figures on how fast you can process snappy-encoded messages? In my environment, I am only able to process 500 messages a second, while Node.js will process around 5000 messages a second. Are there any tips or tricks to get faster performance from Sarama?

@rsrsps

rsrsps commented Jan 12, 2016

Tens of thousands per second per hand-off thread. We stopped optimizing above 70k/sec. Getting to 25k was straightforward; beyond that it took some work.

Protobuf format; non-trivial messages sent to a worker pool via a shared channel.
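For illustration, a minimal sketch of that hand-off pattern (this is not the poster's code; the broker address, topic name, and `process` function are placeholders):

```go
package main

import (
	"log"
	"runtime"
	"sync"

	"github.com/Shopify/sarama"
)

// process stands in for real handling, e.g. protobuf unmarshal plus work.
func process(msg *sarama.ConsumerMessage) { _ = msg }

func main() {
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition("my-topic", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	work := make(chan *sarama.ConsumerMessage, 1024) // shared hand-off channel

	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for msg := range work {
				process(msg)
			}
		}()
	}

	for msg := range pc.Messages() { // single reader feeds the pool
		work <- msg
	}
	close(work)
	wg.Wait()
}
```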

To do it, though, we had to crank down the max wait time and use a client per topic,partition stream. We also needed to write a Python tool that just sends huge blocks of messages to fill up a topic,partition before a test run, so the writer side didn't interfere with the testing. The Python test tool could also mimic some of the processing, and it tapped out at about 6k/sec.
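A rough sketch of what that tuning might look like, assuming the "max wait time" maps to sarama's Consumer.MaxWaitTime knob (the 100ms value and broker address are illustrative, and the usual sarama and time imports are assumed):

```go
config := sarama.NewConfig()
config.Consumer.MaxWaitTime = 100 * time.Millisecond // crank down from the 250ms default

// "a client per topic,partition stream": a dedicated client (and connection)
// per partition consumer instead of multiplexing them all through one.
client, err := sarama.NewClient([]string{"localhost:9092"}, config)
```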

(We are still on a somewhat older version of Sarama, so I guess it's possible it got slower.)


@eapache changed the title from "Sarama Speedy Compression" to "Sarama Snappy Compression" on Jan 12, 2016
@eapache
Contributor

eapache commented Jan 12, 2016

Unless you have extremely fat, low-latency network connections and a slow CPU, the network is more likely the bottleneck than snappy compression. Try increasing Consumer.Fetch.Default.
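For illustration, that option lives on the consumer config (the 1 MiB value here is an arbitrary example, not a recommendation from the thread):

```go
config := sarama.NewConfig()
config.Consumer.Fetch.Default = 1024 * 1024 // bytes fetched per request; the default was 32768
consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
```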

FWIW, consuming with snappy has almost certainly gotten faster over the last year, mainly due to #446, #485, and #527.

@rphillips
Author

I changed those options and am not seeing any difference. Am I doing anything outright terrible in this test case: https://play.golang.org/p/-KqXKBNlve

@eapache
Contributor

eapache commented Jan 12, 2016

Looks ok to me, but since you're starting at the newest offset you'll only be able to consume as fast as messages are being produced... Could that be the bottleneck?
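For context, the difference is just the starting offset passed to ConsumePartition (the topic and partition here are placeholders):

```go
// Only sees messages produced after the consumer starts, so throughput
// is capped by the producer rate:
pc, err := consumer.ConsumePartition("my-topic", 0, sarama.OffsetNewest)

// Replays everything the broker still retains:
pc, err = consumer.ConsumePartition("my-topic", 0, sarama.OffsetOldest)
```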

@rphillips
Author

I'll double-check in the morning that the Go version is doing the same thing as the Node version. Thank you for the feedback.

@rphillips
Author

Thank you for the feedback! The Node version was reading back from the oldest offset. For reference, I am now at parity with both applications, and Sarama can process 43k messages a second without any optimizations. I appreciate the help!
