Provide peekInt8 to reduce allocations #1373
Conversation
@@ -27,6 +27,7 @@ type packetDecoder interface {
	remaining() int
	getSubset(length int) (packetDecoder, error)
	peek(offset, length int) (packetDecoder, error) // similar to getSubset, but it doesn't advance the offset
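For context, the hunk above shows only the surrounding interface; the added `peekInt8` line itself isn't visible here. Based on the PR title and sarama's `realDecoder` (which wraps a raw `[]byte` plus an offset), a minimal sketch of the addition might look like the following. The names `raw`, `off`, and `ErrInsufficientData` mirror sarama's decoder, but treat the body as an approximation rather than the exact patch:

```go
// Interface addition (sketch): read one byte at a relative offset
// without advancing the decoder and without allocating.
peekInt8(offset int) (int8, error)

// Possible implementation on realDecoder (approximation, not the
// verbatim patch):
func (rd *realDecoder) peekInt8(offset int) (int8, error) {
	const byteLen = 1
	if rd.remaining() < offset+byteLen {
		return -1, ErrInsufficientData
	}
	return int8(rd.raw[rd.off+offset]), nil
}
```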
If this change works, do we still need the peek method?
There are no existing uses of `peek`. Should I remove it?
Let's wait. @bai, do you have any input?
Any update?
Thanks for your contribution!
Background:
In a project I use, sarama produces by far the most ongoing allocations (mainly, I think, because we have lots of small messages), which negatively impacts our GC stats. I recently went through and tried to reduce them where I could, and have been successful enough that sarama no longer shows up in the pprof top 10 by `alloc_objects`. This is the simplest change I made so far; the others follow the pattern from #1161 and are more invasive.

I benchmarked this change by consuming 5 million messages and then dumping out a heap profile.
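For anyone reproducing the measurement, a heap profile like the one described can be dumped with the standard `runtime/pprof` package. This is a generic sketch of the methodology, not the author's actual harness:

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

// dumpHeapProfile writes a heap profile that `go tool pprof` can
// inspect with -sample_index=alloc_objects to rank allocation sites.
func dumpHeapProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	runtime.GC() // recommended before WriteHeapProfile for up-to-date stats
	return pprof.WriteHeapProfile(f)
}
```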
Before the change:
The heap profile shows that `peek` is responsible for 16.4% of the objects allocated, a little over one per message (which makes sense, given Kafka's record batching).
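The roughly one-allocation-per-message cost follows from `peek`'s signature: it returns a fresh `packetDecoder`, so every call constructs a new decoder that escapes to the heap behind the interface. A simplified sketch of that shape (details may differ from sarama's actual code):

```go
func (rd *realDecoder) peek(offset, length int) (packetDecoder, error) {
	if rd.remaining() < offset+length {
		return nil, ErrInsufficientData
	}
	off := rd.off + offset
	// The new decoder is returned through an interface, so it escapes
	// to the heap: one allocation per call, e.g. per record batch.
	return &realDecoder{raw: rd.raw[off : off+length]}, nil
}
```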
After the change:
`peek` doesn't show up at all, and we allocate 25.6 million objects vs. 31 million, which is the expected reduction.
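As a usage illustration (a hypothetical call site, not code from this PR): a reader that previously peeked at a record batch's magic byte via a one-byte subset decoder can now read it in place:

```go
// batchMagic reads the magic byte at magicOffset without consuming it.
// magicOffset is assumed to be the byte's position relative to the
// decoder's current offset.
func batchMagic(pd packetDecoder, magicOffset int) (int8, error) {
	// Old pattern: allocates a short-lived subset decoder per call.
	//   sub, err := pd.peek(magicOffset, 1)
	//   if err != nil { return -1, err }
	//   return sub.getInt8()

	// New pattern: no allocation.
	return pd.peekInt8(magicOffset)
}
```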