grpc: fix regression by freeing request bufferslice after processing unary #7571
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master   #7571       +/-   ##
==========================================
- Coverage   81.92%   81.76%   -0.16%
==========================================
  Files         361      361
  Lines       27825    27826       +1
==========================================
- Hits        22796    22753      -43
- Misses       3840     3868      +28
- Partials    1189     1205      +16
==========================================
```
@PapaCharlie would you mind taking a quick look at this?
Yeah, looking now. The tests are passing so I'm not too concerned there, but I'm surprised I missed this. This isn't restricted to unary RPCs, so it may need fixes elsewhere!
Nevermind, this is the equivalent for stream RPCs: https://github.com/grpc/grpc-go/blob/master/rpc_util.go#L923 Nice catch! I just have a tiny nit, for consistency with the other places.
server.go
```diff
@@ -1391,6 +1391,8 @@ func (s *Server) processUnaryRPC(ctx context.Context, t transport.ServerTranspor
 	}
 	ctx = NewContextWithServerTransportStream(ctx, stream)
 	reply, appErr := md.Handler(info.serviceImpl, ctx, df, s.opts.unaryInt)
+	defer d.Free()
```
Would you mind moving this up near where `d` is initialized, at line 1361?
```go
d, err := recvAndDecompress(&parser{r: stream, bufferPool: s.opts.bufferPool}, stream, dc, s.opts.maxReceiveMessageSize, payInfo, decomp, true)
if err != nil {
	if e := t.WriteStatus(stream, status.Convert(err)); e != nil {
		channelz.Warningf(logger, s.channelz, "grpc: Server.processUnaryRPC failed to write status: %v", e)
	}
	return err
}
defer d.Free()
```
Just for consistency
Whoops, of course.
Done!
Let's do a patch release of 1.66 with this fix when it's ready.
LGTM!
Thanks for the fix!
Will there be a v1.66.1 release with this fix included?
Yes, we are planning on doing that very soon.
Summary
I was excited to see recycled buffers be released (#6619), and wanted to gauge performance before putting it into production. It was surprising to see that the default codec in `v1.65.0` performed better than the new one in `v1.66.0`.

There was a missed `(mem.BufferSlice).Free()` in the server during `processUnaryRPC`, leaving the incoming request bytes hanging around while still paying the added allocations that come with managing the buffers.

I'm not super attached to the added `BufferSlice` docs, but they would have saved me a lot of time trying to figure out what was wrong with my interaction code. It wasn't clear to me that sometimes the provided slice kept a reference to the buffer, and would be released by SDK logic.
Testing
This is far from a pure benchmark (bufconn, client, etc. contributing to allocs). However, it demonstrates the regression and the fix.
Results
Code:
RELEASE NOTES: