Implement new Codec that uses mem.BufferSlice instead of []byte #7356

Merged (100 commits) on Aug 21, 2024

Conversation

PapaCharlie (Contributor) commented Jun 26, 2024:

This new codec, as outlined in #6619, will allow the reuse of buffers to their fullest extent. Note that this deliberately does not (yet) implement support for stats handlers; however, all the relevant APIs have been updated, and both old and new Codec implementations are supported. (A sketch of the new interface shape follows the release notes.)

RELEASE NOTES:

  • TBD
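For readers skimming the thread, the v2 codec interface introduced here has roughly the following shape (a sketch based on grpc-go v1.66's encoding package; see encoding/encoding_v2.go in the diff for the authoritative definition):

```go
package encoding

import "google.golang.org/grpc/mem"

// CodecV2 marshals into a mem.BufferSlice instead of a single []byte,
// which lets the transport hand out and recycle pooled buffers.
type CodecV2 interface {
	// Marshal returns the wire format of v as a slice of (possibly
	// pooled) buffers. The caller takes ownership and must Free it.
	Marshal(v any) (out mem.BufferSlice, err error)
	// Unmarshal parses the wire format in data into v.
	Unmarshal(data mem.BufferSlice, v any) error
	// Name identifies the codec in the grpc-encoding header.
	Name() string
}
```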

easwars assigned PapaCharlie and unassigned easwars on Aug 21, 2024
Comment on lines +881 to +883:

```go
// TODO: Can/should this still be preserved with the new BufferSlice API? Are
// there any actual benefits to allocating a single large buffer instead of
// multiple smaller ones?
```
Contributor:

Yes, it is massively more efficient. See #2635, golang/go#50774.

PapaCharlie (Author):

I have absolutely no doubt that in a context where buffers are not reused, it's more efficient. But in this case, where buffers are pooled and produce as little garbage as possible, it may not be. I don't know what the default buffer size used by the compressor is, but if it roughly lines up with one of the sized pools used by the mem package, then I'm not sure it would be more efficient to allocate one large buffer instead of representing the data as a mem.BufferSlice. That being said, it's 100% something that can be benchmarked and tested.
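As a concrete illustration of the trade-off being discussed, here is a minimal sketch against the public mem API from v1.66 (the 4 KiB chunk size is made up): the payload lives in several pool-sized chunks, and a single contiguous allocation only happens if a consumer explicitly materializes it.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/mem"
)

func main() {
	pool := mem.DefaultBufferPool() // tiered pool of fixed-size buffers

	// A payload represented as multiple pool-sized chunks rather than
	// one large allocation. The 4 KiB chunk size is illustrative only.
	var bs mem.BufferSlice
	for i := 0; i < 3; i++ {
		chunk := make([]byte, 4096)
		// mem.Copy copies chunk into a buffer drawn from the pool.
		bs = append(bs, mem.Copy(chunk, pool))
	}
	fmt.Println("total length:", bs.Len())

	// Only flatten into one contiguous []byte when a consumer needs it;
	// this is the single large allocation the TODO is asking about.
	flat := bs.Materialize()
	fmt.Println("materialized:", len(flat))

	bs.Free() // return the pooled buffers for reuse
}
```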

Contributor:

Apologies, I had not caught up with that point.
A quick profile with gRPC 1.67 suggests io.ReadAll is no longer causing excess garbage, thanks!

[profile screenshot]

PapaCharlie (Author):

🎉 🎉 🎉

Awesome! That's really good to hear. Good to know this is a win for this new package. Do you think we would actually get a benefit from implementing this TODO? Or would you expect it to actually perform worse? If that's the case, maybe we can get rid of this commented block altogether instead

Contributor:

Related to this function: #7653. io.Copy() makes a lot of allocations; in my use case, most of them.
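For reference, the usual way to stop io.Copy from allocating its scratch buffer on every call is io.CopyBuffer with a pooled buffer. A generic sketch, not the fix that landed in #7653:

```go
package main

import (
	"io"
	"strings"
	"sync"
)

// copyBufPool amortizes the 32 KiB scratch buffer that io.Copy would
// otherwise allocate on each call (when neither side implements
// io.WriterTo / io.ReaderFrom).
var copyBufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 32*1024)
		return &b
	},
}

func pooledCopy(dst io.Writer, src io.Reader) (int64, error) {
	bufp := copyBufPool.Get().(*[]byte)
	defer copyBufPool.Put(bufp)
	// io.CopyBuffer reuses the provided scratch buffer instead of
	// allocating a fresh one like io.Copy does.
	return io.CopyBuffer(dst, src, *bufp)
}

func main() {
	_, _ = pooledCopy(io.Discard, strings.NewReader("hello"))
}
```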

Contributor:

Great, I was hoping someone had the answer to that.

Do you think we would actually get a benefit from implementing this TODO?

I cannot think of a major benefit right now.

ijsong added a commit to kakao/varlog that referenced this pull request Oct 21, 2024
This PR removes the following gRPC buffer pool flags:

- `--grpc-server-recv-buffer-pool`
- `--grpc-client-recv-buffer-pool`

These flags are no longer necessary since gRPC introduced the `mem` package in
version [1.66.0](https://github.com/grpc/grpc-go/releases/tag/v1.66.0).

See:
- https://github.com/grpc/grpc-go/releases/tag/v1.66.0
- grpc/grpc-go#7432
- grpc/grpc-go#7356
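For context, those flags wired up gRPC's earlier opt-in shared receive-buffer pools, roughly like the sketch below (the target address is a placeholder, and the actual varlog wiring may differ; both options were deprecated once the mem package made pooling the default):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Server side: opt in to the pre-v1.66 shared receive-buffer pool.
	_ = grpc.NewServer(grpc.RecvBufferPool(grpc.NewSharedBufferPool()))

	// Client side: the same opt-in via a dial option.
	conn, err := grpc.NewClient("localhost:50051", // placeholder target
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithRecvBufferPool(grpc.NewSharedBufferPool()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```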
alanprot commented Oct 23, 2024:

I think the release notes should call out more explicitly that memory is now reused by default, since this can break edge cases where memory from the request is still being used after the request finishes.

The only thing I can see in the release notes about this is:

codec: Implement a new Codec which uses buffer recycling for encoded message (https://github.com/grpc/grpc-go/pull/7356)
introduce a mem package to facilitate buffer reuse (https://github.com/grpc/grpc-go/pull/7432)
Special Thanks: @PapaCharlie

Neither entry calls out that the feature is now enabled by default.
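To make the concern concrete: if a codec's Unmarshal aliases the pooled wire buffer (the standard proto codec copies field data, but custom codecs may not), bytes retained past the handler's return can be corrupted once the buffer is recycled. A minimal hypothetical sketch, with invented types and cache:

```go
package main

import (
	"context"
	"sync"
)

// Request stands in for a generated message type; invented for
// illustration only.
type Request struct {
	Id      string
	Payload []byte // may alias a pooled buffer if the codec avoids copies
}

type server struct {
	cache sync.Map
}

func (s *server) Handle(ctx context.Context, req *Request) error {
	// Unsafe once buffer reuse is on, if Payload aliases the wire buffer:
	//   s.cache.Store(req.Id, req.Payload)

	// Safe: copy into memory the application owns before retaining it.
	retained := append([]byte(nil), req.Payload...)
	s.cache.Store(req.Id, retained)
	return nil
}
```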

ijsong added a commit to kakao/varlog that referenced this pull request Dec 21, 2024
This PR updates the gRPC codec from v1 to v2, enhancing performance and reducing
memory allocations by utilizing a memory pool for encoded messages.

Refs:
- grpc/grpc-go#7356
- vitessio/vitess#16790
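For anyone doing a similar v1-to-v2 migration, a v2 codec implementation looks roughly like this (a sketch against the grpc-go v1.66 encoding and mem APIs, not varlog's actual code; error handling is simplified):

```go
package protocodec

import (
	"fmt"

	"google.golang.org/grpc/encoding"
	"google.golang.org/grpc/mem"
	"google.golang.org/protobuf/proto"
)

type codecV2 struct{}

func (codecV2) Marshal(v any) (mem.BufferSlice, error) {
	m, ok := v.(proto.Message)
	if !ok {
		return nil, fmt.Errorf("failed to marshal: %T is not a proto.Message", v)
	}
	pool := mem.DefaultBufferPool()
	size := proto.Size(m)
	// Marshal directly into a pooled buffer of exactly the right size.
	buf := pool.Get(size)
	if _, err := (proto.MarshalOptions{}).MarshalAppend((*buf)[:0], m); err != nil {
		pool.Put(buf)
		return nil, err
	}
	return mem.BufferSlice{mem.NewBuffer(buf, pool)}, nil
}

func (codecV2) Unmarshal(data mem.BufferSlice, v any) error {
	m, ok := v.(proto.Message)
	if !ok {
		return fmt.Errorf("failed to unmarshal: %T is not a proto.Message", v)
	}
	// Flatten the (possibly multi-chunk) payload into one pooled buffer.
	buf := data.MaterializeToBuffer(mem.DefaultBufferPool())
	defer buf.Free()
	return proto.Unmarshal(buf.ReadOnlyData(), m)
}

// Name returns "proto", which overrides the built-in proto codec; a custom
// codec would register under its own name instead.
func (codecV2) Name() string { return "proto" }

func init() { encoding.RegisterCodecV2(codecV2{}) }
```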
Labels: Type: Performance (performance improvements: CPU, network, memory, etc)
8 participants