Pool buffers in wspb/wsjson #71
Comments
Will
I prefer
I think it's unlikely I'll take an approach similar to gorilla/websocket. This is an internal implementation detail, and I don't want to expose it unless absolutely necessary, at least until #65 is resolved.
In my case, I use websocket to transfer other protocol traffic. For the sake of maintainability, I keep nesting net.Conn implementations, each of which allocates its own []byte.
I'm sorry, I don't understand your use case. Can you elaborate on why you're nesting net.Conn and why you're allocating a []byte at each level?
@coadler I'd prefer to avoid the dependency on a library outside the stdlib. See the benchmarks linked by that library (https://omgnull.github.io/go-benchmark/buffer/): sync.Pool is only 4.5ns slower, so it's best to just use it instead. The only buffers we can reuse are the bufio.Reader/Writers used by client conns and the buffers used by the wspb/wsjson subpackages; the net/http hijacker always allocates for the server. I've already tackled reuse of the bufio read/writers in #81. @coadler, if you have time, I'd appreciate you implementing reuse of buffers for wspb/wsjson.
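For context, a minimal sketch of what pooling bufio writers can look like, assuming a package-level sync.Pool (the names bufioWriterPool, getBufioWriter, and putBufioWriter are illustrative, not the library's actual API):

package websocket

import (
	"bufio"
	"io"
	"sync"
)

// bufioWriterPool holds idle *bufio.Writer values for reuse.
var bufioWriterPool = sync.Pool{
	New: func() interface{} { return bufio.NewWriter(nil) },
}

func getBufioWriter(w io.Writer) *bufio.Writer {
	bw := bufioWriterPool.Get().(*bufio.Writer)
	bw.Reset(w) // point the pooled writer at the new destination
	return bw
}

func putBufioWriter(bw *bufio.Writer) {
	bw.Reset(nil) // drop the reference so the old destination can be GC'd
	bufioWriterPool.Put(bw)
}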
The write side for wsjson already reuses buffers because it uses json.NewEncoder, which uses a sync.Pool underneath. We'd have to switch from json.NewDecoder to json.Unmarshal if we want buffer reuse, and for the wspb package we'd have to use the proto.Buffer type.
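A rough sketch of what proto.Buffer reuse could look like on the write path, assuming github.com/golang/protobuf/proto (writePB and protoBufPool are illustrative names, not the actual wspb API):

package wspb

import (
	"io"
	"sync"

	"github.com/golang/protobuf/proto"
)

var protoBufPool = sync.Pool{
	New: func() interface{} { return new(proto.Buffer) },
}

// writePB marshals into a pooled proto.Buffer so the intermediate
// []byte is reused across messages instead of reallocated each time.
func writePB(w io.Writer, m proto.Message) error {
	b := protoBufPool.Get().(*proto.Buffer)
	defer func() {
		b.Reset()
		protoBufPool.Put(b)
	}()
	if err := b.Marshal(m); err != nil {
		return err
	}
	_, err := w.Write(b.Bytes())
	return err
}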
Definitely a good idea to have some benchmarks as well.
Will tackle this tonight.
@coadler I think it's best that instead of modifying wsjson and wspb, we just modify the method and change the two packages to use it. The method I'm talking about is coming in #81.
Lol, we can't do that, my bad. When we return from Read(), the buffer is in the hands of application code, so there is no way to put it back into the pool as we can't tell when the app is done with it.
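Inside wsjson itself, though, the intermediate buffer never escapes to the caller, which is what makes pooling viable there. A hedged sketch under that assumption (readJSON and bufPool are illustrative names):

package wsjson

import (
	"bytes"
	"encoding/json"
	"io"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// readJSON reads the whole message into a pooled buffer, unmarshals
// into v, and returns the buffer to the pool. The caller only ever
// sees the decoded value, so reuse is safe here.
func readJSON(r io.Reader, v interface{}) error {
	b := bufPool.Get().(*bytes.Buffer)
	defer func() {
		b.Reset()
		bufPool.Put(b)
	}()
	if _, err := b.ReadFrom(r); err != nil {
		return err
	}
	return json.Unmarshal(b.Bytes(), v)
}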
Note: I've added this feature to the docs but I didn't actually implement it.
I'm confused why we would only want to pool buffers for messages over 128 bytes. As far as I can tell, there's no way for us to know the message length before reading it in wsjson/wspb.
That's fair. I can't remember where, but I remember reading that sync.Pool isn't free and can be more expensive for small items than reallocating. We would need benchmarks to be sure. You'd read the first 128 bytes into a slice, and if you don't get an EOF from the reader, then you know you need to read the rest into a buffer from the sync.Pool.
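A rough sketch of that two-step read (readCapped and bufPool are illustrative names; as the benchmark below suggests, the size cutoff may not be needed at all):

package wsjson

import (
	"bytes"
	"encoding/json"
	"io"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// readCapped reads the first 128 bytes into a plain slice; only if the
// message turns out to be larger does it pay for a pooled buffer.
func readCapped(r io.Reader, v interface{}) error {
	small := make([]byte, 128)
	n, err := io.ReadFull(r, small)
	switch err {
	case io.EOF, io.ErrUnexpectedEOF:
		// The whole message fit in the first 128 bytes.
		return json.Unmarshal(small[:n], v)
	case nil:
		// Message is larger than 128 bytes; fall through to the pool.
	default:
		return err
	}
	b := bufPool.Get().(*bytes.Buffer)
	defer func() {
		b.Reset()
		bufPool.Put(b)
	}()
	b.Write(small)
	if _, err := b.ReadFrom(r); err != nil {
		return err
	}
	return json.Unmarshal(b.Bytes(), v)
}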
👍
So the overhead of sync.Pool is in fact very low:
package websocket_test

import (
	"strconv"
	"sync"
	"testing"
)

func BenchmarkSyncPool(b *testing.B) {
	sizes := []int{
		2,
		16,
		32,
		64,
		128,
		256,
		512,
		4096,
		16384,
	}
	for _, size := range sizes {
		b.Run(strconv.Itoa(size), func(b *testing.B) {
			// Baseline: allocate a fresh buffer every iteration.
			b.Run("allocate", func(b *testing.B) {
				b.ReportAllocs()
				for i := 0; i < b.N; i++ {
					buf := make([]byte, size)
					_ = buf
				}
			})
			// Pooled: Get from a sync.Pool, allocating only on a miss.
			b.Run("pool", func(b *testing.B) {
				b.ReportAllocs()
				p := sync.Pool{}
				b.ResetTimer()
				for i := 0; i < b.N; i++ {
					buf := p.Get()
					if buf == nil {
						buf = make([]byte, size)
					}
					p.Put(buf)
				}
			})
		})
	}
}

It's not worth avoiding sync.Pool just to shave a few nanoseconds at the smaller sizes; we should just use it all the time.
Going to get this in right now so the docs reflect the state of the lib.
Opportunities for optimization available in