
Pool buffers in wspb/wsjson #71


Closed
nhooyr opened this issue Apr 26, 2019 · 18 comments · Fixed by #85

Comments

nhooyr (Contributor) commented Apr 26, 2019

Opportunities for optimization are available in:

  • wspb
  • wsjson
  • Dial's bufio read/writers
nhooyr added the perf label Apr 26, 2019
nhooyr changed the title from "Pool buffers in wsjson and wspb packages" to "Pool buffers" Apr 26, 2019
nhooyr added this to the v0.3.0 milestone Apr 27, 2019
Dreamacro commented

Can a sync.Pool be passed through to Dial?

gorilla/websocket implements this with a private struct (https://github.com/gorilla/websocket/blob/6a67f44b69004415802d2e678e483065b8fff6c8/conn.go#L480).

I'd prefer pool.Get().([]byte) so the buffer can be shared with other io.CopyBuffer calls.
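For example, a minimal sketch of the kind of application-wide pool I have in mind, shared with io.CopyBuffer; the package and function names here (bufpool, copyWithPool) are made up for illustration and are not part of this library:

package bufpool

import (
	"io"
	"sync"
)

// pool is a single application-wide pool of 32 KiB copy buffers.
var pool = sync.Pool{
	New: func() interface{} { return make([]byte, 32*1024) },
}

// copyWithPool copies src to dst through a pooled buffer instead of
// letting io.Copy allocate a fresh one for every call.
func copyWithPool(dst io.Writer, src io.Reader) (int64, error) {
	buf := pool.Get().([]byte)
	defer pool.Put(buf)
	return io.CopyBuffer(dst, src, buf)
}

Passing the same pool into Dial would let the websocket conn draw its buffers from it too.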

nhooyr (Contributor, Author) commented Apr 29, 2019

I think it's unlikely I'll take an approach similar to gorilla/websocket. This is an internal implementation detail, and I don't want to expose it unless absolutely necessary, at least until #65 is resolved.

Dreamacro commented

In my case, I use WebSocket to tunnel other protocols' traffic. For the sake of maintainability, I keep nesting net.Conn implementations like Russian dolls. Each level allocates its own []byte, so I want to share one pool across the whole application.
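For example, a contrived sketch of one such layer that reads through a shared scratch pool instead of allocating its own buffer; all of the names (relay, xorConn, scratch) are made up and this only illustrates the layering:

package relay

import (
	"net"
	"sync"
)

// scratch is shared by every wrapping layer in the application.
var scratch = sync.Pool{
	New: func() interface{} { return make([]byte, 16*1024) },
}

// xorConn is one layer of the nesting: it wraps an inner net.Conn and
// XORs the bytes it reads, using a pooled scratch buffer rather than a
// per-connection allocation.
type xorConn struct {
	net.Conn
	key byte
}

func (c *xorConn) Read(p []byte) (int, error) {
	buf := scratch.Get().([]byte)
	defer scratch.Put(buf)

	// Read no more than fits in either p or the scratch buffer.
	limit := len(buf)
	if len(p) < limit {
		limit = len(p)
	}
	n, err := c.Conn.Read(buf[:limit])
	for i := 0; i < n; i++ {
		p[i] = buf[i] ^ c.key
	}
	return n, err
}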

nhooyr (Contributor, Author) commented Apr 30, 2019

I'm sorry, but I don't understand your use case. Can you elaborate on why you're nesting net.Conn and why you're allocating a []byte at each level?

coadler (Contributor) commented Apr 30, 2019

nhooyr (Contributor, Author) commented May 30, 2019

@coadler I'd prefer to avoid a dependency on a library outside of the stdlib.

See the benchmarks linked by that library: https://omgnull.github.io/go-benchmark/buffer/

sync.Pool is only 4.5 ns slower, so it's best to just use it instead.

The only buffers we can reuse are the bufio.Read/Writers on client conns and the buffers used by the wspb/wsjson subpackages; the net/http hijacker always allocates for the server.

I've already tackled reuse of the bufio read/writers in #81 (rough sketch of the pattern below).

@coadler, if you have time, I'd appreciate you implementing buffer reuse for wspb/wsjson.
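A minimal sync.Pool-based illustration of that bufio reuse; this is an assumed sketch of the general pattern, not the actual code from #81:

package websocket

import (
	"bufio"
	"io"
	"sync"
)

var bufioReaderPool = sync.Pool{
	New: func() interface{} { return bufio.NewReader(nil) },
}

// getBufioReader returns a pooled bufio.Reader reset to read from r.
func getBufioReader(r io.Reader) *bufio.Reader {
	br := bufioReaderPool.Get().(*bufio.Reader)
	br.Reset(r)
	return br
}

// putBufioReader drops the reader's reference to its source and puts
// it back in the pool for the next conn.
func putBufioReader(br *bufio.Reader) {
	br.Reset(nil)
	bufioReaderPool.Put(br)
}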

nhooyr (Contributor, Author) commented May 30, 2019

The write side of wsjson already reuses buffers because it uses json.NewEncoder, which uses a sync.Pool underneath. For buffer reuse on the read side, we'd have to switch from json.NewDecoder to json.Unmarshal, and for the wspb package we'd have to use the proto.Buffer type.
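A rough sketch of what read-side reuse could look like for wsjson, assuming a pooled bytes.Buffer; bufPool and readJSON are illustrative names, not the package's actual API:

package wsjson

import (
	"bytes"
	"encoding/json"
	"io"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// readJSON drains the message reader into a pooled buffer and
// unmarshals from it, so the intermediate bytes are reused across reads.
func readJSON(r io.Reader, v interface{}) error {
	b := bufPool.Get().(*bytes.Buffer)
	defer func() {
		b.Reset()
		bufPool.Put(b)
	}()

	if _, err := b.ReadFrom(r); err != nil {
		return err
	}
	// json.Unmarshal operates on a buffer we own; json.NewDecoder keeps
	// its own internal buffer per decoder, which we couldn't reuse.
	return json.Unmarshal(b.Bytes(), v)
}

The wspb side would be analogous, marshaling and unmarshaling through a reused proto.Buffer.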

nhooyr (Contributor, Author) commented May 30, 2019

Definitely a good idea to have some benchmarks as well.

coadler (Contributor) commented May 31, 2019

Will tackle this tonight

coadler self-assigned this May 31, 2019
nhooyr (Contributor, Author) commented May 31, 2019

@coadler I think it's best that, instead of modifying wsjson and wspb, we just modify the Read method on the conn to use a buffer pool only if the received message is greater than 128 bytes.

nhooyr (Contributor, Author) commented May 31, 2019

And change the two packages to use it. The method I'm talking about is coming in #81.

nhooyr modified the milestones: v0.3.0, v1.0.0 May 31, 2019
nhooyr (Contributor, Author) commented May 31, 2019

Lol, we can't do that, my bad. When we return from Read(), the buffer is in the hands of application code, so there is no way to put it back into the pool as we can't tell when the app is done with it.

nhooyr added a commit that referenced this issue Jun 1, 2019
nhooyr added a commit that referenced this issue Jun 1, 2019
nhooyr modified the milestones: v0.3.0, v1.1.0 Jun 1, 2019
nhooyr changed the title from "Pool buffers" to "Pool buffers in wspb/wsjson" Jun 1, 2019
nhooyr (Contributor, Author) commented Jun 1, 2019

Note: I've added this feature to the docs, but I didn't actually implement it.

coadler (Contributor) commented Jun 1, 2019

I'm confused about why we would only want to pool buffers for messages over 128 bytes. As far as I can tell, there's no way for us to know the message length before reading it in wsjson/wspb.

nhooyr (Contributor, Author) commented Jun 1, 2019

That's fair. I can't remember where, but I recall reading that sync.Pool isn't free and can be more expensive for small items than just reallocating. We would need benchmarks to be sure. You'd read the first 128 bytes into a slice, and if you don't get an EOF from the reader, then you'd know you need to read the rest into a buffer from the sync.Pool.
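Something along these lines; the 128-byte threshold, the names, and the *bytes.Buffer pool are all illustrative rather than a final design:

package websocket

import (
	"bytes"
	"io"
	"sync"
)

// readSmallOrPooled reads a whole message from r. Messages that fit in
// 128 bytes stay in a plain freshly allocated slice; anything larger
// spills into a pooled bytes.Buffer. The pool is assumed to have
// New: func() interface{} { return new(bytes.Buffer) }. The returned
// release func must be called once the caller is done with the bytes.
func readSmallOrPooled(r io.Reader, pool *sync.Pool) ([]byte, func(), error) {
	small := make([]byte, 128)
	n, err := io.ReadFull(r, small)
	switch err {
	case io.EOF, io.ErrUnexpectedEOF:
		// EOF before 128 bytes: small message, no need to touch the pool.
		return small[:n], func() {}, nil
	case nil:
		// The message may be larger: continue into a pooled buffer.
		b := pool.Get().(*bytes.Buffer)
		b.Reset()
		b.Write(small)
		if _, err := b.ReadFrom(r); err != nil {
			pool.Put(b)
			return nil, nil, err
		}
		return b.Bytes(), func() { pool.Put(b) }, nil
	default:
		return nil, nil, err
	}
}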

coadler (Contributor) commented Jun 1, 2019

👍

nhooyr (Contributor, Author) commented Jun 2, 2019

So the overhead of sync.Pool is in fact very low:

$ go test -bench=SyncPool -run=^$
goos: darwin
goarch: amd64
pkg: nhooyr.io/websocket
BenchmarkSyncPool/2/allocate-8         	100000000	        10.6 ns/op	       2 B/op	       1 allocs/op
BenchmarkSyncPool/2/pool-8             	100000000	        15.6 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/16/allocate-8        	100000000	        18.2 ns/op	      16 B/op	       1 allocs/op
BenchmarkSyncPool/16/pool-8            	100000000	        15.1 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/32/allocate-8        	100000000	        20.3 ns/op	      32 B/op	       1 allocs/op
BenchmarkSyncPool/32/pool-8            	100000000	        15.1 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/64/allocate-8        	50000000	        24.1 ns/op	      64 B/op	       1 allocs/op
BenchmarkSyncPool/64/pool-8            	100000000	        15.0 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/128/allocate-8       	50000000	        30.5 ns/op	     128 B/op	       1 allocs/op
BenchmarkSyncPool/128/pool-8           	100000000	        15.2 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/256/allocate-8       	30000000	        44.4 ns/op	     256 B/op	       1 allocs/op
BenchmarkSyncPool/256/pool-8           	100000000	        15.1 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/512/allocate-8       	20000000	        71.7 ns/op	     512 B/op	       1 allocs/op
BenchmarkSyncPool/512/pool-8           	100000000	        15.3 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/4096/allocate-8      	 3000000	       461 ns/op	    4096 B/op	       1 allocs/op
BenchmarkSyncPool/4096/pool-8          	100000000	        15.1 ns/op	       0 B/op	       0 allocs/op
BenchmarkSyncPool/16384/allocate-8     	 1000000	      1361 ns/op	   16384 B/op	       1 allocs/op
BenchmarkSyncPool/16384/pool-8         	100000000	        15.4 ns/op	       0 B/op	       0 allocs/op
PASS
ok  	nhooyr.io/websocket	27.759s

package websocket

import (
	"strconv"
	"sync"
	"testing"
)

func BenchmarkSyncPool(b *testing.B) {
	sizes := []int{
		2,
		16,
		32,
		64,
		128,
		256,
		512,
		4096,
		16384,
	}
	for _, size := range sizes {
		b.Run(strconv.Itoa(size), func(b *testing.B) {
			// Baseline: allocate a fresh buffer on every iteration.
			b.Run("allocate", func(b *testing.B) {
				b.ReportAllocs()
				for i := 0; i < b.N; i++ {
					buf := make([]byte, size)
					_ = buf
				}
			})
			// Pooled: get a buffer from a sync.Pool and put it straight
			// back, allocating only when the pool comes up empty.
			b.Run("pool", func(b *testing.B) {
				b.ReportAllocs()

				p := sync.Pool{}

				b.ResetTimer()
				for i := 0; i < b.N; i++ {
					buf := p.Get()
					if buf == nil {
						buf = make([]byte, size)
					}

					p.Put(buf)
				}
			})
		})
	}
}

It's not worth avoiding the pool just to shave a few nanoseconds off at the smaller sizes; we should just use it all the time.

nhooyr assigned nhooyr and unassigned coadler Jun 2, 2019
nhooyr (Contributor, Author) commented Jun 2, 2019

Going to get this in right now so the docs reflect the state of the lib.

nhooyr added a commit that referenced this issue Jun 2, 2019
nhooyr added a commit that referenced this issue Jun 2, 2019
nhooyr added a commit that referenced this issue Jun 2, 2019
nhooyr added a commit that referenced this issue Jun 2, 2019
nhooyr closed this as completed in #85 Jun 2, 2019