
Benchmark json.NewEncoder vs json.Marshal #409

Closed
maggie44 opened this issue Oct 15, 2023 · 6 comments

Comments

@maggie44
Contributor

sync.Pool can be useful, but in places like wsjson, is there a worry about variable message sizes and their impact on memory?

See golang/go#23199 for info on memory growth through sync.Pool.

The GC is much more effective than it used to be. Maybe it's still useful for fixed-length pools? I haven't benchmarked it, though.

@nhooyr nhooyr changed the title Consider removing sync.pool Benchmark json.NewEncoder vs json.Marshal Oct 15, 2023
@nhooyr
Contributor

nhooyr commented Oct 15, 2023

Can't say, I haven't done any benchmarks. I'm just deferring to the stdlib, as json.Encoder uses a sync.Pool under the hood for the buffer into which the JSON is written. If it didn't make a positive difference, I'm sure someone would have noticed by now and requested that the pool be removed or made configurable.

I strongly suspect it has a positive impact on performance though, as most websocket messages tend to be similarly sized and there are tons of them.

Feel free to open a PR against dev with benchmarks comparing the two. I'm not going to get to this for a while.

@nhooyr nhooyr added this to the v1.10.0 milestone Oct 19, 2023
nhooyr added a commit to wdvxdr1123/websocket that referenced this issue Oct 20, 2023
json.Encoder is 42% faster than json.Marshal thanks to the memory reuse.

goos: linux
goarch: amd64
pkg: nhooyr.io/websocket/wsjson
cpu: 12th Gen Intel(R) Core(TM) i5-1235U
BenchmarkJSON/json.Encoder-12            3517579           340.2 ns/op        24 B/op          1 allocs/op
BenchmarkJSON/json.Marshal-12            2374086           484.3 ns/op       728 B/op          2 allocs/op

Closes coder#409
@nhooyr
Contributor

nhooyr commented Oct 20, 2023

Done in 293f204


@nhooyr nhooyr closed this as completed Oct 20, 2023
@nhooyr
Contributor

nhooyr commented Oct 20, 2023

That was at 128 byte messages which I think is realistic enough but not too big.

@nhooyr
Contributor

nhooyr commented Oct 20, 2023

I extended the benchmark for more sizes and json.Encoder wins at every size.

[qrvnl@dios ~/src/websocket] 130$ go test -bench=. ./wsjson/
goos: linux
goarch: amd64
pkg: nhooyr.io/websocket/wsjson
cpu: 12th Gen Intel(R) Core(TM) i5-1235U
BenchmarkJSON/json.Encoder/8-12         14041426            72.59 ns/op  110.21 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/16-12        13936426            86.99 ns/op  183.92 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/32-12        11416401           115.3 ns/op   277.59 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/128-12        4600574           264.7 ns/op   483.55 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/256-12        2710398           433.9 ns/op   590.06 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/512-12        1588930           717.3 ns/op   713.82 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/1024-12        823138          1484 ns/op     689.80 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/2048-12        402823          2875 ns/op     712.32 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/4096-12        213926          5602 ns/op     731.14 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/8192-12         92864         11281 ns/op     726.19 MB/s          16 B/op          1 allocs/op
BenchmarkJSON/json.Encoder/16384-12        39318         29203 ns/op     561.04 MB/s          19 B/op          1 allocs/op
BenchmarkJSON/json.Marshal/8-12         10768671           114.5 ns/op    69.89 MB/s          48 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/16-12        10140996           113.9 ns/op   140.51 MB/s          64 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/32-12         9211780           121.6 ns/op   263.06 MB/s          64 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/128-12        4632796           264.2 ns/op   484.53 MB/s         224 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/256-12        2441511           473.5 ns/op   540.65 MB/s         432 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/512-12        1298788           896.2 ns/op   571.27 MB/s         912 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/1024-12        602084          1866 ns/op     548.83 MB/s        1808 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/2048-12        341151          3817 ns/op     536.61 MB/s        3474 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/4096-12        175594          7034 ns/op     582.32 MB/s        6548 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/8192-12         83222         15023 ns/op     545.30 MB/s       13591 B/op          2 allocs/op
BenchmarkJSON/json.Marshal/16384-12        33087         39348 ns/op     416.39 MB/s       27304 B/op          2 allocs/op
PASS
ok      nhooyr.io/websocket/wsjson  32.934s

@nhooyr nhooyr modified the milestones: v1.10.0, v1.9.0 Oct 20, 2023
@maggie44
Contributor Author

Very interesting! An even greater margin than I would have anticipated. I would explore other libraries too: https://github.com/json-iterator/go-benchmark. Go's native encoding/json is known to be slower than most. That said, for this project zero dependencies is far more appealing, but perhaps a means of passing in a JSON encoder for those who want the feature could be nice.

@nhooyr
Contributor

nhooyr commented Oct 20, 2023

Very interesting! An even greater margin than I would have anticipated. I would explore other libraries too: https://github.com/json-iterator/go-benchmark. Go's native encoding/json is known to be slower than most. That said, for this project zero dependencies is far more appealing, but perhaps a means of passing in a JSON encoder for those who want the feature could be nice.

Yeah, I was surprised too. You can just use c.Writer or c.Write with whichever JSON encoder you want.

nhooyr added a commit to wdvxdr1123/websocket that referenced this issue Oct 20, 2023
nhooyr added a commit to wdvxdr1123/websocket that referenced this issue Oct 26, 2023