encoding/json: marshaling RawMessage has poor performance #33422
Investigating further, I believe the slowdown is caused by the encoder unnecessarily compacting/validating the JSON (go/src/encoding/json/encode.go, line 456 at commit 2d6ee6e). Replacing that call with e.Buffer.Write(b) yields much better performance: 477 ns/op.
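To illustrate the cost being discussed, here is a minimal benchmark sketch (not from the issue; the payload is an assumption) comparing json.Compact, which is roughly what the encoder does for a RawMessage today, against a direct buffer write:

```go
package jsonbench

import (
	"bytes"
	"encoding/json"
	"testing"
)

// rawPayload is an already-valid, already-compact JSON document,
// standing in for a typical json.RawMessage value.
var rawPayload = []byte(`{"id":123,"name":"example","tags":["a","b","c"]}`)

// BenchmarkCompact exercises the path the encoder takes today:
// the raw message is re-scanned and compacted before being written.
func BenchmarkCompact(b *testing.B) {
	var buf bytes.Buffer
	for i := 0; i < b.N; i++ {
		buf.Reset()
		if err := json.Compact(&buf, rawPayload); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkDirectWrite exercises the suggested fast path:
// the bytes are copied into the output buffer as-is.
func BenchmarkDirectWrite(b *testing.B) {
	var buf bytes.Buffer
	for i := 0; i < b.N; i++ {
		buf.Reset()
		buf.Write(rawPayload)
	}
}
```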
In some applications, this may be considered true, but as a general principle, the
As far as I can tell,
Earlier I said: "In some applications, this may be considered true". I don't doubt that this is probably true of your use case. However, it is the current behavior and we can't just remove it as some are relying on this property. Keep in mind that
Yes. This is a problem that I've written about before regarding
There are many reasonable features to add to
I'd suggest investigating ways to optimize the current code without changing the API or adding any options. If it's still too slow, perhaps file a proposal to change the API. My thinking is similar to @dsnet's;
I've got the same problem when analyzing the performance of my application. Is it not OK to add options to encOpts?
Change https://golang.org/cl/205018 mentions this issue:
Hi all, we kicked off a discussion for a possible "encoding/json/v2" package that addresses the spirit of this proposal. The prototype v2 implementation has a better parser, able to verify and reformat the result of a
Marshalling a json.RawMessage is not zero overhead. Instead, it compacts the raw message, which starts to have an overhead at scale: golang/go#33422. Since we have full control over the message constructed, we can simply write the byte slice into the network stream. This gives a considerable performance boost.

```
goos: linux
goarch: amd64
pkg: github.com/mattermost/mattermost/server/public/model
cpu: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz
             │   old.txt    │             new_2.txt               │
             │    sec/op    │    sec/op     vs base                │
EncodeJSON-8   1640.5n ± 2%   289.6n ± 1%  -82.35% (p=0.000 n=10)

             │  old.txt   │             new_2.txt              │
             │    B/op    │    B/op     vs base                │
EncodeJSON-8   528.0 ± 0%   503.0 ± 0%  -4.73% (p=0.000 n=10)

             │  old.txt   │             new_2.txt               │
             │ allocs/op  │  allocs/op   vs base                │
EncodeJSON-8   5.000 ± 0%   4.000 ± 0%  -20.00% (p=0.000 n=10)
```

P.S. No concerns over changing the model API because we are still using 0.x

https://mattermost.atlassian.net/browse/MM-54998

```release-note
Improve websocket event marshalling performance
```
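As a rough illustration of that approach (the types and function names below are hypothetical, not Mattermost's actual API): instead of embedding pre-encoded JSON as a json.RawMessage field and re-marshaling the whole struct, the already-encoded payload is written straight to the output.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
)

// Event is a hypothetical wrapper whose Data field is already valid JSON.
type Event struct {
	Type string
	Data json.RawMessage
}

// writeEventSlow re-marshals the whole event, which re-validates and
// compacts Data (the overhead discussed in golang/go#33422).
func writeEventSlow(w io.Writer, ev Event) error {
	return json.NewEncoder(w).Encode(struct {
		Type string          `json:"type"`
		Data json.RawMessage `json:"data"`
	}{ev.Type, ev.Data})
}

// writeEventFast assembles the envelope by hand and writes the
// pre-encoded Data bytes as-is, skipping the compaction pass.
// This is only safe because we fully control how Data was produced.
func writeEventFast(w io.Writer, ev Event) error {
	var buf bytes.Buffer
	buf.WriteString(`{"type":`)
	typeJSON, err := json.Marshal(ev.Type) // still escape the string field
	if err != nil {
		return err
	}
	buf.Write(typeJSON)
	buf.WriteString(`,"data":`)
	buf.Write(ev.Data) // trusted, already-valid JSON written verbatim
	buf.WriteString("}\n")
	_, err = w.Write(buf.Bytes())
	return err
}

func main() {
	ev := Event{Type: "posted", Data: json.RawMessage(`{"id":1,"msg":"hi"}`)}
	var out bytes.Buffer
	_ = writeEventSlow(&out, ev)
	_ = writeEventFast(&out, ev)
	fmt.Print(out.String())
}
```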
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?

Yes.

What operating system and processor architecture are you using (go env)?

go env Output

What did you do?

Ran a benchmark to compare marshaling a json.RawMessage, a string, and a []byte.

What did you expect to see?

I expected marshaling a json.RawMessage to have the best performance of the three, since it should be a no-op.

What did you see instead?

It is 2 times slower than marshaling a string, and 3 times slower than marshaling a []byte.

```
BenchmarkRawMessage-2    1000000    1513 ns/op    232 B/op    7 allocs/op
BenchmarkString-2        2000000     869 ns/op    112 B/op    3 allocs/op
BenchmarkBytes-2         3000000     561 ns/op    128 B/op    3 allocs/op
```
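The benchmark source isn't included in the report, but a minimal sketch of a comparison along these lines (the wrapper structs and payload here are assumptions, not the original benchmark code) is:

```go
package jsonbench

import (
	"encoding/json"
	"testing"
)

// Each wrapper embeds the same payload so the three benchmarks encode
// comparable amounts of data via the three different field types.
type rawWrapper struct{ V json.RawMessage }
type strWrapper struct{ V string }
type bytesWrapper struct{ V []byte }

var payload = `{"id":123,"name":"example","tags":["a","b","c"]}`

// BenchmarkRawMessage marshals pre-encoded JSON, which today is
// re-validated and compacted by the encoder.
func BenchmarkRawMessage(b *testing.B) {
	v := rawWrapper{V: json.RawMessage(payload)}
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(v); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkString marshals the same bytes as a string, which is escaped.
func BenchmarkString(b *testing.B) {
	v := strWrapper{V: payload}
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(v); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkBytes marshals the same bytes as []byte, which is base64-encoded.
func BenchmarkBytes(b *testing.B) {
	v := bytesWrapper{V: []byte(payload)}
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(v); err != nil {
			b.Fatal(err)
		}
	}
}
```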