Deflate extension #5
Comments
We probably do want this. Maybe just add a …
Definitely something like #36. It would work just like browsers then, which would be nice.
@coadler, this issue is also ripe for work if you're interested.
Though I would suggest you wait for #81, as it is a rather large refactoring of the internals.
I think the move here is to expose zero flags. We should automatically compress text frames larger than X bytes, where X is determined experimentally. My guess is anywhere from 128–512 bytes. We should also test every level and see which works best; last time I tested, the lowest level had the most bang for the buck. For binary frames, I'm thinking we shouldn't compress at all, as most binary protocols do not compress very well as far as I know, unless you have a lot of duplicate data. This configuration would work great out of the box for most setups and require zero knobs.
So after some testing, I think the sweet spot is going to be 256 bytes for text. Above that, we should automatically use deflate compression at the lowest level if the peer supports it. That should be good for 99.99% of applications. For binary messages, we should use a cutoff of 512 bytes. The reason we compress binary protocols anyway is that when a message exceeds 512 bytes, it's extremely likely it can be compressed well. We should also document this on Conn and link to this issue requesting feedback. Thoughts, @coadler? This should reduce your bandwidth significantly with the Discord gateway for your bot.
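A minimal sketch of the heuristic described in the two comments above, using the 256/512-byte cutoffs and the lowest flate level mentioned there. The `shouldCompress` and `deflatePayload` names are hypothetical and not part of any existing API:

```go
package wsdeflate

import (
	"bytes"
	"compress/flate"
)

// Cutoffs taken from the discussion above; the real values would be
// chosen experimentally.
const (
	textThreshold   = 256
	binaryThreshold = 512
)

// shouldCompress reports whether a frame of the given size and type is
// worth compressing, per the heuristic sketched in this thread.
func shouldCompress(isText bool, n int) bool {
	if isText {
		return n >= textThreshold
	}
	return n >= binaryThreshold
}

// deflatePayload compresses p at the lowest (fastest) flate level, matching
// the "lowest level has the most bang for the buck" observation above.
func deflatePayload(p []byte) ([]byte, error) {
	var buf bytes.Buffer
	fw, err := flate.NewWriter(&buf, flate.BestSpeed)
	if err != nil {
		return nil, err
	}
	if _, err := fw.Write(p); err != nil {
		return nil, err
	}
	if err := fw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```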
Disabling compression entirely would be as easy as setting …
I think the deflate extension is not worth supporting at all. See my investigation in gorilla/websocket#203 (comment): it will cause a large memory spike, as every flate writer takes up a ridiculous 1.2 megabytes of memory.
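For reference, a rough way to estimate that per-writer cost. This is an illustrative measurement sketch only; the exact number depends on the Go version and compression level:

```go
package main

import (
	"compress/flate"
	"fmt"
	"io"
	"runtime"
)

func main() {
	const n = 100

	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	// Keep the writers reachable so the GC cannot collect them mid-measurement.
	writers := make([]*flate.Writer, 0, n)
	for i := 0; i < n; i++ {
		fw, err := flate.NewWriter(io.Discard, flate.BestSpeed)
		if err != nil {
			panic(err)
		}
		writers = append(writers, fw)
	}

	runtime.ReadMemStats(&after)
	perWriter := (after.HeapAlloc - before.HeapAlloc) / n
	fmt.Printf("approx. %d bytes per flate writer (%d writers kept alive)\n", perWriter, len(writers))
}
```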
If deflate is out of the question, could we potentially still support zlib/zstd? Or would these be best implemented as some sort of wrapper library? For zstd we'd need https://github.com/DataDog/zstd or similar, which brings an external dep and cgo into the mix. I'm not overly familiar with how WebSocket compression negotiation works, but for Discord it's picked based on a query param, e.g. …
Would be best as a wrapper lib. I'm happy to link to it from the README.md though. Also, why not https://golang.org/pkg/compress/zlib/?
It doesn't have the same allocation overhead as compress/flate, so it should have solid performance.
I only meant that zstd would incur an external dependency, because zlib has a standard library package.
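As an illustration of the wrapper-library idea, a minimal sketch that zlib-compresses a payload before it is handed to whatever WebSocket library writes the message, using only compress/zlib from the standard library. The `wrapMessage`/`unwrapMessage` names are hypothetical, not an existing API:

```go
package wszlib

import (
	"bytes"
	"compress/zlib"
	"io"
)

// wrapMessage zlib-compresses a payload on the sending side.
func wrapMessage(p []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := zlib.NewWriter(&buf)
	if _, err := zw.Write(p); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// unwrapMessage reverses wrapMessage on the receiving side.
func unwrapMessage(p []byte) ([]byte, error) {
	zr, err := zlib.NewReader(bytes.NewReader(p))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```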
Ah, I wasn't aware there is a difference between the two. There's also https://www.reddit.com/r/golang/comments/be7esc/a_pure_go_port_of_the_zstandard_zstd_decompression/
Another issue with WebSocket compression: https://bugs.chromium.org/p/chromium/issues/detail?id=163882 It looks like browsers will unconditionally compress every message, even small ones, thereby increasing their size.
Interestingly, Firefox will compress every payload but only sends the compressed form when it is smaller than the original: https://hg.mozilla.org/mozilla-central/diff/724f0a71d62171da1357e6c1f93453359e54206b/netwerk/protocol/websocket/WebSocketChannel.cpp#l1.277
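A minimal sketch of that Firefox behavior, assuming a hypothetical helper that only keeps the deflated form when it actually shrank the payload:

```go
package wscompress

import (
	"bytes"
	"compress/flate"
)

// compressIfSmaller mirrors the Firefox behavior described above: compress
// the payload, but fall back to the original bytes when deflate makes it
// larger. Hypothetical helper for illustration only.
func compressIfSmaller(p []byte) (out []byte, compressed bool, err error) {
	var buf bytes.Buffer
	fw, err := flate.NewWriter(&buf, flate.BestSpeed)
	if err != nil {
		return nil, false, err
	}
	if _, err := fw.Write(p); err != nil {
		return nil, false, err
	}
	if err := fw.Close(); err != nil {
		return nil, false, err
	}
	if buf.Len() >= len(p) {
		return p, false, nil // compression did not help; send the original
	}
	return buf.Bytes(), true, nil
}
```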
To summarize: We do not support the deflate compression extension because Go's compress/flate library is very memory intensive and browsers do not handle WebSocket compression intelligently.
See https://tools.ietf.org/html/rfc7692
Investigate whether we want this.
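For context, RFC 7692 negotiates permessage-deflate through the Sec-WebSocket-Extensions header during the handshake. A rough sketch of what the client-side offer could look like with net/http; this is illustrative only, not this library's implementation:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Client handshake request offering permessage-deflate (RFC 7692, section 7.1).
	req, err := http.NewRequest("GET", "http://example.com/ws", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Connection", "Upgrade")
	req.Header.Set("Upgrade", "websocket")
	req.Header.Set("Sec-WebSocket-Version", "13")
	// Offer the extension; the server accepts by echoing a compatible value back.
	req.Header.Set("Sec-WebSocket-Extensions", "permessage-deflate; client_max_window_bits")

	fmt.Println(req.Header.Get("Sec-WebSocket-Extensions"))
}
```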