Releases: uNetworking/uWebSockets
v20.3.0
Dedicated decompressors
Adds uWS::CompressOptions::DEDICATED_DECOMPRESSOR. This (decompressor) flag can be OR'ed with one (compressor) flag to easily create a complete compression preference, such as:
.compression = uWS::CompressOptions(uWS::DEDICATED_COMPRESSOR_4KB | uWS::DEDICATED_DECOMPRESSOR),
See #1347. This change is backwards compatible. More specific decompressors will be added with time.
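For context, here is a minimal sketch of the new flag in use, assuming the usual uWS::App WebSocket setup (the PerSocketData struct, port, and echo handler are illustrative only, not part of this release):

```cpp
#include "App.h"

/* Hypothetical empty per-socket user data, just for the sketch */
struct PerSocketData {};

int main() {
    uWS::App().ws<PerSocketData>("/*", {
        /* OR the new decompressor flag together with exactly one compressor flag */
        .compression = uWS::CompressOptions(uWS::DEDICATED_COMPRESSOR_4KB | uWS::DEDICATED_DECOMPRESSOR),
        .message = [](auto *ws, std::string_view message, uWS::OpCode opCode) {
            ws->send(message, opCode, true); /* echo back, compressed */
        }
    }).listen(9001, [](auto *token) {
        /* token is nullptr if the port could not be bound */
    }).run();
}
```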
v20.2.0
- Moves TopicTree from individual WebSocketContextData instances up to the single shared App.
- Fixes undefined behavior in the destructor of App (passes fuzzing build sanity checks).
- Fixes two fuzzing issues introduced in v20 (fuzzing should now be clean, but let's wait a week and find out).
v20.1.0
v20.0.0
Massively simplified & improved pub/sub
Pub/sub is now a lot more predictable and always guarantees strict ordering inside topics, across topics, and between WebSocket::send and WebSocket::publish / App::publish. Subscription and unsubscription are always guaranteed to be strictly ordered with respect to publish. Support for MQTT wildcards has been removed.
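As a minimal sketch of the new guarantee, assuming the standard uWS::App API (the route, topic name, and payloads are illustrative only):

```cpp
#include "App.h"

struct PerSocketData {};

int main() {
    uWS::App app;
    app.ws<PerSocketData>("/*", {
        .open = [](auto *ws) {
            /* subscribe is now strictly ordered with respect to publish */
            ws->subscribe("news");
        },
        .message = [&app](auto *ws, std::string_view message, uWS::OpCode opCode) {
            /* v20 guarantees this socket (a "news" subscriber) receives the
               broadcast below before the direct send, in exactly this order */
            app.publish("news", message, opCode);
            ws->send("direct", uWS::OpCode::TEXT);
        }
    }).listen(9001, [](auto *token) {
        /* token is nullptr if the port could not be bound */
    }).run();
}
```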
Backwards compatible
Unless you're relying on MQTT wildcards ('#', '+'), v20.0.0 is entirely backwards compatible with v19.9.0.
Motivation
When pub/sub was introduced in 2018, it was designed based on a few assumptions:
- SHARED_COMPRESSOR being preferred by customers, and customers being okay with many application-level messages wrapped in and delivered as one bigger WebSocket message.
  - Expectation: Customers being willing to optimize their application protocol to minimize compression overhead.
  - Reality: This is way too complicated for customers and not even backwards compatible with their existing protocols. Customers simply want things to work, and they heavily prefer DEDICATED_COMPRESSOR.
- Subscription and unsubscription to/from topics being "malleable" and not strictly ordered.
  - Expectation: Customers being fine with subscriptions and unsubscriptions being executed in as big of a batch as possible, with minor re-ordering of messages not being a problem.
  - Reality: Many apps depend on a strict ordering guarantee and break down if order is not guaranteed. This is why the nonStrict hack was added, which performs really badly in the worst case and is poorly understood.
- WebSocket::send and WebSocket::publish being two different streams, out of order with each other.
  - Expectation: There being a separate use for pub/sub, unrelated to send.
  - Reality: Customers expect WebSocket::publish followed by WebSocket::send to be delivered in order, and likewise the other way around.
- MQTT syntax being useful.
  - Expectation: Efficient use of a logarithmic tree of topics and wildcards.
  - Reality: Customers don't care.
v19.9.0
v19.8.0
Large messages & slow receivers
WebSockets are typically used for small message sending. In some cases you might end up with larger-than-ideal messages being sent, and these cases need to be handled efficiently, especially if receivers are slow and backpressure is building up.
- Only reduce backpressure in steps of 1/32nd of the backpressure itself. This hinders excessive reallocation/shifting of especially large backpressure and improves drainage performance.
- Use std::string::erase instead of std::string::substr. This alone is a 3x performance improvement for worst cases.
- Write directly to backpressure if sending a large message, or if already draining. This improves sending performance for large messages, as we avoid extra copying/allocations.
- Adds the ability to benchmark large message echoing with load_test; use the extra argument size_mb. This release is at least 1.5x the performance when echoing 100 MB messages.
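The drainage idea can be illustrated with a small sketch. This is only an illustration of the technique described above, not the library's actual internals:

```cpp
#include <string>

/* Illustrative sketch of the drainage strategy: track how much of the
 * buffer has already been written out, and only pay the O(n) shift once
 * the drained prefix exceeds 1/32nd of the total backpressure. */
struct Backpressure {
    std::string buffer;   /* pending bytes the socket has not yet accepted */
    size_t drained = 0;   /* bytes written to the socket but not yet erased */

    void markWritten(size_t written) {
        drained += written;
        if (drained > buffer.length() / 32) {
            /* erase shifts in place; substr would allocate a whole new copy */
            buffer.erase(0, drained);
            drained = 0;
        }
    }
};
```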
v18.24.0
v19.6.0
v19.5.0
Boost Asio
Adds support for seamless integration with Boost Asio.
Compile with WITH_ASIO=1 to integrate with an existing boost::asio::io_context on the same thread.
Caveat: the Loop must be run by uWS (Loop::run() or us_loop_run) even if the io_context is third-party.
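A minimal integration sketch, assuming a build with WITH_ASIO=1 and that uWS::Loop::get() accepts a pointer to an existing native loop (here the io_context) on its first call for the thread:

```cpp
#include <boost/asio.hpp>
#include "App.h"

int main() {
    /* Your pre-existing Asio context, shared with uWS on this thread */
    boost::asio::io_context io;

    /* Assumption: attach uWS to the existing io_context before creating the App */
    uWS::Loop::get(&io);

    uWS::App().get("/*", [](auto *res, auto *req) {
        res->end("Hello from uWS on Asio");
    }).listen(3000, [](auto *token) {
        /* token is nullptr if the port could not be bound */
    }).run(); /* per the caveat above: uWS must drive the loop, not io.run() */
}
```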
v19.4.0
Stampede tweaks
- Tweaks the TLS handshake queue to better cope with mass connections and mass disconnections.
- Adjusts the WebSocket shutdown timeout from a default of 120 seconds to roughly 4 seconds, making mass disconnections finish quicker.
Please report any issues or undesired side effects introduced in this release.