Datagrams are the lowest-level building blocks we expose. But are they as performant as streams?
From #105: "imagine implementing QUIC streams (or TCP) on top of DatagramTransport."
If someone wants to send megabytes of data over datagrams, using their own home-rolled framing scheme and a back-channel for dealing with packet loss, can we support that?
If so, the examples on sending datagrams seem lacking here: they don't illustrate the necessary chunking of large data into `transport.maxDatagramSize` pieces (with some custom framing needed, obviously), or piping to datagrams using e.g. a pull-based ReadableStream to send at the maximum rate the user agent can sustain.
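A sketch of what such an example might look like (assumptions: `transport.datagrams.writable` and a max-datagram-size attribute per the WebTransport spec; the 4-byte sequence-number header is purely illustrative custom framing, not anything the API provides). A pull-based ReadableStream produces one datagram per pull, so backpressure from the datagram writer paces production:

```javascript
const HEADER_BYTES = 4; // uint32 sequence number (illustrative custom framing)

// Chunk a buffer into datagram-sized pieces, each prefixed with a
// big-endian sequence number, exposed as a pull-based ReadableStream.
function chunkToDatagrams(buffer, maxDatagramSize) {
  const payloadSize = maxDatagramSize - HEADER_BYTES;
  const bytes = new Uint8Array(buffer);
  let offset = 0;
  let seq = 0;
  return new ReadableStream({
    pull(controller) {
      if (offset >= bytes.length) {
        controller.close();
        return;
      }
      const payload = bytes.subarray(offset, offset + payloadSize);
      const datagram = new Uint8Array(HEADER_BYTES + payload.length);
      new DataView(datagram.buffer).setUint32(0, seq++);
      datagram.set(payload, HEADER_BYTES);
      offset += payload.length;
      controller.enqueue(datagram);
    },
  });
}

// Usage, in a page with an open WebTransport session (hypothetical
// `transport` and `file`):
//   const max = transport.datagrams.maxDatagramSize;
//   await chunkToDatagrams(await file.arrayBuffer(), max)
//     .pipeTo(transport.datagrams.writable);
```

`pipeTo` only resolves once every chunk has been accepted by the writable, which is what lets the user agent, rather than the page, set the send rate.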
I should add that this is still much lower performance than a mock file upload over a bidirectionalStream with 16k chunks, which took 0.2 seconds. But there are likely JS reasons for that, e.g. I should probably use something more performant than blob.slice.
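One way to avoid the blob.slice overhead (a sketch, assuming reading the whole Blob into memory once is acceptable for these file sizes): repeated `blob.slice(...).arrayBuffer()` pays an async round trip per chunk, whereas a single read plus zero-copy `Uint8Array.subarray` views does not.

```javascript
// Read the Blob once, then hand out zero-copy views of it.
async function* chunksOf(blob, chunkSize) {
  const bytes = new Uint8Array(await blob.arrayBuffer()); // one async read
  for (let offset = 0; offset < bytes.length; offset += chunkSize) {
    yield bytes.subarray(offset, offset + chunkSize); // view, no copy
  }
}

// Usage with a hypothetical stream writer:
//   for await (const chunk of chunksOf(file, 16384)) {
//     await writer.write(chunk);
//   }
```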
"If someone wants to send megabytes of data over datagrams, using their own home-rolled framing scheme and a back-channel for dealing with packet loss, can we support that?"
[BA] I hope so, because realtime communications use cases will depend on this (e.g. game streaming can consume upwards of 20 megabits/second).
jan-ivar changed the title from "Use case: stream large data over datagrams your own way with high throughput" to "Add example showing streaming of large data over datagrams performantly" on Dec 20, 2023.
I did a comparison in Canary of a mock file upload over a bidirectionalStream using a tiny chunk size matching datagrams vs. a mock file upload over datagrams without any kind of framing or packet loss recovery.
Uploading a 4.4 megabyte file took 2.3 seconds with the former and 17 seconds with the latter. 🤔 Should we add this use case to ensure this will be performant? Or do datagrams have some inherent performance limitation?
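For completeness, a receive-side sketch for such a scheme (assumptions: datagrams carry a 4-byte big-endian sequence-number header, purely illustrative framing; `transport.datagrams.readable` per the WebTransport spec). It collects out-of-order datagrams by sequence number; real code would also track gaps and request retransmission over a back-channel stream.

```javascript
// Split an illustrative framed datagram into its sequence number and payload.
function parseDatagram(datagram) {
  const view = new DataView(datagram.buffer, datagram.byteOffset);
  return { seq: view.getUint32(0), payload: datagram.subarray(4) };
}

// Read datagrams until expectedCount distinct sequence numbers arrive,
// then return the payloads in sequence order.
async function receiveAll(readable, expectedCount) {
  const bySeq = new Map();
  const reader = readable.getReader();
  while (bySeq.size < expectedCount) {
    const { value, done } = await reader.read();
    if (done) break;
    const { seq, payload } = parseDatagram(value);
    bySeq.set(seq, payload); // duplicates overwrite harmlessly
  }
  reader.releaseLock();
  // Reassemble in order; missing sequence numbers show up as gaps here.
  return [...bySeq.entries()].sort((a, b) => a[0] - b[0]).map(([, p]) => p);
}

// Usage (hypothetical): receiveAll(transport.datagrams.readable, count)
```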