Allow stream reader cancel() method to return bytes from queue instead of discarding them. #1147
Sorry, intended for WICG/serial.
@domenic, can you reopen this issue? If there's action to take here then it should be done at the Streams API level. While it might be nice syntactically, you can already write:

```js
const finalReadPromise = reader.read();
await reader.cancel();
const finalRead = await finalReadPromise;
```

If there was any data left in the queue then it will be in `finalRead`.
This is indeed pretty interesting on the streams level. In 8a7d92b we made pending reads resolve with { value: undefined, done: true } on cancel. We did that because it was our belief that if you were canceling the read you were OK losing the memory. Canceling should be a relatively rare operation, so re-allocating each time seems OK.

Can you tell us more about your use case where you want to cancel the stream, you want the memory back, but you were unable to wait for the read to complete?
Would it? If there are queued bytes then shouldn't the read complete before the call to cancel()?
That's pretty clever! 😀 It should work in simple cases, although I'm not sure how well it would translate to more complex scenarios (e.g. pipe chains or cross-realm streams) where chunks are buffered in multiple streams. I don't know if we want to make this easier. Perhaps we could have a reader property indicating whether there are chunks available in the queue.
Correct, the specification requires a pending read to be fulfilled from the queue before cancellation takes effect. However, it will only pull one chunk. If multiple chunks were queued, the rest would still be discarded.
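The "only one chunk" point above can be demonstrated directly; a minimal sketch (runnable in Node 18+ with the global `ReadableStream`):

```javascript
async function oneChunkThenCancel() {
  // Three chunks sit in the queue before any read is issued.
  const rs = new ReadableStream({
    start(c) { c.enqueue("a"); c.enqueue("b"); c.enqueue("c"); },
  });
  const reader = rs.getReader();
  const firstRead = reader.read();   // fulfilled with "a" from the queue
  await reader.cancel();             // "b" and "c" are discarded
  const first = await firstRead;     // { value: "a", done: false }
  const after = await reader.read(); // { value: undefined, done: true }
  return { first, after };
}
```

Only the single in-flight read is satisfied; everything else in the queue is dropped by `cancel()`.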
I want exactly this behavior, but it does not work this way in current Chrome: `finalReadPromise` is immediately resolved with `{ value: undefined, done: true }` and nothing is read from the port at all.
I'd have to see a larger example, but are you sure that when this code runs the data has already been received and hasn't already been read? If the next sequence number hasn't been received yet then there's nothing in the queue, and the read won't complete until it arrives.

Backing up though, because maybe I don't understand what it is you are trying to accomplish at a higher level: why are you canceling the stream? The only reason to cancel a stream is (a) you are closing the port or (b) you want to discard any queued data and start over with whatever the device sends next.

From what you are saying about getting 1-2% data loss, it sounds like you are running your snippet in a loop and are losing 1-2% of the data you are expecting to receive. This is expected because, in addition to discarding data in the ReadableStream's queue, it also tells the operating system and hardware to flush their buffers, which means that if the data arrives at just the right time it won't be read and will instead be discarded.

There's no reason to run this kind of code in a loop, however. If you want to read continuously from the device, just use the same reader and keep calling read(). In essence, if you don't want to lose data then don't call cancel(). I'd like to understand why you are calling cancel() in the first place, because it is probably the wrong solution for the problem you are trying to solve.
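The continuous-read pattern recommended above can be sketched as a small helper. This is a sketch under the assumption that `readable` is any ReadableStream (e.g. `port.readable` from Web Serial) and `onChunk` is a caller-supplied callback:

```javascript
// Hold one reader for the life of the session and keep calling read();
// never call cancel() unless you actually mean to discard data.
async function readContinuously(readable, onChunk) {
  const reader = readable.getReader();
  try {
    for (;;) {
      const { value, done } = await reader.read();
      if (done) break;               // stream closed by the other side
      onChunk(value);                // process bytes; nothing is dropped
    }
  } finally {
    reader.releaseLock();            // free the stream without canceling
  }
}
```

Because the same reader stays locked to the stream, every chunk the source produces reaches `onChunk` exactly once.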
So we agree that this does not preserve any data in the buffer? Second, reader.cancel() is called to cancel the reader, NOT the stream itself. The whole stream has its own readable.cancel() to discard buffers.

This simple change, making the last outcome of the canceled read operation be the bytes read so far with done: true, would open more possibilities, e.g. writing simple, sequential async-style code, even with the well-known timed-out read. Otherwise, with all that modern JS (async, await, promises, stream APIs), to send and read a few bytes you need to implement a state machine or your own FIFO buffer. I literally used it this way in a loop.
But why design an API to lose data without any reason? Right after cancel(), the opened port starts to collect the next bytes received, as the spec says: https://wicg.github.io/serial/#dom-serialport-readable. The read loop pattern is just a callback disguised as a promise-based API. To do anything non-trivial this way you need to store and maintain at least the reader/writer plus global state or a FIFO.
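The FIFO workaround mentioned above can be done today without calling `cancel()`. The following is a hypothetical helper, not part of any spec: it wraps a reader so that a timed-out read simply leaves the in-flight `read()` pending, and the chunk it eventually produces is handed to the next call instead of being lost:

```javascript
const TIMEOUT = Symbol("timeout");   // sentinel so no real chunk can collide

class BufferedReader {
  constructor(reader) {
    this.reader = reader;
    this.inFlight = null;            // promise of an unconsumed read()
  }
  // Resolves with { value, done }, or with TIMEOUT after timeoutMs.
  // On timeout the underlying read() stays pending and is reused next call.
  async read(timeoutMs) {
    if (this.inFlight === null) this.inFlight = this.reader.read();
    const timer = new Promise(res => setTimeout(res, timeoutMs, TIMEOUT));
    const result = await Promise.race([this.inFlight, timer]);
    if (result === TIMEOUT) return TIMEOUT;  // data stays buffered in inFlight
    this.inFlight = null;            // consumed; next call issues a fresh read()
    return result;
  }
}
```

A production version would also clear the losing timer; this sketch only illustrates that a timed-out read need not discard data.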
I've reread the stream spec.
This is not correct. readable.cancel() is the same as readable.getReader().cancel(). If you want to release a given reader, use reader.releaseLock().
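The locking rules being corrected here can be checked directly; a minimal sketch (Node 18+, global `ReadableStream`) showing that while a reader holds the lock, `readable.cancel()` rejects, and `releaseLock()` hands the lock back without canceling:

```javascript
async function lockingRules() {
  const rs = new ReadableStream();
  const reader = rs.getReader();     // reader now holds the stream's lock
  let lockedError = null;
  try {
    await rs.cancel();               // rejects: the stream is locked
  } catch (e) {
    lockedError = e;                 // a TypeError per the Streams spec
  }
  reader.releaseLock();              // give up the lock without canceling
  await rs.cancel();                 // now cancels the stream itself
  return lockedError;
}
```

So `reader.cancel()` and `readable.cancel()` both cancel the stream; the only way to step away from a stream without canceling it is `releaseLock()`.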
https://streams.spec.whatwg.org/#rs-prototype
It seems to be a non-normative implementation detail rather than part of the specification. It is deliberately designed this way, so time to close. Maybe the desired behaviour will be implemented as an abortable read.
It's not a "non-normative implementation detail", it's a non-normative summary of the intentionally-designed normative algorithms elsewhere in the specification.
There are no benefits from returning { value: undefined, done: true } after cancel() and discarding already-received bytes. Returning { value: bytes_read_so_far, done: true } would work much better, allowing a timeout to be implemented while preserving data.

Also, cancel() implemented this way would be compatible with the current implementation: when done is true, readable will be unlocked, and when value is not undefined, the last chunk can be reliably processed. It's possibly the lowest-hanging fruit for many similar requests.
It would partially fulfill other requests, e.g. #1103 and whatwg/fetch#180.