Allow request/response.blob() to resolve before reading the full stream #556
Comments
FWIW, while we do this optimization for blobs in IDB/etc, I don't think we do it for Cache API in gecko yet.
I guess there's less need for this if no one does it already. w3c/ServiceWorker#913 is maybe still the best fix.
The problem with the blob optimization here is that it has different semantics to a ReadableStream. A blob you can random-access read in any order, etc. What if we just added some kind of "skip" operation to ReadableStream? This would let you efficiently seek forwards to the part you want without necessarily loading it all into memory.
Although upon seeing this issue initially I was sympathetic, thinking that the spec should not prohibit interesting optimizations, now I am not so sure. It really comes down to one question. I see two possibilities:
If we say that having a Blob/a fulfilled
I feel like we discussed this before. I am amenable. IIRC, last time we talked, the idea was that the default implementation would throw away data, but the underlying byte source could provide a specialized skip operation to be more efficient.
Yeah, I think optimising blobs here is pretty complicated; I was only driving it when I thought it was something Firefox already did. The right solution is fixing range requests, and maybe a skip/advance method on streams.
It would be nice to expose a hook to JS ReadableStreams as well. They may be able to do something smart depending on what their underlying source really is. If it's computed, they can just compute ahead, etc.
Yeah, that's definitely what I meant; a new method you pass to the constructor. |
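The "read and discard" default being discussed might be sketched roughly like this. This is a hypothetical helper, not spec text; the name `skipBytes` and the chunk handling are assumptions for illustration, assuming a byte-chunk reader:

```javascript
// Hypothetical default skip(): advance a reader by reading and discarding
// bytes. A seek-capable underlying source (file- or cache-backed) could
// override this with a real seek instead of consuming data.
async function skipBytes(reader, n) {
  while (n > 0) {
    const { value, done } = await reader.read();
    if (done) return null;        // stream ended before n bytes were skipped
    if (value.byteLength > n) {
      return value.subarray(n);   // overshoot: hand back the unread tail
    }
    n -= value.byteLength;        // whole chunk discarded
  }
  return new Uint8Array(0);       // skipped exactly n bytes
}
```

An underlying byte source backed by a file or the Cache API could instead jump straight to the offset, which is the efficiency the comments above are after.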
Apologies for the confusion. I oversimplified and oversold the awesome powers of Blobs in that issue. Gecko only ever hands out Blob instances when the entire contents are available. There are currently no plans to speculatively return Blobs before all the data is received. Having said that, there is the edge-case w3c/FileAPI#47 wherein Gecko will throw an error on reads if the backing file selected from
Clarifying Gecko optimizations: As @wanderview points out, Gecko's Blob optimizations do not currently apply to Response.blob(). Its implementation is naive and treats all responses like they came over the network, consuming them in their entirety. Its returned Blob will be file-backed if it was large enough to spill to disk (and privacy/security settings allow), but it will be a new file on disk that is distinct from the one stored by the Cache API.
Having said that, we're not far from being able to implement such an optimization, but it definitely will not return the Blob until the response has been received in its entirety (making them fully-written "incumbent records" instead of in-progress "fetching records").
A ReadableStream-based solution sounds good.
@jakearchibald Was able to set When
When a number greater than
http://plnkr.co/edit/vci20DGSjX1fAjrHOcOY?p=preview
Are you aware of the reason for this result?
Seems ok for this to be an implementation detail. |
From #554 (comment) by @asutherland:
This is against the current spec, which requires reading the full stream, but seems like a nice optimisation.
This should be limited to bodies with a known size, which means items with a Content-Length header, or items backed by disk (cache API). The blob should enter an errored state if the body doesn't eventually match the size of the blob.
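A rough sketch of that errored-state rule, under the assumption that an implementation buffers chunks while comparing against the declared size. The function name and error messages here are illustrative, not spec text:

```javascript
// Hypothetical sketch: consume a body while checking it against a declared
// size (e.g. from a Content-Length header). A mismatch is the condition
// under which an early-resolved blob would move to an errored state.
async function collectWithExpectedSize(stream, expectedSize) {
  const reader = stream.getReader();
  const chunks = [];
  let received = 0;
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    received += value.byteLength;
    if (received > expectedSize) {
      throw new Error('body longer than declared size'); // would error the blob
    }
    chunks.push(value);
  }
  if (received !== expectedSize) {
    throw new Error('body shorter than declared size'); // would error the blob
  }
  return new Blob(chunks);
}
```

The point of the early-resolve optimisation is that the Blob could be handed out before this loop finishes; the checks above are what would retroactively error it if the body never reaches, or exceeds, the promised size.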