Body.arrayBuffer([begin, end]) #554
How is that better than getting the body as a blob, and then slicing and dicing that in whatever way you want? And would calling arrayBuffer like that still consume the whole body, meaning effectively you should have done a range request, since you're never going to be able to read the rest of the body?
Are you trying to request and only receive a response having
I've updated my code snippet to better reflect what I'm experiencing. A huge file is in my cache (fetched with "Background Fetch") and I would like to return only a subset of it on a fetch event instead of bringing the entire file into memory. Both code snippets are equivalent as far as I can tell in my browser in terms of memory usage:

```js
const response = await caches.match('https://example.com/huge-file.mp4');
const data = await response.arrayBuffer();
const slicedData = data.slice(0, 1024);
```

```js
const response = await caches.match('https://example.com/huge-file.mp4');
const blob = await response.blob();
const slicedData = blob.slice(0, 1024);
```
The blob() case is more efficient by far in terms of what the browser can and does optimize. Speaking for Gecko, the blob() call only creates a handle to the underlying file on disk. No reads of the file's contents need to be performed before returning the blob. This handle can be given to the new Response and passed across processes, allowing the target process to perform just the 1024-byte read directly, without needing to involve the ServiceWorker or its global.

In contrast, the arrayBuffer() call will result in the entirety of "huge-file.mp4" being read from disk and exposed to the ServiceWorker and its global. The read needs to complete before the new Response can be created and returned, and its contents may then need to be streamed between processes (unless some kind of underlying shared memory/copy-on-write thing is done, which I'm pretty confident Gecko will not do at the current time).
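The handle-vs-read distinction described above is easy to observe with today's API. A minimal sketch (assuming Node 18+, where `Blob` is global; the same calls exist in a service worker) showing that `Blob.slice()` only creates a view, and that reading the slice materializes just those bytes:

```javascript
// Slicing a Blob is a cheap handle operation; content is only read when the
// slice itself is consumed. The 10 MiB buffer stands in for a "huge file".
async function main() {
  const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // 10 MiB body
  const slice = blob.slice(0, 1024);                         // no read yet
  const bytes = new Uint8Array(await slice.arrayBuffer());   // reads 1024 bytes
  console.log(slice.size, bytes.length); // 1024 1024
}
main();
```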
I think w3c/ServiceWorker#913 is the solution here.
I have little if any experience using caches, and I'm not sure I'm gathering the requirement correctly. If you are trying to get only a range of time slices from a media resource already accessible in the browser cache, you could use a Media Fragment URI at an

Alternatively, request the resource as

An alternative approach to achieve the requirement is to use a media fragment concatenated to At the initial request, create a

When we want to play specific time slices from the media, for example from 30 seconds to 50 seconds, use the appropriate media fragment identifier specifying the range of media to play, which references the original
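The media-fragment suggestion above can be sketched as a small helper. `mediaFragment()` is a hypothetical name, not a platform API; the `#t=start,end` syntax comes from the W3C Media Fragments URI specification and asks the browser to play only that time range:

```javascript
// Build a Media Fragment URI restricting playback to [startSeconds, endSeconds).
// mediaFragment() is a hypothetical helper for illustration only.
function mediaFragment(url, startSeconds, endSeconds) {
  return `${url}#t=${startSeconds},${endSeconds}`;
}

// In a page you would point a media element at the fragment URL, e.g.
//   videoEl.src = mediaFragment('https://example.com/huge-file.mp4', 30, 50);
console.log(mediaFragment('https://example.com/huge-file.mp4', 30, 50));
// → https://example.com/huge-file.mp4#t=30,50
```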
#556 - for standardising Mozilla's current behaviour with request/response blobs.
@beaufortfrancois Is
An array buffer of audio data having
Curious what exactly you are trying to achieve within the application?
This is probably not what you are looking for. The approach requests the resource once, then creates
You can then create an
|
@beaufortfrancois Tried to set
The smallest value passed to
but rather flashes the
Was not able to produce media playback with a
If the requirement is to play media from cache while the request is being processed, you can use
Thank you so much @guest271314 for the explanation. |
Blob seems to provide the required API, although Firefox's implementation could be better. |
It would be very practical if the `arrayBuffer()` method could take `begin` and `end` optional parameters, in order to give the ability to reduce memory usage for cache responses whose content is "huge" but can still be useful in chunks (video and audio).

R: @jakearchibald
FYI @paullewis
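For reference, the behaviour the issue asks for can be approximated today via `blob()`. `readRange()` is a hypothetical helper (the two-argument `arrayBuffer(begin, end)` does not exist; it is the proposal under discussion), and `Response` is assumed global (browsers, or Node 18+):

```javascript
// Approximate the proposed arrayBuffer(begin, end) with the existing Blob API:
// take a handle to the body, slice it, and read only the sliced range.
async function readRange(response, begin, end) {
  const blob = await response.blob();          // handle; no full read in Gecko
  return blob.slice(begin, end).arrayBuffer(); // reads only bytes [begin, end)
}

async function main() {
  const res = new Response(new Uint8Array([1, 2, 3, 4, 5]));
  const buf = await readRange(res, 1, 4);
  console.log(Array.from(new Uint8Array(buf))); // [ 2, 3, 4 ]
}
main();
```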