Expose RTCEncodedAudioFrame interface in AudioWorklets #226
Comments
In typical WebRTC applications, there is a thread for audio capture/rendering and there are threads dedicated to networking and media handling. I am not sure that allowing encoding/networking to be done in an AudioWorklet is a good idea.
For video, just no. Decoding audio on real-time threads has been seen in very specific scenarios, and can be done safely if the decoder is real-time safe, etc., all the usual stuff. Encoding, I've never seen it, and I don't really know how useful it would be or what it would bring. The Web Codecs API, however, won't work well (or at all) in an AudioWorklet.

I side with @youennf on this. Communicating with an AudioWorklet … Once we have apps for which the limiting factor is the packetization latency or something in that area, we can revisit.
The RTCRtpScriptTransformer.readable is transferable, so it could be posted to a worklet within the current shape. The former use case @youennf mentioned - decoding+rendering audio - is indeed the one I'm interested in getting on a worklet. IIUC, libwebrtc does its audio decoding on the real-time thread, just before rendering, so that concept isn't all that wild.
Transferring the ReadableStream to the worklet would mean frames could be delivered directly there. Requiring JS work to be done elsewhere first would necessitate visiting another JS thread, scheduling an event there, etc., so roughly double the overhead, plus the cost of allocating the intermediate objects to be re-transferred. I can see if I can get some more concrete numbers, but there is clearly additional work needed from app+UA here which could be skipped with this.
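As a hedged sketch of the hand-off described above, assuming the Exposed=Worklet change this issue proposes: the page transfers the encoded-frame ReadableStream into the worklet's realm instead of reading frames on a DedicatedWorker and re-posting them one by one. A plain MessageChannel stands in for an AudioWorkletNode's port, and an empty ReadableStream stands in for transformer.readable; the message shape is hypothetical.

```javascript
// Sketch, assuming RTCEncodedAudioFrame becomes exposed in worklets:
// transfer the encoded-frame ReadableStream directly to the worklet
// rather than copying each frame through an intermediate worker.
function sendFramesToWorklet(readable, workletPort) {
  // ReadableStream is transferable, so frames will be delivered directly
  // to the receiving realm with no intermediate copy or re-transfer.
  workletPort.postMessage({ type: 'encoded-frames', readable }, [readable]);
  // The transfer locks the original stream; this realm can no longer read it.
  return readable.locked;
}

const { port1, port2 } = new MessageChannel(); // stand-in for a worklet port
const frames = new ReadableStream();           // stand-in for transformer.readable
console.log(sendFramesToWorklet(frames, port1)); // true: stream handed off
port1.close();
port2.close();
```

The point of the sketch is that the hand-off is a move, not a copy: after postMessage, the sending realm's stream is locked and all subsequent frames flow to the receiving realm.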
This produces a "readable side in another realm", where the original realm feeding that readable is the dedicated worker provided by the webpage, at least in current implementations of that surface.
Can't the app just transfer the frame.data ArrayBuffer instead, i.e. move it rather than copy it? It'd be interesting to see the numbers.
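A minimal sketch of the alternative suggested above: rather than transferring the whole stream, transfer each frame's backing ArrayBuffer. Transferring moves the memory instead of copying it and detaches the buffer in the sending realm. The function name and message shape are hypothetical, and a plain MessageChannel stands in for a worklet's port.

```javascript
// Sketch (stand-in names): forward an encoded frame's payload by
// transferring its backing ArrayBuffer instead of copying it.
function forwardFrameData(frame, port) {
  const buf = frame.data; // frame.data is an ArrayBuffer in Encoded Transform
  port.postMessage({ payload: buf, timestamp: frame.timestamp }, [buf]);
  // After postMessage the buffer is detached in this realm: byteLength is 0.
  return buf.byteLength;
}

// Stand-in frame object; a real RTCEncodedAudioFrame would come from an
// RTCRtpScriptTransformer's readable.
const { port1, port2 } = new MessageChannel();
const fakeFrame = { data: new ArrayBuffer(160), timestamp: 0 };
console.log(forwardFrameData(fakeFrame, port1)); // 0: payload moved, not copied
port1.close();
port2.close();
```

This still requires visiting the intermediate JS thread per frame, which is the per-frame scheduling overhead the earlier comment is concerned about; the transfer only avoids the payload copy.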
This sounds like whatwg/streams#1124.
I wasn't aware. Is this a special case for …?
It's an implementation concern; script doesn't know about it.
Would you be open to just reframing the issue to exposing …?
If we want to do decoding in real-time threads, in the …
I don't think this should necessarily be tied to WebCodecs. While they can be useful in some cases, they are not going to cover all use cases, or experimental codec work that is inherently implemented in JS/WASM.
If you're not using Web Codecs, there's no benefit to exposing …
Instead of transferring streams to worklets, the alternative would be to let the script transform take a worklet instead of a worker.
@aboba Can we add this to the agenda for next week's interim? |
This issue was discussed at the WebRTC Interim on 21 May 2024 (Issue 226: Expose RTCEncodedAudioFrame interface in Worklets).
Working with encoded frames from worklets, particularly RTCEncodedAudioFrames from AudioWorklets, would be very useful for apps, allowing them to choose the best execution environment for encoded media processing, beyond just Window and DedicatedWorker.
ReadableStream and WritableStream already have Exposed=Worklet, so transferring the streams of encoded frames would make sense and would allow more performant implementations than, e.g., requiring apps to copy data & metadata in a DedicatedWorker before going to the worklet / after returning from it.

I propose we add Worklet to the Exposed lists for RTCEncodedVideoFrame and RTCEncodedAudioFrame, and likely follow up with similar changes for the interfaces in the webcodecs spec.

CC @alvestrand @aboba @guidou @youennf