Support for connecting a WebRTC audio track to a Unity AudioSource #92
Yes, this is not implemented, as pointed out in the documentation: https://microsoft.github.io/MixedReality-WebRTC/manual/unity-remoteaudiosource.html

Currently the audio output is managed internally in C++ by the WebRTC code, which sends it directly to the audio device for playback; Unity is never involved. Implementing what you need, which is a perfectly valid use case that we should support, requires disabling the automatic audio output and instead using a callback mechanism like video, then plugging that somehow into Unity, likely via an […]
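For illustration, here is a minimal sketch of the Unity-side plumbing such a callback could feed, assuming the native layer can be changed to deliver raw samples instead of playing them. The frame callback it would be wired to (an AudioFrameReady-style event mirroring the video path) is hypothetical, not the library's current API; only the buffering pattern is shown.

```csharp
using System.Collections.Generic;

// Thread-safe sample buffer bridging a WebRTC audio-frame callback,
// which would fire on a native thread, and Unity's audio thread.
public class AudioFrameBuffer
{
    private readonly Queue<float> _samples = new Queue<float>();
    private readonly object _lock = new object();

    // Producer side: called from the (hypothetical) frame callback with
    // interleaved float samples.
    public void Push(float[] frame)
    {
        lock (_lock)
        {
            foreach (float s in frame) _samples.Enqueue(s);
        }
    }

    // Consumer side: called from Unity's audio thread; pads with silence
    // when the network hasn't delivered enough samples yet.
    public void Pop(float[] destination)
    {
        lock (_lock)
        {
            for (int i = 0; i < destination.Length; i++)
            {
                destination[i] = _samples.Count > 0 ? _samples.Dequeue() : 0f;
            }
        }
    }
}
```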
Thanks for the info. Is it on your roadmap to address this soon? :)
FWIW, you can disable automatic rendering of audio tracks in UWP by editing api.cpp:189. For desktop I haven't confirmed this yet, but it seems to be the null AudioDeviceModule passed to webrtc::CreatePeerConnectionFactory which, if null, gets a default module created much later in WebRtcVoiceEngine::Init(). This still leaves the task of hooking up the frame event, of course.
Linking #99
From #99 the RemoteAudioSource component can now play to a sibling AudioSource component. The latency is much, much better than the previous AudioClip version, thanks! And it shouldn't be too difficult to move the resampling code to a different thread if needed (currently it happens lazily on the Unity audio thread).

There are still some TODOs:

- Experimentally, Unity will only call OnAudioFilterRead if there is a sibling (Unity) AudioSource component (see the sketch below). Is checking for the existence of the sibling (say, on Play()) sufficient to toggle between direct and spatial audio? Or should we have an additional flag to explicitly state the user's intention? We could give a sensible error message if the flag doesn't match the existence of the AudioSource.
- The original issue of WebRTC sending audio directly to the speakers is still present. Currently there is a short echo as the audio is played twice.
- Will we want to control direct/spatial audio per track or per connection?
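To make the sibling-AudioSource point concrete, here is a sketch of how the OnAudioFilterRead path could look, reusing the hypothetical AudioFrameBuffer from the earlier sketch. This illustrates the mechanism described above, not the actual RemoteAudioSource implementation, and it assumes resampling to Unity's output rate has already happened upstream.

```csharp
using UnityEngine;

// Feeds buffered remote audio through Unity's audio pipeline so that the
// sibling AudioSource's spatialization settings apply to it.
public class SpatialRemoteAudio : MonoBehaviour
{
    // Filled from the WebRTC frame callback (see AudioFrameBuffer above);
    // samples are assumed already resampled to Unity's output rate.
    public readonly AudioFrameBuffer Buffer = new AudioFrameBuffer();

    void Start()
    {
        // The sibling-AudioSource check from the TODO list: Unity will not
        // call OnAudioFilterRead at all without one, so fail loudly here.
        if (GetComponent<AudioSource>() == null)
        {
            Debug.LogError("SpatialRemoteAudio needs a sibling AudioSource.");
        }
    }

    // Runs on Unity's audio thread. 'data' holds interleaved samples for
    // 'channels' channels; underruns are padded with silence by Pop().
    void OnAudioFilterRead(float[] data, int channels)
    {
        Buffer.Pop(data);
    }
}
```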
The backbone for this has been implemented by #99. Open issues:
@fibann I'm curious about the current state. Is it possible yet to use spatial audio?
The docs are out of date.
Logged as #683 |
@djee-ms Hello! |
Is there any way to switch that off and just use the Unity sound system for spatial audio?
Can you help me solve a mystery? Looking through the Unity sample project, I'm not seeing how the audio track is getting played. The scene doesn't have a Unity AudioSource MonoBehaviour, and the code doesn't do anything with audio frames. How is audio getting to the speakers?

Context: In my project, I need to pipe the WebRTC audio track to a spatial audio emitter so that the audio is played at the media player's location in the scene. How can I pipe the audio track to a Unity AudioSource?

Thanks.
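Based on the #99 behavior described earlier in the thread (RemoteAudioSource plays into a sibling AudioSource), a minimal setup sketch follows. RemoteAudioSource is the component named above, and the exact API may differ in current releases.

```csharp
using UnityEngine;

// Attaches a spatialized Unity AudioSource next to the WebRTC audio
// component so the remote track is rendered at this object's position.
public class RemoteAudioSetup : MonoBehaviour
{
    void Awake()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.spatialBlend = 1.0f; // fully 3D: playback depends on position

        // RemoteAudioSource is the Unity component added in #99; it renders
        // the remote track through the sibling AudioSource.
        gameObject.AddComponent<RemoteAudioSource>();
    }
}
```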