**guides/advanced/modifying.md**

# Modifying the session

In the introductory tutorials we focused on forwarding the data back to the same peer. Usually, you want to connect with multiple peers, which means adding
more PeerConnections to the Elixir app, like in the diagram below.

```mermaid
%% (rest of the diagram elided in this excerpt)
```

>
> But what does that even mean?
> Each transceiver is responsible for sending and/or receiving a single track. When you call `PeerConnection.add_track`, we actually look for a free transceiver
> (that is, one that is not sending a track already) and use it, or create a new transceiver if we don't find anything suitable. If you are very sure
> that the remote peer added _N_ new video tracks, you can add _N_ video transceivers (using `PeerConnection.add_transceiver`) and begin the negotiation as
> the offerer. If you didn't add the transceivers, the tracks added by the remote peer (the answerer) would be ignored.
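
To make this concrete, here is a minimal sketch of adding transceivers up front before offering. It assumes a remote peer that will send two video tracks, and the `add_transceiver`/`create_offer` API as described in the Elixir WebRTC docs - verify the exact signatures against your version:

```elixir
# Hypothetical sketch: we expect the remote peer (the answerer) to add
# 2 new video tracks, so we add 2 video transceivers before creating the offer.
alias ExWebRTC.PeerConnection

{:ok, pc} = PeerConnection.start_link()

for _ <- 1..2 do
  {:ok, _transceiver} = PeerConnection.add_transceiver(pc, :video)
end

{:ok, offer} = PeerConnection.create_offer(pc)
:ok = PeerConnection.set_local_description(pc, offer)
# send the offer to the remote peer over your signaling channel
```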

Other than just forwarding, we would like to be able to use the media right in the Elixir app, e.g.
to use it as a machine learning model input, or to create a recording of a meeting.

In this tutorial, we are going to learn how to use received media as input for ML inference.

## From raw media to RTP

When the browser sends audio or video, it does the following things:

1. Captures the media from your peripheral devices, like a webcam or microphone.
2. Encodes the media, so it takes less space and uses less network bandwidth.
3. Packs it into one or more RTP packets, depending on the media chunk (e.g., video frame) size.
4. Sends it to the other peer using WebRTC.

We have to reverse these steps in order to be able to use the media:

1. We receive the media from WebRTC.
2. We unpack the encoded media from the RTP packets.
3. We decode the media to a raw format.
4. We use the media however we like.

We already know how to do step 1 from previous tutorials, and step 4 is completely up to the user, so let's go through steps 2 and 3 in the next sections.

> #### Codecs {: .info}
> A media codec is a program/technique used to encode/decode digital video and audio streams. Codecs also compress the media data,
> otherwise it would be too big to send over the network (the bitrate of raw 24-bit color depth, FullHD, 60 fps video is about 3 Gbit/s!).
>
> In WebRTC, you will most likely encounter the VP8, H264 or AV1 video codecs and the Opus audio codec. Codecs used during the session are negotiated in
> the SDP offer/answer exchange. You can tell what codec is carried in an RTP packet by inspecting its payload type (the `payload_type` field in the case of Elixir WebRTC).
> This value should correspond to one of the codecs included in the SDP offer/answer.
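
As a quick sanity check of the 3 Gbit/s figure above, the raw bitrate is just width × height × bits per pixel × frames per second:

```elixir
# Raw (uncompressed) bitrate of a FullHD, 24-bit color depth, 60 fps video stream
width = 1920
height = 1080
bits_per_pixel = 24
fps = 60

bitrate_gbps = width * height * bits_per_pixel * fps / 1.0e9
IO.puts("~#{Float.round(bitrate_gbps, 2)} Gbit/s")
# prints ~2.99 Gbit/s
```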

## Depayloading RTP

We refer to the process of getting the media payload out of RTP packets as _depayloading_. Usually, a single video frame is split into
multiple RTP packets, while for audio, each packet carries roughly 20 milliseconds of sound. Fortunately, you don't have to worry about this:
just use one of the depayloaders provided by Elixir WebRTC (see the `ExWebRTC.RTP.<codec>` submodules). For instance, when receiving VP8 RTP packets, we could depayload
the video by doing:

```elixir
def init(_) do
  # ...
  state = %{depayloader: ExWebRTC.RTP.VP8.Depayloader.new()}
  {:ok, state}
end

def handle_info({:ex_webrtc, _from, {:rtp, _track_id, nil, packet}}, state) do
  depayloader =
    case ExWebRTC.RTP.VP8.Depayloader.write(state.depayloader, packet) do
      {:ok, depayloader} ->
        depayloader

      {:ok, frame, depayloader} ->
        # we collected a whole frame (it is just a binary)!
        # we will learn what to do with it in a moment
        depayloader
    end

  {:noreply, %{state | depayloader: depayloader}}
end
```

Every time we collect a whole video frame out of a bunch of RTP packets, `VP8.Depayloader.write` returns it for further processing.

> #### Codec configuration {: .warning}
> By default, `ExWebRTC.PeerConnection` will use a set of default codecs when negotiating the connection. In that case, you have to either:
>
> * support depayloading/decoding for all of the negotiated codecs, or
> * force some specific set of codecs (or even a single codec) in the `PeerConnection` configuration.
>
> Of course, the second option is much simpler, but it increases the risk of failing the negotiation, as the other peer might not support your codec of choice.
> If you still want to do it the simple way, set the codecs in `PeerConnection.start_link`.
> This way, you will either always send/receive the VP8 video codec, or you won't be able to negotiate a video stream at all. At least you won't encounter
> unpleasant bugs in video decoding!
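
For illustration, forcing a single video codec might look like the sketch below. It assumes the `video_codecs` option and the `ExWebRTC.RTPCodecParameters` struct from the Elixir WebRTC documentation - check both against your version before relying on them:

```elixir
# Hypothetical sketch: restrict negotiation to VP8 only.
# The `video_codecs` option and `RTPCodecParameters` field names are assumed
# from the Elixir WebRTC docs.
alias ExWebRTC.{PeerConnection, RTPCodecParameters}

{:ok, pc} =
  PeerConnection.start_link(
    video_codecs: [
      %RTPCodecParameters{
        payload_type: 96,
        mime_type: "video/VP8",
        clock_rate: 90_000
      }
    ]
  )
```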
## Decoding the media to raw format

Before we use the video as an input to the machine learning model, we need to decode it into a raw format. Video decoding and encoding are very
complex and resource-heavy processes, so we don't provide anything for that in Elixir WebRTC, but you can use the `xav` library, a simple wrapper over `ffmpeg`,
to decode the VP8 video. Let's modify the snippet from the previous section to do so.

```elixir
def init(_) do
  # ...
  serving = nil  # set up your machine learning model here (e.g. using Bumblebee)
  state = %{
    depayloader: ExWebRTC.RTP.VP8.Depayloader.new(),
    decoder: Xav.Decoder.new(:vp8),
    serving: serving
  }
  {:ok, state}
end

def handle_info({:ex_webrtc, _from, {:rtp, _track_id, nil, packet}}, state) do
  depayloader =
    with {:ok, frame, depayloader} <- ExWebRTC.RTP.VP8.Depayloader.write(state.depayloader, packet),
         {:ok, raw_frame} <- Xav.Decoder.decode(state.decoder, frame) do
      # the raw frame is just a 3D matrix with the shape of resolution x colors
      # (e.g. 1920 x 1080 x 3 for a FullHD, RGB frame)
      # we can cast it to an Elixir Nx tensor and use it as the machine learning model input
      # machine learning itself is out of scope of this tutorial, but you probably want to check out Elixir Nx and friends
      tensor = Xav.Frame.to_nx(raw_frame)
      prediction = Nx.Serving.run(state.serving, tensor)
      # do something with the prediction

      depayloader
    else
      {:ok, depayloader} -> depayloader
      {:error, _err} -> state.depayloader  # handle the error
    end

  {:noreply, %{state | depayloader: depayloader}}
end
```

We decoded the video, used it as an input to the machine learning model, and got some kind of prediction - do whatever you want with it!

> #### Jitter buffer {: .warning}
> Do you recall that WebRTC uses UDP under the hood, and UDP does not ensure packet ordering? We could ignore this fact when forwarding the packets (as
> it was not our job to decode/play/save the media), but now out-of-order packets can seriously mess up the process of decoding.
> To remedy this issue, something called a _jitter buffer_ can be used. Its basic function
> is to delay/buffer incoming packets by some time, let's say 100 milliseconds, waiting for packets that might be late. Only if the packets do not arrive within the
> additional 100 milliseconds do we count them as lost. To learn more about jitter buffers, read [this article](https://bloggeek.me/webrtcglossary/jitter-buffer/).
>
> As of now, Elixir WebRTC does not provide a jitter buffer, so you either have to build something yourself or hope that such issues won't occur - but if anything
> is wrong with the decoded video, this might be the problem.
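
If you do roll your own, a toy reordering buffer could look like the sketch below. It is illustrative only: it ignores sequence-number wraparound at 65535 and the time-based flush a real jitter buffer needs.

```elixir
defmodule NaiveJitterBuffer do
  # Toy sketch: store packets by RTP sequence number and release them in order.
  # A real jitter buffer also flushes on a timer and handles seq-num wraparound.
  defstruct next_seq: nil, packets: %{}

  def new, do: %__MODULE__{}

  # Insert a packet; returns {packets_ready_in_order, updated_buffer}.
  def insert(%__MODULE__{next_seq: nil} = jb, seq, packet),
    do: insert(%{jb | next_seq: seq}, seq, packet)

  def insert(jb, seq, packet) do
    jb = %{jb | packets: Map.put(jb.packets, seq, packet)}
    pop_ready(jb, [])
  end

  defp pop_ready(jb, acc) do
    case Map.pop(jb.packets, jb.next_seq) do
      {nil, _} ->
        {Enum.reverse(acc), jb}

      {packet, rest} ->
        pop_ready(%{jb | packets: rest, next_seq: jb.next_seq + 1}, [packet | acc])
    end
  end
end

# usage: packet 11 arrives before packet 12, which arrives before the late packet 11
jb = NaiveJitterBuffer.new()
{_ready, jb} = NaiveJitterBuffer.insert(jb, 10, :a)   # {[:a], _} - in order, released
{_ready, jb} = NaiveJitterBuffer.insert(jb, 12, :c)   # {[], _} - held, waiting for 11
{_ready, _jb} = NaiveJitterBuffer.insert(jb, 11, :b)  # {[:b, :c], _} - both released
```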

This tutorial shows, more or less, what the [Recognizer](https://github.com/elixir-webrtc/apps/tree/master/recognizer) app does. Check it out, along with the other
example apps in the [apps](https://github.com/elixir-webrtc/apps) repository - it's a great reference on how to implement fully-fledged apps based on Elixir WebRTC.

**guides/introduction/forwarding.md**

The `packet` is an RTP packet. It contains the media data alongside some other useful information.

> RTP is a network protocol created for carrying real-time data (like media) and is used by WebRTC.
> It provides some useful features like:
>
> * sequence numbers: UDP (which is usually used by WebRTC) does not provide packet ordering, thus we need these to catch missing or out-of-order packets
> * timestamps: these can be used to correctly play the media back to the user (e.g. using the right framerate for the video)
> * payload type: thanks to this, combined with information in the SDP offer/answer, we can tell which codec is carried by this packet
>
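
As a quick illustration, you could log these header fields for every incoming packet. The field names below (`sequence_number`, `timestamp`, `payload_type`) are assumed from the `ex_rtp` packet struct - verify them against your version:

```elixir
# Hypothetical sketch: peek at the RTP header fields of incoming packets.
def handle_info({:ex_webrtc, _from, {:rtp, _track_id, nil, packet}}, state) do
  IO.inspect(
    %{
      seq: packet.sequence_number,
      ts: packet.timestamp,
      payload_type: packet.payload_type
    },
    label: "rtp header"
  )

  {:noreply, state}
end
```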

```mermaid
flowchart LR
  WB((Web Browser)) <-.-> PC
```

The only thing we have to implement is the `Forwarder` process. In practice, making it a `GenServer` is probably the
easiest option, and that's what we are going to do here. Let's combine the ideas from the previous section to write it.
We started by creating the PeerConnection and adding two tracks (one for audio and one for video).
Remember that these tracks will be used to *send* data to the web browser peer. Remote tracks (the ones we will set up on the JavaScript side, like in the previous tutorial)
will arrive as messages after the negotiation is completed.

> #### What are the tracks? {: .tip}
> In the context of Elixir WebRTC, a track is simply a _track id_, the _ids_ of the streams this track belongs to, and a _kind_ (audio/video).
> We can either add tracks to the PeerConnection (these tracks will be used to *send* data when calling `PeerConnection.send_rtp/4`, and
> for each one of these tracks, the remote peer should fire the `track` event)
>
> If you want to know more about transceivers, read the [Mastering Transceivers](https://hexdocs.pm/ex_webrtc/mastering_transceivers.html) guide.

Next, we need to take care of the offer/answer and ICE candidate exchange. This can be done the exact same way as in the previous
tutorial, so we won't get into it here.

After the negotiation, we can expect to receive messages with notifications about new remote tracks.
Let's handle these and match them with the tracks that we are going to send to.
We need to be careful not to send packets from the audio track on a video track by mistake!

```elixir
def handle_info({:ex_webrtc, _from, {:track, track}}, state) do
  state = put_in(state.in_tracks[track.id], track.kind)
  {:noreply, state}
end
```

We are ready to handle the incoming RTP packets!

```elixir
def handle_info({:ex_webrtc, _from, {:rtp, track_id, nil, packet}}, state) do
  kind = Map.fetch!(state.in_tracks, track_id)
  id = Map.fetch!(state.out_tracks, kind)
  # forward the packet on the matching outgoing track
  PeerConnection.send_rtp(state.pc, id, packet)
  {:noreply, state}
end
```

> change between two tracks, the payload types are dynamically assigned and may differ between RTP sessions), and some RTP header extensions. All of that is
> done by Elixir WebRTC behind the scenes, but be aware - it is not as simple as forwarding the same piece of data!

Lastly, let's take care of the client-side code. It's nearly identical to what we have written in the previous tutorial,
except for the fact that we need to handle the tracks added by Elixir's PeerConnection.

**guides/introduction/intro.md**

In general, all of the use cases come down to getting media from one peer to another. In the case of Elixir WebRTC, one of the peers is usually a server,
like your Phoenix app (although it doesn't have to be - there's no concept of server/client in WebRTC, so you might as well connect two browsers or two Elixir peers).

This is what the next tutorials will focus on - we will try to get media from a web browser to a simple Elixir app.