Bad restriction: RCV Buffer <= Flow control window #700
This is probably the source of the restriction.

I personally think that the receiver buffer should never have been used for latency compensation. The original UDT, which supported only file transmission, treated the receiver buffer as an intermediate stage between the network and the application: the application could let the buffer choke (when it was not reading the data), the overflowing receiver buffer could stop the transmission (while still sending ACKs periodically), and when the application finally read the data from the buffer, the freed space allowed the transmission to continue. This method cannot work properly in live mode at all. In live mode the data are sent at the appropriate speed and this process cannot be paused; a swelling buffer has to be handled completely differently. All incoming data must be received and placed in the receiver buffer.

Latency compensation should use a separate buffer that grows independently and drops data by itself when the application does not read them. The latency buffer, unlike the socket's receiver buffer, should be allowed to grow, and its growth should be controlled differently, mainly by the maximum bitrate allowed for reception and the latency. These should determine the initial size, with some spare room up to which the buffer may grow. Since the receiver buffer is to be reimplemented anyway, this is an opportunity to make the latency buffer separate, although using "units" from the same pool as the receiver buffer. It also gives an opportunity to solve the application-pause problem differently: when the buffer grows to its maximum allowed size, packets at the head of the buffer should be dropped to make room for the new ones. In the current implementation there is no good solution for this; it causes a discrepancy that cannot be repaired, so the only possible solution is to break the connection.
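To illustrate the drop-at-head behaviour of the separate latency buffer described above, here is a minimal C++ sketch; the class and all names in it are hypothetical and not part of SRT or UDT:

#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical latency buffer: every incoming packet is stored, and when the
// configured capacity is exceeded (e.g. the application stopped reading),
// the oldest packets are dropped to make room for new ones instead of
// choking the sender.
class LatencyBuffer
{
public:
    explicit LatencyBuffer(size_t maxPackets): m_maxPackets(maxPackets) {}

    void push(std::vector<char> packet)
    {
        m_queue.push_back(std::move(packet));
        while (m_queue.size() > m_maxPackets)
            m_queue.pop_front(); // drop at the head
    }

private:
    std::deque<std::vector<char>> m_queue;
    size_t m_maxPackets;
};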
SRT has a restriction that the receiver buffer must not be greater than FC size:
if (m_iRcvBufSize > m_iFlightFlagSize) { m_iRcvBufSize = m_iFlightFlagSize; }
At the same time, the TSBPD algorithm buffers received packets in the receiver’s buffer until it is time to deliver them further.
So the condition does not make sense.
It should rather be like:
Rcv_buf >= fc_window*packet_size + (latency*bitrate)
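As a worked example of this condition, here is a small sketch (the helper name is illustrative; it assumes the bitrate is given in bits per second and the 1472-byte maximum payload used later in this issue):

#include <cstddef>

// Hypothetical helper: minimum receiver buffer (bytes) for the proposed condition.
size_t min_rcv_buf_bytes(size_t fc_window, size_t payload_size,
                         double latency_sec, double bitrate_bps)
{
    return fc_window * payload_size
         + static_cast<size_t>(latency_sec * bitrate_bps / 8.0);
}

// Example (assumed values): fc_window = 25600, payload = 1472,
// latency = 0.12 s, bitrate = 1.5 Gbps:
// 25600 * 1472 + 0.12 * 1.5e9 / 8 = 37,683,200 + 22,500,000 ≈ 60 MB.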
The number of packets in flight is also controlled by the available space in the receiver buffer, reported back by the receiver.
On top of that, File CC controls the congestion window.
So for file mode this probably does not hurt much, but it still does not look reasonable.
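For reference, the sender effectively limits the number of unacknowledged packets by the smaller of the two windows. A rough sketch of that check (illustrative names, not the actual SRT code):

#include <algorithm>

// Sketch: sending is allowed only while the number of unacknowledged packets
// stays below both the flow window (receiver buffer space reported in ACKs)
// and the congestion window maintained by the congestion control module.
bool may_send(int pkts_in_flight, int flow_window, double congestion_window)
{
    const double limit = std::min(static_cast<double>(flow_window), congestion_window);
    return pkts_in_flight < limit;
}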
Buffer Sizes Configuration
The default receiver buffer size is 8192 × (1500 − 28) = 12,058,624 bytes, or approximately 96 Mbits. The default flow control window size is 25600 packets, or approximately 300 Mbits.
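These defaults can be re-derived as follows (a small sketch using the constants quoted above):

// Recomputing the defaults quoted above.
const int payload = 1500 - 28;                           // 1472-byte maximum payload
const long long rcv_buf_bytes = 8192LL * payload;        // 12,058,624 bytes ≈ 96 Mbits
const long long fc_window_bits = 25600LL * payload * 8;  // ≈ 301.5 Mbits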
The target number of packets in flight (FC window) should be (assuming max payload size):
FC = bps × RTTsec / 8 / (1500 - 28).
For example, for a 1.5 Gbps link with 150 ms RTT:
FC = 1500 × 10^6 × 0.15 / 8 / 1472 = 19106 packets (or 225 Mbits).
For a 2.0 Gbps link with 150 ms RTT:
FC = 2 × 10^9 × 0.15 / 8 / 1472 = 25475 packets (or 300 Mbits).
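The same formula as a small C++ sketch (the function name is illustrative; the fractional part is truncated, which matches the numbers above):

// Target FC window (in packets) for a given bitrate and RTT.
long fc_window_packets(double bitrate_bps, double rtt_sec)
{
    const int payload = 1500 - 28; // 1472-byte maximum payload
    return static_cast<long>(bitrate_bps * rtt_sec / 8.0 / payload);
}

// fc_window_packets(1.5e9, 0.150) == 19106
// fc_window_packets(2.0e9, 0.150) == 25475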