
Bad restriction: RCV Buffer <= Flow control window #700

Open
maxsharabayko opened this issue May 22, 2019 · 2 comments
Labels
[core] Area: Changes in SRT library core Priority: Low Type: Bug Indicates an unexpected problem or unintended behavior


maxsharabayko commented May 22, 2019

SRT has a restriction that the receiver buffer must not be greater than FC size:
if (m_iRcvBufSize > m_iFlightFlagSize)
    m_iRcvBufSize = m_iFlightFlagSize;

At the same time, the TSBPD algorithm buffers received packets in the receiver’s buffer until it is time to deliver them further.
So the condition does not make sense. The requirement should rather be:
rcv_buf >= fc_window × packet_size + (latency × bitrate)

The number of packets in flight is also controlled by the available space in the receiver buffer, reported back by the receiver.
On top of that, File CC controls the congestion window.
So for file mode this restriction probably does not hurt much, but it still does not look reasonable.

Buffer Sizes Configuration

Default receiver buffer size is 8192 × (1500-28) = 12058624 bytes or approximately 96 Mbits.
Default flow control window size is 25600 packets or approximately 300 Mbits.

The target number of packets in flight (FC window) should be (assuming max payload size):
FC = bps × RTT_sec / 8 / (1500 − 28).
For example, for a 1.5 Gbps link with 150 ms RTT:
FC = 1500 × 10⁶ × 0.15 / 8 / 1472 = 19106 packets (or approximately 225 Mbits).
For a 2.0 Gbps link with 150 ms RTT:
FC = 2 × 10⁹ × 0.15 / 8 / 1472 = 25475 packets (or approximately 300 Mbits).


maxsharabayko commented Jun 25, 2020

The number of packets in flight is also controlled by the available space in the receiver buffer, reported back by the receiver.
On top of that, File CC controls the congestion window.

This is probably the source of the restriction. If RCV Buffer > Flow control window, then the receiver will be reporting more available space in its buffer (via ACK) than the value shared in the handshake.

  • Check this behavior at run time after the condition is removed.

  • Replace the expression m_ConnReq.m_iFlightFlagSize = (m_iRcvBufSize < m_iFlightFlagSize) ? m_iRcvBufSize : m_iFlightFlagSize; with m_ConnReq.m_iFlightFlagSize = min(m_iRcvBufSize, m_iFlightFlagSize);, and check the case where the value is 0.

@maxsharabayko maxsharabayko modified the milestones: v1.5.0, v1.5.0 - Sprint 19 Jun 25, 2020
@mbakholdina mbakholdina removed their assignment Jul 14, 2020
@maxsharabayko maxsharabayko modified the milestones: v1.5.0 - Sprint 19, v1.5.0 - Sprint 20, v1.5.0 Jul 28, 2020

ethouris commented Jan 25, 2021

I personally think that the receiver buffer should never have been used for latency compensation. The original UDT, which supported only file transmission, treated the receiver buffer as an intermediate place between the network and the application: the application could make the buffer choke (when it is not reading the data), the overflowing receiver buffer could stop the transmission (while still sending ACKs periodically), and then, when the application finally reads the data from the buffer, it frees space and allows the transmission to continue.

This method cannot work properly in live mode at all. In live mode the data are sent at the appropriate speed and this process cannot be paused; a swelling buffer should be handled completely differently. All data that come in must be received and placed in the receiver buffer. Latency compensation should use a separate buffer that grows independently and drops the data by itself when the application does not read them. The latency buffer, unlike the socket's receiver buffer, should be allowed to grow, and growth control should be done differently, mainly by having a top bitrate allowed for reception and the latency. These should determine the initial size, with some spare value given to allow the buffer to grow further. As the receiver buffer is to be reimplemented, this is an opportunity to make a separate latency buffer, although using the "units" from the same pool as the receiver buffer.

This also gives an opportunity to solve the application-pause problem differently: when the buffer grows to the maximum allowed size, packets at the head of the buffer should be dropped in order to make space for new ones. In the current implementation there is no good solution for this; it causes a discrepancy that cannot be repaired, so the only possible solution is to break the connection.
