TL Packet Drop suppresses MAXBW limitation #713
Comments
There is an option (SRTO_TSBPDMODE) to turn TSBPD on/off. You probably mean there is no URI query key for it.
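For reference, a minimal sketch (not from the issue) of turning TSBPD off through the C API, since there is no URI query key for it; error handling is omitted for brevity:

```cpp
#include <srt/srt.h>

int main() {
    srt_startup();
    SRTSOCKET s = srt_create_socket();

    bool tsbpd = false;  // must be set before binding/connecting
    srt_setsockopt(s, 0, SRTO_TSBPDMODE, &tsbpd, sizeof tsbpd);

    // ... bind/connect and transfer as usual ...

    srt_close(s);
    srt_cleanup();
    return 0;
}
```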
If all issues were presented that way, I can't imagine where SRT would be.
Yes. Improved the description. Thank you, Jean.
I am looking forward to having a cool automated toolset for such cool things. 😄
“While the time interval between the packets is 400 μs, that corresponds to 30 Mbps.” Another question.
Yes, the problem is in the "abnormal" usage scenario, when the actual input bitrate exceeds the maximum bandwidth value. In this case it looks like packet origin time somehow gains priority over the desired time intervals between the packets.
Further investigation shows that waiting gets interrupted from …
Most probably …
Dropping packets is controlled by the …
Yes, I was thinking of TSBPD and TL packet drop as a single mechanism. In fact this is about TL packet drop.
Update. The TL packet drop mechanism triggers sending of the next scheduled packet after …
When I use SRT live and set transtype=0 tsbpdmode=false tlpktdrop=0 …
When TL packet drop is turned on and the input bitrate exceeds the maxbw limitation, the MAXBW limit is suppressed. This can also happen when the amount of retransmission is high (see #638).
By default maxbw is limited to 1 Gbps (30 Mbps prior to v1.3.3).
The latest SRT version at the moment of testing: 6f6b76b.
URI query for both receiver and sender:
transtype=live&messageapi=1&payloadsize=1456&rcvbuf=125000000&sndbuf=125000000
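For reproducibility, a rough API equivalent of that query (a sketch; the option names are the standard libsrt ones, all options are set before connecting, and error handling is omitted):

```cpp
#include <srt/srt.h>

void configure(SRTSOCKET s) {
    SRT_TRANSTYPE tt = SRTT_LIVE;   // transtype=live
    bool msgapi      = true;        // messageapi=1
    int payloadsize  = 1456;        // payloadsize=1456
    int rcvbuf       = 125000000;   // rcvbuf=125000000 (bytes)
    int sndbuf       = 125000000;   // sndbuf=125000000 (bytes)

    srt_setsockopt(s, 0, SRTO_TRANSTYPE,   &tt,          sizeof tt);
    srt_setsockopt(s, 0, SRTO_MESSAGEAPI,  &msgapi,      sizeof msgapi);
    srt_setsockopt(s, 0, SRTO_PAYLOADSIZE, &payloadsize, sizeof payloadsize);
    srt_setsockopt(s, 0, SRTO_RCVBUF,      &rcvbuf,      sizeof rcvbuf);
    srt_setsockopt(s, 0, SRTO_SNDBUF,      &sndbuf,      sizeof sndbuf);
}
```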
Link RTT is 0.24 ms (local 1 Gbps switch).
The sender sends packets at 900 Mbps, and nothing stops it. But it looks like the receiver is acknowledging packets at a slower rate, so the sender's buffer fills up.
E.g. this query actually limits the sending rate to 30 Mbps:
transtype=file&congestion=live&messageapi=1&payloadsize=1456&rcvbuf=125000000&sndbuf=125000000
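The same workaround can be sketched through the API. Note there is also the SRTO_TLPKTDROP socket option (the tlpktdrop=0 key mentioned in a comment above), which disables TL packet drop directly without switching the transmission type. Both variants, as an illustrative sketch:

```cpp
#include <srt/srt.h>

void disable_tl_packet_drop(SRTSOCKET s) {
    // (a) What the query above does: file transtype with live congestion.
    SRT_TRANSTYPE tt = SRTT_FILE;                      // transtype=file
    srt_setsockopt(s, 0, SRTO_TRANSTYPE, &tt, sizeof tt);
    srt_setsockopt(s, 0, SRTO_CONGESTION, "live", 4);  // congestion=live

    // (b) Alternatively, keep the live transtype and switch TL packet
    //     drop off directly.
    bool tlpktdrop = false;
    srt_setsockopt(s, 0, SRTO_TLPKTDROP, &tlpktdrop, sizeof tlpktdrop);
}
```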
The difference is that TL packet drop is turned off by transtype=file (there is no URI query socket option to turn off TSBPD directly).

Sender-side packets:

Notice the packets start dropping only at the end, and the sender starts printing error messages:
SND-DROPPED 893 packets - lost delaying for 1032ms
And the packets start dropping when the available sender's buffer size goes down to 0:
While the time interval between the packets is 400 μs, that corresponds to 30 Mbps.
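As a sanity check, the 400 µs ↔ 30 Mbps relation holds if one assumes roughly 1500 bytes per packet on the wire (the 1456-byte payload plus headers; the wire size is an assumption, not from the logs):

```cpp
#include <cstdio>

int main() {
    const double interval_s = 400e-6;  // observed inter-packet interval
    const double wire_bytes = 1500.0;  // assumed: 1456 B payload + headers
    // 1500 B * 8 bit / 400 µs = 30'000'000 bit/s
    std::printf("%.0f Mbps\n", wire_bytes * 8.0 / interval_s / 1e6);  // 30 Mbps
    return 0;
}
```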
Receiving rate:
The receiver's buffer size, although set to 1 GB, is actually limited by the flow control (FC) window size, so it effectively holds only about 300 Mbit (refer to #700).
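For context, the ~300 Mbit figure matches the default flow-control window of 25600 packets (a libsrt default, stated here as an assumption): 25600 × 1456 B × 8 ≈ 298 Mbit. A sketch of the arithmetic and of raising the window via SRTO_FC:

```cpp
#include <cstdio>
#include <srt/srt.h>

void raise_fc_window(SRTSOCKET s) {
    // The FC window caps the data in flight regardless of rcvbuf:
    std::printf("%.0f Mbit\n", 25600.0 * 1456 * 8 / 1e6);  // ~298 Mbit

    int fc = 128000;  // hypothetical larger window (packets); set pre-bind
    srt_setsockopt(s, 0, SRTO_FC, &fc, sizeof fc);
}
```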
In the end the receiver closes the connection due to:
SRT:RcvQ:worker*E:SRT.c: %915948307:SEQUENCE DISCREPANCY, reception no longer possible. REQUESTING TO CLOSE.
Setting maxbw=125000000 (bytes per second, i.e. 1 Gbps) solves the problem, although in the described setup the size of the receiver's buffer is too small, and it closes the connection anyway with an error message. But notice the receiving rate is 900 Mbps:
While for the case when maxbw = 30 Mbps, the receiving rate is actually lower:

Also refer to #553, where a similar experiment was conducted.