12:58:34.505030/queue2:src*E: SRT.d: SND-DROPPED 35 packets - lost delaying for 1027ms #355
What you observe is back pressure from the receiver not draining the received packets fast enough, causing the sender buffer to fill to capacity and start dropping packets that are too old to be transmitted and played. I cannot help you with gstreamer or VLC. This can happen if the product of bitrate and latency is too high for the receiver buffers, or if the receiving device does not have enough CPU bandwidth. Based on the delay of around 1020 ms, I assume the configured SRT latency is the default or less than 1000 ms. Try reducing the video bitrate at the input and verify the CPU usage of the receiver.
I also face the packet drop issue. I use FFmpeg to test SRT, and very occasionally I get a similar SND-DROPPED log. So how do I configure SRT to work in a reliable mode over UDP? BTW, I do not care about delay (e.g., a 3 s delay is acceptable); I care about reliability, so no packet should be dropped. Thanks.
If a 3 s delay is allowed, then please set the latency accordingly.

The behavior you are observing is the result of the TLPKTDROP feature, or "sacrifice reliability to protect timely delivery". It accompanies the TSBPD feature, which aims to preserve the time distance between packets exactly as it was when they were sent. TLPKTDROP may be turned off, but this may lead to overflowing the buffer in the very unfortunate case of a packet being lost more than once and not recovered within the extra time defined by the latency.

You are observing it on the sender side because both sender and receiver track the latency against the sending time, so these are packets that were lost in transit, but whose recovery was abandoned on the premise that the receiver would drop them anyway, even if they were resent. The higher the latency, the more probable it is that, even in the unlucky case where a retransmitted packet is lost yet again, there is still extra time to retransmit it and have it arrive before its time to play.

The latency is recommended to be at least 4 times the RTT; the default of 120 ms assumes a typical internet RTT of 30 ms. The time SRT actually requires for packet recovery is: the time distance between the lost packet and the next packet, plus the RTT (to send the loss report and receive the retransmitted packet), plus some small processing time on the sender side (usually due to operating system packet scheduling). This time should also be multiplied by a small factor, because only with good luck does the first recovery attempt succeed.
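As an illustration of the advice above, here is a minimal sketch of how an application could request a 3-second latency via the libsrt C API before connecting. It assumes a reasonably recent libsrt where the headers are installed as `<srt/srt.h>` and `srt_create_socket()` is available; the connect/bind and data loop are omitted.

```cpp
#include <srt/srt.h>   // assumed header location for libsrt
#include <cstdio>

int main()
{
    srt_startup();

    SRTSOCKET sock = srt_create_socket();

    // SRTO_LATENCY is given in milliseconds and must be set before the
    // connection is made. 3000 ms corresponds to the "3 s delay is
    // allowed" case discussed above.
    int latency_ms = 3000;
    if (srt_setsockopt(sock, 0, SRTO_LATENCY, &latency_ms, sizeof latency_ms) == SRT_ERROR)
        std::fprintf(stderr, "SRTO_LATENCY: %s\n", srt_getlasterror_str());

    // ... srt_connect() / srt_bind() + srt_listen() and the data loop go here ...

    srt_close(sock);
    srt_cleanup();
    return 0;
}
```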
@ethouris Should I tune the sender/caller or the listener? My listener is just stransmit itself, and my caller is FFmpeg. I set the delay to 2.5 s in FFmpeg, but I still get UDP packet drop issues.
It is possible that packets are not transmitted on the wire as fast as they are submitted to SRT. The transmit rate is controlled by PktSndPeriod, which is derived from SRTO_MAXBW, SRTO_INPUTBW and SRTO_OHEADBW (the overhead, in %). SRT evaluates the input bandwidth internally when SRTO_INPUTBW is set to 0. When the application, such as an encoder, knows the bitrate it generates, setting SRTO_INPUTBW permits a faster reaction than the internally evaluated input bandwidth, which is a moving average. I don't know if ffmpeg provides options to control this or if this is all internal to the application.
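To make that concrete, here is a rough sketch (my own illustration, not from the thread) of how a sender that knows its encoder bitrate could configure these options through the libsrt C API. All values are only examples; the 875000 B/s figure corresponds to the 7 Mbps case discussed later in this thread.

```cpp
#include <srt/srt.h>
#include <cstdint>

// Configure pacing for a sender whose encoder produces a known,
// roughly constant bitrate. Must be called before connecting.
void configure_pacing(SRTSOCKET sock, int64_t input_bytes_per_sec)
{
    // 0 = no absolute cap; derive the send period from INPUTBW + OHEADBW.
    int64_t maxbw = 0;
    srt_setsockopt(sock, 0, SRTO_MAXBW, &maxbw, sizeof maxbw);

    // Nominal input rate in BYTES per second (not bits):
    // e.g. 7 Mbps == 875000 B/s.
    srt_setsockopt(sock, 0, SRTO_INPUTBW, &input_bytes_per_sec, sizeof input_bytes_per_sec);

    // Extra headroom for retransmissions and control packets, in percent.
    int overhead_percent = 25;
    srt_setsockopt(sock, 0, SRTO_OHEADBW, &overhead_percent, sizeof overhead_percent);
}
```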
The latency can be set by both sides, and the effective latency (the value is exchanged in the handshake) is the maximum of the two. So it's enough if you set this latency on the stransmit listener side.
(I don't think the ffmpeg side handles this option, though.)
Looking at your current command line: if muxrate (7M = 7000k) is the stream bitrate, then the inputbw setting (875000 = 875k) looks wrong and is missing one zero.
@ethouris The libsrt wrapper in FFmpeg does support the option "tsbpddelay"; it is declared in the libsrt_options[] table in ffmpeg/libavformat/libsrt.c.
@jeandube m_llInputBW has the unit "bytes/s", but muxrate in FFmpeg has the unit "bits/s": 7 Mbps = 7000 kbit/s = 875 kB/s. So my setting is correct.
Sorry, no one consulted me when this support was added to libav first and then propagated to ffmpeg. The statement in srt.h is maybe a bit unclear, but it definitely does not declare the behavior being assumed here.
Thanks.
@ethouris I would like to know if my settings are valid: I set 'tsbpddelay=2000&tlpktdrop=0' on both the client/sender side and the server side (example.com). Can 'tlpktdrop=0' and 'tsbpddelay=2000' be used at the same time? Thanks.
These settings are independent; actually, the TLPKTDROP mechanism wouldn't work (and even makes no sense) when TSBPDMODE is off. TSBPD delays packets until their time to play, and "latency" (formerly called "tsbpddelay") is the delay added to a packet's converted timestamp to obtain that time to play. So "latency" and "tlpktdrop" can be thought of as two independent parameters of the same mechanism.

If you achieve any better results in this configuration, I'd be really glad if you could grab debug logs from such a session on the receiver. If you get a clean stream despite delays that would have resulted in drops when TLPKTDROP is on, it would be interesting for me to see how far past their time-to-play the packets were delivered. If there is no such case, then it's possible that the "snd-dropping" mechanism should be improved.

The TLPKTDROP mechanism was introduced because, without it, there is a risk of breaking timely delivery. Imagine that out of packets 1, 2, 3, 4, 5 you have lost 3 and 4, and the time comes for packet 5 to be played. TLPKTDROP would agree to drop 3 and 4 and allow 5 to be delivered to the player; without it, the receiver is still waiting. In the meantime, 6, 7, 8 come in and 3 and 4 are still not recovered, so the stream is paused. This pause probably doesn't matter when 1 was the last portion of the previous frame and 2 starts a new frame, since they have to be buffered by the demuxer anyway. The problem is when packets 3 and 4 were the last two packets of a frame, because then the frame isn't complete at the moment the decoder expects it. It is possible that the TLPKTDROP mechanism needs further improvements.
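On the topic of gathering evidence from the receiver, one complementary option (a sketch of my own, assuming the libsrt C API and the field names exposed by SRT_TRACEBSTATS in recent versions) is to poll the socket statistics periodically and record how many packets were reported lost and dropped on the receiving side:

```cpp
#include <srt/srt.h>
#include <cstdio>

// Print a one-line summary of receiver-side losses and drops.
// Call periodically on the receiving socket while the stream runs.
void report_rcv_stats(SRTSOCKET sock)
{
    SRT_TRACEBSTATS stats;
    if (srt_bstats(sock, &stats, 0 /* do not clear the counters */) == SRT_ERROR)
        return;

    std::printf("rtt=%.1f ms  rcv-lost=%d  rcv-dropped=%d\n",
                stats.msRTT,
                stats.pktRcvLossTotal,
                stats.pktRcvDropTotal);
}
```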
Thanks for your detailed reply. I checked srtcore/srt.h, apps/socketoptions.hpp and srtcore/core.cpp, and there is no code related to SRTO_LATENCY in srtcore/core.cpp. So 'latency' and 'tsbpddelay' are the same; they are not independent, i.e. there is no native implementation of 'latency', it is simply redirected to 'tsbpddelay'. I think the TLPKTDROP mechanism needs further improvements. I have added an extra UDP layer which uses an FEC (Reed-Solomon) algorithm, but I still observe packet loss even in that scenario. I will try to find the reason for the packet loss (I need a bit of time to debug the SRT source code).
Sorry, I misunderstood your reply. "latency" and "tlpktdrop" are indeed independent.
Well, I can see some cleanup is required; I was certain that I had lately removed every remaining occurrence of the old option handling. If you turn on the debug logs, the log lines around the drop should show what is going on. Most likely all of these debug logs are qualified as "heavy", so in order to see them you have to enable them explicitly when building SRT.
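The runtime side of this can be done from the application. Below is a minimal sketch, assuming the libsrt C API and an SRT build that has the heavy (per-packet) logging compiled in; the Windows fallback for the level constant is my own assumption.

```cpp
#include <srt/srt.h>
#if defined(_WIN32)
// The syslog-style constants are not available on Windows;
// libsrt accepts the numeric level directly (7 == debug).
#define LOG_DEBUG 7
#else
#include <sys/syslog.h>
#endif

void enable_verbose_srt_logging()
{
    // Raise the runtime log level to "debug". The per-packet ("heavy")
    // logs additionally require an SRT build with heavy logging enabled;
    // otherwise this call only exposes the regular debug logs.
    srt_setloglevel(LOG_DEBUG);
}
```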
@jeandube Could this problem with SND-DROPPED be addressed by raising the threshold, or by making it configurable?
@ethouris SND-DROPPED is a symptom, not the problem. Raising the value may just delay its occurrence. That said, it is already configurable: it is based on the configured latency. The minimum value (1000 ms) is not configurable, but it could be; this value was selected to prevent dropping the tail of a big I-frame, and it assumes a live video input. The problem I see with this application is that the input to SRT is not totally controlled. This may have to do with how data is buffered between the input (-i) and the output (-o). The rate is probably more CPU-bound than video-frame-rate-bound.
Supporting disabling of Too-Late-Pkt-Drop on the sender (lost in 1.3.0) is a one-line fix:
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/srtcore/core.cpp b/srtcore/core.cpp
index 45f2371..15f1259 100644
--- a/srtcore/core.cpp
+++ b/srtcore/core.cpp
@@ -4791,7 +4791,7 @@ int CUDT::receiveBuffer(char* data, int len)
void CUDT::checkNeedDrop(ref_t<bool> bCongestion)
{
- if (!m_bPeerTLPktDrop)
+ if (!m_bTLPktDrop || !m_bPeerTLPktDrop)
return;
    if (!m_bMessageAPI)
With the above fix the app can disable sender-side packet dropping by setting the option SRTO_TLPKTDROP to false. If the underlying problem remains, the sender buffer will fill to capacity (TXFULL) and the send API will block or return EWOULDBLOCK. I think this is what we should make configurable: input flow control for applications pushing packets from non-live streaming sources.
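To illustrate what such input flow control could look like from the application side, here is a rough sketch (my own, not part of the proposed fix) of a sender loop that treats a full send buffer as backpressure instead of losing data. It assumes the socket was switched to non-blocking send mode (SRTO_SNDSYN set to false).

```cpp
#include <srt/srt.h>
#include <chrono>
#include <thread>

// Push one chunk into SRT, retrying while the send buffer is full (TXFULL).
// Returns false on any error other than "would block".
bool send_with_backpressure(SRTSOCKET sock, const char* data, int len)
{
    for (;;)
    {
        if (srt_send(sock, data, len) != SRT_ERROR)
            return true;

        if (srt_getlasterror(nullptr) != SRT_EASYNCSND)
            return false;  // a real error, not backpressure

        // Send buffer full: back off briefly instead of dropping the data.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
```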
Actually, AFAIK "snd-dropping" doesn't save the sender application from the risk of filling the buffer, because that can only occur when the underlying UDP-based network stack pushes packets through the network more slowly than the application stows them. "snd-dropping" always concerns exclusively retransmitted packets, never "original" ones. Additionally, with the default configuration in live mode, retransmission relies exclusively on loss-report messages (including those triggered by periodic NAK reports), with FASTREXMIT triggering only when TLPKTDROP is off. I don't think it would be a good idea to make this separately configurable.
Snd-dropping DOES prevent filling the send buffer. It is like a bathtub overflow drain: it maintains a time-based amount of packets in the send buffer. Whether a packet is retransmitted or not is just a matter of the RTT and is irrelevant to this discussion; these packets have not been acknowledged, so there is no guarantee they have been delivered. I suggested in the other PR to independently control SND and RCV TLPKTDROP, but one has to realize this is mainly for diagnostic purposes, to help find the root cause of the problem.
This might be related to the receiver buffer size and maxbw configuration; both have fixed defaults, and the default maxbw is 30 Mbps, refer to #552.
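If you want to rule that out, here is a minimal sketch of raising both limits before the connection is established (option names are from the libsrt public header; the numbers are purely illustrative, not recommendations):

```cpp
#include <srt/srt.h>
#include <cstdint>

void raise_buffer_and_bandwidth_limits(SRTSOCKET snd_sock, SRTSOCKET rcv_sock)
{
    // Receiver buffer size in bytes; set on the receiving socket before
    // connecting. Very large values may also require enlarging the
    // flow-control window (SRTO_FC).
    int rcvbuf_bytes = 64 * 1024 * 1024;
    srt_setsockopt(rcv_sock, 0, SRTO_RCVBUF, &rcvbuf_bytes, sizeof rcvbuf_bytes);

    // Maximum send bandwidth in BYTES per second, set on the sending
    // socket; 12500000 B/s corresponds to 100 Mbps.
    int64_t maxbw_bytes_per_sec = 12500000;
    srt_setsockopt(snd_sock, 0, SRTO_MAXBW, &maxbw_bytes_per_sec, sizeof maxbw_bytes_per_sec);
}
```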
I would like to ask how you observed the packet loss rate when you tested FEC, and how you check whether the packet loss rate has dropped.
We are trying to evaluate SRT for our project, and we are getting a continuous flow of these messages:
12:58:34.505030/queue2:src*E: SRT.d: SND-DROPPED 35 packets - lost delaying for 1027ms
We are using the following gstreamer command to launch; we added -vv to get the debug output.
gst-launch-1.0 -vv
filesrc do-timestamp=true location=/tmp/fromfpga
! videoparse width=1920 height=1080 framerate=30/1 format=16
! queue
! videoconvert
! queue
! x264enc bitrate=$gstbitrate speed-preset=$H264_EFFORT
! video/x-h264,profile=baseline
! queue
! mpegtsmux
! srtserversink uri=srt://:8888/ &
We are using the latest version of VLC, 4.0.0-dev. What we see is that VLC connects and starts to play, then we see stuttering followed by frozen video. If we disconnect and reconnect, we see the same issue.
Could someone help us debug this? We are very motivated to use SRT.
Thank you,
-John