
12:58:34.505030/queue2:src*E: SRT.d: SND-DROPPED 35 packets - lost delaying for 1027ms #355

Closed
JMILLER-LOT opened this issue Apr 19, 2018 · 25 comments
Labels
[core] Area: Changes in SRT library core; Status: Abandoned (There is no reply from the issue reporter); Type: Bug (Indicates an unexpected problem or unintended behavior)

Comments

@JMILLER-LOT

We are trying to evaluate SRT for our project and we are getting a continuous flow of these messages

12:58:34.505030/queue2:src*E: SRT.d: SND-DROPPED 35 packets - lost delaying for 1027ms

We are using the following gstreamer command to launch; we added -vv to get the debug output.

gst-launch-1.0 -vv
filesrc do-timestamp=true location=/tmp/fromfpga
! videoparse width=1920 height=1080 framerate=30/1 format=16
! queue
! videoconvert
! queue
! x264enc bitrate=$gstbitrate speed-preset=$H264_EFFORT
! video/x-h264,profile=baseline
! queue
! mpegtsmux
! srtserversink uri=srt://:8888/ &

We are using the latest version of VLC 4.0.0-dev. What we see is VLC connects and starts to play, then we see stuttering followed by frozen video. If we disconnect and reconnect we see the same issue.

Could someone help us debug this? We are very motivated to use SRT.

Thank you,
-John

@jeandube
Collaborator

What you observe is back pressure from the receiver not draining the received packets fast enough, causing the sender buffer to fill to capacity and start dropping packets too old to be transmitted and played. I cannot help you on gstreamer or VLC. This can happen if the product of bitrate and latency is too high for the receiver buffers, or if the receiving device does not have enough CPU bandwidth. Based on the delay of around 1020 ms, I assume the configured SRT latency is the default or less than 1000 ms. Try reducing the video bitrate at the input and verify the CPU usage of the receiver.

@nxtreaming

I also face the packet drop issue. I use FFmpeg to test SRT, but I get the following log (very seldom):
10:34:20.108104/ffmpegE: SRT.d: SND-DROPPED 16 packets - lost delaying for 1044ms
10:34:20.118293/ffmpegE: SRT.d: SND-DROPPED 4 packets - lost delaying for 1024ms
10:34:20.158784/ffmpegE: SRT.d: SND-DROPPED 36 packets - lost delaying for 1044ms
10:34:20.179107/ffmpegE: SRT.d: SND-DROPPED 9 packets - lost delaying for 1024ms
10:34:20.189301/ffmpegE: SRT.d: SND-DROPPED 2 packets - lost delaying for 1024ms
10:34:20.229708/ffmpegE: SRT.d: SND-DROPPED 29 packets - lost delaying for 1054ms
10:34:20.260118/ffmpegE: SRT.d: SND-DROPPED 10 packets - lost delaying for 1034ms
10:34:20.280339/ffmpegE: SRT.d: SND-DROPPED 12 packets - lost delaying for 1034ms
10:34:20.311001/ffmpegE: SRT.d: SND-DROPPED 33 packets - lost delaying for 1034ms
10:34:20.341649/ffmpegE: SRT.d: SND-DROPPED 7 packets - lost delaying for 1045ms
10:34:20.392185/ffmpegE: SRT.d: SND-DROPPED 22 packets - lost delaying for 1045ms
10:34:20.422659/ffmpegE: SRT.d: SND-DROPPED 8 packets - lost delaying for 1024ms
10:34:21.040180/ffmpegE: SRT.d: SND-DROPPED 14 packets - lost delaying for 1044ms
10:34:21.080839/ffmpegE: SRT.d: SND-DROPPED 2 packets - lost delaying for 1024ms
10:34:21.263183/ffmpegE: SRT.d: SND-DROPPED 5 packets - lost delaying for 1024ms
10:34:21.313780/ffmpegE: SRT.d: SND-DROPPED 11 packets - lost delaying for 1034ms
10:34:21.344261/ffmpegE: SRT.d: SND-DROPPED 72 packets - lost delaying for 1034ms
10:34:21.364609/ffmpegE: SRT.d: SND-DROPPED 12 packets - lost delaying for 1023ms
10:34:21.405134/ffmpegE: SRT.d: SND-DROPPED 1 packets - lost delaying for 1044ms
10:34:23.026084/ffmpegE: SRT.d: SND-DROPPED 4 packets - lost delaying for 1034ms
10:34:23.056504/ffmpegE: SRT.d: SND-DROPPED 1 packets - lost delaying for 1034ms
10:34:23.086971/ffmpegE: SRT.d: SND-DROPPED 30 packets - lost delaying for 1024ms
10:34:23.107261/ffmpegE: SRT.d: SND-DROPPED 5 packets - lost delaying for 1034ms
10:34:23.147868/ffmpegE: SRT.d: SND-DROPPED 20 packets - lost delaying for 1024ms
10:34:23.178242/ffmpegE: SRT.d: SND-DROPPED 3 packets - lost delaying for 1034ms
10:34:23.330296/ffmpegE: SRT.d: SND-DROPPED 96 packets - lost delaying for 1024ms
10:34:23.370913/ffmpegE: SRT.d: SND-DROPPED 11 packets - lost delaying for 1024ms
10:34:23.381050/ffmpegE: SRT.d: SND-DROPPED 1 packets - lost delaying for 1024ms
10:34:23.411538/ffmpegE: SRT.d: SND-DROPPED 5 packets - lost delaying for 1034ms
10:34:23.431992/ffmpegE: SRT.d: SND-DROPPED 14 packets - lost delaying for 1024ms
10:34:23.472654/ffmpegE: SRT.d: SND-DROPPED 1 packets - lost delaying for 1044ms
10:34:23.493032/ffmpegE: SRT.d: SND-DROPPED 13 packets - lost delaying for 1035ms
10:34:23.503175/ffmpegE: SRT.d: SND-DROPPED 2 packets - lost delaying for 1024ms
10:34:23.523431/ffmpegE: SRT.d: SND-DROPPED 1 packets - lost delaying for 1035ms
10:34:23.564073/ffmpegE: SRT.d: SND-DROPPED 16 packets - lost delaying for 1035ms
10:34:23.594465/ffmpegE: SRT.d: SND-DROPPED 2 packets - lost delaying for 1035ms
10:34:23.614896/ffmpegE: SRT.d: SND-DROPPED 4 packets - lost delaying for 1025ms
10:34:25.154608/ffmpegE: SRT.d: SND-DROPPED 6 packets - lost delaying for 1023ms
10:34:25.174863/ffmpegE: SRT.d: SND-DROPPED 5 packets - lost delaying for 1033ms
10:34:25.195242/ffmpegE: SRT.d: SND-DROPPED 9 packets - lost delaying for 1023ms
10:34:25.205533/ffmpegE: SRT.d: SND-DROPPED 3 packets - lost delaying for 1024ms
10:34:25.245964/ffmpegE: SRT.d: SND-DROPPED 16 packets - lost delaying for 1054ms
10:34:25.357473/ffmpegE: SRT.d: SND-DROPPED 58 packets - lost delaying for 1044ms
10:34:25.377782/ffmpegE: SRT.d: SND-DROPPED 6 packets - lost delaying for 1023ms


So how do I configure SRT to work in a reliable UDP mode?

BTW: my RTT from the source IP to the destination IP is over 250ms.

I do not care about delay (e.g. a 3s delay is allowed); I care about reliability. No packet should be dropped.

Thanks.

@ethouris
Collaborator

If a 3s delay is allowed, then please set the latency (SRTO_LATENCY socket option) to 3s. The default latency is 120ms, so this is definitely not a value suitable for your connection, delicately speaking. I don't know how the SRT socket options can be modified from gstreamer or ffmpeg.
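For reference, when using the libsrt C API directly (outside gstreamer or ffmpeg), setting the latency is a single pre-connect option call. A minimal sketch, with the socket name and the 3 s value chosen for illustration only:

#include <sys/socket.h>
#include <srt/srt.h>

int main(void)
{
    srt_startup();
    SRTSOCKET s = srt_socket(AF_INET, SOCK_DGRAM, 0);   // IPv4; type/protocol are ignored by SRT
    int latency_ms = 3000;   // 3 s; SRTO_LATENCY is a PRE option, so set it before connecting
    srt_setsockopt(s, 0, SRTO_LATENCY, &latency_ms, sizeof latency_ms);
    /* ... srt_connect() / srt_sendmsg() as usual ... */
    srt_close(s);
    srt_cleanup();
    return 0;
}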

The behavior you are observing is the result of the TLPKTDROP feature, i.e. "sacrifice reliability to protect timely delivery". It accompanies the TSBPD feature, which aims to keep the time distance between packets exactly as it was when they were sent. TLPKTDROP may be turned off, but this may lead to overflowing the buffer in the very unfortunate case of a packet being lost more than once and not recovered within the extra time defined by the latency. You are observing it at the sender side because both sender and receiver track latency versus sending time, so these are packets that were lost in transit, but recovery was abandoned on the premise that the receiver would drop them anyway, even if they were resent.

The higher the latency, the more likely it is that even in the unlucky case where the retransmitted packet is lost again, there is still extra time to retransmit it once more and have it arrive before its time to play.

The latency is recommended to be at least 4 times the RTT, so the default of 120ms corresponds to a usual internet RTT of 30ms. Actually, the time SRT requires for packet recovery is: the time distance between the lost packet and the next packet, plus the RTT (to send the request and receive the retransmitted packet), plus some small processing time on the sender side (usually due to operating-system packet scheduling). This time should also be multiplied by some small factor, because this alone assumes the first recovery attempt succeeds.
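To put numbers on this for the connection described above: with an RTT of roughly 250 ms, the 4×RTT rule of thumb gives 4 × 250 ms = 1000 ms as a minimum latency, so the 120 ms default is far too low there, and a configured latency of 2-3 s leaves headroom for more than one retransmission attempt.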

@nxtreaming

nxtreaming commented Apr 24, 2018

@ethouris Should I tune the sender/caller, or the listener?

My listener is just stransmit itself, such as:
stransmit srt://:12345 udp://127.0.0.1:8888

My caller is FFmpeg, such as:
ffmpeg -xerror -y -i 'rtmp://127.0.0.1/app/stream' -c:v copy -c:a copy -f mpegts -muxrate 7M 'srt://server.com:12345?mode=caller&oheadbw=50&inputbw=875000&tsbpddelay=2500000'

I set the delay to 2.5s in ffmpeg.

But I still get UDP packet-drop issues.

@jeandube
Collaborator

It is possible that packets are not transmitted on the wire as fast as they are submitted to SRT. The transmit rate is controlled by PktSndPeriod, which is based on SRTO_MAXBW, SRTO_INPUTBW and SRTO_OVERHEAD (%). SRT evaluates the input bandwidth internally when SRTO_INPUTBW is set to 0. When the application, like an encoder, knows the bitrate it generates, setting SRTO_INPUTBW permits a faster reaction than the internally evaluated input bandwidth, which is a moving average. I don't know if ffmpeg provides options to control this or if this is all internal to the application.
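If the socket were accessible directly (in ffmpeg this part is internal), those options could be set along the following lines. This is only a sketch, reusing the illustrative socket s from the earlier example and the 7 Mbps figure from the command line above; SRTO_OHEADBW is the name srt.h uses for the overhead percentage:

// With SRTO_MAXBW set to 0, the sending rate cap is derived from input rate + overhead.
int64_t maxbw   = 0;        // 0 = cap relative to the input rate
int64_t inputbw = 875000;   // bytes per second (7 Mbit/s divided by 8)
int     oheadbw = 50;       // allow 50% extra bandwidth for retransmissions
srt_setsockopt(s, 0, SRTO_MAXBW,   &maxbw,   sizeof maxbw);
srt_setsockopt(s, 0, SRTO_INPUTBW, &inputbw, sizeof inputbw);
srt_setsockopt(s, 0, SRTO_OHEADBW, &oheadbw, sizeof oheadbw);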

@ethouris
Collaborator

The latency can be set by both sides and the effective latency (the value is exchanged in the handshake) is the maximum of these two. So it's enough if you set this latency in the stransmit listener:

stransmit srt://:12345?latency=xxxx ...

(I don't think we handle the tsbpddelay option name anymore, latency should work everywhere).

@jeandube
Collaborator

Looking at your current command line, if muxrate (7M = 7000k) is somehow the stream bitrate, then the inputbw setting (875000 = 875k) looks wrong and missing a 0.

@nxtreaming

nxtreaming commented Apr 24, 2018

@ethouris
latency is deprecated.

The libsrt wrapper in FFmpeg supports the option "tsbpddelay", see:

// in ffmpeg/libavformat/libsrt.c

static const AVOption libsrt_options[] = {
{ "rw_timeout", "Timeout of socket I/O operations", OFFSET(rw_timeout), AV_OPT_TYPE_INT64, { .i64 = -1 }, -1, INT64_MAX, .flags = D|E },
{ "listen_timeout", "Connection awaiting timeout", OFFSET(listen_timeout), AV_OPT_TYPE_INT64, { .i64 = -1 }, -1, INT64_MAX, .flags = D|E },
{ "send_buffer_size", "Socket send buffer size (in bytes)", OFFSET(send_buffer_size), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, INT_MAX, .flags = D|E },
{ "recv_buffer_size", "Socket receive buffer size (in bytes)", OFFSET(recv_buffer_size), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, INT_MAX, .flags = D|E },
{ "maxbw", "Maximum bandwidth (bytes per second) that the connection can use", OFFSET(maxbw), AV_OPT_TYPE_INT64, { .i64 = -1 }, -1, INT64_MAX, .flags = D|E },
{ "pbkeylen", "Crypto key len in bytes {16,24,32} Default: 16 (128-bit)", OFFSET(pbkeylen), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 32, .flags = D|E },
{ "passphrase", "Crypto PBKDF2 Passphrase size[0,10..64] 0:disable crypto", OFFSET(passphrase), AV_OPT_TYPE_STRING, { .str = NULL }, .flags = D|E },
{ "mss", "The Maximum Segment Size", OFFSET(mss), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 1500, .flags = D|E },
{ "ffs", "Flight flag size (window size) (in bytes)", OFFSET(ffs), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, INT_MAX, .flags = D|E },
{ "ipttl", "IP Time To Live", OFFSET(ipttl), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 255, .flags = D|E },
{ "iptos", "IP Type of Service", OFFSET(iptos), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 255, .flags = D|E },
{ "inputbw", "Estimated input stream rate", OFFSET(inputbw), AV_OPT_TYPE_INT64, { .i64 = -1 }, -1, INT64_MAX, .flags = D|E },
{ "oheadbw", "MaxBW ceiling based on % over input stream rate", OFFSET(oheadbw), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 100, .flags = D|E },
{ "tsbpddelay", "TsbPd receiver delay to absorb burst of missed packet retransmission", OFFSET(tsbpddelay), AV_OPT_TYPE_INT64, { .i64 = -1 }, -1, INT64_MAX, .flags = D|E },
{ "tlpktdrop", "Enable receiver pkt drop", OFFSET(tlpktdrop), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 1, .flags = D|E },
{ "nakreport", "Enable receiver to send periodic NAK reports", OFFSET(nakreport), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 1, .flags = D|E },
{ "connect_timeout", "Connect timeout. Caller default: 3000, rendezvous (x 10)", OFFSET(connect_timeout), AV_OPT_TYPE_INT64, { .i64 = -1 }, -1, INT64_MAX, .flags = D|E },
{ "mode", "Connection mode (caller, listener, rendezvous)", OFFSET(mode), AV_OPT_TYPE_INT, { .i64 = SRT_MODE_CALLER }, SRT_MODE_CALLER, SRT_MODE_RENDEZVOUS, .flags = D|E, "mode" },
{ "caller", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = SRT_MODE_CALLER }, INT_MIN, INT_MAX, .flags = D|E, "mode" },
{ "listener", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = SRT_MODE_LISTENER }, INT_MIN, INT_MAX, .flags = D|E, "mode" },
{ "rendezvous", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = SRT_MODE_RENDEZVOUS }, INT_MIN, INT_MAX, .flags = D|E, "mode" },
{ NULL }
};

@nxtreaming

@jeandube
I checked the srtcore/core.cpp

m_llInputBW has the unit of "Bytes/s".

But muxrate in FFmpeg has the unit of "bits/s".

7 Mbps = 7000 kbit/s = 875 kbytes/s

So my setting is correct.

@ethouris
Collaborator

Sorry, no one consulted me when the support was added in libav first and then propagated to ffmpeg. The statement in srt.h is maybe a bit unclear, but it definitely does not say that latency is deprecated in favor of tsbpddelay; it is deprecated in favor of rcvlatency and peerlatency.
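In socket-option terms, the non-deprecated way looks roughly like this (a sketch only, values in milliseconds, using the illustrative socket s from the earlier example; SRTO_LATENCY remains usable as a shortcut that sets both):

// Replacement pair for the deprecated SRTO_LATENCY.
int rcv_latency_ms  = 3000;  // latency applied to the stream this socket receives
int peer_latency_ms = 3000;  // latency the peer should apply to what we send to it
srt_setsockopt(s, 0, SRTO_RCVLATENCY,  &rcv_latency_ms,  sizeof rcv_latency_ms);
srt_setsockopt(s, 0, SRTO_PEERLATENCY, &peer_latency_ms, sizeof peer_latency_ms);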

@nxtreaming

Thanks.
I replaced 'latency' with 'tsbpddelay' for better compatibility in the future.

@nxtreaming

@ethouris
To achieve lossless packet delivery, I have now disabled packet drop (tlpktdrop=0):

// in client/sender side
ffmpeg -xerror -y -i 'rtmp://127.0.0.1/app/MatchStream' -c:v copy -c:a copy -f mpegts -muxrate 7M 'srt://example.com:12345?mode=caller&tlpktdrop=0&oheadbw=50&inputbw=875000&tsbpddelay=2000000'

//in server side(example.com)
stransmit 'srt://:12345?mode=listener&tsbpddelay=2000&tlpktdrop=0' udp://127.0.0.1:10240

I want to know if my settings are valid: I set both 'tsbpddelay=2000' and 'tlpktdrop=0'.

Can 'tlpktdrop=0' and 'tsbpddelay=2000' be used at the same time?

Thanks.

@ethouris
Collaborator

ethouris commented Apr 27, 2018

These settings are independent; actually the TLPKTDROP mechanism wouldn't work (or even makes no sense) when TSBPDMODE is off (TSBPD delays packets until their time to play, and "latency", formerly called "tsbpddelay", is the delay added to a packet's converted timestamp to obtain its time to play). So "latency" and "tlpktdrop" can be thought of as two independent parameters of the same mechanism.

If you achieve any better results in this configuration, I'd be really glad if you could grab debug logs from such a session on the receiver. If you get a clean stream despite delays that would have caused drops with TLPKTDROP on, it would be interesting to me how far past their time-to-play the packets were delivered. If there's no such thing, then it's possible that the "snd-dropping" mechanism should be improved.

The TLPKTDROP mechanism was introduced because without it there is a risk of breaking timely delivery. Imagine that out of packets 1, 2, 3, 4, 5 you've lost 3 and 4. Then comes the time at which packet 5 should be played. TLPKTDROP would agree to drop 3 and 4 and allow 5 to be delivered to the player; with it off, delivery is still waiting. In the meantime, 6, 7, 8 come in and 3, 4 are still not recovered, so the stream is paused. This pause probably doesn't matter when 1 was the last portion of the previous frame and 2 starts a new frame, since they will have to be buffered by the demuxer anyway. The problem is when packets 3 and 4 were the last two packets of a frame: then the frame isn't complete at the moment the decoder expects it.

It is possible that TLPKTDROP mechanism may need further improvements.

@nxtreaming

Thanks for your detailed reply.

// in srtcore/srt.h

SRTO_TSBPDMODE = 22,  // Enable/Disable TsbPd. Enable -> Tx set origin timestamp, Rx deliver packet at origin time + delay
SRTO_LATENCY = 23,    // DEPRECATED. SET: to both SRTO_RCVLATENCY and SRTO_PEERLATENCY. GET: same as SRTO_RCVLATENCY.
SRTO_TSBPDDELAY = 23, // ALIAS: SRTO_LATENCY
SRTO_INPUTBW = 24,    // Estimated input stream rate.

// in apps/socketoptions.hpp

{ "latency", 0, SRTO_LATENCY, SocketOption::PRE, SocketOption::INT, nullptr},
{ "tsbpddelay", 0, SRTO_TSBPDDELAY, SocketOption::PRE, SocketOption::INT, nullptr},

// in srtcore/core.cpp

case SRTO_TSBPDDELAY:
    if (m_bConnected)
        throw CUDTException(MJ_NOTSUP, MN_ISCONNECTED, 0);
    m_iOPT_TsbPdDelay = *(int*)optval;
    m_iOPT_PeerTsbPdDelay = *(int*)optval;
    break;

There is no code related to SRTO_LATENCY in srtcore/core.cpp:
void CUDT::setOpt(SRT_SOCKOPT optName, const void* optval, int optlen)

So 'latency' and 'tsbpddelay' are the same; they are not independent, i.e. there is no separate implementation of 'latency', it is redirected to 'tsbpddelay' automatically.

I think the TLPKTDROP mechanism needs further improvements.

I have added an extra UDP layer which uses an FEC (Reed-Solomon) algorithm. I use a 20:10 FEC mode, i.e. for every 20 UDP packets I send 10 FEC packets (150% of the packets), so I should recover 100% of the packets as long as I receive any 20 out of each 30.

But I still observe packet loss even in this scenario.

I will try to find the reason for the packet loss (I need a bit of time to debug the SRT source code).

@nxtreaming

So, "latency" and "tlpktdrop" can be thought of as two independent parameters for the same mechanism.

Sorry, I misunderstood your reply. "latency" and "tlpktdrop" are indeed independent.

@ethouris
Collaborator

Well, I can see some cleanup is required; I was certain that I had lately removed every occurrence of SRTO_TSBPDDELAY from the code except the definition. This is the former name of the option; SRTO_LATENCY is an alias introduced later, as that name sounds clearer.

If you turn on the debug logs, the log that contains the word DELIVERED appears at the moment when the packet has been extracted, directly in the thread in which the application runs the srt_recv* function. This log should also contain the time by which the packet is belated; usually it's an insignificant few microseconds resulting from the internal processing. If this time exceeds the predefined latency, it means you might have had a situation that would be SND-DROPPED on the sender, you just turned TLPKTDROP off. However, if this time is high but still doesn't exceed the latency, it means that we have a bug.

Most likely all debug logs are qualified as "heavy", so in order to see them you have to enable them, for example with --enable-heavy-logging in the configure call. If you use --enable-debug, heavy logging is turned on by default. Then use the -loglevel:debug option in stransmit (I'm not sure if ffmpeg has any option to pass on the configuration for SRT logging).

@ethouris ethouris added Type: Bug Indicates an unexpected problem or unintended behavior Status: In Progress labels May 4, 2018
@ethouris
Collaborator

ethouris commented May 4, 2018

@jeandube This problem with SND-DROPPED seems to be kicking us from multiple sides simultaneously. Shouldn't we try to add some extra delay to the value of latency when deciding this sender-side TLPKTDROP, or even make it configurable?

@jeandube
Collaborator

jeandube commented May 4, 2018

@ethouris SND-DROPPED is a symptom, not the problem. Raising the value may just delay its occurrence. That being said, it is already configurable: it is based on the configured latency. The minimum value (1000ms) is not configurable, but could be. This value was selected to prevent dropping the tail of a big I-frame and assumes a live video input. The problem I see with this application is that the input to SRT is not totally controlled. This may have to do with how data is buffered between the input (-i) and the output (-o). The rate is probably more CPU-bound than video-frame-rate bound.

@jeandube
Collaborator

jeandube commented May 4, 2018

Supporting disabling of Too-Late-Packet-Drop on the sender (lost in 1.3.0) is a one-line fix:

@jeandube
Collaborator

jeandube commented May 4, 2018

 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/srtcore/core.cpp b/srtcore/core.cpp
index 45f2371..15f1259 100644
--- a/srtcore/core.cpp
+++ b/srtcore/core.cpp
@@ -4791,7 +4791,7 @@ int CUDT::receiveBuffer(char* data, int len)
 
 void CUDT::checkNeedDrop(ref_t<bool> bCongestion)
 {
-    if (!m_bPeerTLPktDrop)
+    if (!m_bTLPktDrop || !m_bPeerTLPktDrop)
         return;
 
     if (!m_bMessageAPI)

@jeandube
Collaborator

jeandube commented May 4, 2018

With the above fix the app can disable sender packet drop by setting option SRTO_TLPKTDROP to false. If the underlying problem remains, the sender buffer will fill to capacity (TXFULL) and the send API will block or return EWOULDBLOCK. I think this is what we should make configurable: input flow control for applications pushing packets from non-live streaming sources.
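With that patch in place, the application-side call would look roughly like this (a sketch only, using the illustrative socket s from the earlier examples; bool from <stdbool.h>):

// Turn off too-late packet drop; the trade-off is that if the congestion persists,
// the send buffer fills to capacity and srt_send* blocks or reports EWOULDBLOCK.
bool tlpktdrop = false;
srt_setsockopt(s, 0, SRTO_TLPKTDROP, &tlpktdrop, sizeof tlpktdrop);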

@ethouris
Collaborator

ethouris commented May 4, 2018

Actually, AFAIK the "snd-dropping" doesn't save the sender application from the risk of filling the buffer, because that may only occur when the underlying UDP-based network stack pushes the packets through the network more slowly than the application stows them. "Snd-dropping" always concerns exclusively retransmitted packets, never "original" ones, and additionally, with the default configuration in live mode, retransmission relies exclusively on loss-report messages (including those triggered by periodic NAK reports), with FASTREXMIT triggering only when TLPKTDROP is off.

I don't think it would be a good idea to make the SRTO_TLPKTDROP option control the sending and receiving stream simultaneously, because then you have one option that controls two behaviors at once, with no possibility of controlling them separately. The PR I submitted that introduces a new option (#375) is mainly intended to allow experiments and extra testing, to make it easier to determine problems with transmission (not to fix anything broken).

@jeandube
Collaborator

jeandube commented May 4, 2018

Snd-dropping DOES prevent filling the send buffer. It is like a bathtub overflow drain: it maintains a time-based amount of packets in the send buffer. Retransmitted or not is just a matter of the RTT and is irrelevant to this discussion. These packets have not been acknowledged, so there is no guarantee they have been delivered. I suggested in the other PR to independently control SND and RCV TLPKTDROP, but one has to realize this is mainly for diagnostic purposes, to help find the root cause of the problem.

@maxsharabayko maxsharabayko added this to the v.1.3.3 milestone Feb 5, 2019
@maxsharabayko
Collaborator

This might be related to the receiver buffer size and maxbw configuration.
The size should be at least Bufrcv >= bps × RTTsec / (8 × (1500 - 28)) + (latencysec × bps)
Bear in mind #700.

Default receiver buffer size is 8192 × (1500-28) = 12058624 bytes or approximately 96 Mbits.
Default flow control window size is 25600 packets or approximately 300 Mbits.

Default maxbw is 30 Mbps, refer to #552.
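As a rough sanity check against the numbers earlier in this thread (7 Mbps ≈ 875 000 bytes/s, RTT ≈ 250 ms, latency 2.5 s): the receiver has to hold roughly bps/8 × (RTT + latency) ≈ 875 000 × 2.75 ≈ 2.4 MB, i.e. about 1600 packets of 1472 bytes, which fits comfortably within those defaults.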

@maxsharabayko maxsharabayko modified the milestones: v.1.3.3, v.1.3.4 May 29, 2019
@maxsharabayko maxsharabayko modified the milestones: v1.3.4, v1.4.1 Aug 9, 2019
@ethouris ethouris added the [core] Area: Changes in SRT library core label Aug 12, 2019
@maxsharabayko maxsharabayko modified the milestones: v1.4.1, v1.4.2 Nov 4, 2019
@maxsharabayko maxsharabayko removed this from the v1.5.0 milestone Dec 27, 2019
@maxsharabayko maxsharabayko added Status: Abandoned There is no reply from the issue reporter and removed Status: Unclear labels Dec 27, 2019
@ypp1104

ypp1104 commented Sep 30, 2021

(Quotes @nxtreaming's earlier comment above about the extra Reed-Solomon FEC layer and the packet loss still observed with it.)

I would like to ask: how did you observe the packet loss rate when you tested FEC, and how did you check whether the packet loss rate dropped?
