Error: No room to store incoming packet #409
The "No room to store incoming packet" message appears when the RX buffer gets completely full and is waiting for the receiving application to extract the waiting packets. This If you are using the SRT 1.2 version stransmit, just note that this was at that time only a testing tool and choking on the output medium may prevent it from reading from SRT medium on time. Although I have never seen it happening on UDP output, only with I would keenly see the debug logs from this session and see how fast the data extraction on RX happens, especially at the moment when the RX buffer level increases. Mind also that the "latency control" facility (TSBPD) is using the RX buffer to magazine the incoming packets until their "time to play" comes. I can see that you have set quite a long latency (5 seconds) and mind that effectively the RX buffer will have to magazine packets for 5 seconds of stream constantly. Might be that with high bitrates combined with high latency the RX buffer size might be too small, or at least balance on the edge. You can change the size of the receiver buffer through the |
Hello Ethouris, I'm also having a similar issue trying to send a 60 Mbps HEVC TS: I can only get 42.5 Mbps at the output of SRT, while trying the same with a 25 Mbps TS works fine. Where can I change the parameters you mentioned above? Something like this?
./srt-live-transmit -v -r:30 -s:30 -t:0 "udp://@224.168.204.41:2041" "srt://:9001?latency=500&sndbuf=8192&fc=25600"
./srt-live-transmit -v -r:30 -s:30 -t:0 "srt://127.0.0.1:9001?latency=500&rcvbuf=8192&fc=25600" "udp://@224.168.204.250:2250"
Thanks
@rmsoto you may want to set the max output bandwidth, which is capped at 30 Mbps by default (this option is not working well, and that could explain the 42.5 Mbps you get). The option "maxbw" is in bytes/sec.
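For example (illustrative numbers only, reusing the sender command from above): a 60 Mbps TS is 7 500 000 bytes/s, so with roughly 25% headroom for retransmission overhead you could try ./srt-live-transmit -v -t:0 "udp://@224.168.204.41:2041" "srt://:9001?latency=500&maxbw=9375000" - here maxbw=9375000 bytes/s corresponds to 75 Mbps. The right amount of headroom depends on the loss rate you expect on the link.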
Sorry, I just read this issue again; my proposal would not fix the 'no room' error on the receiver, but only ensure proper drainage of the sender's buffers.
Hi @jeandube, maxbw helped to achieve the same output bitrate as the input one, but I still get the error: 14:44:53.373608/SRT:RcvQ:worker*E: SRT.c: %328810773:No room to store incoming packet: offset=0 avail=0 ack.seq=2053936868 pkt.seq=2053936868 rcv-remain=8191 Increasing the parameters rcvbuf=16384&fc=51200 does not give a good result, as I get less bandwidth than expected... I think I'm not using the correct units. Thanks
@rmsoto Good that you fixed the sender, but the 'no room' problem on the receiver is an indication that the receiving app is not draining the received packets fast enough. Increasing the receive buffer just delays the problem. By fixing the sender you just exacerbated the receiver's problem.
I think it's now working better since we changed the values of sndbuf, rcvbuf and the flight window inside core.cpp and recompiled. Let's see how it works over 24 h. Thanks!
@ethouris I also got the same error, and I checked that the number of packets dropped by the kernel is 0:
In this case, the kernel should start dropping packets, right? Or did I misunderstand the RX buffer you mentioned?
Again: this "No room to store incoming packet" may happen in two cases: either the application does not read the received packets fast enough, or the receiver buffer is too small for the configured latency and bitrate.
Both of these things happen when the packet has already passed the network link, that is, when the work of transporting the packet through the network has already finished successfully. It may happen that the application experiences a temporary slowdown, which gets evened out later; in this case increasing the buffer size will help. But not in the case when the application systematically increases the delay between the moment a packet is handed over to it and the moment it reads it - in that case a buffer of any size will eventually overflow.
@ethouris so the RX buffer is not the Linux kernel buffer, right? I increased rmem_default and rmem_max but am still getting the same error.
We are talking about SRT here, a userspace solution that uses UDP instrumentally to implement the transmission. It provides its own buffers, which work according to the rules it defines. Kernel parameters have nothing to do with this. Let me be more precise about the above: both of these things happen when the packet has already passed the network link and all system buffers, and is about to be stored by SRT in its private buffers.
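To make the application-side contract concrete, here is a minimal sketch of a draining loop against the libsrt C API (`consume` is a hypothetical stand-in for whatever the app's output path is):

```c
#include <srt/srt.h>

extern void consume(const char* data, int len);  // hypothetical output path

// SRT expects the application to keep extracting packets continuously;
// any pause here lets SRT's private RX buffer fill up.
void drain(SRTSOCKET sock)
{
    char buf[1456];  // enough for the default live-mode payload
    for (;;) {
        int n = srt_recvmsg(sock, buf, sizeof buf);
        if (n == SRT_ERROR)
            break;   // connection closed or a real error
        // Hand the packet off quickly; do slow work (disk, decode) elsewhere.
        consume(buf, n);
    }
}
```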
As you have once faced this problem, which likely results from performance problems, would you be able to test the version from this branch? https://github.com/ethouris/srt/tree/dev-rcvworker-sleeps
This looks like a problem with receiver buffer size. The default receiver buffer size is 8192 packets (which matches rcv-remain=8191 in the logs above).
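For a rough sense of scale (an illustrative calculation): 8192 packets × ~1316 bytes of payload ≈ 10.8 MB of capacity. A 16 Mbps stream fills 2 MB/s, so with the 5-second latency used in this thread the latency compensation alone pins about 10 MB of it; 17 Mbps needs about 10.6 MB, leaving essentially no slack for loss recovery or reading jitter - which matches the report below that 16 Mbps still works while 17 Mbps overflows.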
Related to #355
PR #1909 further improves the log message. The following two reasons for this message to appear can now be distinguished from the log message.
1. App is not reading fast enough. Example log message:
No room to store incoming packet seqno 1887365986, insert offset 1711. Space avail 190/849 pkts. Packets ACKed: 658 (TSBPD ready in -1680 : -1219 ms), not ACKed: 0, timespan 460 ms. STDCXX_STEADY drift 0 ms.
"TSBPD ready in -1680 : -1219 ms" indicates that there are 658 packets available for reading; the earliest of them has already been waiting for about 1.7 seconds, and even the most recent one for about 1.2 seconds.
2. Too small buffer. Example log message when there is almost nothing for the app to read, and not enough receiver buffer. Knowing the configured
Issue Status: Closing this issue due to the lack of activity (since Jun 11, 2020). Please feel free to reopen if further questions arise.
Hello. The receiver buffer settings do not work according to your recommendations from:
Use case:
The result:
So, all your recommendations for receiver buffer settings are useless in my case. I am currently considering using a message broker; please let me know if there is some other simple way to reliably write stream data to disk or to wait for the HDD.
EDITED: Thanks
Hi @yuri-devi
It would further help if you could use the latest SRT master with the updated warning message. |
@maxsharabayko Thank you for the answer. I didn't know that the flow control window size option has priority over the receiver buffer parameter.
@maxsharabayko Could you please check my network values?
Setting up the receiver buffer:
Video bitrate: 6000 Kbit/s
The result of rcvbuf:
Setting up the flow control window size:
My network bandwidth: 11140518 bytes/s
The result of fc:
Is it correct? Thanks
Hi @yuri-devi
It is indeed not obvious at all and needs to be improved (#700).
The latest configuration guide can be found here. Assuming 0.15 sec RTT and 0.6 sec latency, you would roughly need to store your 6 Mbps stream for 0.6 seconds. So the buffer must be around 0.6 s × 6 Mbps = 450 000 bytes. With the provided function I get a target buffer size of 565 248 bytes and FC = 384 packets. CalculateTargetRBufSize(150, 6000000, 1316, 600, 1500); With your environment the default configuration should be more than enough, unless you would like to save memory.
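For readers who want to reproduce this arithmetic, here is a rough sketch of what such a helper might look like (this is not the official function from the guidelines; the integer truncation was chosen so the figures above - 384 packets and 565 248 bytes - come out, and the real helper may round differently):

```c
#include <stdio.h>

// Sketch of a target receiver-buffer estimate: the buffer must hold
// (latency + RTT/2) worth of stream, and each buffered packet occupies
// MSS minus 28 bytes of UDP/IPv4 headers.
void CalculateTargetRBufSize(int rtt_ms, long long bps, int payload_bytes,
                             int latency_ms, int mss_bytes)
{
    double seconds = (latency_ms + rtt_ms / 2.0) / 1000.0;
    long long fc_pkts = (long long)(bps / 8.0 * seconds / payload_bytes);
    long long rbuf_bytes = fc_pkts * (mss_bytes - 28);
    printf("SRTO_FC = %lld packets, SRTO_RCVBUF = %lld bytes\n",
           fc_pkts, rbuf_bytes);
}

int main(void)
{
    // The values discussed in this thread: 150 ms RTT, 6 Mbps stream,
    // 1316-byte payload, 600 ms latency, 1500-byte MSS.
    CalculateTargetRBufSize(150, 6000000, 1316, 600, 1500);
    // Prints: SRTO_FC = 384 packets, SRTO_RCVBUF = 565248 bytes
    return 0;
}
```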
@maxsharabayko Thanks for your help. I tried the CalculateTargetRBufSize logic on my side and it's not working; I still see errors like "no room to store incoming packet". According to your recommendations, the result of the CalculateTargetRBufSize method would be 565 248 bytes and FC = 384 packets, but for my 6000 Kbit/s bitrate these aren't the correct values, because they are much less than the defaults (12058624 and 25600 respectively) and they don't make the results any better. So for now I just increase the default rcvbuf and fc values manually, but I still think the control of these parameters could be more user-friendly.
@yuri-devi Could you share a network capture on the sender side, at least up until the moment the receiver shows the warning?
@maxsharabayko Sorry for the delay. Yes, I will do it when I have the opportunity. Thanks.
Same problem in Flussonic:
Dec 26 21:41:59 localhost run[218599]: 21:41:59.274365/SRT:RcvQ:w45964!W:SRT.qr: @494418234:No room to store incoming packet: offset=3292 avail=3072 ack.seq=615021140 pkt.seq=615024432 rcv-remain=5119 drift=288
@Neolo please use a newer version of the SRT library.
The app is Flussonic, and it's the latest version; the log is still flooded with these messages. I see no reason why the app wouldn't read data from SRT. It does not make any sense. 2023-01-12 03:41:46.712 <0.328.0> 03:41:46.712550/SRT:RcvQ:w12379!W:SRT.qr: @518511538:No room to store incoming packet: offset=3359 avail=3170 ack.seq=423736348 pkt.seq=423739707 rcv-remain=5021 drift=-422
The latest release is 1.5.1, and in this version this message looks more or less like this:
What you have shown is the message from the older version, and the information provided by this message is too sparse to allow investigating anything. The reason could be the application reading too slowly, but it can just as well be some rare internal problem in SRT (clock skew, unusual drift, a bitrate spike etc.), at least in theory, as a result of which the packets not yet ready to be delivered suddenly occupy too big a portion of the buffer. Only in the 1.5.1 version does this message contain enough information to determine whether this was the case and what might have caused it. Ah, and one more thing - we have sometimes seen applications that do not exactly follow SRT's requirement to read packets all the time, and allow themselves to stop reading for a longer while (treating SRT as if it were UDP, which will at worst drop the packets that can't be stored in the buffer, while you can always resume reading and everything's ok). Anyway, from the latest SRT's messages this will be known - how many packets are waiting for their right time (so it's SRT keeping them) and how many packets are waiting for the application to pick them up (so it's the application).
Version 1.5.1 here. I'm getting something like this while starting up a stream with a huge probe buffer in ffmpeg, since the keyframes are sparse:
I tried raising rcvbuf, but it seems the problem here is with the packet count - is it possible to raise that one?
The time in "TSBPD ready in" is the time remaining for the earliest packet that is ready for extraction (that is, in non-blocking mode this is how long you would have to wait until a call to the reading function actually returns a packet). If this time is negative, it means that the application doesn't read packets from the SRT socket fast enough (not as fast as they come in, or for some reason it has stopped or paused doing it).
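As an aside, a receiver in non-blocking mode doesn't have to compute this waiting time itself: it can block on SRT's epoll facility, which signals read-readiness once the earliest packet's time to play has come. A minimal sketch against the libsrt C API (the 100 ms timeout is arbitrary):

```c
#include <srt/srt.h>

// Wait for read-readiness instead of polling srt_recvmsg() in a loop.
void wait_and_read(SRTSOCKET sock)
{
    int eid = srt_epoll_create();
    int events = SRT_EPOLL_IN | SRT_EPOLL_ERR;
    srt_epoll_add_usock(eid, sock, &events);

    SRTSOCKET ready[1];
    int rnum = 1;
    // Returns > 0 once a packet is ready for extraction (or on error).
    if (srt_epoll_wait(eid, ready, &rnum, NULL, NULL, 100 /* ms */,
                       NULL, NULL, NULL, NULL) > 0) {
        char buf[1456];
        int n = srt_recvmsg(sock, buf, sizeof buf);
        (void)n;  // hand the packet off here
    }
    srt_epoll_release(eid);
}
```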
Hello. I'm using I already made the recommended change:
But even so, I still get the following log after some running time (≃12 minutes):
The encoder that is generating and transmitting the stream (obviously configured in I'm using v1.5.3 and using this script:
@quintanaplaca Please see the Configuration Guidelines.
I'm taking a look at what should be in this log, and this condition seems not to be satisfied (as its content isn't printed in the log):
As
This is my first real contact with this protocol, so I feel like I missed something. However, even though it has a higher limit than expected, I think it should work. Anyway, I did a simple test, but reversed the request side; that is, now I am as
So I suspect this is related to the
The main reason for a receiver buffer overflow is that the application doesn't read packets as fast as the new ones come in. In live mode the situation is a bit specific because:
The latency compensation is simply the number of packets that represent a duration equal to the latency value - so it depends on the stream bitrate and the latency value. It might be that the default buffer size is not big enough for this - that should be checked against the guidelines, as Max showed you above. Theoretically, the latency compensation is the only reason for buffering, but for safety reasons the buffer should also have space for:
They should normally be empty, but they might temporarily grow and lead to a buffer overflow if they grow too big. We have many times suspected problems with the TSBPD calculation, which could result in the latency-compensation fragment of the buffer being temporarily too big. We have never detected such a case for sure so far, but just in case, the information about the TSBPD time span in the buffer was added to this log. As you didn't have it in your log, this is weird, but it must have resulted from some different settings on the connection sides. This information is vital for detecting what the reason for this was.
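To put rough numbers on the latency-compensation part (an illustrative calculation using the 6 Mbps / 600 ms example from earlier in this thread): 6 Mbps is 750 000 bytes/s, so latency compensation alone keeps about 0.6 s × 750 000 B/s ÷ 1316 B/pkt ≈ 342 packets in the buffer at all times; everything beyond that is the slack left for reading jitter and loss-recovery gaps.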
@ethouris, thank you for your detailed explanation of the topic. The flow has been working for almost a day, as I reported in the previous post. Anyway, I intend to debug that logic (receiving the stream as
I'm just a little worried because I'm using the latest version of the repository, and I did as per the guide, without removing any parameter that would imply hiding information about the TSBPD. I may try to build this again.
Note that settings set on only one side may influence the whole process, that is, they may apply to the peer too. Therefore, make sure you know which options are set on both connection sides.
Hello
We are experiencing the "No room to store incoming packet" error under certain conditions. The source multicast is from a professional encoder (Tandberg).
Condition 1: When the TS bit rate of the source is up to 16 Mbps, SRT works perfectly.
Condition 2: When we increase the TS bit rate higher (e.g. 17 Mbps), we see the error message below.
RX side:
Accepted SRT source connection
18:11:36.374843/SRT:RcvQ:workerE: SRT.c: %1005060842:No room to store incoming packet: offset=0 avail=0 ack.seq=489148889 pkt.seq=489148889 rcv-remain=8191
18:11:36.375572/SRT:RcvQ:workerE: SRT.c: %1005060842:No room to store incoming packet: offset=1 avail=0 ack.seq=489148889 pkt.seq=489148890 rcv-remain=8191
TX side:
155288 bytes lost, 21560028 bytes sent, 31287900 bytes received
15353772 bytes lost, 21560028 bytes sent, 46486384 bytes received
31324748 bytes lost, 21560028 bytes sent, 62457360 bytes received
47340468 bytes lost, 21560028 bytes sent, 78473080 bytes received
63335132 bytes lost, 21560028 bytes sent, 94467744 bytes received
Condition 3: We start SRT streaming initially at 16 Mbps and, when it is fully functioning, increase the source TS bit rate on the fly to 25 Mbps from the encoder settings. SRT still continues to function without any errors, and on the RX side we see 25 Mbps on the output multicast.
It seems SRT is currently handling traffic only up to 16 Mbps. Any suggestions?
Our commands:
TX: stransmit "udp://@226.24.112.4:2000?ttl=64" "srt://1.2.3.4:7005?mode=caller&latency=5000" -t:-1 -s:3000
RX: stransmit "srt://:7005?mode=listener" "udp://@225.10.10.10:2000?ttl=64" -t:-1 -s:3000 -v