LOCAL STORAGE DEPLETED #486
Some of these look like performance problems. The first log message is a rather unlikely fallback that triggers when the unit queue for incoming packets is depleted (its size is hardcoded to 32, but there are multiple queues). This queue holds all packets as they come in; in theory they should be quickly processed and passed on, with command packets interpreted and data packets stored in the receiver buffer. From what I have seen in the code, it may happen when the queue buffer is stretched to its limit, but if the dropped packet was a data packet, it still undergoes retransmission.

The second one is something I thought was an IPE (internal program error), but it looks like it may happen during normal processing. SRT (since UDT) features the "ACK response" (ACKACK), which confirms that the sender has received the ACK and removed the acknowledged packets from its sender buffer (otherwise the ACK would have to be repeated). ACK nodes are stored in a special container so that ACKACK can extract the node and update the ACK window. I'm not sure about every situation in which an ACK node can disappear (besides having already been processed by ACKACK once in the past). The only impact is that, in this case, the window that calculates the reception speed and RTT does not get updated with this information.

TLPKTDROP is a mechanism that intentionally drops a packet when it precedes an already received packet that is ready to play; this is preferred to waiting indefinitely for retransmission. It is controlled by a flag (SRTO_TLPKTDROP) and is on by default. Increasing the latency obviously increases the chance that a lost packet will be retransmitted in time. If you also observe drops on the sender (SND-DROPPED in the log), the SRTO_SNDDROPDELAY option may help: its value is an extra delay (in ms) to wait before the sender drops packets, and when set to -1 the sender won't drop packets at all.
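For reference, here is a minimal sketch of how these options could be set through the public C API (assuming a typical SRT build where the header is installed as srt/srt.h); the values are only examples, not recommendations:

```c
#include <srt/srt.h>

int main(void)
{
    srt_startup();

    SRTSOCKET sock = srt_create_socket();
    if (sock == SRT_INVALID_SOCK)
        return 1;

    // Receiver latency (ms): how long a lost packet may wait for
    // retransmission before TLPKTDROP gives up on it.
    int latency_ms = 500;
    srt_setsockflag(sock, SRTO_RCVLATENCY, &latency_ms, sizeof latency_ms);

    // Too-late packet drop is on by default; shown here explicitly.
    int tlpktdrop = 1;
    srt_setsockflag(sock, SRTO_TLPKTDROP, &tlpktdrop, sizeof tlpktdrop);

    // Extra sender-side drop delay (ms); -1 disables dropping on the sender.
    int snddropdelay = -1;
    srt_setsockflag(sock, SRTO_SNDDROPDELAY, &snddropdelay, sizeof snddropdelay);

    // ... bind/connect and transmit as usual ...

    srt_close(sock);
    srt_cleanup();
    return 0;
}
```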
The only thing I'm mainly concerned about is TLPKTDROP. Right now I am testing 3-4 instances of the application with 25 Mbps video over the LAN. The stats show an RTT of 0.06 ms, but I am wondering: if I increase the latency to, say, 500, will it affect performance when I run just one instance at 25 Mbps or even less? Basically, I want to find constant values that won't affect performance if I increase or decrease the number of instances or the video bitrate to a certain extent.
Latency doesn't change anything in the performance. It's just a matter of how long a packet is kept in the buffer (that is, it affects when the packet is "signed off"). It may increase the required size of the receiver buffer, so with some extreme latency values the buffer may need to be enlarged, but I remember people running 1080p/60fps with 6 seconds of latency and the default receiver buffer size. An RTT of 0.06 ms seems unlikely to me; even in a local network I can barely achieve an RTT below 1 ms. Anyway, if the network is more likely to drop packets, a longer latency may indeed be necessary.
Thank you for the information. Regarding latency, I was testing some 20 Mbps videos for error recovery using tc qdisc with latency values ranging from 120 to 1000; at the higher end of that range it was losing the connection, but I was keeping the default buffer sizes.
Hi @ethouris, I increased the latency to 2000 and oheadbw to 40% and could run 4 instances of 25 Mbps for hours without any packet drop, but when I switched to a 60 Mbps video it couldn't survive for more than 2 seconds. When I set the latency back to the default, or increase the buffers to 100000, I can get it working, but there are still some packet drops and nothing is reported in the logs.
@prabh13singh Do the math: you may not have enough buffer space to hold 2000 ms of 60 Mbps video. I guess around 20 MB is required and the default is around 10 MB (SRTO_SNDBUF/SRTO_RCVBUF).
Sorry, I don't understand. What is the 10 MB (SRTO_SNDBUF/SRTO_RCVBUF)? The ratio of the buffers?
SRTO_SNDBUF and SRTO_RCVBUF configure the number of send and receive buffers used to handle retransmission. SRTO_MSS is the size of each buffer (1500), and each buffer holds one packet. Your 60 Mbps video generates around 6000 packets per second; with a latency of 2000 ms you need about 12000 buffers, and the default is lower than that. By reducing the latency to the default you need far fewer buffers.
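A rough sketch of that arithmetic, expressed with the public socket options (SRTO_FC, SRTO_RCVBUF, SRTO_SNDBUF); the helper name and the 25% headroom are my own assumptions, and the exact per-packet overhead differs slightly from the plain MSS:

```c
#include <srt/srt.h>

// Hypothetical helper: size SRT buffers for a given bitrate and latency.
// The sizing rule is illustrative only; options must be set before connecting.
static void size_srt_buffers(SRTSOCKET sock, long long bitrate_bps, int latency_ms)
{
    const int mss = 1500;           // SRTO_MSS default
    const int payload = mss - 28;   // rough allowance for IP/UDP headers

    long long pkts_per_sec   = bitrate_bps / 8 / payload;        // ~5100 pkt/s at 60 Mbps
    long long pkts_in_window = pkts_per_sec * latency_ms / 1000; // ~10200 at 2000 ms

    int fc = (int)(pkts_in_window * 5 / 4);  // flow-control window with ~25% headroom
    int bufbytes = fc * mss;                 // buffer sizes are given in bytes

    srt_setsockflag(sock, SRTO_FC, &fc, sizeof fc);
    srt_setsockflag(sock, SRTO_RCVBUF, &bufbytes, sizeof bufbytes);
    srt_setsockflag(sock, SRTO_SNDBUF, &bufbytes, sizeof bufbytes);
}
```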
But where do we set them? We have m_iSndBufSize and m_iRcvBufSize in core.cpp. I am increasing those and increasing the latency to prevent too-late packet drops. Are we also supposed to change m_iUDPSndBufSize and m_iUDPRcvBufSize?
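As far as I can tell, these internal fields appear to correspond to public pre-bind socket options, so they can be set per socket rather than by editing core.cpp; a small sketch (the 12 MB value is just an example, not a recommendation):

```c
#include <srt/srt.h>

// Sketch only: m_iSndBufSize / m_iRcvBufSize are set via SRTO_SNDBUF / SRTO_RCVBUF
// (see the sketch above), while m_iUDPSndBufSize / m_iUDPRcvBufSize appear to map
// to the OS-level UDP buffer options below. Values are examples.
static void size_udp_buffers(SRTSOCKET sock)
{
    int udpbuf = 12 * 1024 * 1024;  // on Linux, net.core.rmem_max may also need raising
    srt_setsockflag(sock, SRTO_UDP_SNDBUF, &udpbuf, sizeof udpbuf);
    srt_setsockflag(sock, SRTO_UDP_RCVBUF, &udpbuf, sizeof udpbuf);
}
```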
How does the number of application instances affect the performance of the video? If I run one or two instances I see no packet drops, but if I increase the number of instances I see CC errors on the final UDP output, yet no DROPSEQ-like entries in the logs. All I see in the logs of all instances is "No room to store incoming packet", "IPE: ACK node overwritten when acknowledging" and "LOCAL STORAGE DEPLETED". I can't tell where the packets are being dropped.
Please redo the tests on my experimental branch, where I'm attempting to get rid of CPU abuse in the RcvQ:worker thread: https://github.com/ethouris/srt/tree/dev-rcvworker-sleeps
It can't go above 30 Mbps even if I set maxbw=100000000.
The question is whether you could with the previous version (1.2.0). Note that maxbw is only an option to limit the bandwidth; by default it uses as much as is available.
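For completeness, a hedged sketch of the bandwidth-related options (SRTO_MAXBW, SRTO_INPUTBW, SRTO_OHEADBW); the example values are for illustration only:

```c
#include <srt/srt.h>
#include <stdint.h>

// Illustrative values only. SRTO_MAXBW and SRTO_INPUTBW are in bytes per second.
static void configure_bandwidth(SRTSOCKET sock)
{
    int64_t maxbw = 0;               // 0 = cap derived from input rate + overhead; -1 = no explicit cap (default)
    srt_setsockflag(sock, SRTO_MAXBW, &maxbw, sizeof maxbw);

    int64_t inputbw = 60000000 / 8;  // expected stream rate in B/s (60 Mbps here)
    srt_setsockflag(sock, SRTO_INPUTBW, &inputbw, sizeof inputbw);

    int oheadbw = 40;                // retransmission overhead allowance, in percent
    srt_setsockflag(sock, SRTO_OHEADBW, &oheadbw, sizeof oheadbw);
}
```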
Yes, by also increasing buffer sizes. |
Ok, then please try to get some information about the network characteristics (RTT, available overall bandwidth) of the network you used for testing, as well as about the stream you are trying to send (video parameters). We'll try to recreate this environment in our lab.
Is this issue still being considered? We are also seeing errors like:
They occur very sporadically, but we would like to get to the root cause. I'd be happy to provide more detail. We are currently running SRT 1.4.1.
Hi @amiller-isp. "LOCAL STORAGE DEPLETED" happens when there is no available memory unit in the pre-allocated pool to place an incoming packet into. When the pool runs out, SRT tries to grow it dynamically; this message means that this dynamic increase has failed. I've prepared this branch with additional logs: branch.
There is a synchronization issue in accessing [...]. At the same time, another thread may call [...]. The [...]
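To make the two previous comments more concrete, below is a hypothetical, heavily simplified sketch of a unit pool of this kind; it is not SRT's actual CUnitQueue code and all names are invented. It only illustrates why a failed dynamic growth leads to dropping the packet, and why concurrent access to the free-unit count needs synchronization:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical, simplified unit pool; not SRT's actual CUnitQueue code. */
typedef struct {
    char          **free_units;  /* stack of pointers to free packet buffers */
    int             capacity;    /* total units allocated so far */
    int             free_cnt;    /* units currently available */
    size_t          unit_size;
    pthread_mutex_t lock;        /* without this, concurrent take/return race on free_cnt */
} unit_pool;

/* Pop a free unit, growing the pool if it is depleted. NULL means even the
 * growth attempt failed, i.e. the incoming packet would have to be dropped
 * (the "LOCAL STORAGE DEPLETED" case). */
static char *pool_take(unit_pool *p)
{
    pthread_mutex_lock(&p->lock);
    if (p->free_cnt == 0) {
        int extra = p->capacity;  /* double the pool */
        char **grown = realloc(p->free_units,
                               (size_t)(p->capacity + extra) * sizeof *grown);
        if (grown != NULL) {
            p->free_units = grown;
            for (int i = 0; i < extra; ++i) {
                char *u = malloc(p->unit_size);
                if (u != NULL)
                    p->free_units[p->free_cnt++] = u;
            }
            p->capacity += extra;
        }
    }
    char *u = (p->free_cnt > 0) ? p->free_units[--p->free_cnt] : NULL;
    pthread_mutex_unlock(&p->lock);
    return u;
}

/* Return a processed unit so it can hold a new incoming packet. */
static void pool_return(unit_pool *p, char *u)
{
    pthread_mutex_lock(&p->lock);
    p->free_units[p->free_cnt++] = u;
    pthread_mutex_unlock(&p->lock);
}
```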
So we considered deploying your branch fix, but we would have to deploy it into our production environment, so it would need to be fully validated first. The message is very infrequent; in our environment it appears roughly once a day per hundred channels.
The branch I referred to above only has improved logs to help track down the issue.
Hi @amiller-isp @prabh13singh |
@maxsharabayko |
@amiller-isp |
I am seeing these errors in the logs on the receiver side quite often, and lots of packets are being dropped:
18:16:46.245466/SRT:RcvQ:workerE: SRT.c: LOCAL STORAGE DEPLETED. Dropping 1 packet: DATA: msg=11813898 seq=1232055391
18:58:02.878448/SRT:RcvQ:workerE: SRT.c: IPE: ACK node overwritten when acknowledging 1095568, ack extracted: 1330856570
20:40:51.114986/SRT:TsbPd D: SRT.c: TLPKTDROP seq 1576924112-1576924113 (1 packets)
20:40:51.115012/SRT:TsbPd*E: SRT.t: %143964518:TSBPD:DROPSEQ: up to seq=1576924113 (2 packets) playable at 20:40:51.111753 delayed 3.240 mspackets_dropped:1