questions about latency #2600
Comments
Hi @lcksk
The behavior you are looking for is currently a feature request, see #2460.
Thank you for your quick reply. I really appreciate these learning links.
I mainly want to use SRT for screen-cast transmission. When transmitting a mirrored video stream, low latency is the top priority, so I would like each packet to be delivered as soon as possible, rather than all packets having a fixed delay.
And you want to "drop" packets that have not arrived in time (within the maximum latency limit), right?
Yes. If retransmission fails within the set delay time, or a packet for any other reason fails to arrive within that delay, drop it.
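For reference: this "drop if too late" behavior corresponds to the SRTO_TLPKTDROP socket option in the SRT C API, which is already enabled by default in live mode. A minimal sketch, assuming `sock` is an SRT socket created elsewhere:

```cpp
// Minimal sketch: explicitly enabling too-late packet drop.
// Assumes libsrt is installed and `sock` is a valid SRTSOCKET created elsewhere.
#include <srt/srt.h>

bool enable_too_late_packet_drop(SRTSOCKET sock)
{
    int yes = 1;  // boolean socket options are passed as int (0/1)
    // When enabled, packets that cannot be delivered within the configured
    // latency window are dropped instead of delaying newer packets.
    return srt_setsockopt(sock, 0, SRTO_TLPKTDROP, &yes, sizeof yes) != SRT_ERROR;
}
```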
Hi experts, I am a newbie to SRT and would like to ask a naive question: when I set peer-latency and recv-latency on a caller SRT socket in live mode, does this affect the delivery time of every packet, or is it only a maximum delay bound for worst-case packets? Assume my RTT is very low, the physical network can deliver each packet in 20 ms, and there is no network jitter. With the SRT default latency of 120 ms, does such a packet take 20 ms or 120 ms to be received by the listener? In other words, as a live stream, I would like each packet to be delivered from the caller to the listener as quickly as possible, with the configured delay acting only as a tolerance for the worst case, not applied to every packet. Is there a way to achieve this?
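As a side note on the option names: "peer-latency" and "recv-latency" map to SRTO_PEERLATENCY and SRTO_RCVLATENCY (values in milliseconds), and they must be set before connecting. A minimal caller-side sketch in live mode, with illustrative values:

```cpp
// Minimal sketch: creating a caller socket in live mode with explicit latency values.
// All options here are pre-connect options; the values are illustrative only.
#include <srt/srt.h>

SRTSOCKET make_live_caller_socket()
{
    srt_startup();                            // initialize the SRT library once per process
    SRTSOCKET sock = srt_create_socket();

    SRT_TRANSTYPE live = SRTT_LIVE;           // live mode (also the default transtype)
    srt_setsockopt(sock, 0, SRTO_TRANSTYPE, &live, sizeof live);

    int recv_latency_ms = 120;                // receiver-side buffering latency
    int peer_latency_ms = 120;                // latency proposed to the remote peer
    srt_setsockopt(sock, 0, SRTO_RCVLATENCY, &recv_latency_ms, sizeof recv_latency_ms);
    srt_setsockopt(sock, 0, SRTO_PEERLATENCY, &peer_latency_ms, sizeof peer_latency_ms);

    return sock;                              // the caller then connects with srt_connect()
}
```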
I did a simple test: when I set peer-latency and recv-latency to 300 ms on the caller side, the measured picture delay is about 300 ms. If the latency is increased to 1 s, the end-to-end screen delay becomes about 1 s. What I hope for, however, is that even if I set the latency to 1 s, the delay does not actually become 1 s; instead, out-of-order or missing packets wait at most 1 s, while packets that arrive normally are still delivered as fast as possible, unaffected by the latency setting. Is it possible to achieve this by modifying some options?
Maybe what I have understood above is completely wrong; any comment is highly appreciated.
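One way to cross-check the kind of test described above is to poll the connection statistics while streaming. A minimal sketch using srt_bstats(), assuming `sock` is an already connected SRT socket:

```cpp
// Minimal sketch: sampling SRT statistics during a latency test.
// Assumes `sock` is a connected SRTSOCKET; counters are cumulative (not cleared here).
#include <cstdio>
#include <srt/srt.h>

void print_latency_stats(SRTSOCKET sock)
{
    SRT_TRACEBSTATS stats;
    if (srt_bstats(sock, &stats, 0 /* don't clear counters */) == SRT_ERROR)
        return;

    std::printf("RTT: %.2f ms, rcv packets dropped: %d, rcv packets lost: %d\n",
                stats.msRTT, stats.pktRcvDropTotal, stats.pktRcvLossTotal);
}
```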