Keep-Alive missing for HTTP responses #279
Comments
Yes, OME does not implement the full HTTP specification. But I have a question: if the TS download time is longer than the TS duration, and it is a live stream, is it possible to play without stuttering? It is possible in VoD, but in a live stream it will eventually stutter again once the player underruns by more than the server's buffer can cover. Any comments on this would be appreciated.
Yes, it is possible if the code running in the web browser can download multiple .ts files at the same time. For example, if the playlist has 6 chunks of 2 s and the web browser is 4 chunks behind (between 8 and 10 seconds), then it could download multiple chunks (5 and 6) at the same time (if the internet connection is not saturated downstream), and thus have the next chunks available when they are needed. However, that is not what this issue is about. This issue is about the latency between the edge server and the client's web browser being paid three times for each download of a .ts file, which severely cuts into the time the web browser actually has to download the next chunk.
I'll review this to see if it's a feature that can be easily added in the current structure.
Any chance to get a first review of this topic? Without support for reusing TCP connections (a major part of the HTTP/1.1 standard), it is impossible to stream anything over HLS/DASH/LL-DASH at more than 400 Kb/s to clients who have >550 ms latency (under load) and some packet loss, so OME is currently not really usable for international video streaming from the usual cheap server locations. What I have already tried to improve the situation:
However, for each .ts file download it still takes at least 1.5 seconds for the initial connection setup plus TLS 1.3, and then roughly two more seconds for TCP slow start to reach a reasonable download bitrate on a connection with, for example, 550 ms latency. If the transport then also has, say, a 0.5% chance of packet loss, even a 6-second chunk size and 500 Kb/s CBR video quality will not play without stuttering after almost every chunk. If the web browser were allowed to reuse the TCP connection, playback would be just fine, even with much higher packet loss and latency (for example a 3G cellular user in Australia trying to watch a stream from a server in Europe).
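To make the time budget concrete, here is a rough back-of-the-envelope sketch using the numbers above; the download time once the connection is up to speed is an assumed figure for illustration, not a measurement:

```python
# Rough per-chunk time budget when every .ts request starts on a cold TLS
# connection. All numbers are illustrative assumptions from the example above.
rtt = 0.55                 # seconds round-trip to the edge server
handshake_rtts = 3         # TCP handshake + TLS 1.3 handshake + the GET itself
slow_start_penalty = 2.0   # seconds lost before TCP slow start reaches a usable rate
chunk_len = 6.0            # seconds of video per .ts segment
download_time = 4.0        # assumed time to download the chunk once up to speed

per_chunk = handshake_rtts * rtt + slow_start_penalty + download_time
print(f"per-chunk fetch time: {per_chunk:.2f}s for {chunk_len:.1f}s of video")
# 3 * 0.55 + 2.0 + 4.0 = 7.65 s > 6.0 s: the player buffer drains on every chunk.
# With a reused (keep-alive) connection the handshake and slow-start terms
# disappear, leaving 4.0 s < 6.0 s and some headroom for jitter and packet loss.
```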
@basisbit We will implement this. However, these days we are very busy working on other commercial projects, and for OME we are working on the highest-priority tasks (WebRTC input, socket performance improvements, Signed Callback). We will consider this as the next priority.
Linking this to #268, as the long connection-establishment time seems to be one of the major reasons why LL-DASH from an OME server dies after a few seconds (approximately one or two chunk lengths) of playback when used over typical internet connections with less-than-ideal latency.
Any update on this? It makes it very difficult to use OME for live video streaming to worldwide visitors, even when using servers in multiple regions to stream 1 Mb/s video. Edit: I wonder if it would be easier to always put nginx or HAProxy in front of OME's HTTP port, which would let server operators adjust any HTTP- or TLS-related options using those tools' documentation. Did anyone here already benchmark how such a reverse proxy in front of OME would impact CPU and memory requirements?
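One quick way to see how much the missing connection reuse costs, whether talking to OME directly or to a reverse proxy in front of it, is to time repeated segment downloads with and without a persistent client connection. A minimal sketch with Python's `requests` library; the URL is a placeholder:

```python
import time
import requests

# Placeholder URL; point this at a segment served either directly by OME or
# by a reverse proxy (nginx/HAProxy) sitting in front of it.
URL = "https://edge.example.com:3333/app/stream/seg_001.ts"
N = 5

def avg_seconds(fetch):
    t0 = time.monotonic()
    for _ in range(N):
        fetch(URL, timeout=10).raise_for_status()
    return (time.monotonic() - t0) / N

# A new connection per request: every fetch pays TCP + TLS setup again.
cold = avg_seconds(requests.get)

# A session that keeps the connection open between requests. If the server
# (or proxy) supports keep-alive, the difference should roughly equal the
# handshake cost; if it closes the connection anyway, both numbers match.
with requests.Session() as session:
    warm = avg_seconds(session.get)

print(f"avg per request: new connection {cold:.2f}s, reused connection {warm:.2f}s")
```

If the two averages come out nearly identical, the server (or proxy) is closing the connection after each response and the proxy would not help with this particular problem.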
@basisbit Once again, I apologize for the delay in this task. We also think we need keep-alive, and I would like to remind you that this task has the highest priority for me, but it is being delayed by other commercial projects. (That commercial project is necessary for our "survival", so I can't lower its priority.) I will try to complete the development before the end of this year.
It took too long to release the HTTP/1.1 persistent connections feature. Many thanks to everyone who has been waiting for it. I am working hard on LLHLS support these days. The new HTTP module has so far only been tested in my environment, so many bugs may still turn up, and you should not use the master branch in a commercial environment yet. I hope many people will test the new HTTP module. I am well aware that OME becomes a more stable system as many contributors test it in various environments. Thanks to all contributors.
Hi @getroot, thank you for this. I tried a new build and API calls are not working, showing this error:
Is that enough of a clue?
@bchah Thanks for the quick report. I fixed the problem and committed it.
Since this feature has been released, I am closing this issue. If you need a discussion about Keep-Alive or persistent connections, please create a new issue.
Describe the bug
The HTTP server implementation of OME advertises that it supports HTTP/1.1. However, it ignores the Keep-Alive header in HTTP requests, does not send a `Connection: Keep-Alive` header, and also does not send the keep-alive parameters (for example `Keep-Alive: timeout=5, max=1000`) in HTTP responses. This makes it impossible to successfully use HLS/DASH streaming with small chunk lengths in environments with high latency between the edge and the web browser.
Hls.js and dash.js do not download .ts chunk files concurrently, but only one at a time. With the current implementation of OME, the latency/round-trip time to the edge server is paid several times per chunk: the browser starts a new TCP connection, waits for the reply, starts the TLS handshake with the client hello, waits for the reply containing the TLS server hello, sends the client key exchange, waits for the server's change-cipher-spec response, and only then sends the HTTP GET and waits for the response.
If all of this waiting caused by network latency plus the time to actually download the video chunk is longer than the chunk length, the video will always stutter, no matter the settings.
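For reference, here is a minimal way to observe this from the client side (Python standard library only; the host, port, and path are placeholders, and plain HTTP is used for brevity, use `http.client.HTTPSConnection` when TLS is enabled). It sends two requests over one connection and prints the `Connection` header the server answers with:

```python
import http.client

# Placeholders; substitute a real OME edge host/port and a playlist/segment path.
HOST, PORT, PATH = "edge.example.com", 3333, "/app/stream/playlist.m3u8"

conn = http.client.HTTPConnection(HOST, PORT, timeout=10)
for i in range(2):
    try:
        conn.request("GET", PATH, headers={"Connection": "keep-alive"})
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection could be reused
        # A server that supports persistent connections answers with
        # "Connection: keep-alive" (or simply omits the header in HTTP/1.1);
        # "close" means every segment pays the full handshake again.
        print(f"request {i}: status={resp.status} Connection={resp.getheader('Connection')}")
    except (http.client.RemoteDisconnected, ConnectionError) as e:
        # Raised if the server drops the socket without announcing "close".
        print(f"request {i}: server closed the connection: {e}")
        break
conn.close()
```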
Expected behavior
Correctly implement HTTP/1.1 persistent connections (keep-alive).
Server (please complete the following information):
I think the problem is the implementation in https://github.com/AirenSoft/OvenMediaEngine/blob/master/src/projects/publishers/segment/segment_stream/segment_stream_server.cpp#L207
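This is not a patch against that file, just a sketch of the decision the server side would need to make; the HTTP/1.1 persistence rules (RFC 7230, section 6.3) are small:

```python
def should_keep_alive(http_version: str, request_headers: dict) -> bool:
    """Decide whether the connection may be reused, per RFC 7230 section 6.3.

    http_version is e.g. "HTTP/1.0" or "HTTP/1.1"; request_headers maps
    lower-cased header names to their values.
    """
    connection = request_headers.get("connection", "").lower()
    tokens = {t.strip() for t in connection.split(",") if t.strip()}

    if "close" in tokens:
        return False                     # client explicitly asked to close
    if http_version == "HTTP/1.1":
        return True                      # persistent by default
    if http_version == "HTTP/1.0":
        return "keep-alive" in tokens    # only if the client opted in
    return False

# Typical hls.js/dash.js requests over HTTP/1.1 qualify for reuse:
assert should_keep_alive("HTTP/1.1", {}) is True
assert should_keep_alive("HTTP/1.1", {"connection": "close"}) is False
assert should_keep_alive("HTTP/1.0", {"connection": "keep-alive"}) is True
```

The response side additionally needs an accurate Content-Length (or chunked transfer coding) on every response, so the client can tell where one response ends and the next begins on the same connection.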