Support connection reuse / keep-alive #14523
Previous discussion related to this: #13734 (comment).
Thanks for the reminder. I don't even remember that ticket :/ As that ticket is closed, let's continue here. I've just done some experiments with
@yan12125 tried that with YouTube, but the connection is dropped: the server is setting
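(For illustration only, this is not code from the branch under discussion and the URL is a placeholder: a minimal way to check whether a server permits reuse at all. A `Connection: close` response header means the server drops the socket after every response, no matter what the client does.)

```python
import requests

CHUNK_URL = "https://example.com/video/chunk"  # placeholder, not a real chunk URL

with requests.Session() as session:  # a Session keeps connections in a pool for reuse
    for _ in range(3):
        resp = session.get(CHUNK_URL)
        # "Connection: close" here means the server ends the TCP connection after
        # this response, so client-side keep-alive cannot help.
        print(resp.status_code, resp.headers.get("Connection"))
```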
For me, twitch-connection-reuse-via-requests (5 reqs, 36-41 sec):
master (5 reqs, 31-36 sec):
That video is a good test case. I'd like to share my testing results, too. (removed irrelevant logs) I've updated that branch with some more bug fixes. There shouldn't be any performance changes compared to the previous version. On twitch-connection-reuse-via-requests
On master
Environment:
In 5 trials.
@lilydjwg: I got "Connection: keep-alive" for https://go.twitch.tv/videos/182470407. Are you testing with Twitch VODs?
I tried with YouTube in the previous comment. That Twitch video is so huge that I only tested a small percentage of it. 1min47s to 2.0% with
Neither run is the first invocation, to avoid the startup delay. The speed is repainted on a new line on every request due to the logging.
For the nrk video, it's 00:20 vs 00:54. The chunk size (2.8M) is smaller than the Twitch ones (4.7M), and I only get up to 2.4MiB/s with
I noticed that I can download Twitch videos without using a proxy (I'm in China). Here are the results for a direct connection: 56s to 0.5% at ~1.7MiB/s with
The proxy I'm using runs bbr, and that seems to make a difference.
Hmm, what's a bbr proxy?
@yan12125 it's a shadowsocks proxy. I mean the server is using the bbr TCP congestion control algorithm.
Thanks for the info and testing results.
ffmpeg added support for both HTTP persistent connections in FFmpeg/FFmpeg@b7d6c0c and (simulated) HTTP pipelining in FFmpeg/FFmpeg@1f0eaa0; both are enabled by default.
@remitamine Yeah! I think it's working. To verify this, I installed ffmpeg-20171227-8f9024f-win64-static and started capturing a Twitch stream with:
Then I fired up Wireshark and set the display filter to
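(A side note, not part of the comment above: besides packet capture, connection reuse can also be spot-checked from Python itself. The host below is a placeholder, and `conn.sock` is an implementation detail of `http.client` rather than a documented API.)

```python
import http.client

conn = http.client.HTTPSConnection("example.com")  # placeholder host

for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # the body must be drained before the socket can be reused
    # With keep-alive, the same local port shows up for every request; if the
    # server answered "Connection: close", http.client has already dropped the
    # socket and will reconnect on the next request.
    local = conn.sock.getsockname() if conn.sock else "closed by server"
    print(resp.status, resp.getheader("Connection"), local)

conn.close()
```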
Tested with:
1. Twitch VOD https://go.twitch.tv/videos/182470407
2. Trailing garbage in gzipped content; see the new test in test_http.py (a rough sketch follows below)
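(A rough sketch of the second case, not the actual test in test_http.py: a gzip body with junk bytes appended after the compressed stream, decoded in a way that tolerates the leftovers.)

```python
import gzip
import zlib

payload = b"<html>video page</html>"
body = gzip.compress(payload) + b"\xff\x00 trailing garbage"  # junk after the gzip member

# A plain gzip.decompress(body) may reject the trailing bytes, so use a raw
# decompressobj and simply ignore whatever ends up in unused_data.
d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16 + MAX_WBITS: expect a gzip header
assert d.decompress(body) == payload
print("ignored trailing bytes:", d.unused_data)
```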
I rebased the reuse commit onto current master and tested with https://www.youtube.com/watch?v=Fhs_H9hgXwM. It did not seem to make much difference. I enabled logging with
and kept getting this message after each chunk.
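(In case it helps others reproduce this, here is one way to watch connection behaviour; it is not necessarily the logging setup used above, and the URLs are placeholders. urllib3 emits a "Starting new HTTPS connection" debug line whenever a fresh socket is opened, so seeing it once per chunk means nothing is being reused.)

```python
import logging

import requests

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

CHUNK_URLS = ["https://example.com/chunk1", "https://example.com/chunk2"]  # placeholders

with requests.Session() as session:
    for url in CHUNK_URLS:
        session.get(url)  # only the first request should log a new connection
```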
How can this unreferenced commit be accessed with git? I would like to try it, but cherry-pick cannot find the hash. I use youtube-dl through a proxy, and reconnecting to an HTTP CONNECT proxy, which then must reconnect to the server for each HLS fragment, is very slow; it would benefit from persistent connections. I have tried using --hls-prefer-ffmpeg, but it disconnects/reconnects for each HTTPS fragment as well. Edit: I applied it manually. It does not respect the --proxy argument, but by setting the "https_proxy" environment variable, the download from ITV is much, much faster using a single persistent connection.
Adds support for HTTPS proxies and persistent connections (keep-alive) Closes #1890 Resolves #4070 Resolves ytdl-org/youtube-dl#32549 Resolves ytdl-org/youtube-dl#14523 Resolves ytdl-org/youtube-dl#13734 Authored by: coletdjnz, Grub4K, bashonly
Many videos from YouTube are served as a lot of small chunks. Creating one connection per file download significantly slows down the speed: I only get up to 200-500KiB/s for a chunk before the connection is closed and a new one is opened for the next chunk. When downloading a single big video file from YouTube, I can get 5-6MiB/s. (The actual speed difference depends on the available bandwidth and round-trip time.)
TCP slow start significantly limits the speed while a connection is warming up. With connection reuse / HTTP keep-alive we can avoid repeating that ramp-up phase for every chunk.
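A rough illustration of the pattern described here (placeholder URLs, not youtube-dl's actual downloader code): with a persistent connection, only the first chunk pays the TCP/TLS handshake and slow-start ramp-up.

```python
import requests

CHUNK_URLS = [f"https://example.com/video/seg{i}.ts" for i in range(100)]  # placeholders

# Status quo: one connection per chunk, so every request starts from a cold TCP window.
for url in CHUNK_URLS:
    requests.get(url)

# Requested behaviour: keep-alive via a session, so later chunks reuse the warmed-up connection.
with requests.Session() as session:
    for url in CHUNK_URLS:
        session.get(url)
```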