[chaturbate] fix url extraction and parsing #23012
Conversation
Noob here, how do I implement these changes?
@Ubernoob7 Here's one way to do it:
I'm assuming you already have Python installed and in your PATH.
A simple interim fix: inserting a cookie "cb_legacy=1" reverts to the old behaviour.

*** youtube_dl/extractor/chaturbate.py.orig
*** 31,37 ****
! webpage = self._download_webpage(url, video_id)
--- 31,39 ----
! webpage = self._download_webpage(url, video_id, headers={
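Spelled out as a complete call, the interim workaround might look like the sketch below. Only the cb_legacy=1 cookie comes from the comment above; the exact shape of the headers dict is my assumption:

```python
# Sketch of the interim workaround: request the page with the legacy cookie
# so the site serves the old markup that the existing regex still matches.
# Only the cb_legacy=1 cookie is taken from the comment above; the rest of
# the call mirrors the existing _download_webpage usage in the extractor.
webpage = self._download_webpage(url, video_id, headers={
    'Cookie': 'cb_legacy=1',
})
```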
Question for you: I have a crontab and a run-one setup to automagically record streams when they go live, or come back from one of the private modes. The script is as follows: "run-one youtube-dl -o '/home/ubuntu/video/%(title)s.%(ext)s' https://chaturbate.com/URL/". How would I take your branch that I've installed and make it run from any folder when "youtube-dl" is called?
@pcjamesy Install it globally.
|
This worked for me! Many thanks!
Guys, can someone record a video for me showing how to fix it on Windows 7?
Before submitting a pull request make sure you have:
In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under Unlicense. Check one of the following options:
What is the purpose of your pull request?
Description of your pull request and other information
Fixes issue #23010
The chaturbate extractor broke some time this evening. The regex looking for .m3u8 URLs no longer matched anything. Additionally, the URL now needs a bit of extra processing.
The page source contains a large JavaScript string that holds an encoded JSON object. The nested quotes, as well as other special characters, are escaped as e.g. \u0022. So we match the URL delimited by \u0022 or \u0027 (double or single quote, respectively) and then decode any \uXXXX sequences in the match group.
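As a rough illustration of the approach described above (a sketch only, not necessarily the exact code in this PR; the variable names and the unicode_escape decoding step are my assumptions):

```python
import re

# Match an .m3u8 URL delimited by the escaped quotes \u0022 (") or \u0027 (')
# inside the JS-encoded JSON blob embedded in the page source.
mobj = re.search(
    r'(\\u002[27])(?P<url>https?.+?\.m3u8.*?)\1', webpage)
if mobj:
    # Decode the \uXXXX escape sequences in the matched URL, e.g. \u0026 -> &.
    m3u8_url = mobj.group('url').encode('utf-8').decode('unicode_escape')
```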