[Site Support Request] Wikipedia and Wikimedia #1443
Comments
Not at the moment.
OK then, thanks for the reply :)
After looking into it a bit, Wikipedia (and any MediaWiki site in general) has an API that can be used to retrieve the images from an article (and presumably from other pages as well).
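Something along these lines works against the public MediaWiki action API (a minimal sketch; the endpoint, the example article title, and the 50-titles-per-request batch limit are standard MediaWiki defaults assumed here, and API continuation is not handled):

```python
# Sketch: list the files embedded in an article, then resolve their direct URLs.
import requests

API = "https://en.wikipedia.org/w/api.php"

# 1. List the files used in an article.
params = {
    "action": "query",
    "titles": "Albert Einstein",   # example article
    "prop": "images",
    "imlimit": "max",
    "format": "json",
}
pages = requests.get(API, params=params).json()["query"]["pages"]
files = [img["title"] for page in pages.values() for img in page.get("images", [])]

# 2. Resolve each "File:..." title to a direct image URL.
params = {
    "action": "query",
    "titles": "|".join(files[:50]),  # the API accepts up to 50 titles per request
    "prop": "imageinfo",
    "iiprop": "url",
    "format": "json",
}
pages = requests.get(API, params=params).json()["query"]["pages"]
urls = [p["imageinfo"][0]["url"] for p in pages.values() if "imageinfo" in p]
print("\n".join(urls))
```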
I guess I could try to implement an extractor if I someday find the time for it 0:)
I wonder if there is any public documentation of the MediaWiki URL syntax outside the source code... I couldn't find any with a very quick search, and I don't feel like digging through the code. At first I was thinking "match until a question mark after …"
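For what it's worth, a stock MediaWiki install exposes two URL forms: the short /wiki/Title form and the long /w/index.php?title=Title form. A rough matching sketch (the host pattern is just an example, since wikis can live on any domain or path):

```python
# Sketch: match the two common MediaWiki URL forms and pull out the page title.
import re

WIKI_URL = re.compile(
    r"https?://(?P<host>[\w.-]+)"
    r"(?:/wiki/(?P<title1>[^?#]+)"                             # short form: /wiki/Some_Title
    r"|/w/index\.php\?(?:[^#]*&)?title=(?P<title2>[^&#]+))"    # long form: /w/index.php?title=Some_Title
)

m = WIKI_URL.match("https://en.wikipedia.org/wiki/Albert_Einstein")
print(m.group("host"), m.group("title1"))  # en.wikipedia.org Albert_Einstein
```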
Random question for @mikf (it is slightly related to this issue, but I don't see a better place to post it): is there any documentation that specifies how to write an extractor? By that, I mean how to use the …
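For orientation, existing extractors in gallery_dl/extractor/ generally follow the shape sketched below (the class name, URL pattern, and the API-query helper are my own assumptions for illustration, not the project's documented interface):

```python
# Rough sketch of a gallery-dl-style extractor for images in a MediaWiki article.
from .common import Extractor, Message


class WikimediaArticleExtractor(Extractor):
    """Extractor for images embedded in a MediaWiki article (sketch)"""
    category = "wikimedia"
    subcategory = "article"
    pattern = r"(?:https?://)?([\w.-]+\.wikipedia\.org)/wiki/([^?#]+)"

    def __init__(self, match):
        Extractor.__init__(self, match)
        self.domain, self.title = match.groups()

    def items(self):
        data = {"title": self.title}
        yield Message.Directory, data
        for url in self._image_urls():
            yield Message.Url, url, data

    def _image_urls(self):
        # Query the MediaWiki action API as in the earlier sketch; omitted here.
        return []
```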
- support mediawiki.org
- support mariowiki.com (#3660)
- combine code into a single extractor (use prefix as subcategory)
- handle non-wiki instances
- unescape titles
I have done this in my own repository: download.py. You might find it useful for inspiration.
Is there any way to download from Wikipedia and Wikimedia domains?
My commands, all unsuccessful: