Support "html5" type to use html5lib parser #83
I'll work on this 💪

Is there any update on this?
@grahamanderson You can try and review the pull request at #133. Alternatively, you can use the following workaround in a downloader middleware or in the callbacks of your spider:

```python
from bs4 import BeautifulSoup

# …
response = response.replace(body=str(BeautifulSoup(response.body, "html5lib")))
```
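The workaround above can be wrapped in a small reusable helper; a minimal sketch, assuming `bs4` is installed (the function name `normalize_html` is illustrative, and the `"html5lib"` default additionally requires the html5lib package):

```python
from bs4 import BeautifulSoup


def normalize_html(body, parser="html5lib"):
    """Re-serialize HTML through BeautifulSoup so that parsel/lxml
    receives markup already repaired by the chosen parser.

    `parser` can be "html5lib" (requires the html5lib package) or the
    stdlib-backed "html.parser".
    """
    return str(BeautifulSoup(body, parser))


# In a spider callback you could then do, as in the comment above:
# response = response.replace(body=normalize_html(response.body))
```

The same helper can be called from a downloader middleware's `process_response` so every spider sees the normalized markup.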
Thank you @Gallaecio!
From @whalebot-helmsman:
Hi all,

Start by reading up on http://gsoc2020.scrapinghub.com/participate and the links at the top to Python and Google resources. Mind that student applications have just started and will close in a couple of weeks.
So should I start contributing to the project or start making a good proposal?
You can start with whichever you prefer, but you need to do both before the deadline; proposals from students who have not submitted any patch will not be considered. If you start with your proposal and can isolate a small part of it that you can implement in a week or less, you could implement that as your contribution, which would speak highly of your ability to complete the rest of the project.
Parsel can extract data from HTML and XML, but HTML has quirks, such as the use of # in tag attributes and different rules for how tags are rendered, so there is a need for an html5lib parser.
Make sure you have a look at the issues linked from this thread. Another benefit of supporting a parser like html5lib, for example, is that the HTML tree that it builds in memory is closer to what you see in a web browser when you use the Inspect feature.
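To illustrate the difference: an HTML5-compliant parser inserts elements that browsers add implicitly (for example, `tbody` inside `table`), while lxml's own HTML parser keeps the markup as written. A small sketch using only lxml; what html5lib would produce is left as a comment, since it may not be installed:

```python
from lxml import html

# lxml repairs broken markup, but it does not apply the HTML5
# tree-building rules that browsers use, so no implicit <tbody>
# is inserted:
tree = html.fromstring("<table><tr><td>x</td></tr></table>")
print(html.tostring(tree))

# An HTML5-compliant parser (html5lib, html5-parser) would instead
# build <table><tbody><tr>...</tr></tbody></table>, matching what the
# browser's Inspect feature shows.
```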
In my tests it looked quite slow (e.g. 130 ms to parse an HTML document that took lxml.html only 9 ms), while html5-parser looks fast (only 7 ms for the same document) and returns an lxml tree as well: https://html5-parser.readthedocs.io/en/latest/ EDIT: although there is a problem that html5-parser returns
Can I work on it for GSoC?
That would be great. Please have a look at https://gsoc2021.zyte.com/participate for details.
Sir, is it a continuation of previous contributions, or should I do it completely new?
There has been a previous attempt with feedback, #133, which could serve as a starting point or inform an alternative approach. Other than that, this would need to be done from scratch, yes.
Hello, I am new here. Should I work on this project? There are not many new issues listed here.
Do you mean as a Google Summer of Code student candidate?
Hello, my name is Garry Putranto Arimurti, a GSoC candidate. I am interested in contributing to this project and I would like to learn more about the issue so I can work on it. Is there any specific issue I can work on and improve here? Thanks!
@garput2 It’s hard to provide feedback without specific questions, but I guess #153 is a somewhat related pull request that gives a view of what would probably be a good first step towards supporting an HTML5 parser. On the other hand, to participate in GSoC with us you need a pre-application pull request, in addition to presenting a proposal. Since today is the last day to present a proposal, your timing is a little tight.
Create a Selector for html5:

```python
from lxml.html.html5parser import document_fromstring
# Assumes the Scrapy Selector, whose first argument is a response:
from scrapy.selector import Selector


def selector_from_html5(response):
    root = document_fromstring(response.text)
    selector = Selector(response, type='html', root=root)
    return selector
```
I think recent work done by @whalebot-helmsman on https://github.com/kovidgoyal/html5-parser/ is relevant here - now it's possible to use a fast and compliant html5 parser (using a variant of the gumbo parser) and get an |
Yes, it is possible. There is one thing which makes widespread adoption of
You can get a charset error using this if the original page was not UTF-8 encoded, because the response has been set to a different encoding. In addition, there may be a problem with character escaping.
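One way to sidestep the charset issue is to decode the raw bytes with the response's declared encoding before handing the text to the parser; a stdlib-only sketch (`decode_body` is an illustrative name, and the declared encoding would come from e.g. `response.encoding` in Scrapy):

```python
def decode_body(body: bytes, encoding: str) -> str:
    """Decode raw response bytes with the encoding the response declared,
    replacing undecodable bytes instead of raising, so the resulting text
    can safely be fed to an HTML parser."""
    return body.decode(encoding, errors="replace")


# A Latin-1 page decoded as UTF-8 produces replacement characters;
# decoding with the declared encoding does not:
raw = "café".encode("latin-1")
print(decode_body(raw, "latin-1"))
print(decode_body(raw, "utf-8"))
```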
"html.parser" is faster, but
Another option is selectolax. The only issue would be a possible (I don't know if this is an actual issue) legal problem: rushter/selectolax#18.
I believe there is no legal issue. That said, Parsel heavily relies on lxml, whereas https://github.com/rushter/selectolax seems to go a different route, offering much better performance according to them. So I think integrating selectolax into Parsel while keeping the Parsel API and behavior would be rather hard, compared to something like #83 (comment).

On the other hand, if the upstream benchmark results are to be trusted (~7 times faster than lxml), in the long term it may be worth looking into replacing, or at least allowing to replace, the Parsel lxml backend with one based on selectolax. But that should probably be logged as a different issue. Maybe a good idea for a Google Summer of Code project.
It seems like selectolax does not support XPath selectors, only CSS selectors. If the lxml backend were to be replaced with selectolax, should XPath selectors be supported by converting XPath to CSS? This could be done by adding conversion support in cssselect; I found a quick workaround using the cssify library.
I would not go that route, because while all CSS selector expressions can be expressed as XPath 1.0, it does not work the other way around. I think supporting CSS selector expressions only would be OK in this case.
So, should the existing backend be preserved to support XPath, along with a new parser for CSS? Or should another parser which supports XPath be added?
I am just thinking out loud here, I have no strong opinions, but my guess is that, from the user perspective, you would choose a parser (or pass an instance of it) when creating a Selector, and for this alternative parser calls to
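A hypothetical sketch of what choosing a parser at Selector-creation time could look like; every name here (`build_tree`, the registry dict, the `"html5"` key) is illustrative rather than real Parsel API, and the builders are stand-ins for lxml/html5lib calls:

```python
# Illustrative registry mapping a `type` value to a tree builder.
# In a real implementation the callables would be lxml.etree.fromstring,
# html5lib.parse, and so on.
_BUILDERS = {
    "html": lambda text: ("lxml-html-tree", text),
    "html5": lambda text: ("html5lib-tree", text),
}


def build_tree(text, type="html"):
    """Dispatch to the tree builder registered for `type`."""
    try:
        return _BUILDERS[type](text)
    except KeyError:
        raise ValueError(f"unknown selector type: {type!r}")


# A Selector would store the returned root and route .css()/.xpath()
# queries to it; an html5-backed root might support CSS queries only.
```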
Hey!! Is this issue available?
Hi @aadityasinha-dotcom, yes, the issue is open and available; please continue the discussion here.
So, I want to work on this project for GSoC'22.

Also, I want to know what a pre-application pull request looks like.
It can be anything, really. Please check out https://gsoc2022.zyte.com/participate#pre-application-pull-request and let us know if you have any questions beyond what is covered there.
@Gallaecio Can I make a PR with some description and ideas/tasks regarding this?
Sure, go ahead.
Would it be better to use selectolax as the parser for CSS and, if the xpath method is called on the object, parse it through html5lib, html5-parser, or lxml? This way it would be easy to use both CSS and XPath selectors.
I don’t think it is a good idea to have a

If a user really wants that, I think it is OK to ask them to instantiate 2 different
So the basic idea is to add support for
right? If there are no other changes I'll upload a draft proposal tomorrow.
I believe the idea is to implement 1 of those 3 solutions (and it is open to alternative solutions as well), not all 3. Ideally we should compare different aspects of the proposed solutions and choose one. I think performance may be the main determining factor, although query language support (e.g. XPath 1.0, XPath 2, CSS Selectors) and behavior details (i.e. can it be an in-place replacement for lxml, or would some outputs differ?) may play a role if 2 or more solutions offer similar performance.
It still requires compilation, so we'd still have to provide wheels to be able to install scrapy without a compiler, like it's possible today.
I have uploaded a draft proposal; it still needs a lot of work. The timeline on the website doesn't specify separate timelines for the 175- and 350-hour projects, so I still have to change that part a bit. Please suggest any changes that I need to make to my proposal.
Every now and then we get a bug report about some HTML source not being parsed as a browser would.
There was the idea in Scrapy of adding an "html5" type to switch to an HTML5-compliant parser.
One such parser is html5lib, which can be used with lxml.