I want to scan a whole site, but only get ~200 results #84
Can you explain this with an example? I'm not able to understand.
So I run this:
It all churns along as you'd expect until I get:
That happens for basically any site I crawl. So I get 200 or so scans, but the site is much bigger.
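The exact command and error output weren't captured in the thread. A typical invocation looks something like the following sketch, assuming the standard Unlighthouse CLI and a hypothetical target URL:

```shell
# Scan a site with the Unlighthouse CLI (hypothetical target URL).
# By default the scan stops once the route limit (200) is reached.
npx unlighthouse --site https://example.com
```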
There is config for the maximum number of routes to scan. This was implemented because the stability of the worker and the UI starts degrading around that point, and it's quite easy for a site scan to end up queueing thousands of routes. I've pushed up a warning that will be triggered when you hit the limit to give better visibility; it will be available in v0.6.0, which will be released soon. You can read more about how large sites are handled on this page.
So @harlan-zw, this should work to scan 500 URLs instead of the default 200? It would also take 2 samples rather than just one. Assuming this is in the directory where you execute the script: unlighthouse.config.ts
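The config file contents didn't survive in the thread. A minimal sketch of what such an `unlighthouse.config.ts` might contain, assuming the `scanner.maxRoutes` and `scanner.samples` options described in the Unlighthouse documentation:

```typescript
// unlighthouse.config.ts — a sketch, not the poster's actual file.
// Assumes the scanner options documented by Unlighthouse
// (maxRoutes defaults to 200).
export default {
  scanner: {
    // Raise the route cap from the default 200 to 500.
    maxRoutes: 500,
    // Run each Lighthouse audit twice instead of once.
    samples: 2,
  },
}
```

Placing this file in the directory you run the CLI from should cause it to be picked up automatically.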
I've got a few big sites I'd like to scan, but the scan keeps stopping around 200 pages in.
Is there a way to override that? It's still a sensible default, but it would be useful to be able to scan the whole site if needed.
Maybe I'm just missing something in the config.