
core(constants): increase default maxWaitForFcp #9509

Merged Aug 8, 2019 (1 commit)

Conversation

patrickhulce (Collaborator)
Summary
This PR increases the default maxWaitForFcp to 30s while leaving it 15s in LR configs. The original reason we made the limit so low was to avoid timeouts in LR. My gut says this aggressive value in DevTools/CLI is what is causing such a strong uptick in NO_FCP reports. Anecdotally the last 3 filed issues were all pages that took ~20-25 seconds to first paint for me locally, not ones that didn't paint at all.
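The change described above amounts to a constants tweak. A minimal sketch of what the resulting defaults look like — the names mirror Lighthouse's settings, but this is illustrative, not the actual source diff:

```javascript
// Hypothetical sketch of the constants change described in this PR.
// Not the real Lighthouse source; values come from the PR description.
const defaultSettings = {
  maxWaitForFcp: 30 * 1000,  // DevTools/CLI: raised from 15s to 30s
  maxWaitForLoad: 45 * 1000, // overall page-load timeout (assumed unchanged)
};

// Lightrider (LR) configs keep the lower limit to avoid backend timeouts.
const lightriderSettings = {
  ...defaultSettings,
  maxWaitForFcp: 15 * 1000,
};

console.log(defaultSettings.maxWaitForFcp, lightriderSettings.maxWaitForFcp);
// 30000 15000
```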

@brendankenny (Member) left a comment

I guess one question this leads us back to is whether we should have this timeout at all.

Looking back at #7174, we didn't do much digging into root causes, and our error handling is more flexible now. We could instead try to actively detect the interstitial, DNS-failure, etc. cases (I believe those covered most of the cases we wanted the early exit for?) and let everyone else hit the regular timeout.

@patrickhulce (Collaborator, Author)

I'm still of the mindset that this should essentially be a fatal error, just that we should be a little more forgiving in how slow you're allowed to be before we say "you never paint content". 15s is too close to reality for a lot of sites unfortunately :)

@brendankenny (Member)

> I'm still of the mindset that this should essentially be a fatal error, just that we should be a little more forgiving in how slow you're allowed to be before we say "you never paint content". 15s is too close to reality for a lot of sites unfortunately :)

definitely agree on the fatal error; I just meant that the original goal (I believe) was to not keep users waiting around for the full timeout if we know they're going to get NO_FCP at the end regardless.

But if maxWaitForFcp is being pushed up toward the full load timeout because we don't actually know in advance that a page is going to fail, is it worth keeping? We could instead fail at the end the same way if a sawFcp flag isn't set. Meanwhile, if there are cases where we could get to failure faster, we could handle them more precisely than with the blunt instrument that is maxWaitForFcp
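A minimal sketch of that end-of-run check — `sawFcp` and the `NO_FCP` code here are stand-ins for whatever flag and error name Lighthouse actually uses, not its real API:

```javascript
// Illustrative only: fail at the end of the run, rather than via an early
// FCP-specific timeout, if no first contentful paint was ever observed.
function assertFcpSeen(sawFcp) {
  if (!sawFcp) {
    const err = new Error('The page did not paint any content (NO_FCP)');
    err.code = 'NO_FCP';
    throw err;
  }
}
```

The user waits the full load timeout either way, but gets the same NO_FCP error they would have gotten from the early exit.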

@patrickhulce (Collaborator, Author)

Ah, gotcha! I agree that angle would be nice as well, but the risk and level of effort to be successful there is IMO much higher than a bump to maxWaitForFcp while we gather more information. It requires a move back to the security-error-style racing in the driver, which proved very thorny to do correctly and resulted in lots of cases where it was impossible to run Lighthouse on valid URLs.

Not saying we should never do it, just that the level of investment needed to do it right deserves prioritization against this simple and potentially valuable drive-by fix :)
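For context, the "racing" pattern referenced above looks roughly like this — a hedged sketch, not the driver's actual implementation: page-load monitoring is raced against early-failure detectors, so a detector that fires spuriously aborts an otherwise valid run, which is the thorniness described.

```javascript
// Sketch of racing load completion against early-failure detection.
// Any detector resolving first wins the race and aborts the run, even
// if the page would eventually have loaded fine.
function raceLoadAgainstDetectors(loadPromise, detectorPromises) {
  return Promise.race([
    loadPromise.then(() => ({status: 'loaded'})),
    ...detectorPromises.map(p =>
      p.then(reason => ({status: 'aborted', reason}))),
  ]);
}
```

For example, racing a pending load against a detector that resolves with `'interstitial'` yields `{status: 'aborted', reason: 'interstitial'}`, illustrating how a false positive blocks a valid URL.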

@brendankenny (Member)

> Not saying we should never do it, just that the level of investment to do it right deserves prioritization compared to this simple and potentially valuable drive-by fix :)

yeah, 100% good point :)

@brendankenny (Member) left a comment

LGTM!
