When we show CrUX data, we need to offer better context as to how it compares to the synthetic run and why it's useful and interesting. Also, some sort of messaging when CrUX varies dramatically could be helpful to display:

"WebPageTest's findings in this test varied significantly from Chrome real-user data. On average, Chrome users experience better/worse performance than this run shows. That doesn't mean WPT's findings aren't useful, just that the location/speed/etc. conditions of this particular test are atypical for this site. We recommend testing in a variety of browsers, locations, and network speeds to inform your understanding of this site's overall performance in different conditions."
New design sketch of the CrUX data section of our summary that attempts to present CrUX info clearer and in context alongside the synthetic result.
Some notes:

- Prioritize showing the metrics themselves (at p75), since that's the most interesting information.
- Second, note the difference between that metric and the test run.
- Offer a link to see the full report.
- Offer contextual info about why a WPT test run may differ greatly.
scottjehl changed the title from "Crux presentation needs to lead with a metric number, and charts need better explanation" to "CrUX presentation needs to lead with a metric number and more clearly communicate differences from test run" on Feb 16, 2023.
The biggest value of this section is the correlation ("oh crap, we're so much slower/faster than what the real world looks like"), and that part feels a bit "quieter" here.
Styling the numbers like we do our test metrics kinda makes sense, but I also wonder if it's confusing? Like...are people gonna connect that to the actual test run rather than recognize it as another source?
I kinda miss the good/bad/improve percentages. No good rationale for why I do, honestly. :)
As we discussed in a side thread, there are probably two use cases for this section to balance: one for folks who are looking for the CrUX metrics first, and one for folks who want to know how much CrUX differs from the test run.
Here's another pass that shifts the priorities in the blocking a bit.