report: add perf score explanation and link #9355
Conversation
Gah beat me to it! 😭
"lighthouse-core/lib/lh-error.js | internalChromeError": { | ||
"message": "Âń îńt̂ér̂ńâĺ Ĉh́r̂óm̂é êŕr̂ór̂ óĉćûŕr̂éd̂. Ṕl̂éâśê ŕêśt̂ár̂t́ Ĉh́r̂óm̂é âńd̂ t́r̂ý r̂é-r̂ún̂ńîńĝ Ĺîǵĥt́ĥóûśê." | ||
}, | ||
"lighthouse-core/lib/lh-error.js | missingRequiredArtifact": { | ||
"message": "R̂éq̂úîŕêd́ {artifactName} ĝát̂h́êŕêŕ d̂íd̂ ńôt́ r̂ún̂." | ||
}, |
I think these two were from a merge mismatch where en-XL
was checked in but #9284 wasn't built on top of that commit, so these weren't updated. Shouldn't happen again (at least for any new PRs)
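For context on where the accented strings in the diff above come from: the en-XL pseudo-locale is derived mechanically from the en-US messages. Below is a minimal sketch of that kind of transform; it is an illustration only, not Lighthouse's actual build script (the function name and the choice of a single combining accent are assumptions), but it shows why ICU placeholders like `{artifactName}` have to be left untouched.

```js
// Sketch of a pseudo-locale transform: append a combining accent to each ASCII
// letter while skipping ICU placeholders such as {artifactName}.
const COMBINING_ACUTE = '\u0301';

function toPseudoLocale(message) {
  let out = '';
  let inPlaceholder = false;
  for (const char of message) {
    if (char === '{') inPlaceholder = true;
    if (char === '}') inPlaceholder = false;
    out += char;
    if (!inPlaceholder && /[a-z]/i.test(char)) out += COMBINING_ACUTE;
  }
  return out;
}

console.log(toPseudoLocale('Required {artifactName} gatherer did not run.'));
// → roughly "Ŕéq́úíŕéd́ {artifactName} ǵát́h́éŕéŕ d́íd́ ńót́ ŕúń."
```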
@@ -591,6 +592,9 @@
  color: var(--color-gray-600);
  margin: var(--section-padding-vertical) 0;
}
.lh-metrics__disclaimer a {
  color: var(--color-gray-700);
800 was too dark compared to the text color
Obviously love this! The only thing really different in my take was that I added explicit little subclauses to the opportunities and diagnostics saying that they don't affect the score, just to really hammer it in :)
lol I was literally just writing this comment: We also know from experience that people don't read all the explanatory text, so maybe we also want to augment the opportunity and diagnostic descriptions so it's clear wherever users are looking? We could do something like
or go even more explicit
or something
👍 I had it flipped in my branch (
LGTM, nice addition to clear up confusion! I didn't realize it was a link at first, but I think it's great that the text links directly to the source!
"These are generalized web performance optimizations, they do not directly contribute to the Lighthouse Performance score." Just ideas 🤔 |
So trying out these variations:
These express what we mean, but they also take up (relatively) a lot of visual space to explain what the opportunities are not, which is UI noise, and it kind of undercuts the perceived usefulness of the opportunities right before we ask users to pay attention to them :) Another option (kind of mentioned above) is phrasing positively, though that makes the score disclaimer more implicit. Something like:
So maybe today (so we can ship 5.2), pick one of:
👍👍 I'm also fine with the version with 'but' in there. I think some version of this should be shipped.
I think this moves us in the wrong direction. This makes it sound to me like the opportunities are a direct factor in the score. I'd be disappointed if we punted. This is one of the few things I tell pretty much every single person I introduce Lighthouse to and, IMO, is central to reacting appropriately to the opportunities (treating them as a guide, but not freaking out over a single resource showing up there that you can't control).
so my issues are
I can kind of see that, but is it a matter of phrasing or is it fundamental to the approach? :)
totally agree on all this; I meant ship the first part because everyone seems fine enough on it (so we have something), and then we can reconvene on the second part.
example report with this string to see it in context. Maybe the negative thing is just me :)
what about something under
that's in there :) See #9355 (comment) for why we might want more
Hi all! Elizabeth forwarded this thread to me to see if I could weigh in (for those who don't know me, I'm a UXW for Chrome – hello!). I like the general consensus on where everyone is heading: we're offering useful insight while transparently managing user expectations. With the currently proposed drafts, one small nudge is that we typically put the positive factors first; so it should be ordered from 'speed up' → 'score impact': "These optimizations can help your page load faster, but they don't directly impact the Performance score." To address Brendan's feedback directly:
Thanks for the valuable insight @meggynw!!
The primary reason it's so vague is that each opportunity affects different aspects of load performance. Some will affect "FCP", others will affect "TTI", and still others will not be directly reflected in our set of metrics but will still positively impact the amount of time taken for the entire page to download. We had thrown around in years past the idea of directly stating what each opportunity does, but it's certainly a UX challenge in terms of information overload; more on this in the next response...
One of the main challenges with the opportunities is that they can impact the page in many different ways. While we are definitely prioritizing them by their estimated overall impact (and hiding them completely if we think they're not worthwhile), it's not an exact science, and promising an exact score increase would be overpromising (and, as you so excellently put it, would jeopardize user trust). Another mismatch in the opportunities:score relationship is that we construct our incentives around the score while opportunities are always meant to be just helpful advice. Example: if you're doing really well on a particular metric, we might give you a great score there to incentivize moving on to another metric, but there still might be room for improvement we found, so an opportunity could still be visible (albeit low priority). On the flip side, there are cases where the score is very low but there are few automatically identifiable opportunities for improvement. Disconnecting the score from the opportunities goes a long way, IMO, toward helping users better interpret the information we're giving.
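To make the "based only on these metrics" point concrete, here is a rough sketch of the shape of that calculation: each metric gets a 0–1 score and the category score is their weighted average, so opportunities and diagnostics never enter it directly. The weights, metric ids, and values below are made-up placeholders, not Lighthouse's real numbers (see docs/scoring.md for those).

```js
// Illustrative only: category score as a weighted average of metric scores.
// Weights and values here are placeholders, not Lighthouse's actual numbers.
function categoryScore(metricScores, weights) {
  let total = 0;
  let weightSum = 0;
  for (const [id, score] of Object.entries(metricScores)) {
    total += score * (weights[id] || 0);
    weightSum += weights[id] || 0;
  }
  return weightSum ? total / weightSum : 0;
}

const perfScore = categoryScore(
  {'first-contentful-paint': 0.95, 'interactive': 0.60, 'speed-index': 0.80},
  {'first-contentful-paint': 0.2, 'interactive': 0.5, 'speed-index': 0.3},
);
console.log(perfScore); // ≈ 0.73 — opportunities and diagnostics appear nowhere in this sum
```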
Thanks for the thorough answer :) That all makes a lot of sense. Based on my understanding, I've tweaked our string to the following. I propose "suggestions" instead of "optimizations" as it makes the noun less redundant with "page load faster"; it also emphasizes that these are nice-to-haves instead of priority items. I've tried to use "directly related" in the 2nd sentence to clarify that, while there may be a minor positive impact, devs can't rely on opportunities for score improvements.

"These suggestions can help your page load faster. They aren't directly related to Performance metrics or the total score."

Reactions? Open to feedback :)
Reactions! :)

- I like "suggestions" here 👍
- I don't love saying they're not related to "performance metrics" at large. All of the opportunities are computed on *some* performance metric, it just might not help your score due to our weighting and score curves.

My tweak might be:

"These suggestions can help your page load faster. They do not directly affect the overall performance score."

I personally like "affect" compared to "relate" since I think it gets at the heart of people's fears that the opportunity result is negatively affecting their score.
That string LGTM! I'd just use the contraction "don't" for a more conversational tone. Also, another alternative to "affect" is "impact," but I don't have a strong preference.
I think one nuance that was lost in the end here is that Opportunities actually are Lighthouse's best current advice for how users can impact their score. The distinction is hard to communicate :) A lot of users interpret it as "my score is bad because I'm not doing the things in this list", when really it's "your score is bad because the metrics showed the page loaded slowly; here's a list of things we think should (but won't necessarily) improve the page load, and thus the metrics, and thus your score". They can do everything in that list and still have a slow page (the opportunities and the metrics aren't exhaustive), or they could not do everything in the list and have a fast page (if you have a 2s opportunity on a 1s page load, that's a bug, but some form of this does happen). Anyway, the fact that we don't add up the opportunities to make a score is what we'd like to clear up. If we land too far on the conservative side ("here is some random trivia about your page that will never ever have anything to do with your score" :), that's probably ok for a while.
on calling out optimizations+scoring in the report:
I think it's unnecessary.
I like the new "disclaimer" text for metrics and the long-awaited link to scoring docs. The prominent link directs people to a page that should clearly communicate the relationship between opportunity results & the score.
We can start with adding just that metrics disclaimer sentence along with refreshed scoring docs.
@@ -557,7 +557,7 @@ Util.numberDateLocale = 'en';
 */
Util.UIStrings = {
  /** Disclaimer shown to users below the metric values (First Contentful Paint, Time to Interactive, etc) to warn them that the numbers they see will likely change slightly the next time they run Lighthouse. */
  varianceDisclaimer: 'Values are estimated and may vary.',
  varianceDisclaimer: 'Values are estimated and may vary. The performance score is [based only on these metrics](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md#how-are-the-scores-weighted).',
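As an aside on how a markdown-style link inside a UI string like `varianceDisclaimer` can end up as a clickable anchor in the rendered report, here is a small self-contained sketch. The function name and regex are illustrative assumptions, not necessarily the report renderer's actual code.

```js
// Illustration: split a string on [text](url) snippets and build DOM nodes.
// split() with capture groups yields [plainText, linkText, linkHref, plainText, ...].
function convertMarkdownLinks(doc, text) {
  const fragment = doc.createDocumentFragment();
  const parts = text.split(/\[([^\]]+)\]\((https?:\/\/[^)]+)\)/g);
  for (let i = 0; i < parts.length; i += 3) {
    if (parts[i]) fragment.appendChild(doc.createTextNode(parts[i]));
    if (parts[i + 1] !== undefined) {
      const a = doc.createElement('a');
      a.rel = 'noopener';
      a.target = '_blank';
      a.textContent = parts[i + 1];
      a.href = parts[i + 2];
      fragment.appendChild(a);
    }
  }
  return fragment;
}
```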
@egsweeny do you think we should link to the scoring doc in GitHub or to https://developers.google.com/web/tools/lighthouse/v3/scoring? In reality, both need updating, but google/WebFundamentals#7724 is already started.
I kinda prefer the devsite page... its SEO is a bit stronger. And it looks nice. :)
Trying to represent @exterkamp and myself: we should just go with the (perhaps too) strongly worded warning. In an ideal world we could do just the metrics warning change and see if confusion drops, but our feedback routes for this are too tenuous and hard to quantify. If confusion continues, a likely outcome is that we'll just continue to debate the few signals we get from it and do nothing about it until the next UI change. Instead, it doesn't do much harm to go overboard on waving people off opportunities, and the fact that we will be seeing it in the report every day means that if we find it doesn't express the situation well, we'll be more likely to revise in the future.
(though for the record I still 100% agree with everything I wrote above :)
@paulirish sounds like he can live with this if
updated with strings. Links are in progress (going to link to the update from google/WebFundamentals#7724)
linking "directly affect"
SGTM! :)
great compromises team!
/** Description of the opportunity section of the Performance category. 'Optimizations' could also be 'recommendations' or 'suggestions'. Within this section are audits with imperative titles that suggest actions the user can take to improve the loading performance of their web page. */
loadOpportunitiesGroupDescription: 'These optimizations can speed up your page load.',
/** Description of the opportunity section of the Performance category. 'Suggestions' could also be 'recommendations'. Within this section are audits with imperative titles that suggest actions the user can take to improve the loading performance of their web page. */
loadOpportunitiesGroupDescription: 'These suggestions can help your page load faster. They don\'t [directly affect](https://github.com/GoogleChrome/lighthouse/blob/master/docs/scoring.md#how-are-the-scores-weighted) the Performance score.',
maybe we should be sha (or at least version) tagging these links if they're going to live in a report forever?
ignore this I just read the "links are in progress" for the web fundamentals update :)
actually, maybe we won't wait for that. At worst, 5.2 users will get a special easter egg doc link. Added the hash as suggested, though :)
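For reference, the "sha (or at least version) tagging" idea amounts to baking a fixed revision into the docs URL that ships inside the report's strings, so old reports keep pointing at matching docs even after master moves on. A hypothetical sketch follows; the revision string is a placeholder, not a real tag or commit.

```js
// Pin the scoring-doc link to a revision instead of the moving master branch.
const DOCS_REVISION = 'v5.2.0'; // placeholder; could also be a commit sha
const scoringDocUrl =
  `https://github.com/GoogleChrome/lighthouse/blob/${DOCS_REVISION}` +
  '/docs/scoring.md#how-are-the-scores-weighted';

const varianceDisclaimer =
  'Values are estimated and may vary. ' +
  `The performance score is [based only on these metrics](${scoringDocUrl}).`;
```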
fixes #8717 (or at least takes a first stab)
let's bikeshed! Adapted from @csabapalfi's suggestion in #8717 (comment)
The regular blue link color was way too distracting in there, so I adapted the gray.