core(metrics): move FMP to computed artifact #4951
Conversation
mostly good! just a few ideas.
lighthouse-core/audits/metrics.js (Outdated)
for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
  const uppercased = traceEventName.slice(0, 1).toUpperCase() + traceEventName.slice(1);
  const metricName = `raw${uppercased}`;
raw => trace
done
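For clarity, the loop after the suggested rename would look roughly like this. This is only a sketch: what exactly gets pushed for each trace timing isn't shown in the diff, so the pushed object's shape (and the `traceOfTab.timestamps` lookup) is an assumption mirroring the timing/timestamp pairs in the sample results elsewhere in this PR.

```js
// Sketch only: the loop from the diff above with the suggested `trace` prefix
// instead of `raw`. The exact fields pushed per entry are an assumption.
for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
  const uppercased = traceEventName.slice(0, 1).toUpperCase() + traceEventName.slice(1);
  const metricName = `trace${uppercased}`;
  metrics.push({metricName, timing, timestamp: traceOfTab.timestamps[traceEventName]});
}
```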
lighthouse-core/audits/metrics.js (Outdated)
  metrics.push(Object.assign({metricName}, values));
}

for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
// Include all timings from traceOfTab
done
lighthouse-core/audits/metrics.js (Outdated)
const metricsMap = {firstContentfulPaint, firstMeaningfulPaint, timeToInteractive};

const metrics = [];
for (const [metricName, values] of Object.entries(metricsMap)) {
// Include these specific metrics
done
lighthouse-core/audits/metrics.js (Outdated)
const firstContentfulPaint = await artifacts.requestFirstContentfulPaint(metricComputationData);
const firstMeaningfulPaint = await artifacts.requestFirstMeaningfulPaint(metricComputationData);
const timeToInteractive = await artifacts.requestConsistentlyInteractive(metricComputationData);
const metricsMap = {firstContentfulPaint, firstMeaningfulPaint, timeToInteractive};
move this metricsMap line to right above the for loop. move the const metrics line up higher.
done
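Pulling the fragments above together, one plausible reading of the requested ordering looks like this. It's a sketch only: the surrounding async audit method is assumed, and exactly how far up `const metrics` moves isn't specified in the comment.

```js
// Sketch only: declarations reordered per the review comment, inside the
// audit's async method (assumed). Bodies match the diff fragments above.
const metrics = [];

const firstContentfulPaint = await artifacts.requestFirstContentfulPaint(metricComputationData);
const firstMeaningfulPaint = await artifacts.requestFirstMeaningfulPaint(metricComputationData);
const timeToInteractive = await artifacts.requestConsistentlyInteractive(metricComputationData);

// Include these specific metrics
const metricsMap = {firstContentfulPaint, firstMeaningfulPaint, timeToInteractive};
for (const [metricName, values] of Object.entries(metricsMap)) {
  metrics.push(Object.assign({metricName}, values));
}
```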
@@ -24,6 +24,16 @@ function safeGet(object, path) {
  return object;
}

function findValueInMetricsAuditFn(metricName, valueName) {
valueName => timingOrTimestamp
done
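The body of this helper isn't shown in the diff, so purely to illustrate the rename, here is a hypothetical version. The lookup path into the metrics audit's details, and how `safeGet` traverses a dotted path, are assumptions; the `{items: [{metricName, timing, timestamp}]}` shape matches the sample results quoted elsewhere in this thread.

```js
// Hypothetical sketch only: not the actual implementation from the PR.
// Assumes safeGet(object, 'a.b.c') walks a dotted path, as its name suggests.
function findValueInMetricsAuditFn(metricName, timingOrTimestamp) {
  return auditResults => {
    const items = safeGet(auditResults, 'metrics.details.items');
    if (!items) return undefined;
    const metric = items.find(item => item.metricName === metricName);
    return metric && metric[timingOrTimestamp];
  };
}
```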
😷 🗡 💨 (that's supposed to be a ninja that's addressing feedback)
wfm
quuuuuestions for you
"items": [ | ||
{ | ||
"timing": 493, | ||
"timestamp": 185603812639.80002, | ||
"timing": 479, |
why the changes here? Since regenerating from the same source, shouldn't they stay the same?
oh good catch!! this was actually a bug introduced back with the TTCI changes; it fails when scoring params are not given in our hacky TTCI.audit(artifacts) approach
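For reference, the fix that shows up later in this PR's diff is simply threading the audit context (which carries the scoring settings) through those calls:

```js
// Before: no audit context, so the TTCI scoring params are missing and the
// computed value drifts (the bug behind the changed timings above).
TTCI.audit(artifacts).catch(() => ({rawValue: 0}));

// After (as in the diff later in this thread): pass the context through.
TTCI.audit(artifacts, context).catch(() => ({rawValue: 0}));
```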
@@ -8,8 +8,8 @@
const Metrics = require('../../../lib/traces/pwmetrics-events');
const assert = require('assert');

const dbwTrace = require('../../fixtures/traces/dbw_tester.json');
const dbwResults = require('../../fixtures/dbw_tester-perf-results.json');
const dbwTrace = require('../../results/artifacts/defaultPass.trace.json');
rename these variables (and in asset-saver) to exampleTrace and sampleV2Results or something like that?
hm, why? they're still on the dbw tester so the names are still relevant :)
ha, somehow I never noticed that's where they were from. Never mind :)
// Include all timestamps of interest from trace of tab
for (const [traceEventName, timing] of Object.entries(traceOfTab.timings)) {
  const uppercased = traceEventName.slice(0, 1).toUpperCase() + traceEventName.slice(1);
lol no hope of inferring type
sure there is! :D
string -> string -> string + string -> string => string
ha, I meant of metrics :P
oh lol, yes :)
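As an aside, a sketch of the kind of annotation that would give that array a concrete type, assuming the Closure-style JSDoc Lighthouse uses elsewhere; the property names and optionality are inferred from the snippets in this thread, not taken from the final code.

```js
/**
 * Sketch only: a typedef for the entries pushed into `metrics`.
 * @typedef {{metricName: string, timing: number, timestamp: (number|undefined)}} MetricEntry
 */

/** @type {Array<MetricEntry>} */
const metrics = [];
```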
assert.equal(result.displayValue, '530\xa0ms');
assert.equal(result.rawValue, 529.9);
can we preserve these tests (put them in metrics-test.js, maybe?) Many of them were hard-won results from problematic traces, and having them in our tests makes any future deviations from our current trace interpretations more obviously deliberate
by "these tests" do you mean the assertions on the correct timestamp/navstart/fcp rather than just the FMP? AFAICT I didn't regress coverage of any test scenarios
do you mean the assertions on the correct timestamp/navstart/fcp rather than just the FMP?
yes
AFAICT I didn't regress coverage of any test scenarios
is there elsewhere we test for navstart/fCP/fmpFellBack for these problematic traces (lateTracingStartedTrace, badNavStartTrace, preactTrace, etc) specifically?
yes at the level where the problematic logic actually was, trace-of-tab :)
https://github.com/AkshayIyer12/lighthouse/blob/f313424de4f2284392aa758d28416c51f7f59dd2/lighthouse-core/test/gather/computed/trace-of-tab-test.js#L13-L18
ha, well that's perfect then
  }, computedArtifacts);
}

/* eslint-env mocha */
describe('Performance: first-meaningful-paint audit', () => {
  const context = {options, settings: {throttlingMethod: 'provided'}};
should these tests include simulated runs or should most of these be kicked up to trace-of-tab tests?
I'm keeping all of the simulated/provided/devtools test distinctions inside the computed artifact versions and keeping all the audit tests on provided to keep things simpler there. does that sound good to you?
I mean, it's not terrible, and it reduces churn. I was just thinking that if we wrote them today, this unit test would probably just verify that it can get results for the various throttling methods, while the nuts and bolts would be relegated to the computed artifact/lantern computed artifact unit tests.
WDYT? This can also just be some legacy we live with and some day maybe clean up our unit test organization
done, might as well have a real fresh start this time around :D
TTFI.audit(artifacts).catch(() => ({rawValue: 0})),
TTCI.audit(artifacts).catch(() => ({rawValue: 0})),
TTFI.audit(artifacts, context).catch(() => ({rawValue: 0})),
TTCI.audit(artifacts, context).catch(() => ({rawValue: 0})),
this was the bug you caught @brendankenny
I'll do a followup PR to make these use computed artifacts instead 👍
🤔 I wonder if we should consider the sample LHR to be a golden file whose changes must all be approved in review, to catch these sooner
I wonder if we should consider the sample LHR to be a golden file whose changes must all be approved in review, to catch these sooner

that's a good idea. I wonder if there's some way to enforce that? Maybe have Travis regenerate sample_v2.json and diff with the checked-in version? cc @paulirish
yeah that was my thought
yarn update:sample-json
git diff path/to/sample_v2.json --exit-code
peek at #4956
just as an explicit notice for the purposes of #4934, I'm currently awaiting a response to review comments :)
@@ -138,7 +138,8 @@ class TraceOfTab extends ComputedArtifact {
  firstPaintEvt: firstPaint,
  firstContentfulPaintEvt: firstContentfulPaint,
  firstMeaningfulPaintEvt: firstMeaningfulPaint,
  onLoadEvt: onLoad,
  loadEvt: load,
will you add the new metrics audit and onLoadEvt -> loadEvt to #4333? We should probably have a list, and we haven't been adding "breaking" to our PR titles (oh well)
done
alright @brendankenny I think all feedback has been addressed
One last request *ducks*
But otherwise LGTM!
  traces: {[Audit.DEFAULT_PASS]: {traceEvents}},
  devtoolsLogs: {[Audit.DEFAULT_PASS]: []},
}, computedArtifacts);
const context = {options, settings: {throttlingMethod: 'provided'}};
maybe also a simulated test (and devtools? Even though it's currently the same execution path as this) so it checks that it can also successfully call through?
done
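For illustration, a rough sketch of what that extra coverage could look like. Assumptions: `FMPAudit`, `Audit`, `traceEvents`, `computedArtifacts`, and `options` come from the surrounding test file; the `simulate` and `devtools` values follow Lighthouse's throttlingMethod setting; and the simulated path would likely need a real devtools log fixture rather than the empty array the 'provided' test uses.

```js
// Sketch only: same artifact setup as the 'provided' test above, exercised with
// the other throttling methods so the audit's call into the computed artifacts
// is covered too.
for (const throttlingMethod of ['simulate', 'devtools']) {
  it(`computes FMP with ${throttlingMethod} throttling`, async () => {
    const artifacts = Object.assign({
      traces: {[Audit.DEFAULT_PASS]: {traceEvents}},
      // NOTE: the simulated path probably needs a real devtools log fixture here.
      devtoolsLogs: {[Audit.DEFAULT_PASS]: []},
    }, computedArtifacts);
    const context = {options, settings: {throttlingMethod}};
    const result = await FMPAudit.audit(artifacts, context);
    assert.ok(Number.isFinite(result.rawValue), 'expected a numeric FMP value');
  });
}
```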
💥 💥
progress on #4872
moves FMP and also moves all the junk that used to be in FMP to be in a metrics.js audit that will collect all metrics