tests: add test trace creator #6196
Conversation
```js
const topLevelTasks = [
  {ts: 1000, duration: 10},
  {ts: 1050, duration: 10},
  {ts: 1975, duration: 50},
```
@patrickhulce I don't entirely understand why I couldn't get the pessimistic graph to download these images in parallel without these extra 10ms tasks. Instead, the second image and the CPU node always went in parallel (even if the two images had identical start and end times), and so it was failing to see both of these images before the last long task started.
Inserting these extra top-level tasks fixed that and doesn't feel too bad (most traces have more than one top-level task :), but I do worry about the fragility of my solution.
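To make the setup concrete, here's roughly what's being described — the record shapes and timings are illustrative, not the exact test code:

```js
// Two images with identical timings that should be fetched in parallel
// (network record times in seconds, task timestamps in ms; both illustrative).
const imageRecords = [
  {url: 'http://example.com/a.png', startTime: 1, endTime: 1.9},
  {url: 'http://example.com/b.png', startTime: 1, endTime: 1.9},
];

// The two filler 10ms tasks that let the pessimistic graph see both images
// complete before the long task at 1975ms begins.
const topLevelTasks = [
  {ts: 1000, duration: 10},
  {ts: 1050, duration: 10},
  {ts: 1975, duration: 50},
];
```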
I can't tell 100% just from glancing, but it's probably because they were inferred to have come from the same connection. If different connectionIds are added and/or the timings show they happened in parallel, then it might fix itself?
> different connectionIds are added and/or the timings show they happened in parallel

Ah, didn't work. There's probably some subtlety with leaving out real connection timings or something, too. I'll just leave it like this for now, and maybe we'll figure it out when a lantern change breaks this test :)
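For the record, roughly what that suggestion amounts to — the field names are assumptions based on how network records usually look, and this is the variant that didn't help:

```js
// Give each image its own connection so the simulator can't infer a
// shared-connection dependency between the two requests.
const networkRecords = [
  {url: 'http://example.com/a.png', connectionId: 1, startTime: 1, endTime: 2},
  {url: 'http://example.com/b.png', connectionId: 2, startTime: 1, endTime: 2},
];
```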
```js
it('finds images loaded before Trace End when interactive throws error (Lantern)', async () => {
```
As far as I can tell, this test was identical to the one above. The start time of `generateRecord` for `'a'` is the only thing that's different, but `throttlingMethod: 'simulate'` uses the network record timing information, not the timing info in the `ImageUsage` artifact.
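To illustrate the distinction — all shapes here are made up for illustration, not the test's actual helpers:

```js
// Sketch only; field names are illustrative.
const url = 'https://example.com/a.png';
const networkRecord = {url, startTime: 3, endTime: 4.5, transferSize: 100 * 1024};

// The ImageUsage artifact carries its own copy of the timing info…
const imageUsageEntry = {src: url, networkRecord: {...networkRecord, startTime: 1}};

// …but with throttlingMethod: 'simulate', lantern reads timing from the
// network record itself, so the altered startTime inside the artifact
// never reaches the simulation — hence the two tests behaved identically.
```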
Yeah, I think this was very recently added when I had @exterkamp keep fiddling with the tests; it probably got duped in the shuffle, sorry!
```js
assert.equal(output.details.items[0].url, scriptSubNodeUrl);
assert.equal(output.details.items[0].wastedMs, 330);
assert.equal(output.details.items[1].url, scriptAddedNodeUrl);
assert.equal(output.details.items[1].wastedMs, 180);
```
@patrickhulce it seemed fine to change these to the timings derived from the generated network records, but I wasn't sure what the `if (scriptOtherNodeLocal.getDependencies()[0] === mainDocumentNodeLocal) {` conditionals were up to, so I wanted to double-check.
That was just so we'd only be setting the original timings and letting the altered simulation ones be inferred on their own.
this is much nicer :) 👍
```diff
@@ -32,7 +32,8 @@ function getBaseRequestId(record) {
 function headersArrayToHeadersDict(headersArray = []) {
   const headersDict = {};
   headersArray.forEach(headerItem => {
-    const value = headersDict[headerItem.name] ? headersDict[headerItem.name] + '\n' : '';
+    const value = headersDict[headerItem.name] !== undefined ?
+        headersDict[headerItem.name] + '\n' : '';
```
These changes are also in #6195, but they're needed for this PR as well.
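For reference, the fixed helper in full as I read it — a sketch assuming the multi-value behavior implied by the diff (repeated headers joined with `'\n'`), not a verbatim copy of the file:

```js
/**
 * Collapses an array of {name, value} header entries into a dict, joining
 * repeated headers with '\n'. Checking `!== undefined` rather than truthiness
 * keeps an existing empty-string value from being silently overwritten.
 */
function headersArrayToHeadersDict(headersArray = []) {
  const headersDict = {};
  headersArray.forEach(headerItem => {
    const value = headersDict[headerItem.name] !== undefined ?
        headersDict[headerItem.name] + '\n' : '';
    headersDict[headerItem.name] = value + headerItem.value;
  });
  return headersDict;
}

// e.g. a repeated header where the first value is the empty string:
headersArrayToHeadersDict([
  {name: 'Set-Cookie', value: ''},
  {name: 'Set-Cookie', value: 'b=2'},
]); // → {'Set-Cookie': '\nb=2'} (the old truthy check would yield 'b=2')
```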
Force-pushed from a403647 to 0244f1f.
awesome stuff!!
```js
  },
];
const networkRecords = getMockNetworkRecords();
networkRecords[2].protocol = 'data';
```
beautiful these are real tests now! :D
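As a sketch of what the mock is exercising — the real audit's filtering logic may differ, this just shows the data-URI case being skipped:

```js
// Mark the third mock record as a data URI…
const networkRecords = getMockNetworkRecords(); // helper defined in the test file
networkRecords[2].protocol = 'data';

// …so that any record-level filtering along these lines excludes it.
const auditableRecords = networkRecords.filter(record => record.protocol !== 'data');
```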
```js
  return {traceEvents};
}

module.exports = createTestTrace;
```
i ❤️ this 📄
LGTM, love the offscreen image tests 😍
Other half of #6195. This PR adds a simple trace creator to easily create test traces that match the needed characteristics. Right now it accepts timestamps for navStart and the end of the trace, plus a set of timestamps and durations for top-level tasks (but the options can easily be expanded).
This allows easy replacement of a bunch of computed-artifact mocks that rely on a specific TTI, or that just need a dummy trace so the computed artifacts don't throw while another aspect of the audit is being tested.
Some of these are a little more sensitive to future changes than what was found in #6195, but we do get a bit more robustness in that these files now test with real computed artifacts rather than just testing the quality of our mocks (which can be particularly risky when testing simple pass/fail conditions without testing why).
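A sketch of the intended usage — the option names are guessed from this description, so the real API may differ slightly:

```js
// Path and option names are assumptions for illustration.
const createTestTrace = require('./create-test-trace.js');

const trace = createTestTrace({
  navigationStartTime: 0, // timestamp for navStart
  traceEnd: 5000,         // timestamp for the end of the trace
  topLevelTasks: [        // {ts, duration} pairs for top-level tasks
    {ts: 1000, duration: 50},
    {ts: 3000, duration: 100},
  ],
});

// trace.traceEvents can then feed real computed artifacts in a test
// instead of a hand-rolled mock.
```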