fix(predictive-perf): incremental improvements and fixes #3163
Conversation
Force-pushed from b1fde39 to 0ff9ef9
```js
const DEFAULT_THROUGHPUT = emulation.TYPICAL_MOBILE_THROTTLING_METRICS.targetDownloadThroughput * 8; // 1.6 Mbps

// same multiplier as Lighthouse uses for CPU emulation
const DEFAULT_CPU_TASK_MULTIPLIER = 4;
```
ugly, but should this also pull out `emulation.CPU_THROTTLE_METRICS.rate` here?
if `DEFAULT_LAYOUT_TASK_MULTIPLIER` should be defined relative to `DEFAULT_CPU_TASK_MULTIPLIER`, we could start with just a multiplier (`.5` in this case)
yeah good calls
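The two suggestions above (pulling the rate out of the emulation config, and expressing the layout multiplier relative to the CPU one) could look like this sketch. The `emulation` object here is a hypothetical stand-in for Lighthouse's emulation config module, shown only to illustrate deriving both constants from one source; the real module and its values may differ.

```javascript
// Hypothetical stand-in for Lighthouse's emulation config module;
// the real object lives elsewhere and its exact shape may differ.
const emulation = {
  CPU_THROTTLE_METRICS: {rate: 4},
};

// pull the CPU rate from the shared emulation config instead of hardcoding 4
const DEFAULT_CPU_TASK_MULTIPLIER = emulation.CPU_THROTTLE_METRICS.rate;
// layout tasks are less CPU-bound, so define their multiplier relative to
// the CPU multiplier (.5 of it, per the suggestion above)
const DEFAULT_LAYOUT_TASK_MULTIPLIER = 0.5 * DEFAULT_CPU_TASK_MULTIPLIER;
```

Keeping the relationship explicit means a future change to the CPU throttle rate automatically propagates to the layout estimate.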
```js
const DEFAULT_CPU_TASK_MULTIPLIER = 4;
// layout tasks tend to be less CPU-bound and do not experience the same increase in duration
const DEFAULT_LAYOUT_TASK_MULTIPLIER = 2;
// if a task takes more than 10 seconds it's usually a sign it isn't actually CPU bound and we're over estimating
```
overestimating
done
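The 10-second ceiling described in the code comment above could be applied as in this sketch; `DEFAULT_MAXIMUM_CPU_TASK_DURATION` and `estimateTaskDuration` are hypothetical names for illustration, not the PR's actual code.

```javascript
// Hypothetical names: cap multiplied task estimates, since tasks this long
// are usually not CPU bound and the multiplier would overestimate them.
const DEFAULT_MAXIMUM_CPU_TASK_DURATION = 10000; // ms

function estimateTaskDuration(observedDurationMs, multiplier) {
  return Math.min(observedDurationMs * multiplier, DEFAULT_MAXIMUM_CPU_TASK_DURATION);
}
```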
```diff
@@ -69,6 +72,14 @@ class TcpConnection {
  }

  /**
   * @param {number} bytes
   */
  setExtraBytesDownloaded(bytes) {
```
description for this? unclear what the functionality is from the call. may want to use just `extra` or `overflow` for both method and prop, too? Or explicit reference to being part of h2?
yeah will do, `_h2Overflow` probably more clear 👍
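The rename agreed on above might end up looking like this minimal sketch; the surrounding simulation state of `TcpConnection` is elided, and the exact method/property names are an assumption based on the comment, not the merged code.

```javascript
// Minimal sketch only; the real TcpConnection carries much more
// simulation state than shown here.
class TcpConnection {
  constructor() {
    // bytes that arrived over an h2 connection beyond this request's own
    // payload (the "overflow" the review comment refers to)
    this._h2OverflowBytesDownloaded = 0;
  }

  /**
   * @param {number} bytes
   */
  setH2OverflowBytesDownloaded(bytes) {
    this._h2OverflowBytesDownloaded = bytes;
  }
}
```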
```diff
@@ -13,6 +13,8 @@ const TracingProcessor = require('../../lib/traces/tracing-processor');

// Tasks smaller than 10 ms have minimal impact on simulation
const MINIMUM_TASK_DURATION_OF_INTEREST = 10;
// video files tend to be enormous and throw off all graph traversals
const IGNORED_MIME_TYPES_REGEX = /^video/;
```
seems like a TODO, maybe? ideally we'd have a full graph in here and then things like predictive-perf would know how to ignore things like video (useful in the future for when other audits can rely on queries over the dep graph)
yeah that sounds good
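Until that fuller dependency graph exists, the two constants above imply filtering along these lines. The record/task shapes (`record.mimeType`, `task.duration` in ms) and helper names are assumptions for illustration.

```javascript
const MINIMUM_TASK_DURATION_OF_INTEREST = 10; // ms
const IGNORED_MIME_TYPES_REGEX = /^video/;

// assumed shape: {duration: number} in milliseconds
function isTaskOfInterest(task) {
  return task.duration >= MINIMUM_TASK_DURATION_OF_INTEREST;
}

// assumed shape: {mimeType: string}; drops video records so their huge
// payloads don't dominate graph traversals
function shouldIncludeRecord(record) {
  return !IGNORED_MIME_TYPES_REGEX.test(record.mimeType || '');
}
```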
```diff
@@ -94,8 +93,7 @@ class TraceOfTab extends ComputedArtifact {
   // subset all trace events to just our tab's process (incl threads other than main)
   // stable-sort events to keep them correctly nested.
   const processEvents = trace.traceEvents
-    .filter(e => e.pid === startedInPageEvt.pid)
-    .stableSort((event0, event1) => event0.ts - event1.ts);
+    .filter(e => e.pid === startedInPageEvt.pid);
```
I guess my issues here are:

- trace events aren't necessarily in order
- our finding of main thread, the correct navStart etc aren't perfect, but I'm concerned that all of our logic for those (from `keyEvents` above) currently relies on the order after sorting :)
- I don't believe any gatherer/audit currently relies on correctly (stably) sorted events -- everything looking at the trace is looking at top level events, but it feels like something we're just going to have to re-add in the future when we start looking at nested events
- no sort at all seems really wrong, though. TTFI and TTCI currently rely on the top level events being in order (from `getMainThreadTopLevelEvents`, which gets its order from `mainThreadEvents`)...that may be true most of the time but isn't necessarily, and with the natural noise in those metrics it would be difficult to spot the bug if it does cause issues

since the absolute time spent on this is pretty small (like 900ms on a big trace I just tried vs like 500ms with native sort), it seems like the main benefit here is for the mass processing of traces rather than just a Lighthouse run, so personally I feel like we should keep the stable sort.

So we could:

- keep `stableSort`
- `stableSort` only `keyEvents` (should be fast because there aren't that many key events) and warn everywhere `processEvents` and `mainThreadEvents` aren't sorted (and TTFI and TTCI could add their own sorts after extracting quiet periods or whatever)
- switch to timsort. We discussed using timsort a while back because it's stable but is also optimized for mostly or partly sorted data, which fits the trace events use case well. Using https://www.npmjs.com/package/timsort I sorted the same trace in about 300ms, which seems like it could be a good compromise with "no sort at all" :) One problem is one longstanding issue open against the repo, but at worst we could wrap the sort line with a try/catch and fall back to devtools' `stableSort` if timsort throws (and maybe log to sentry to see if it's ever actually a problem for us...as @paulirish notes, we have the simplest case possible since we're only sorting integer timestamps)
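The timsort-with-fallback option could be sketched as below. `stableSortedCopy` is a hypothetical stand-in for devtools' `stableSort` (a decorate-sort-undecorate stable sort), and the `timsort` npm package is loaded inside a try/catch as suggested; this is an illustration of the idea, not the PR's code.

```javascript
// Stand-in for devtools' stableSort: decorate with the original index so
// ties keep their relative order even if Array#sort is unstable.
function stableSortedCopy(events) {
  return events
    .map((event, index) => ({event, index}))
    .sort((a, b) => (a.event.ts - b.event.ts) || (a.index - b.index))
    .map(item => item.event);
}

function sortTraceEvents(events) {
  try {
    // npm "timsort" package: stable, and fast on mostly-sorted input
    const timsort = require('timsort');
    timsort.sort(events, (a, b) => a.ts - b.ts); // sorts in place
    return events;
  } catch (err) {
    // fall back to the plain stable sort if timsort is unavailable or throws
    return stableSortedCopy(events);
  }
}
```

Either path returns events ordered by `ts`, so callers don't need to know which sort ran; a production version could log the fallback to Sentry as the comment suggests.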
I can revert for now and we can punt for later. Maybe we end up fixing mziccard/node-timsort#14 and use it?
> I can revert for now and we can punt for later.

sgtm

> Maybe we end up fixing mziccard/node-timsort#14 and use it?

I like that plan :)
Force-pushed from 0ff9ef9 to df0b58c
💭 📐
LGTM!
…)" This reverts commit 854921b.

in the name of fast mode #2703
blocked on #3162
broken out from #2720 for easier reviewing