
core(lantern): move metrics to computed artifacts #4766

Merged · 16 commits · Mar 30, 2018

Conversation

patrickhulce
Collaborator

Splitting up a big part of #4676 for separate discussion. This is a straight-up refactor with no functional changes yet.

The bulk of the predictive-perf audit has been split into separate computed artifacts; they share a common base class (computed/metrics/metric.js) that DRYs up the simulation.

If all this looks mostly good, I'll add a test file for each one individually.

* @param {Object} artifacts
* @return {Promise<MetricResult>}
*/
async computeMetricWithGraphs(data, artifacts, extras) {
Collaborator Author

openly searching for a better name and possible flow than extras; it's needed for TTI to be able to pass in the FMP result for the Math.max. The alternative is just duplicating this entire computeMetricWithGraphs function for TTI.
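One way the extras plumbing could work is a template method on the base class, where the TTI subclass overrides the estimate step. This is a minimal sketch, not the actual Lighthouse source; `MetricBase`, `InteractiveMetric`, and the object shapes are hypothetical stand-ins:

```javascript
// Sketch: a base-class template method where `extras` lets a subclass like TTI
// thread extra inputs (e.g. the FMP result) into its estimate without
// duplicating the whole compute flow.
class MetricBase {
  // Subclasses override this; `extras` is optional extra context.
  getEstimateFromSimulation(simulation, extras) {
    return simulation.timeInMs;
  }

  computeMetricWithGraphs(simulation, extras = {}) {
    return {timeInMs: this.getEstimateFromSimulation(simulation, extras)};
  }
}

class InteractiveMetric extends MetricBase {
  // TTI must be at least as late as FMP, hence the Math.max with extras.fmpResult.
  getEstimateFromSimulation(simulation, extras) {
    return Math.max(simulation.timeInMs, extras.fmpResult.timeInMs);
  }
}

const fmpResult = {timeInMs: 1500};
const tti = new InteractiveMetric()
  .computeMetricWithGraphs({timeInMs: 1200}, {fmpResult});
console.log(tti.timeInMs); // 1500
```

The design question in the thread is whether this override hook beats duplicating the flow per metric.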

@patrickhulce
Collaborator Author

anyone interested in giving this some review ❤️ ? :D

Member

@paulirish left a comment

generally fine with the guts of each metric in a computed artifact.

@@ -0,0 +1,100 @@
/**
* @license Copyright 2017 Google Inc. All Rights Reserved.
Member

2018 😛

Collaborator Author

done 😛

const traceOfTab = await artifacts.requestTraceOfTab(trace);
const networkAnalysis = await artifacts.requestNetworkAnalysis(devtoolsLog);

const optimisticGraph = this.getOptimisticGraph(graph, traceOfTab, extras);
Member

afaict these extras aren't used within this method?

Collaborator Author

yeah, just in getEstimateFromSimulation, I'll remove

const optimisticEstimate = new Simulator(optimisticGraph, options).simulate();
const pessimisticEstimate = new Simulator(pessimisticGraph, options).simulate();

optimisticEstimate.timeInMs = this.getEstimateFromSimulation(
Member

we're redefining the timeInMs? Feels a little weird. What does the timeInMs from .simulate represent?

Collaborator Author

yeah, from .simulate it's a shortcut for Math.max(...nodeTiming.map(timing => timing.endTime))

we want to return our optimistic/pessimistic estimates in ms along with the node timing that led to that computation, open to other suggestions on how to structure the return value
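The shortcut described above can be illustrated in isolation. The `nodeTimings` array below is made-up sample data, not real simulator output:

```javascript
// The simulator's timeInMs is just the latest endTime across all simulated
// node timings, i.e. when the last piece of work finishes.
function timeInMsFromNodeTimings(nodeTimings) {
  return Math.max(...nodeTimings.map(timing => timing.endTime));
}

const nodeTimings = [
  {startTime: 0, endTime: 250},
  {startTime: 100, endTime: 900},
  {startTime: 300, endTime: 640},
];
console.log(timeInMsFromNodeTimings(nodeTimings)); // 900
```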

require('fs').readdirSync(__dirname + '/gather/computed').forEach(function(filename) {
const fileList = [
...fs.readdirSync(path.join(__dirname, './gather/computed')),
...fs.readdirSync(path.join(__dirname, './gather/computed/metrics')).map(f => `metrics/${f}`),
Member

can you check if gulpfile also needs a bump?

Collaborator Author

this effectively is bumping the gulpfile from what I could tell

const gatherers = LighthouseRunner.getGathererList()
.map(f => '../lighthouse-core/gather/gatherers/' + f.replace(/\.js$/, ''));

Member

@brendankenny left a comment

I started leaving specific comments but it seems like you were looking for an overall 👍 first :)

This is looking good to me as a refactor of the current state of things. These make sense as computed artifacts.

For naming, I made a comment below that we may want to classify these more as lantern-specific metrics. We may have other simulation approaches than optimistic/pessimistic regression, so we probably don't want metric as the going name for just lantern-style simulated metrics.

I'm also somewhat concerned by code like the growing beginning of createAuditResult :)

const records = [];
graph.traverse(node => node.record && records.push(node.record));
const simulatorOptions = NetworkAnalysis.computeRTTAndServerResponseTime(records);
delete simulatorOptions.rtt;
Object.assign(simulatorOptions, {cpuTaskMultiplier: 1, layoutTaskMultiplier: 1});
const simulator = new LoadSimulator(graph, simulatorOptions);

some complexity is inherent here (and some will be fixed with those TODOs), but it would be great to do a pass in the near future focused on readability (and possibly writability) of any APIs we use outside the computed artifacts

@@ -0,0 +1,100 @@
/**
* @license Copyright 2017 Google Inc. All Rights Reserved.
Member

2018 :)

Collaborator Author

done :)

*/
'use strict';

const MetricArtifact = require('./metric');
Member

since these are new files, can we turn on type checking for them?

Collaborator Author

done! 🎉

@@ -364,3 +364,8 @@ module.exports = Simulator;
* @property {number} [layoutTaskMultiplier]
*/

/**
* @typedef SimulationResult
Member

since these aren't exported I don't believe these will be visible in other files. Will need to live under LH...maybe a new sub-module under LH?

Member

It will be structurally type checked across files, though, if you'd rather just define compatible typedefs in each place

Collaborator Author

@patrickhulce Mar 20, 2018

I discovered they're referenceable when you require the file, as I found while getting the simulator stuff type-checked. That seems like a decent way to prevent the LH externs from ballooning with the more local typedefs, wdyt?

const Simulator = require('../../simulator')

/**
 * @param {Simulator.SimulationResult} result
 */

neeeeeeeevermind, that just worked last time as a coincidence because I export a property of that type with the same name on the class object 😞

Member

gah, I swear I checked this with Audit and it didn't work for jsdoc typedefs. Maybe it was fixed in 2.8? I can't remember if I tested before or after we updated. Or it was just user error :)

But since it works, agree that that's good for this kind of typedef

graph.traverse(node => node.record && records.push(node.record));
const simulatorOptions = NetworkAnalysis.computeRTTAndServerResponseTime(records);
// TODO: use rtt/throughput from config.settings instead of defaults
delete simulatorOptions.rtt;
Member

set to undefined?

Collaborator Author

unfortunately that won't have the same effect, gotta love JS key existence vs. undefined biting us left and right 😆
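The key-existence pitfall being joked about here is easy to demonstrate, assuming a consumer that applies defaults with Object.assign (a common pattern for option objects):

```javascript
// Setting a property to undefined is NOT the same as deleting it: the own key
// still exists, so it clobbers any default the consumer tries to apply.
const defaults = {rtt: 150, throughput: 1.6 * 1024};

const optionsA = {rtt: undefined};   // key exists, value undefined
const optionsB = {rtt: undefined};
delete optionsB.rtt;                 // key truly removed

// Object.assign copies the own `rtt: undefined` key, wiping out the default...
console.log(Object.assign({}, defaults, optionsA).rtt); // undefined
// ...while a deleted key lets the default shine through.
console.log(Object.assign({}, defaults, optionsB).rtt); // 150

// The `in` operator shows the difference in key existence directly.
console.log('rtt' in optionsA, 'rtt' in optionsB); // true false
```

The same distinction applies to object spread (`{...defaults, ...options}`) and `hasOwnProperty` checks.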

Member

we should probably keep an eye on performance then, but hopefully properties on simulatorOptions aren't the bottleneck anyways... :)

Collaborator Author

🤣 🤣 🤣

const Simulator = require('../../../lib/dependency-graph/simulator/simulator');
const WebInspector = require('../../../lib/web-inspector');

class MetricArtifact extends ComputedArtifact {
Member

SimulatedMetric or LanternMetric, maybe?

Collaborator Author

LanternMetric for now 👍

@patrickhulce
Collaborator Author

For naming, I made a comment below that we may want to classify these more as lantern-specific metrics. We may have other simulation approaches than optimistic/pessimistic regression, so we probably don't want metric as the going name for just lantern-style simulated metrics.

Sure. Not sure if you got a chance to take a look at #4676, but the idea was that the single file would encapsulate the proper metric value and do the context.throttlingType === 'lantern' ? this.getLanternMetric() : this.getTraceMetric() for you so it's not duplicated in all audits. Keeping the lantern computed artifact as a separate composable subclass sg though 👍

some complexity is inherent here (and some will be fixed with those TODOs), but it would be great to do a pass in the near future focused on readability (and possibility writability) of any APIs we use outside the computed artifacts

yes definitely, we're somewhat hamstrung at this exact moment by not having landed the mechanism to branch logic based on lantern vs. non-lantern, which is the immediate followup after this 👍

@patrickhulce
Collaborator Author

alright thanks for the initial round of feedback folks! I think we're ready for round 2️⃣ 🥊 💨

@patrickhulce
Collaborator Author

🏏 🏏

@patrickhulce
Collaborator Author

🏓

@patrickhulce
Collaborator Author

(cricket cricket)

(ping)

const optimisticEstimate = new Simulator(optimisticGraph, options).simulate();
const pessimisticEstimate = new Simulator(pessimisticGraph, options).simulate();

optimisticEstimate.timeInMs = this.getEstimateFromSimulation(
Member

this bit still seems weird to me. here's how i'm reading it:

  1. simulator, plz simulate with this graph.
  2. simulation done here's my estimate and some nodeTiming internals
  3. okay thx
  • but if this is the CI metric, i'm gonna do some extra work where I extract the lastLongTaskEndTime from the internals and also take the lantern FMP number, and do some Math.max with those. and then i'm gonna ignore the provided estimate and instead use this one.

Is that right?

I'm thinking one alternative is a sort of minimumEstimateTime(nodeTiming){ } method that's either provided to the simulator/simulate or something? Don't know if that makes sense though..

Are there any clients of simulate() now that don't override timeInMs with their own value?

Collaborator Author

I think this is a misunderstanding based on my laziness :) Simulator has no concept of a metric at all, and that's a good thing we'd like to keep; it just computes timings of things.

Metrics take these timings and return estimates, in the current case 1 optimistic, 1 pessimistic.

It just so happens that 2 of our 3 current metrics use time of the last network resource (what simulator outputs) as their estimate without any additional logic. Rather than add a mapSimulatorOutputToEstimateOutput I abused the existing return object and allow the metric to mutate it if it wants.

Given the amount of confusion this causes and the fact that we're about to be in a situation where only 2/5 metrics take simulator output without touching it, I think it makes sense to just go the mapSimulatorOutputToEstimateOutput route, WDYT? Does this make more sense?
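The mapping route proposed here could look roughly like the sketch below. All names and shapes are illustrative stand-ins, not the actual Lighthouse API:

```javascript
// Stand-in for Simulator#simulate(): returns the last end time (when the
// final network resource finishes) plus the raw per-node timings.
function simulate(graph) {
  const nodeTimings = graph.nodeTimings;
  return {
    timeInMs: Math.max(...nodeTimings.map(t => t.endTime)),
    nodeTimings,
  };
}

// A metric that passes simulator output through untouched (like FMP/FCP in
// the thread's description): no mutation of the simulator's return object.
const identityEstimate = simulation => ({
  timeInMs: simulation.timeInMs,
  nodeTimings: simulation.nodeTimings,
});

// A metric with extra mapping logic (like TTI taking a max with another
// estimate) returns a fresh object instead of overwriting timeInMs in place.
const maxWithFloor = floorMs => simulation => ({
  timeInMs: Math.max(simulation.timeInMs, floorMs),
  nodeTimings: simulation.nodeTimings,
});

const sim = simulate({nodeTimings: [{endTime: 800}, {endTime: 1200}]});
console.log(identityEstimate(sim).timeInMs);   // 1200
console.log(maxWithFloor(1500)(sim).timeInMs); // 1500
```

Keeping the simulator metric-agnostic and doing the mapping in each metric avoids the confusing in-place overwrite of `timeInMs`.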

Collaborator Author

@brendankenny since paul's OOO for awhile WDYT of the above? :)

Member

SGTM

@@ -50,7 +50,7 @@ function parseChromeFlags(flags = '') {
* Attempts to connect to an instance of Chrome with an open remote-debugging
* port. If none is found, launches a debuggable instance.
* @param {!LH.Flags} flags
* @return {!Promise<!LH.LaunchedChrome>}
* @return {!Promise<LH.LaunchedChrome>}
Member

any reason to half delete the ! here? :)

Collaborator Author

full 🗑 now!


tsconfig.json Outdated
@@ -6,6 +6,7 @@
"allowJs": true,
"checkJs": true,
"strict": true,
"allowSyntheticDefaultImports": true,
Member

was this needed or did it turn out to be a red herring?

Collaborator Author

yeah unnecessary, removed


namespace Simulation {
// HACK: TS treats 'import * as Foo' as namespace instead of a type, use typeof and prototype
export type GraphNode = typeof _Node['prototype'];
Member

2.8 ended up adding a new helper that might(?) work here (and be somewhat less ugly :). export type GraphNode = InstanceType<typeof _Node>;

Collaborator Author

done

@patrickhulce
Collaborator Author

hey folks would love to merge this today if possible, any more comments @paulirish @brendankenny ?

@@ -76,6 +76,7 @@ describe('Lighthouse chrome extension', function() {
});

if (lighthouseResult.exceptionDetails) {
console.error(lighthouseResult); // eslint-disable-line no-console
Member

will you add a comment explaining why logging here?

Collaborator Author

done

Member

@brendankenny left a comment

LGTM!

@patrickhulce patrickhulce merged commit 25913c7 into master Mar 30, 2018
@patrickhulce patrickhulce deleted the move_predictive_perf_artifact branch March 30, 2018 22:16