
core: add default audit options with scoring #4927

Merged (1 commit), Apr 9, 2018

Conversation

patrickhulce (Collaborator):

closes #4873 (Convert metrics to read score thresholds from options if set)

brendankenny (Member) left a comment:

would it be better to leave the default in the file and allow overriding instead?

patrickhulce (Collaborator, Author):

> would it be better to leave the default in the file and allow overriding instead?

I thought about that, but for the magic --print-config I'm looking forward to you implementing, this seemed better ;)

patrickhulce changed the base branch from cleanup_simulator_creation to master on April 5, 2018 21:57
paulirish (Member):

@patrickhulce wdyt about using a defaultOptions that comes along with the audit? Each audit delivers its own defaults, so the config doesn't need to describe exactly how each audit gets used.

e.g. https://github.com/paulirish/pwmetrics/blob/master/lib/lh-config.ts doesn't need to reference the constants just to provide defaults.

the basic pattern we're thinking of:

diff --git a/lighthouse-core/audits/audit.js b/lighthouse-core/audits/audit.js
index 227868c9..4bd800a7 100644
--- a/lighthouse-core/audits/audit.js
+++ b/lighthouse-core/audits/audit.js
@@ -42,6 +42,10 @@ class Audit {
     throw new Error('Audit meta information must be overridden.');
   }
 
+  static get defaultOptions() {
+    return {};
+  }
+
   /**
    * Computes a clamped score between 0 and 1 based on the measured value. Score is determined by
    * considering a log-normal distribution governed by the two control points, point of diminishing
diff --git a/lighthouse-core/audits/first-meaningful-paint.js b/lighthouse-core/audits/first-meaningful-paint.js
index 78273bdd..ae2e4d1d 100644
--- a/lighthouse-core/audits/first-meaningful-paint.js
+++ b/lighthouse-core/audits/first-meaningful-paint.js
@@ -24,6 +24,13 @@ class FirstMeaningfulPaint extends Audit {
     };
   }
 
+  static get defaultOptions() {
+    return {
+      scorePODR: 1600,
+      scoreMedian: 4000,
+    };
+  }
+
   /**
    * Audits the page to give a score for First Meaningful Paint.
   * @see https://github.com/GoogleChrome/lighthouse/issues/26
diff --git a/lighthouse-core/config/default-config.js b/lighthouse-core/config/default-config.js
index 33fd498b..70bc63e0 100644
--- a/lighthouse-core/config/default-config.js
+++ b/lighthouse-core/config/default-config.js
@@ -75,7 +75,7 @@ module.exports = {
     'works-offline',
     'viewport',
     'without-javascript',
-    {path: 'first-meaningful-paint', options: auditOptions['first-meaningful-paint']},
+    {path: 'first-meaningful-paint'},
     'load-fast-enough-for-pwa',
     {path: 'speed-index-metric', options: auditOptions['speed-index-metric']},
     'screenshot-thumbnails',
diff --git a/lighthouse-core/runner.js b/lighthouse-core/runner.js
index 65cdfb76..5b77f475 100644
--- a/lighthouse-core/runner.js
+++ b/lighthouse-core/runner.js
@@ -260,8 +260,9 @@ class Runner {
           throw error;
         }
       }
+      const options = {...audit.defaultOptions, ...(auditDefn.options || {})};
       // all required artifacts are in good shape, so we proceed
-      return audit.audit(artifacts, {options: auditDefn.options || {}, settings: opts.settings});
+      return audit.audit(artifacts, {options, settings: opts.settings});
     // Fill remaining audit result fields.
     }).then(auditResult => Audit.generateAuditResult(audit, auditResult))
     .catch(err => {

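Because runner.js now shallow-merges the config's options over audit.defaultOptions, a consumer config only has to state the keys it wants to change. A hedged sketch of the upside (the extends syntax is Lighthouse's; the override value is hypothetical):

// custom-config.js (hypothetical): override only scoreMedian for
// first-meaningful-paint; scorePODR keeps the audit's default thanks to
// {...audit.defaultOptions, ...(auditDefn.options || {})} in runner.js.
module.exports = {
  extends: 'lighthouse:default',
  audits: [
    {path: 'first-meaningful-paint', options: {scoreMedian: 5000}},
  ],
};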
patrickhulce changed the title from "core(config): make all numeric score thresholds configurable" to "core: add default audit options for scores" on Apr 5, 2018
patrickhulce changed the title from "core: add default audit options for scores" to "core: add default audit options with scoring" on Apr 5, 2018
patrickhulce (Collaborator, Author):

aight, defaultOptions here we are! is this what you had in mind @paulirish?

brendankenny (Member) left a comment:

LGTM! Thanks for dealing with the dumb churn :)

@@ -60,14 +68,10 @@ class TotalByteWeight extends ByteEfficiencyAudit {
     const totalCompletedRequests = results.length;
     results = results.sort((itemA, itemB) => itemB.totalBytes - itemA.totalBytes).slice(0, 10);
 
-    // Use the CDF of a log-normal distribution for scoring.
-    // <= 1600KB: score≈1
Member:

do we want to preserve these to save a click? @paulirish used to like these especially

patrickhulce (Collaborator, Author):

IMO they have limited value in their current form, since they document the percentiles inconsistently and sit near the context.options usage rather than the defaultOptions declaration. I'd prefer to stick to the graphs and get better at interpreting the PODR and median.

alternatively, I had an idea that we express the score in terms of the green/yellow transition and the yellow/red transition, and the function just finds the curve that fits (yellow/red is already the median), but I def don't want to make that change in this PR :)
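To make the control points concrete, here is a minimal sketch of scoring on the CDF of a log-normal distribution, the technique the removed comments referred to. It is an illustration, not Lighthouse's actual implementation; the choice that the PODR maps to a score of 0.92 is an assumption made for the example.

// Minimal sketch (not Lighthouse's code): score = 1 - CDF of a log-normal
// distribution, with scoreMedian mapping to 0.5 and scorePODR mapping to
// PODR_SCORE. The 0.92 target is an assumed constant for illustration.

// Error-function approximation (Abramowitz & Stegun 7.1.26, |error| < 1.5e-7).
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
    0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Standard normal CDF.
const normalCdf = z => 0.5 * (1 + erf(z / Math.SQRT2));

const PODR_SCORE = 0.92;          // assumed score at the point of diminishing returns
const INV_CDF_AT_PODR = -1.4051;  // inverse normal CDF at 1 - PODR_SCORE, i.e. at 0.08

function logNormalScore(value, scorePODR, scoreMedian) {
  const mu = Math.log(scoreMedian);                           // median scores 0.5
  const sigma = (Math.log(scorePODR) - mu) / INV_CDF_AT_PODR; // PODR scores ~0.92
  const score = 1 - normalCdf((Math.log(value) - mu) / sigma);
  return Math.min(1, Math.max(0, score));
}

// Using the first-meaningful-paint defaults from the diff above (ms):
logNormalScore(1600, 1600, 4000); // ≈ 0.92, at the PODR
logNormalScore(4000, 1600, 4000); // 0.5, at the median (the yellow/red transition)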

Member:

agreed they're inconsistent across files, and yeah, they would need to move to live near defaultOptions (they could live in the jsdoc, for instance).

I don't feel particularly strongly about them, but I do see the appeal of not having to click through and hover over the graph to get a sense of scoring.

On the other other hand, this kind of thing should likely live in the docs (and mayyyyybe the report?) anyways :)
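For concreteness, a jsdoc-adjacent version of the thresholds might look something like this hypothetical sketch (using the first-meaningful-paint defaults from the diff above; not code from this PR):

  /**
   * Control points for the log-normal scoring curve.
   * scorePODR: point of diminishing returns; values at or below it score ≈1.
   * scoreMedian: value that maps to a score of 0.5 (the yellow/red transition).
   * @return {{scorePODR: number, scoreMedian: number}}
   */
  static get defaultOptions() {
    return {
      scorePODR: 1600,
      scoreMedian: 4000,
    };
  }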

Member:

Let's nuke
