core: add default audit options with scoring #4927

Merged 1 commit on Apr 9, 2018
7 changes: 7 additions & 0 deletions lighthouse-core/audits/audit.js
@@ -42,6 +42,13 @@ class Audit {
throw new Error('Audit meta information must be overridden.');
}

/**
* @return {Object}
*/
static get defaultOptions() {
return {};
}

/**
* Computes a clamped score between 0 and 1 based on the measured value. Score is determined by
* considering a log-normal distribution governed by the two control points, point of diminishing
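The base-class getter above returns an empty object for subclasses to override. The runner-side change that threads these options into `audit(artifacts, context)` is outside this diff; a minimal sketch of how that merge might work (the `buildContext` and `configOptions` names are illustrative, not from this PR):

```javascript
// Hedged sketch: merging an audit's defaultOptions with config-supplied
// overrides before invoking audit(artifacts, context). Not the PR's runner code.
class Audit {
  static get defaultOptions() {
    return {};
  }
}

class BootupTime extends Audit {
  static get defaultOptions() {
    return {scorePODR: 600, scoreMedian: 3500};
  }
}

function buildContext(AuditClass, configOptions = {}) {
  // Config-supplied options win over the audit's declared defaults.
  return {options: Object.assign({}, AuditClass.defaultOptions, configOptions)};
}

const context = buildContext(BootupTime, {scoreMedian: 5000});
// context.options is {scorePODR: 600, scoreMedian: 5000}
```

A shallow `Object.assign` suffices here because the score options are a flat bag of numbers; nested option objects would need a deeper merge.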
24 changes: 16 additions & 8 deletions lighthouse-core/audits/bootup-time.js
@@ -11,11 +11,6 @@ const Util = require('../report/v2/renderer/util');
const {groupIdToName, taskToGroup} = require('../lib/task-groups');
const THRESHOLD_IN_MS = 10;

// Parameters for log-normal CDF scoring. See https://www.desmos.com/calculator/rkphawothk
// <500ms ~= 100, >2s is yellow, >3.5s is red
const SCORING_POINT_OF_DIMINISHING_RETURNS = 600;
const SCORING_MEDIAN = 3500;

class BootupTime extends Audit {
/**
* @return {!AuditMeta}
@@ -34,6 +29,18 @@ class BootupTime extends Audit {
};
}

/**
* @return {LH.Audit.ScoreOptions}
*/
static get defaultOptions() {
return {
// see https://www.desmos.com/calculator/rkphawothk
// <500ms ~= 100, >2s is yellow, >3.5s is red
scorePODR: 600,
scoreMedian: 3500,
};
}

/**
* @param {DevtoolsTimelineModel} timelineModel
* @return {!Map<string, Number>}
@@ -65,9 +72,10 @@

/**
* @param {!Artifacts} artifacts
* @param {LH.Audit.Context} context
* @return {!AuditResult}
*/
static audit(artifacts) {
static audit(artifacts, context) {
const trace = artifacts.traces[BootupTime.DEFAULT_PASS];
return artifacts.requestDevtoolsTimelineModel(trace).then(devtoolsTimelineModel => {
const executionTimings = BootupTime.getExecutionTimingsByURL(devtoolsTimelineModel);
@@ -106,8 +114,8 @@

const score = Audit.computeLogNormalScore(
totalBootupTime,
SCORING_POINT_OF_DIMINISHING_RETURNS,
SCORING_MEDIAN
context.options.scorePODR,
context.options.scoreMedian
);

return {
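For readers unfamiliar with the scoring these options control: `scoreMedian` is the value that maps to a score of 0.5 on a log-normal CDF curve, and `scorePODR` is the point of diminishing returns near the top of the curve. A hedged, self-contained sketch of such a curve follows — this is not Lighthouse's actual `computeLogNormalScore`, and the assumption that the PODR maps to roughly 0.92 is mine:

```javascript
// Hedged sketch of log-normal CDF scoring (NOT the PR's actual implementation).
// Assumptions: score(scoreMedian) = 0.5 by definition of the median, and the
// point of diminishing returns (scorePODR) is assumed here to map to ~0.92.

// Complementary error function, Abramowitz & Stegun approximation 7.1.26.
function erfc(x) {
  const z = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * z);
  const y = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
            t * (-1.453152027 + t * 1.061405429)))) * Math.exp(-z * z);
  return x >= 0 ? y : 2 - y;
}

function computeLogNormalScore(value, scorePODR, scoreMedian) {
  const location = Math.log(scoreMedian);
  // erfc(0.9936) ≈ 0.16, so the PODR lands at score (2 - 0.16) / 2 = 0.92.
  const INV_ERFC = 0.9936;
  const shape = Math.abs(Math.log(scorePODR) - location) / (INV_ERFC * Math.SQRT2);
  const standardizedX = (Math.log(value) - location) / (shape * Math.SQRT2);
  return Math.min(1, Math.max(0, erfc(standardizedX) / 2));
}

// With bootup-time's defaults: 600ms scores ≈0.92, 3500ms scores ≈0.5.
```

Plugging in the defaults above, scores fall smoothly from near 1 for very fast pages toward 0 as the total bootup time passes the median.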
28 changes: 16 additions & 12 deletions lighthouse-core/audits/byte-efficiency/total-byte-weight.js
@@ -8,11 +8,6 @@
const ByteEfficiencyAudit = require('./byte-efficiency-audit');
const Util = require('../../report/v2/renderer/util');

// Parameters for log-normal CDF scoring. See https://www.desmos.com/calculator/gpmjeykbwr
// ~75th and ~90th percentiles http://httparchive.org/interesting.php?a=All&l=Feb%201%202017&s=All#bytesTotal
const SCORING_POINT_OF_DIMINISHING_RETURNS = 2500 * 1024;
const SCORING_MEDIAN = 4000 * 1024;

class TotalByteWeight extends ByteEfficiencyAudit {
/**
* @return {!AuditMeta}
@@ -31,11 +26,24 @@ class TotalByteWeight extends ByteEfficiencyAudit {
};
}

/**
* @return {LH.Audit.ScoreOptions}
*/
static get defaultOptions() {
return {
// see https://www.desmos.com/calculator/gpmjeykbwr
// ~75th and ~90th percentiles http://httparchive.org/interesting.php?a=All&l=Feb%201%202017&s=All#bytesTotal
scorePODR: 2500 * 1024,
scoreMedian: 4000 * 1024,
};
}

/**
* @param {!Artifacts} artifacts
* @param {LH.Audit.Context} context
* @return {!Promise<!AuditResult>}
*/
static audit(artifacts) {
static audit(artifacts, context) {
const devtoolsLogs = artifacts.devtoolsLogs[ByteEfficiencyAudit.DEFAULT_PASS];
return Promise.all([
artifacts.requestNetworkRecords(devtoolsLogs),
@@ -60,14 +68,10 @@ class TotalByteWeight extends ByteEfficiencyAudit {
const totalCompletedRequests = results.length;
results = results.sort((itemA, itemB) => itemB.totalBytes - itemA.totalBytes).slice(0, 10);

// Use the CDF of a log-normal distribution for scoring.
// <= 1600KB: score≈1
Member: do we want to preserve these to save a click? @paulirish used to like these especially

Collaborator (author): IMO they're of limited value in their current form, since they inconsistently document percentiles and sit near the context.options usage rather than the defaultOptions declaration. I'd prefer to stick to the graphs and get better at interpreting the PODR and median.

Alternatively, I had an idea that we express the score in terms of the green/yellow transition and the yellow/red transition and the function just finds the curve that fits (yellow/red is already the median), but I definitely don't want to make that change in this PR :)

Member: agreed they're inconsistent across files, and yeah, they would need to move to live near defaultOptions (they could live in the jsdoc, for instance).

I don't feel particularly strongly about them, but I do see the appeal of not having to click through and hover over the graph to get a sense of scoring.

On the other hand, this kind of thing should likely live in the docs (and maybe the report?) anyway :)

Member: Let's nuke
// 4000KB: score=0.50
// >= 9000KB: score≈0
const score = ByteEfficiencyAudit.computeLogNormalScore(
totalBytes,
SCORING_POINT_OF_DIMINISHING_RETURNS,
SCORING_MEDIAN
context.options.scorePODR,
context.options.scoreMedian
);

const headings = [
28 changes: 16 additions & 12 deletions lighthouse-core/audits/byte-efficiency/uses-long-cache-ttl.js
@@ -14,10 +14,6 @@ const URL = require('../../lib/url-shim');
// Ignore assets that have very high likelihood of cache hit
const IGNORE_THRESHOLD_IN_PERCENT = 0.925;

// Scoring curve: https://www.desmos.com/calculator/zokzso8umm
const SCORING_POINT_OF_DIMINISHING_RETURNS = 4; // 4 KB
const SCORING_MEDIAN = 768; // 768 KB

class CacheHeaders extends Audit {
/**
* @return {!AuditMeta}
@@ -36,6 +32,17 @@ class CacheHeaders extends Audit {
};
}

/**
* @return {LH.Audit.ScoreOptions}
*/
static get defaultOptions() {
return {
// see https://www.desmos.com/calculator/zokzso8umm
scorePODR: 4 * 1024,
scoreMedian: 768 * 1024,
};
}

/**
* Interpolates the y value at a point x on the line defined by (x0, y0) and (x1, y1)
* @param {number} x0
@@ -154,9 +161,10 @@

/**
* @param {!Artifacts} artifacts
* @param {LH.Audit.Context} context
* @return {!AuditResult}
*/
static audit(artifacts) {
static audit(artifacts, context) {
const devtoolsLogs = artifacts.devtoolsLogs[Audit.DEFAULT_PASS];
return artifacts.requestNetworkRecords(devtoolsLogs).then(records => {
const results = [];
@@ -205,14 +213,10 @@
(a, b) => a.cacheLifetimeInSeconds - b.cacheLifetimeInSeconds || b.totalBytes - a.totalBytes
);

// Use the CDF of a log-normal distribution for scoring.
// <= 4KB: score≈1
// 768KB: score=0.5
// >= 4600KB: score≈0.05
const score = Audit.computeLogNormalScore(
totalWastedBytes / 1024,
SCORING_POINT_OF_DIMINISHING_RETURNS,
SCORING_MEDIAN
totalWastedBytes,
context.options.scorePODR,
context.options.scoreMedian
);

const headings = [
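The hunk above shows only the jsdoc of the interpolation helper; its body is collapsed out of the diff. The conventional shape of such a helper would be (a sketch, since the real body is not shown here):

```javascript
// Sketch of the linear interpolation described by the jsdoc in the diff above:
// the y value at point x on the line through (x0, y0) and (x1, y1).
function linearInterpolation(x0, y0, x1, y1, x) {
  const slope = (y1 - y0) / (x1 - x0);
  return y0 + (x - x0) * slope;
}

linearInterpolation(0, 0, 10, 100, 5); // → 50
```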
23 changes: 15 additions & 8 deletions lighthouse-core/audits/consistently-interactive.js
@@ -11,11 +11,6 @@ const NetworkRecorder = require('../lib/network-recorder');
const TracingProcessor = require('../lib/traces/tracing-processor');
const LHError = require('../lib/errors');

// Parameters (in ms) for log-normal CDF scoring. To see the curve:
// https://www.desmos.com/calculator/uti67afozh
const SCORING_POINT_OF_DIMINISHING_RETURNS = 1700;
const SCORING_MEDIAN = 10000;

const REQUIRED_QUIET_WINDOW = 5000;
const ALLOWED_CONCURRENT_REQUESTS = 2;

@@ -41,6 +36,17 @@
};
}

/**
* @return {LH.Audit.ScoreOptions}
*/
static get defaultOptions() {
return {
// see https://www.desmos.com/calculator/uti67afozh
scorePODR: 1700,
scoreMedian: 10000,
};
}

/**
* Finds all time periods where the number of inflight requests is less than or equal to the
* number of allowed concurrent requests (2).
@@ -155,9 +161,10 @@

/**
* @param {!Artifacts} artifacts
* @param {LH.Audit.Context} context
* @return {!Promise<!AuditResult>}
*/
static audit(artifacts) {
static audit(artifacts, context) {
const trace = artifacts.traces[Audit.DEFAULT_PASS];
const devtoolsLog = artifacts.devtoolsLogs[Audit.DEFAULT_PASS];
const computedArtifacts = [
@@ -192,8 +199,8 @@
return {
score: Audit.computeLogNormalScore(
timeInMs,
SCORING_POINT_OF_DIMINISHING_RETURNS,
SCORING_MEDIAN
context.options.scorePODR,
context.options.scoreMedian
),
rawValue: timeInMs,
displayValue: Util.formatMilliseconds(timeInMs),
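The jsdoc above describes finding all time periods where at most two requests are in flight. One way to sketch that search is an event sweep over request start/end times — this is not the audit's actual implementation, and `findQuietPeriods` with its event-sweep shape is my assumption:

```javascript
// Hedged sketch of the quiet-period search described in the jsdoc above
// (NOT the audit's real code): sweep request start/end events and record
// spans where at most `allowedConcurrent` requests are in flight.
function findQuietPeriods(requests, allowedConcurrent, traceEnd) {
  const events = [];
  for (const {start, end} of requests) {
    events.push([start, 1]);
    events.push([end, -1]);
  }
  // Sort by time; at equal timestamps, process request ends before starts.
  events.sort((a, b) => a[0] - b[0] || a[1] - b[1]);

  const periods = [];
  let inflight = 0;
  let quietStart = 0; // assume the trace begins quiet at t = 0
  for (const [time, delta] of events) {
    const wasQuiet = inflight <= allowedConcurrent;
    inflight += delta;
    const isQuiet = inflight <= allowedConcurrent;
    if (wasQuiet && !isQuiet && time > quietStart) {
      periods.push({start: quietStart, end: time});
    } else if (!wasQuiet && isQuiet) {
      quietStart = time;
    }
  }
  // All requests have ended by traceEnd, so the trace ends quiet.
  periods.push({start: quietStart, end: traceEnd});
  return periods;
}
```

The audit additionally requires a quiet window of at least REQUIRED_QUIET_WINDOW (5000 ms); filtering these periods by `end - start >= 5000` would sketch that step.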
26 changes: 15 additions & 11 deletions lighthouse-core/audits/dobetterweb/dom-size.js
@@ -19,10 +19,6 @@ const MAX_DOM_NODES = 1500;
const MAX_DOM_TREE_WIDTH = 60;
const MAX_DOM_TREE_DEPTH = 32;

// Parameters for log-normal CDF scoring. See https://www.desmos.com/calculator/9cyxpm5qgp.
const SCORING_POINT_OF_DIMINISHING_RETURNS = 2400;
const SCORING_MEDIAN = 3000;

class DOMSize extends Audit {
static get MAX_DOM_NODES() {
return MAX_DOM_NODES;
@@ -47,22 +43,30 @@
};
}

/**
* @return {LH.Audit.ScoreOptions}
*/
static get defaultOptions() {
return {
// see https://www.desmos.com/calculator/9cyxpm5qgp
scorePODR: 2400,
scoreMedian: 3000,
};
}


/**
* @param {!Artifacts} artifacts
* @param {LH.Audit.Context} context
* @return {!AuditResult}
*/
static audit(artifacts) {
static audit(artifacts, context) {
const stats = artifacts.DOMStats;

// Use the CDF of a log-normal distribution for scoring.
// <= 1500: score≈1
// 3000: score=0.5
// >= 5970: score≈0
const score = Audit.computeLogNormalScore(
stats.totalDOMNodes,
SCORING_POINT_OF_DIMINISHING_RETURNS,
SCORING_MEDIAN
context.options.scorePODR,
context.options.scoreMedian
);

const headings = [
Expand Down
33 changes: 17 additions & 16 deletions lighthouse-core/audits/estimated-input-latency.js
@@ -10,11 +10,6 @@ const Util = require('../report/v2/renderer/util');
const TracingProcessor = require('../lib/traces/tracing-processor');
const LHError = require('../lib/errors');

// Parameters (in ms) for log-normal CDF scoring. To see the curve:
// https://www.desmos.com/calculator/srv0hqhf7d
const SCORING_POINT_OF_DIMINISHING_RETURNS = 50;
const SCORING_MEDIAN = 100;

class EstimatedInputLatency extends Audit {
/**
* @return {!AuditMeta}
@@ -33,7 +28,18 @@
};
}

static calculate(tabTrace) {
/**
* @return {LH.Audit.ScoreOptions}
*/
static get defaultOptions() {
return {
// see https://www.desmos.com/calculator/srv0hqhf7d
scorePODR: 50,
scoreMedian: 100,
};
}

static calculate(tabTrace, context) {
const startTime = tabTrace.timings.firstMeaningfulPaint;
if (!startTime) {
throw new LHError(LHError.errors.NO_FMP);
@@ -43,16 +49,10 @@
const ninetieth = latencyPercentiles.find(result => result.percentile === 0.9);
const rawValue = parseFloat(ninetieth.time.toFixed(1));

// Use the CDF of a log-normal distribution for scoring.
// 10th Percentile ≈ 58ms
// 25th Percentile ≈ 75ms
// Median = 100ms
// 75th Percentile ≈ 133ms
// 95th Percentile ≈ 199ms
const score = Audit.computeLogNormalScore(
ninetieth.time,
SCORING_POINT_OF_DIMINISHING_RETURNS,
SCORING_MEDIAN
context.options.scorePODR,
context.options.scoreMedian
);

return {
Expand All @@ -69,13 +69,14 @@ class EstimatedInputLatency extends Audit {
* Audits the page to estimate input latency.
* @see https://github.com/GoogleChrome/lighthouse/issues/28
* @param {!Artifacts} artifacts The artifacts from the gather phase.
* @param {LH.Audit.Context} context
* @return {!Promise<!AuditResult>} The score from the audit, ranging from 0-100.
*/
static audit(artifacts) {
static audit(artifacts, context) {
const trace = artifacts.traces[this.DEFAULT_PASS];

return artifacts.requestTraceOfTab(trace)
.then(EstimatedInputLatency.calculate);
.then(traceOfTab => EstimatedInputLatency.calculate(traceOfTab, context));
}
}
