
fixes for TTFB #2231

Merged: 12 commits, Sep 12, 2017

91 changes: 91 additions & 0 deletions lighthouse-core/audits/time-to-first-byte.js
@@ -0,0 +1,91 @@
/**
* @license Copyright 2017 Google Inc. All Rights Reserved.
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
*/
'use strict';

const Audit = require('./audit');
const Util = require('../report/v2/renderer/util');
const URL = require('../lib/url-shim');

const TTFB_THRESHOLD = 200;
Collaborator:
Are we intending to run this audit only on unthrottled runs? Won't it flag all resources with the default throttling settings on?

Collaborator (author):
I don't think I understand this comment. TTFB should be below 200ms; this is merely a constant that I use in the audit.

Collaborator (@patrickhulce, Aug 30, 2017):
Well, if throttling is on, won't TTFB always be at least as high as our throttling value?

Collaborator, author (@wardpeet, Sep 1, 2017):
that's a really good question. I believe you are right. Should we raise the max TTFB? I believe @paulirish suggested using 200ms.

I'll do some audits based on throttling and no throttling.

Member:
When throttling is on, the TTFB that we observe should be a minimum of 150ms, and any server latency is just absorbed in there.

So if the observed TTFB is > 151ms, then ALL of that was real network latency + server response.

Does that much sound correct to y'all?

Then, 200 is a number we can evaluate separately from our throttling, and we just have to decide when it's slow enough for us to care.
PSI uses either 200 or 400, I forget.

Member:
Ah, based on https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/emulation.js#L47 our set latency should be 562ms. So everything I just said, but s/150/562/ :)

well... then our threshold has to be like 600ms.
Eesh.
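A tiny sketch of the arithmetic being discussed (constants and function names here are illustrative, not the audit's actual code): with DevTools throttling injecting a fixed request latency, any observed TTFB at or below that value is pure throttling, so the audit's threshold has to sit above it.

```javascript
const THROTTLED_LATENCY_MS = 562; // latency injected by Lighthouse's throttling settings
const TTFB_BUDGET_MS = 600;       // the ~600ms threshold floated in this thread

// Everything observed above the injected latency is real network + server time.
function realServerTime(observedTTFB) {
  return Math.max(0, observedTTFB - THROTTLED_LATENCY_MS);
}

function isOverBudget(observedTTFB) {
  return observedTTFB > TTFB_BUDGET_MS;
}

console.log(realServerTime(550)); // 0 — within the injected latency alone
console.log(realServerTime(900)); // 338 — genuine latency + server response
console.log(isOverBudget(900));   // true
```

This is why a 200ms budget would flag nearly every request under default throttling: the injected latency alone exceeds it.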

const TTFB_THRESHOLD_BUFFER = 15;
Collaborator:
at this point I'm not sure splitting out a buffer is worth it when our threshold is 600, haha


class TTFBMetric extends Audit {
/**
* @return {!AuditMeta}
*/
static get meta() {
return {
category: 'Performance',
name: 'time-to-first-byte',
description: 'Time To First Byte (TTFB)',
Collaborator:
this description nicely summarizes what we're really doing, but it feels weird alongside the other perf opportunities. Maybe we should convert it into something like "Keep server response times low" and mention TTFB in the description or debugString?

I know @paulirish doesn't like "response time" though ;) open to whatever doesn't make it feel out of place there.

Collaborator (author):
Keep server response times low (TTFB)?

informative: true,
helpText: 'Time To First Byte identifies the time at which your server sends a response.' +
'[Learn more](https://developers.google.com/web/tools/chrome-devtools/network-performance/issues).',
requiredArtifacts: ['devtoolsLogs']
};
}

static calculateTTFB(record) {
const timing = record._timing;

return timing.receiveHeadersEnd - timing.sendEnd;
}

/**
* @param {!Artifacts} artifacts
* @return {!AuditResult}
*/
static audit(artifacts) {
const devtoolsLogs = artifacts.devtoolsLogs[Audit.DEFAULT_PASS];

return Promise.all([
artifacts.requestNetworkRecords(devtoolsLogs),
artifacts.requestCriticalRequests(devtoolsLogs)
])
.then(([networkRecords, criticalRequests]) => {
const results = [];
let displayValue;

criticalRequests.forEach(request => {
const networkRecord = networkRecords.find(record => record._requestId === request.id);

if (networkRecord && networkRecord._timing) {
const ttfb = TTFBMetric.calculateTTFB(networkRecord);
results.push({
url: URL.getURLDisplayName(networkRecord._url),
ttfb: Util.formatMilliseconds(Math.round(ttfb), 1),
rawTTFB: ttfb
});
}
});

const recordsOverBudget = results.filter(row =>
row.rawTTFB > TTFB_THRESHOLD + TTFB_THRESHOLD_BUFFER
);

if (recordsOverBudget.length) {
const thresholdDisplay = Util.formatMilliseconds(TTFB_THRESHOLD, 1);
const recordsOverBudgetDisplay = Util.formatNumber(recordsOverBudget.length);
displayValue = `${recordsOverBudgetDisplay} critical request(s) went over` +
` the ${thresholdDisplay} threshold`;
Collaborator:
if it's over the threshold then we can be confident it took at least that amount of time, so let's report the actual TTFB here, e.g. "Root document request took X ms"

Collaborator (author):

something like Root document took ${Util.formatMilliseconds(ttfb, 1)} ms to get the first byte. or keep it short as you stated above

Collaborator:
yours sgtm too 👍

}

const headings = [
{key: 'url', itemType: 'url', text: 'URL'},
{key: 'ttfb', itemType: 'text', text: 'Time To First Byte (ms)'},
];

return {
rawValue: recordsOverBudget.length === 0,
results,
headings,
displayValue,
};
});
}
}

module.exports = TTFBMetric;
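Stripped of Lighthouse's artifact plumbing, the audit's core logic boils down to a few lines. A self-contained sketch (the record objects here are hypothetical, mirroring only the `_timing` fields the audit reads):

```javascript
// Same threshold constants as the audit above.
const TTFB_THRESHOLD = 200;
const TTFB_THRESHOLD_BUFFER = 15;

// TTFB = time headers were received minus time the request finished sending,
// both reported in ms relative to the request's start.
function calculateTTFB(record) {
  return record.timing.receiveHeadersEnd - record.timing.sendEnd;
}

const records = [
  {url: 'https://example.com/', timing: {sendEnd: 10, receiveHeadersEnd: 120}},
  {url: 'https://example.com/app.js', timing: {sendEnd: 12, receiveHeadersEnd: 400}},
];

// Flag records whose TTFB exceeds threshold + buffer (215ms here).
const overBudget = records.filter(
  record => calculateTTFB(record) > TTFB_THRESHOLD + TTFB_THRESHOLD_BUFFER);

console.log(overBudget.map(r => r.url)); // ['https://example.com/app.js']
```

The first record's TTFB is 110ms (under budget); the second's is 388ms, so only it is flagged.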
98 changes: 0 additions & 98 deletions lighthouse-core/audits/time-to-firstbyte.js

This file was deleted.

3 changes: 3 additions & 0 deletions lighthouse-core/closure/typedefs/ComputedArtifacts.js
@@ -45,6 +45,9 @@ let TraceOfTabArtifact;
*/
function ComputedArtifacts() {}

/** @type {function(!DevtoolsLog): !Promise<!Object>} */
ComputedArtifacts.prototype.requestCriticalRequests;

/** @type {function(!DevtoolsLog): !Promise<!Object>} */
ComputedArtifacts.prototype.requestCriticalRequestChains;

4 changes: 2 additions & 2 deletions lighthouse-core/config/default.js
@@ -72,7 +72,7 @@ module.exports = {
'speed-index-metric',
'screenshot-thumbnails',
'estimated-input-latency',
// 'time-to-firstbyte',
'time-to-first-byte',
'first-interactive',
'consistently-interactive',
'user-timings',
@@ -232,7 +232,7 @@ module.exports = {
{id: 'uses-optimized-images', weight: 0, group: 'perf-hint'},
{id: 'uses-webp-images', weight: 0, group: 'perf-hint'},
{id: 'uses-request-compression', weight: 0, group: 'perf-hint'},
// {id: 'time-to-firstbyte', weight: 0, group: 'perf-hint'},
{id: 'time-to-first-byte', weight: 0, group: 'perf-hint'},
{id: 'total-byte-weight', weight: 0, group: 'perf-info'},
{id: 'dom-size', weight: 0, group: 'perf-info'},
{id: 'critical-request-chains', weight: 0, group: 'perf-info'},
145 changes: 37 additions & 108 deletions lighthouse-core/gather/computed/critical-request-chains.js
@@ -6,123 +6,25 @@
'use strict';

const ComputedArtifact = require('./computed-artifact');
const WebInspector = require('../../lib/web-inspector');

class CriticalRequestChains extends ComputedArtifact {

get name() {
return 'CriticalRequestChains';
}

/**
* For now, we use network priorities as a proxy for "render-blocking"/critical-ness.
* It's imperfect, but there is not a higher-fidelity signal available yet.
* @see https://docs.google.com/document/d/1bCDuq9H1ih9iNjgzyAL0gpwNFiEP4TZS-YLRp_RuMlc
* @param {any} request
*/
static isCritical(request) {
const resourceTypeCategory = request._resourceType && request._resourceType._category;

// XHRs are fetched at High priority, but we exclude them, as they are unlikely to be critical
// Images are also non-critical.
// Treat any images missed by category, primarily favicons, as non-critical resources
const nonCriticalResourceTypes = [
WebInspector.resourceTypes.Image._category,
WebInspector.resourceTypes.XHR._category
];
if (nonCriticalResourceTypes.includes(resourceTypeCategory) ||
request.mimeType && request.mimeType.startsWith('image/')) {
return false;
}

return ['VeryHigh', 'High', 'Medium'].includes(request.priority());
}

static extractChain(networkRecords) {
networkRecords = networkRecords.filter(req => req.finished);

// Build a map of requestID -> Node.
const requestIdToRequests = new Map();
for (const request of networkRecords) {
requestIdToRequests.set(request.requestId, request);
}

// Get all the critical requests.
/** @type {!Array<NetworkRequest>} */
const criticalRequests = networkRecords.filter(CriticalRequestChains.isCritical);

// Removed (old helper):
const flattenRequest = request => {
return {
url: request._url,
startTime: request.startTime,
endTime: request.endTime,
responseReceivedTime: request.responseReceivedTime,
transferSize: request.transferSize
};
};

// Added (new helper):
generateChain(request) {
return {
request: {
id: request.id,
url: request.url,
startTime: request.startTime,
endTime: request.endTime,
responseReceivedTime: request.responseReceivedTime,
transferSize: request.transferSize,
},
children: {},
};
}

// Create a tree of critical requests.
const criticalRequestChains = {};
for (const request of criticalRequests) {
// Work back from this request up to the root. If by some weird quirk we are giving request D
// here, which has ancestors C, B and A (where A is the root), we will build array [C, B, A]
// during this phase.
const ancestors = [];
let ancestorRequest = request.initiatorRequest();
let node = criticalRequestChains;
while (ancestorRequest) {
const ancestorIsCritical = CriticalRequestChains.isCritical(ancestorRequest);

// If the parent request isn't a high priority request it won't be in the
// requestIdToRequests map, and so we can break the chain here. We should also
// break it if we've seen this request before because this is some kind of circular
// reference, and that's bad.
if (!ancestorIsCritical || ancestors.includes(ancestorRequest.requestId)) {
// Set the ancestors to an empty array and unset node so that we don't add
// the request in to the tree.
ancestors.length = 0;
node = undefined;
break;
}
ancestors.push(ancestorRequest.requestId);
ancestorRequest = ancestorRequest.initiatorRequest();
}

// With the above array we can work from back to front, i.e. A, B, C, and during this process
// we can build out the tree for any nodes that have yet to be created.
let ancestor = ancestors.pop();
while (ancestor) {
const parentRequest = requestIdToRequests.get(ancestor);
const parentRequestId = parentRequest.requestId;
if (!node[parentRequestId]) {
node[parentRequestId] = {
request: flattenRequest(parentRequest),
children: {}
};
}

// Step to the next iteration.
ancestor = ancestors.pop();
node = node[parentRequestId].children;
}

if (!node) {
continue;
}

// If the node already exists, bail.
if (node[request.requestId]) {
continue;
}

// node should now point to the immediate parent for this request.
node[request.requestId] = {
request: flattenRequest(request),
children: {}
};
}

return criticalRequestChains;
}

/**
@@ -131,8 +33,35 @@ class CriticalRequestChains extends ComputedArtifact {
* @return {!Promise<!Object>}
*/
compute_(devtoolsLog, artifacts) {
return artifacts.requestNetworkRecords(devtoolsLog)
.then(CriticalRequestChains.extractChain);
return artifacts.requestCriticalRequests(devtoolsLog)
.then(criticalRequests => {
// Create a tree of critical requests.
const criticalRequestChains = {};
const mappedRequests = {};

let request = criticalRequests.shift();
while (request) {
if (!mappedRequests[request.id]) {
mappedRequests[request.id] = this.generateChain(request);
}

const node = mappedRequests[request.id];
const parent = request.parent;
if (parent) {
if (!mappedRequests[parent.id]) {
mappedRequests[parent.id] = this.generateChain(parent);
}

mappedRequests[parent.id].children[request.id] = node;
} else {
criticalRequestChains[request.id] = node;
}

request = criticalRequests.shift();
}

return criticalRequestChains;
});
}
}
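The loop above converts a flat, parent-linked list of critical requests into a nested chain tree. A self-contained sketch of the same memoized approach (the minimal request objects here are hypothetical, not the real artifact shape):

```javascript
// Wrap a request in a chain node with an empty children map.
function generateChain(request) {
  return {request: {id: request.id, url: request.url}, children: {}};
}

// Build the tree: memoize nodes by id so a parent shared by several
// requests is created once, then attach each node under its parent
// (or at the root if it has none).
function buildChains(requests) {
  const roots = {};
  const nodes = {};

  for (const request of requests) {
    if (!nodes[request.id]) nodes[request.id] = generateChain(request);
    const node = nodes[request.id];

    const parent = request.parent;
    if (parent) {
      if (!nodes[parent.id]) nodes[parent.id] = generateChain(parent);
      nodes[parent.id].children[request.id] = node;
    } else {
      roots[request.id] = node;
    }
  }

  return roots;
}

const doc = {id: '1', url: '/'};
const chains = buildChains([
  doc,
  {id: '2', url: '/style.css', parent: doc},
  {id: '3', url: '/app.js', parent: doc},
]);

console.log(Object.keys(chains));               // ['1']
console.log(Object.keys(chains['1'].children)); // ['2', '3']
```

Because nodes are memoized by id, the output is a proper tree even when requests arrive out of order or share ancestors, which is what the parent-pointer walk in the real code relies on.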
