1.0 improving performance and scalability #431

Closed
KyleAMathews opened this issue Sep 8, 2016 · 4 comments

KyleAMathews commented Sep 8, 2016

1.0 is going to have a number of improvements to Gatsby's frontend performance. This issue provides some details about my plans and the reasoning behind them.

What Gatsby gets right already

Gatsby 0.x is very fast. We've worked really hard at this and so far have added:

  • CSS inlining in the <head>. This avoids the extra blocking network
    call to fetch your CSS.
  • Static rendering of pages — obvious for a static site renderer :-) but
    still worth noting as it's a big part of why static sites are
    desirable. Statically rendered HTML means a) your web server does
    almost no work to serve the file and b) files can be distributed
    globally on a CDN, dramatically speeding load times.
  • No page reloads. This is still fairly unique to Gatsby. When a Gatsby
    site loads, it loads in the background the code and content for other
    pages, so when you click on an internal link, the next page loads
    almost instantly. Basically, Gatsby starts as a static site but boots
    into a single-page app. Even after working on Gatsby for more than a
    year, this still feels magical to me, especially when I play with a
    new Gatsby site someone launches. The difference is noticeable even
    on fast computers on fast networks, but it's very noticeable on
    mobile phones and other slower devices on poor networks.

What needs improving?

In frontend performance parlance, Gatsby has an excellent TTFB (Time To First Byte) and TTFP (Time To First Paint). Gatsby sites load fast and are remarkably quick when clicking around a site.

Changes for Gatsby 1.0 are focused on improving TTI (Time To Interaction) and our ability to scale to larger sites.

Gatsby 0.x loads all code, data, and styles for the entire site upfront. This causes problems, especially on older phones and for those on slow networks, as larger JavaScript bundles take longer to download, parse, and evaluate.

For an educational experience, go to https://www.webpagetest.org/, pick an older phone like the "Motorola G", and then load a site with a large amount of JavaScript. It's not uncommon to see the CPU pegged on the initial parse/eval of JavaScript for 2-10 seconds. During this time, older phones are unresponsive and any JavaScript interactivity won't work.

So for Gatsby 1.0 we're teaching the framework to split code so that each page loads only the critical JavaScript (and data and styles) upfront. For most sites the defaults should just work. We'll also investigate high-leverage APIs so sites can tweak code splitting as needed.
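
To illustrate the mechanism we're building on (a sketch of the primitive, not Gatsby's actual implementation): Webpack lets you declare a split point, and everything required inside it is compiled into a separate chunk that's only fetched when that code runs.

    // A sketch of a Webpack split point (not Gatsby's implementation,
    // just the primitive the framework builds on). The route component
    // is compiled into its own chunk and downloaded only when needed.
    function loadBlogPostPage(callback) {
      require.ensure([], function (require) {
        // This require (and everything it pulls in) lives in a separate
        // chunk that Webpack fetches on demand.
        callback(require('./pages/blog-post'))
      })
    }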

Many of these changes are inspired by the fine work of engineers at Google (and elsewhere) who've been researching patterns for improving web performance and building these into the web platform.

Particularly helpful is the PRPL pattern.

PRPL stands for:

  • Push critical resources for the initial route.
  • Render initial route.
  • Pre-cache remaining routes.
  • Lazy-load and create remaining routes on demand.

Time to interaction

Browsers do work. They parse HTML, CSS, and JavaScript. They calculate layouts, paint pixels on the screen, and run JavaScript code. The more work you give them, the more CPU and RAM they'll use and the slower they'll become.

The slower the hardware the more noticeable this is.

Solutions to this come in two categories. The first is to avoid work you don't have to do: if you don't need some CSS or JavaScript, don't include it. The second is to optimize the scheduling of work.

For the first category, I think there are a few automated lint-like checks we can build to suggest code people could eliminate. Tracking the sizes of different pages and how they change over time would also be helpful.

For the second, the basic principle we're following is "do work when it's needed". Or in more concrete terms, only run the JavaScript that's needed for the current page (or, for large pages, the parts of the page that are active).

There's a close analogy to just-in-time manufacturing. Companies found that the way to be most responsive to customers is to avoid doing work ahead of time. When they did work ahead of time, it paradoxically slowed them down: the speculative work got in the way of the work that was actually necessary (resource contention).

For both manufacturing and web apps, there's a high inventory cost (unused code takes up memory) and a premium on responsiveness. The car customer wants their new car yesterday, and the web app user wants their app running immediately. Any work you do ahead of time because "they might need it" gets in the way of the app being responsive to the user.

With both you want to wait until the user asks for something and then work overtime to get it to them as fast as possible.

The PRPL pattern says to push the initial page as fast as possible and then let a service worker cache the raw ingredients for the remaining pages in the browser so they can be quickly assembled when the user asks for them.

That's part of what makes service workers so valuable over previous precaching solutions — they don't evaluate the JS, just load and cache it.
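
As a minimal sketch of that pre-caching step (the file names here are hypothetical), a service worker can fetch and cache a site's bundles at install time and serve them from the cache later, all without evaluating the JavaScript:

    // Pre-cache remaining routes at install time. The browser downloads
    // and stores these bundles without parsing or evaluating them.
    self.addEventListener('install', function (event) {
      event.waitUntil(
        caches.open('gatsby-pages-v1').then(function (cache) {
          return cache.addAll([
            '/commons.js',
            '/component---blog-post.js',
            '/path---blog-my-first-post.json',
          ])
        })
      )
    })

    // Serve cached bundles first, falling back to the network.
    self.addEventListener('fetch', function (event) {
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request)
        })
      )
    })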

Scalability

By limiting the work a browser does to what's needed for the current page, Gatsby can scale to sites of almost any size, as a visitor only pays the cost of the pages they visit.

Plans for improving TTI in Gatsby

Loading only critical resources upfront is a fairly obvious idea. The devil of course is in the details. How can Gatsby identify the critical resources for a page without swamping developers with tedious bookkeeping?

A website is made up of roughly four types of things: styles, code, data, and images. Each requires different strategies. Let's take a look.

Identifying and loading critical styles

Gatsby 0.x inlines all CSS for a site in <head>. For smaller sites this is fine, but as a site grows, it'd be preferable for the initial page load to fetch only the CSS for that page and then lazy-load more CSS as the user navigates around the site.

I like to think in terms of global and component styles. Typically a site will have a set of global stylesheets e.g. for reset/normalize, typography, and various other global concerns. These set the overall look and feel for the site. Then there are styles for individual components. Ideally components are responsible for their own styles one way or another.

Ideally we inline in <head> only global styles and styles for components on that page. Styles for subsequent pages should only be added as needed.

Handling global styles is fairly easy. With traditional CSS you could compile the global styles to their own file to be inlined or you could use something like Typography.js.
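
For instance, here's a rough sketch of the Typography.js route; I'm assuming its API for compiling the configured styles to a CSS string, and the option values are illustrative:

    import Typography from 'typography'

    // Global typography settings for the whole site (illustrative values).
    const typography = new Typography({
      baseFontSize: '16px',
      baseLineHeight: 1.5,
    })

    // During the static render, compile the global CSS and inline it in
    // <head> so no extra network request is needed.
    const globalStyles = '<style>' + typography.toString() + '</style>'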

Pulling out a page's component styles can be trickier.

  • Traditional CSS — each route component would import just the CSS it
    needs and we'd extract this (somehow — haven't researched how to do
    this in Webpack) to its own CSS file to be inlined on that page.
    Perhaps an easier solution for those using traditional CSS would be
    to run a critical-CSS extractor tool while rendering the HTML pages,
    i.e. run each page against the full styles.css, inline the critical
    styles for that page, and then async-load the remaining styles.css.
  • Inline and CSS-in-JS — if you're using inline styles with React
    components or one of the popular CSS-in-JS libraries like Glamor or
    Aphrodite, your job is done: you only load the critical styles by
    default. With inline styles, your styles are literally in the HTML,
    so you're guaranteed not to over-fetch. And both Glamor and Aphrodite
    have ways to extract the styles of a page's components into <head>
    while server rendering (see the sketch after this list). For both,
    styles are tied to React components, so additional styles are loaded
    only as needed.
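
Here's a minimal sketch of that extraction with Aphrodite during server rendering (the page component is hypothetical); only the styles used by components on this page end up in the extracted CSS:

    import React from 'react'
    import ReactDOMServer from 'react-dom/server'
    import { StyleSheetServer } from 'aphrodite'
    import BlogPostPage from './pages/blog-post' // hypothetical route component

    // renderStatic collects the styles of every component rendered
    // inside the callback.
    const { html, css } = StyleSheetServer.renderStatic(function () {
      return ReactDOMServer.renderToString(React.createElement(BlogPostPage))
    })

    // html        -> the statically rendered markup for the page
    // css.content -> only the styles this page uses, ready for <head>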

Identifying and loading critical code and data

Note: by "data" I mean the data that is passed client-side into your React.js components. A Gatsby site runs on both the server and the client, so page data has to be loaded into the client along with the component code.

In Gatsby 0.x, all data for a site is loaded into the client at boot. This was the easiest way I found to build version 0.x, and it has proved convenient to use.

The difficulty with this is that every page then pays the cumulative cost of every page on the site. One massive visualization with heavy JS libraries and thousands of rows of data gets loaded on every page.

This obviously isn't ideal.

Ideally, every page could specify exactly the data it needs, and only that data would be loaded with the page.

Luckily, some teams at Facebook have already been thinking hard about this problem and have come up with GraphQL and Relay. GraphQL is an elegant query language that lets client code specify its data requirements. Relay provides a beautiful, simple integration with React: each route specifies its data requirements with GraphQL, and Relay handles the behind-the-scenes work of fetching the data and caching it locally. I used them for close to a year building a product and they are fantastic. Colocating your data query with your component makes it simple to see what data is available on each page and to make quick modifications.

I wrote more in another issue about how Gatsby 1.0 will use GraphQL and a Relay-like pattern, but in short: each page can now specify exactly the critical data it needs to render, which gets written out to a JSON file and loaded along with the page's component code. I'm also exploring patterns for a page to lazy-load data.
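
To make that concrete, here's a sketch of the rough shape being explored (names and the exact API are illustrative, not final): a page component with its data query colocated, whose result is written to a per-page JSON file at build time.

    import React from 'react'

    // The page renders from whatever the colocated query returned,
    // which was written out as a per-page JSON file at build time.
    export default function BlogPost(props) {
      const post = props.data.post
      return (
        <div>
          <h1>{post.title}</h1>
          <div dangerouslySetInnerHTML={{ __html: post.html }} />
        </div>
      )
    }

    // Only the fields this page asks for end up in its data bundle.
    export const query = `
      query BlogPostBySlug($slug: String!) {
        post(slug: $slug) {
          title
          html
        }
      }
    `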

For splitting code, this is an area that's been thoroughly explored by the Webpack and React communities. There's a wide variety of options available, most of which I've explored. I spent two days fiddling with custom Webpack configs and plugins working through options and tradeoffs. I even dreamed one night about a code splitting problem (I solved it) :-)

Similar to styles, there are global JS modules (used on every page) and route-specific modules. Global JS should be loaded on the first page load along with the modules for that page; other JS is fetched in the background and evaluated on route transitions.

Another consideration is improving long-term caching. Ideally, we should split code in a way that limits how many bundles are affected by common changes.

This feels quite similar to database normalization. And as with database normalization, there are tradeoffs between levels of normalization. The JS-bundle equivalent of a fully normalized database is the browser loading each JS module individually.

Khan Academy explored doing this and found that it was significantly slower (even with HTTP/2).

So just as databases often denormalize to improve reads, we bundle JavaScript modules to improve how quickly they can be read over the network.
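
A sketch of that denormalization step with Webpack's CommonsChunkPlugin (the entry names are illustrative): modules shared by two or more route bundles get hoisted into a single commons.js that every page loads.

    var webpack = require('webpack')

    module.exports = {
      entry: {
        // Hypothetical per-route entry points.
        'blog-post': './src/pages/blog-post.js',
        index: './src/pages/index.js',
      },
      output: {
        filename: '[name].js',
        path: __dirname + '/public',
      },
      plugins: [
        // Hoist modules used by 2+ bundles into commons.js.
        new webpack.optimize.CommonsChunkPlugin({
          name: 'commons',
          minChunks: 2,
        }),
      ],
    }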

There are many ways we could split code and data, but this is what I've settled on for now (please suggest other ideas if you have them!):

Each page loads a commons.js (global JS modules), the route component for that page (e.g. blog post pages all share the same route component), and the data module for that page.

When loading subsequent pages, if moving to a different route type (e.g. from a blog post to an index page), we load the new route component and the page's data bundle.

This makes for very quick page transitions, as the data bundles are often a few KB and the route components often under 15 KB.

With a service worker, these bundles will be cached and ready to use, further dropping page transition times.

Editing a page means that just one data bundle or one route component bundle changes.

All of this can happen at the framework level, as routes and data requirements are declared programmatically. Using this information, we write out a custom routes file (for React Router) with code splitting and named bundles built in. Using the named bundles, we specify on each statically rendered HTML page which JS bundles to load.
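
For illustration, the generated routes file might look roughly like this (names are hypothetical; the named-chunk argument to require.ensure assumes Webpack 2):

    import React from 'react'
    import { Route } from 'react-router'

    // Each route type lazily loads its component through a named split
    // point. The statically rendered HTML for a page lists this bundle
    // (plus commons.js and the page's data JSON) in its script tags.
    export default (
      <Route
        path="/blog/:slug"
        getComponent={function (nextState, cb) {
          require.ensure([], function (require) {
            cb(null, require('./templates/blog-post').default)
          }, 'component---blog-post')
        }}
      />
    )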

Reducing impact of images

I'd love to make several image-loading techniques nearly automatic: responsive images, lazy-loading images when they enter the viewport, and loading placeholder images before the actual image.

Issue #285 discusses some of those ideas.

The new GraphQL data layer should make some of these ideas fairly straightforward to implement, e.g. a custom Gatsby React image component that exports a standard GraphQL query for fetching responsive image links plus the placeholder (which would be inlined) and has built-in viewport awareness so it knows when to load its image.
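
A sketch of how such a component might work (hypothetical, not a shipped API): render the inlined placeholder immediately, then swap in the real responsive image once it enters the viewport.

    import React from 'react'

    export default class GatsbyImage extends React.Component {
      constructor(props) {
        super(props)
        this.state = { visible: false }
      }

      componentDidMount() {
        // Swap in the real image once the placeholder scrolls into view.
        // (A polyfill or scroll-listener fallback would be needed for
        // browsers without IntersectionObserver.)
        this.observer = new IntersectionObserver((entries) => {
          if (entries[0].isIntersecting) {
            this.setState({ visible: true })
            this.observer.disconnect()
          }
        })
        this.observer.observe(this.node)
      }

      componentWillUnmount() {
        this.observer.disconnect()
      }

      render() {
        const props = this.props
        return (
          <img
            ref={(node) => { this.node = node }}
            alt={props.alt}
            src={this.state.visible ? props.src : props.placeholderSrc}
            srcSet={this.state.visible ? props.srcSet : undefined}
          />
        )
      }
    }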

Other performance ideas

Many hosts need custom configuration to unlock the performance options they offer. I can see host-specific Gatsby plugins being really useful for setting up caching, server push (as it becomes available), etc.

@carlosagsmendes

I was wondering if we have any control over what content gets pre-cached. Can we control it, or will all linked pages be preloaded?

Thanks in advance

@KyleAMathews

We shipped v1! 🎉

@justin808

@KyleAMathews Any good follow-ups to read on what's happened on the performance front since this issue was closed?
