Performance Check with Profiler

Will Rogers edited this page Mar 9, 2020 · 9 revisions

Recently we have added the Profiler component to the main body of the application, which will measure renders as they occur on the page with a number of metrics.

This wiki outlines how we can use that as part of our evaluation process when making significant changes to the architecture of the application.

Setup

To measure the performance of the web application with as little interference as possible, we have access to some Windows laptops which we can be reasonably sure no one else is using at the same time. This approach is recommended over using a Windows virtual machine.

For the time being, we are primarily using simulation PVs to avoid network effects, although this will of course be a consideration in the future.

If you would like to create a new screen for performance testing, consider using the tools here.

Profiling with Chrome

The first step of investigating a change is to take a look at it in the Chrome profiler. The usefulness of this and how it is used in practice are highlighted on the performance wiki page.

Using the React profiler in Chrome gives you a good understanding of what is happening and is a good way to spot red flags which suggest things might not be working as well as you expected.

Profiling with Profiler

The main application page now uses the React Profiler component to report how long renders take. Its output in the console has made it much easier to measure changes; previously this process was quite involved using the Chrome profiler alone.

Initially this can be performed with the development build. Later we will look at using the production build.

When testing large numbers of widgets (>50) it can be useful to slow down the simulation time. Given that we are looking to maintain a 10 Hz update rate across a range of operational modes, an update rate of 1 Hz provides adequate room to identify when renders are taking too long. Open src/settings.ts and apply:

export const profilerEnabled = true;
export const simulationTime = 1000;

Then go to the relevant page you want to measure performance for. There are some set up for you:

  • performance.json - a large number of readbacks, all reading the same PV
  • performanceDifferentPVs - a large number of readbacks, each reading a unique PV
  • performanceWaveforms - a large number of readbacks, each reading a different waveform PV

There will be more in the future.

With that page open, open the console (usually Ctrl + Shift + i) and wait a few seconds for the initial renders to finish. There should now be a steady stream of output, specifying actual duration, base duration and reconciliation time. This is approximately what they mean:

  • actual duration - the time React spent rendering each component
  • base duration - estimate of the time React would have taken to calculate and render changes to the entire tree
  • reconciliation time - effective time taken to render all components and perform some other calculations

Of these, reconciliation time is most closely linked to the time taken to render the new content on the screen.
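The three values above are emitted by the Profiler's onRender callback. The sketch below, a hypothetical version of such a callback, shows how the comma-separated console line might be produced; "reconciliation time" is this application's own metric, and computing it here as commitTime - startTime is an assumption made for illustration, not necessarily how the app derives it.

```typescript
// Minimal sketch of a callback matching React's Profiler onRender signature.
// The "reconciliation time" calculation below is an assumption for illustration.
type ProfilerPhase = "mount" | "update";

function formatProfilerSample(
  id: string,
  phase: ProfilerPhase,
  actualDuration: number,
  baseDuration: number,
  startTime: number,
  commitTime: number
): string {
  // Approximate reconciliation time as the gap between render start and commit.
  const reconciliationTime = commitTime - startTime;
  // Produce one comma-separated console line, e.g. "14,33,16".
  return [actualDuration, baseDuration, reconciliationTime]
    .map((ms) => ms.toFixed(0))
    .join(",");
}
```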

Open a spreadsheet, make some comments as to the parameters you are testing and copy and paste the numbers in as they appear.

A good method is to wait for a steady stream of output to fill the console, scroll up to stop it from moving everything, and copy 10 values into the spreadsheet. In Excel, once you have copied the line with the numbers (e.g. 14,33,16), go to Data -> Text to Columns and select commas as a delimiter. The output will then automatically be separated into columns.
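If you would rather pre-process the copied console lines in code, the same comma-splitting that Excel's Text to Columns performs is a one-liner (the function name here is illustrative):

```typescript
// Sketch: split a copied console line like "14,33,16" into numeric columns,
// mirroring Excel's Text to Columns with a comma delimiter.
function splitProfilerLine(line: string): number[] {
  return line.split(",").map((field) => Number(field.trim()));
}
```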

Once you have a good number of measurements, average them to produce a more stable figure. Performance measurement is always a little delicate, as the results are affected by whatever else the computer is doing. In the future we might discard unusually high values as not being truly indicative of performance. Having said that, most people will be using our application on a machine which is similarly busy with other operations, so including them might not be such a bad way of measuring real-world performance.
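A minimal sketch of this averaging, with optional discarding of the highest values as outliers (the trimming rule here is an assumption for illustration, not the project's settled method):

```typescript
// Average profiler samples, optionally discarding the highest N as outliers.
function averageSamples(samples: number[], discardHighest = 0): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const kept =
    discardHighest > 0 ? sorted.slice(0, sorted.length - discardHighest) : sorted;
  return kept.reduce((sum, value) => sum + value, 0) / kept.length;
}
```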

A good test to ensure that you haven't caused anything to go badly wrong is to use different numbers of widgets. You should see the render time per widget decrease slightly as the number of widgets goes up. If something else is happening then you should probably be concerned and return to using the Chrome Profiler (above) to more closely investigate what is happening.
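The sanity check above can be sketched as a small helper: compute render time per widget for each run and confirm it does not increase with widget count. The names and data shape here are illustrative assumptions.

```typescript
// Sketch of the sanity check: render time per widget should decrease slightly
// (or at least not increase) as the number of widgets grows.
interface Measurement {
  widgets: number;
  renderMs: number;
}

function perWidgetTimeDecreases(measurements: Measurement[]): boolean {
  const perWidget = measurements.map((m) => m.renderMs / m.widgets);
  return perWidget.every((t, i) => i === 0 || t <= perWidget[i - 1]);
}
```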

Profiling the Production Build

Once you are confident that your changes have not caused an obvious decrease in performance, measure how they affect the production build. You can produce this build while keeping the Profiler component working with:

npm run build -- --profile

Then run the built code with:

npx serve -s build -l 3000

where the number after the -l argument should match the port you specified as baseUrl in src/settings.ts.

Then repeat the above process. The measurements should decrease in absolute terms, and reconciliation time should now be close to the actual duration.