This directory contains both the code for the Renderer's Visual Regression Test Runner and a directory of certified snapshot images (visual-regression/certified-snapshots) for each of the defined Visual Regression Test Case Snapshots (Snapshots).
Within the certified-snapshots directory are subdirectories indicating which browser and runtime environment (the RUNTIME_ENV environment variable) were used to generate their contained snapshots, named in the format ${browser}-${env} (for example, chromium-ci).
The supported runtime environments are ci (generated from a Linux-based Docker container) and local (generated by the local machine environment). ci snapshots are checked into the repo and are the basis of a GitHub Actions PR status check. local snapshots are never checked into the repo and are used strictly to speed up local development. See instructions below.
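For example, with Chromium as the browser, certified snapshots live in directories like the following (the chromium-local directory only exists if you have captured local snapshots on your machine):
visual-regression/certified-snapshots/chromium-ci      (checked into the repo)
visual-regression/certified-snapshots/chromium-local   (local only, never checked in)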
NOTE: Currently these tests run only in the Chromium web browser as a baseline. Support for other browsers may come in the future.
Visual Regression Test Runner
Options:
      --help       Show help                                            [boolean]
      --version    Show version number                                  [boolean]
  -c, --capture    Capture new snapshots               [boolean] [default: false]
  -o, --overwrite  Overwrite existing snapshots (--capture must also be set)
                                                       [boolean] [default: false]
  -v, --verbose    Verbose output                      [boolean] [default: false]
  -s, --skipBuild  Skip building renderer and examples [boolean] [default: false]
  -p, --port       Port to serve examples on            [number] [default: 50535]
  -i, --ci         Run in docker container with `ci` runtime environment
                                                       [boolean] [default: false]
  -f, --filter     Tests to run ("*" wildcard pattern)    [string] [default: "*"]
The test runner may be launched in local mode with:
pnpm test:visual
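For example, assuming the --filter pattern is matched against Example Test names as described in the options above, you could restrict a local run to just the alpha tests with:
pnpm test:visual --filter "alpha*"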
To run the tests in a Docker container with the ci runtime environment, use --ci mode:
pnpm test:visual --ci
NOTE: For this to work, you must have Docker installed and have built the Visual Regression Docker Image. See DOCKER.md for more info.
By default, the runner will build the Renderer, then the Example Tests, and then serve/launch the tests in a headless browser. The actual screenshot output from the headless browser for each defined Visual Regression Snapshot will then be compared pixel-by-pixel to the certified expected snapshot images. If a difference is detected, the test will fail.
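As a rough illustration of what a pixel-by-pixel comparison involves (this is not the runner's actual implementation, which may apply tolerances or use a dedicated image-diff library), consider comparing two same-sized RGBA buffers:
function countDifferingPixels(expected: Uint8Array, actual: Uint8Array): number {
  if (expected.length !== actual.length) {
    throw new Error('Images must have the same dimensions');
  }
  let differing = 0;
  // RGBA data: 4 bytes per pixel
  for (let i = 0; i < expected.length; i += 4) {
    if (
      expected[i] !== actual[i] ||
      expected[i + 1] !== actual[i + 1] ||
      expected[i + 2] !== actual[i + 2] ||
      expected[i + 3] !== actual[i + 3]
    ) {
      differing++;
    }
  }
  // Any differing pixel means the Snapshot does not match its certified image
  return differing;
}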
To make it easy to see what went wrong in any failed Snapshot, three images will be saved to the visual-regression/failed-results directory: the certified expected snapshot image, the actual snapshot image, and an image showing the difference between the two. As a protection, a Git hook exists that will prevent a commit if this directory contains any failed result files. The failed results are cleared before every comparison run.
If any test fails, the exit code of the test runner will be 1 to indicate there was a failure. Otherwise it will be 0.
In order to capture new Snapshots (or overwrite existing ones), you need to run the Visual Regression Test Runner in Capture mode:
pnpm test:visual [--ci] --capture
This will do everything that Comparison mode does, but skip the actual comparison and instead capture and save the Snapshot image data to files in the visual-regression/certified-snapshots directory.
As a safety feature, Capture mode will not overwrite any existing certified snapshot files by default. So, to capture updated versions of specific Snapshots, it is recommended to delete the corresponding certified snapshot files before running Capture mode. If you are sure you'd like to overwrite all of the certified snapshots, you can add the --overwrite CLI argument to the command.
When developing locally, running the tests in Docker CI mode can be significantly slower than running them in the local runtime environment. Local snapshots are never checked into the repository; however, you can use them to speed up checking whether your changes cause any regressions:
- After creating a new branch for a new feature or bug fix, capture a new set
of local snapshots by running:
pnpm test:visual --capture --overwrite
- Implement the new feature or bug fix.
- Before pushing new commits to the remote PR branch, compare your changes
to the snapshots you took prior to starting your PR:
pnpm test:visual
The Visual Regression Tests run as a status check on every pull request update via the GitHub Actions workflow defined in .github/workflows/tests.yml. The snapshots these tests produce are compared against those in the certified-snapshots/chromium-ci directory.
When tests fail, the failure results that you would normally find in visual-regression/failed-results are uploaded to the workflow run as a zip file artifact named failed-results. See "Where does the upload go?" for more on how to find it.
The Snapshots themselves are defined in the individual Example Tests located in the examples/tests directory. Note that not all Example Tests need to define Snapshots. For an Example Test to define Snapshots, it must export an automation() function. Here's an example of one from the alpha Example Test:
export async function automation(settings: ExampleSettings) {
  // Launch the test
  await test(settings);
  // Take/define a snapshot
  await settings.snapshot();
}
This method is called only when the Visual Regression Tests are run. Here the Example Test defines only one Snapshot. It first runs the Example Test's renderer code by calling test() (defined later in the Example Test's code), and then takes a single snapshot by calling settings.snapshot(). When the Visual Regression Tests are run, this Snapshot will be given the name alpha-1, since it defines the 1st (and only) Snapshot of the alpha Example Test. Additional snapshots can be defined by calling settings.snapshot() additional times, while of course making changes to the Renderer state between calls.
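For instance, a hypothetical version of the alpha automation() that takes two Snapshots might look like the sketch below (the color change between the calls is purely illustrative, and the second Snapshot would presumably be named alpha-2, following the numbering described above):
export async function automation(settings: ExampleSettings) {
  // Launch the test
  await test(settings);
  // First Snapshot (alpha-1)
  await settings.snapshot();
  // Change some Renderer state between snapshots (illustrative only)
  settings.testRoot.color = 0xff0000ff;
  // Second Snapshot (presumably alpha-2)
  await settings.snapshot();
}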
A name may be optionally provided in the snapshot call:
settings.snapshot({ name: 'myname' });
This name will be appended to the name of the Example Test. For example, if run in the alpha Example Test, the name of the Snapshot will be alpha_myname-1. The same name may be used multiple times.
Example Tests that utilize the PageContainer class to define separate pages of static content may use the pageContainer.snapshotPages() helper method to automatically take snapshots of each of the pages defined in the container. Here's an example from the text-rotation Example Test:
export async function automation(settings: ExampleSettings) {
  // Snapshot all the pages (`await test()` resolves to a PageContainer instance)
  await (await test(settings)).snapshotPages();
}
By default, calling settings.snapshot() creates a snapshot of the entire app/testRoot area at 1920x1080 (downscaled to a canvas/PNG at 1280x720). Large snapshots like the default require more processing to load and diff. If your test only requires a small snapshot area, you can communicate this to the Visual Regression Test Runner in two ways:
Method 1: Set the dimensions of the testRoot.
// Do this in the actual test itself (not the automation function) for the best
// results.
export default function test(settings: ExampleSettings) {
  const { testRoot } = settings;
  // Set a smaller snapshot area
  testRoot.width = 200;
  testRoot.height = 200;
  // Set a color on the test root so it's more obvious when running the test in
  // the browser what the snapshot area is.
  testRoot.color = 0xffffffff;
}
Method 2: Provide a clip rectangle prop to the settings.snapshot() method.
export async function automation(settings: ExampleSettings) {
  // Launch the test
  await test(settings);
  // Take/define a snapshot
  await settings.snapshot({
    clip: {
      x: 0,
      y: 0,
      width: 200,
      height: 200,
    },
  });
}
Method 2 allows more flexibility in how snapshots are defined but should only be used if Method 1 does not satisfy the requirements for the snapshots to be taken.
In order to allow for consistent snapshots when random numbers are convenient for writing tests, Example Tests that are run by the Visual Regression Test Runner receive a constant-seeded implementation of Math.random(). This means that every time a given Example Test is run, Math.random() will produce the same sequence of random numbers.
Math.random() operates as it normally does when the Example Tests are run directly in the browser (without the VRT Runner).
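As an illustrative sketch of why this matters (the renderer.createNode() call and its properties are assumptions here, not something defined in this document), a test can lay out nodes randomly and still produce an identical Snapshot on every run of the VRT Runner:
export default async function test(settings: ExampleSettings) {
  const { renderer, testRoot } = settings;
  // Under the Visual Regression Test Runner, Math.random() is seeded, so these
  // positions are the same on every run and the resulting Snapshot is stable.
  for (let i = 0; i < 10; i++) {
    renderer.createNode({
      x: Math.random() * 1820,
      y: Math.random() * 980,
      width: 100,
      height: 100,
      color: 0xff0000ff,
      parent: testRoot,
    });
  }
}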