A collection of tests I use to test Wayfire and discover regressions.
Tests are organized in directories, starting in `./tests`, and each test is contained in its own directory.
To execute a test or a group of tests, run:

```
./run_tests.sh <test directory> <wayfire A> [--compare-with <wayfire B>]
```
`<test directory>` is the directory which contains all the tests you want to run.
Information about each test run will be printed on the terminal.
`<wayfire A>` is the Wayfire executable or launch script. It should accept the same arguments as the Wayfire executable.
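For example, a typical invocation might look like this (the paths below are placeholders for your own builds):

```
./run_tests.sh tests/ ./build/wayfire --compare-with /usr/bin/wayfire
```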
Some of the tests check the graphical output of Wayfire (so-called GUI tests in the code).
To make these tests independent of things like GTK themes, they require a second Wayfire version to run (the `--compare-with` option).
Both versions are executed with the same configuration and the same IPC commands are fed to them.
At the end, screenshots of both sessions are taken and compared.
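At its core, such a comparison is a pixel-level diff of the two screenshots. A minimal sketch of what that check could look like, using Pillow (this is only an illustration, not the runner's actual implementation; the file names and tolerance handling are assumptions):

```python
# Illustrative only: compare two screenshots pixel by pixel.
# The paths and the max_diff tolerance are assumptions.
from PIL import Image, ImageChops

def screenshots_match(path_a: str, path_b: str, max_diff: int = 0) -> bool:
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    if a.size != b.size:
        return False
    diff = ImageChops.difference(a, b)
    if diff.getbbox() is None:  # getbbox() is None for identical images
        return True
    # Otherwise, allow a small per-channel tolerance if max_diff > 0.
    extrema = diff.getextrema()  # ((minR, maxR), (minG, maxG), (minB, maxB))
    return all(hi <= max_diff for _, hi in extrema)
```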
If many tests are failing, this could be because your system runs slower than mine.
You can try increasing the timeouts used in the tests (which of course will make them run more slowly) with the `--ipc-timeout` option.
Its default value is `0.1`.
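For example (`0.3` is just an arbitrary, more generous value):

```
./run_tests.sh tests/ <wayfire A> --compare-with <wayfire B> --ipc-timeout 0.3
```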
The test runner supports running multiple tests in parallel with the `-j <N>` option.
Running many tests in parallel puts a lot of stress on the system (especially during the initialization of Wayfire and its clients, since this often depends on system calls and the GPU hardware). To avoid problems caused by this:
- Do not run too many tests in parallel (my personal rule of thumb is to use a bit less than the physical core count, but of course this depends on the system).
- Use the `--interactive` flag. After the tests are run, you will be presented with a list of tests that failed. You can rerun all of them sequentially by typing `run all` or in parallel with `run all-parallel` at the prompt. Most of the tests should now become green. You can also rerun a particular failed test by typing `run <test number>`, or `run slow <test number>` to add extra timeouts.
My personal workflow is like this:

```
./run_tests.sh tests/ <wayfire A> --compare-with <wayfire B> -j 10 --interactive
# At the prompt:
run slow all-parallel
run slow all
```

After this, all tests are usually green :)
You can also run the tests in the background by using a headless Wayfire session; the `run_nested.sh` script can be used for that. This is particularly useful for regression testing, where you can just leave all the tests to run and do something else in the meantime.
Tests are a combination of a Python test file and a Wayfire config file. Currently, the following is required for the Python test file:

- A function `is_gui()` returning whether the test is a GUI test (i.e. one which compares the graphical output of two Wayfire versions).
- A class named `WTest` which implements the `WayfireTest` interface defined in `wfpytest/wftest.py`.
The tests can use the modules defined in `wfpytest/`, for example to launch Wayfire, communicate with it via IPC, etc.
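Putting these requirements together, a test file could look roughly like this (the import path and the `run()` method are assumptions for illustration; check `wfpytest/wftest.py` for the actual interface):

```python
# Hypothetical skeleton of a test file. The WayfireTest method shown
# here (run()) is an assumption; see wfpytest/wftest.py for the real
# contract.
from wfpytest.wftest import WayfireTest

def is_gui() -> bool:
    # This test does not compare the graphical output of two versions.
    return False

class WTest(WayfireTest):
    def run(self):
        # Launch Wayfire, drive it via IPC using the wfpytest helpers,
        # and verify the resulting state.
        ...
```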
The test runner automatically switches to each test's directory when executing the given test, so any temporary files can be stored there and later reviewed (useful, for example, for the `wayfire.log` file generated by most tests).