
wayfire-tests

A collection of tests I use for testing Wayfire and discovering regressions.

Usage

Tests are organized in directories, starting in ./tests, and each test is contained in its own directory. To execute a test or group of tests, run:

./run_tests.sh <test directory> <wayfire A> (--compare-with <wayfire B>)

<test directory> is the directory which contains all the tests you want to run. Information about each test run will be printed on the terminal.

<wayfire A> is the Wayfire executable/launch script. It should accept the same arguments as the Wayfire executable.

Some of the tests check the graphical output of Wayfire (so-called GUI tests in the code). To make these tests independent of environmental factors like GTK themes, they require a second Wayfire version to run against (the --compare-with option). Both versions are executed with the same configuration and are fed the same IPC commands. At the end, screenshots of both sessions are taken and compared.

Tips and tricks

IPC timeout

If many tests are failing, this could be because your system runs slower than mine. You can try increasing the length of the timeouts used in tests (which of course will make them run more slowly) with the --ipc-timeout option. Its default value is 0.1.
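For example, a run with a more generous timeout might look like the following (the Wayfire path here is a placeholder; substitute your own executable or launch script):

```shell
# Raise the per-operation IPC timeout from the default 0.1s to 0.5s.
# Slower overall, but reduces spurious failures on loaded or slower machines.
./run_tests.sh tests/ /usr/bin/wayfire --ipc-timeout 0.5
```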

Parallel running

The test runner supports running multiple tests in parallel with the -j <N> option. Running many tests in parallel puts a lot of stress on the system (especially during Wayfire and client initialization, since these often depend on system calls and the GPU hardware), which can cause spurious failures. To mitigate this:

  • Do not run too many tests in parallel (a personal rule of thumb is to use slightly fewer than the physical core count, but of course this depends on the system).
  • Use the --interactive flag. After the tests are run, you will be presented with a list of tests that failed. You can rerun all of them sequentially by typing run all or in parallel with run all-parallel at the prompt. Most of the tests should now become green. You can also rerun a particular failed test by typing run <test number>, or run slow <test number> to add extra timeouts.

My personal workflow is like this:

./run_tests.sh tests/ <wayfire A> --compare-with <wayfire B> -j 10 --interactive
# At the prompt:
run slow all-parallel
run slow all

After this, all tests are usually green :)

Run tests in the background

You can also run tests in the background by using a headless Wayfire session; the run_nested.sh script can be used for this. It is particularly useful for regression testing: you can leave all the tests running and do something else in the meantime.

How to write a new test

Tests are a combination of a Python test file and a Wayfire config file. Currently, the following is required for the Python test file:

  1. A function is_gui() returning whether the test is a GUI test (i.e. one that compares the graphical output of two Wayfire versions).
  2. A class named WTest which implements the WayfireTest interface defined in wfpytest/wftest.py.

The tests can use the modules defined in wfpytest/, for example to launch Wayfire, communicate with it via IPC, etc. The test runner automatically switches to each test's directory when executing it, so any temporary files can be stored there and reviewed later (useful, for example, for the wayfire.log file generated by most tests).
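As an illustration, a minimal non-GUI test file might look like the sketch below. The actual WayfireTest interface lives in wfpytest/wftest.py and its exact methods are not reproduced here, so the run() method is purely a hypothetical placeholder; a real test must implement whatever the interface actually requires.

```python
# Hypothetical minimal test file. In a real test, WTest would implement the
# WayfireTest interface from wfpytest/wftest.py; the method below is only an
# illustrative stand-in.

def is_gui():
    # Not a GUI test: no graphical comparison between two Wayfire versions.
    return False

class WTest:
    def run(self):
        # A real test would launch Wayfire here and drive it via IPC,
        # using the helpers in wfpytest/.
        pass
```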
