Testing
New or significantly changed features will only be accepted if they have a test case. This is to make sure the feature is not broken by future changes to ESPResSo, and so other users can get an impression of what behavior is guaranteed to work. There are multiple kinds of tests: C++ unit tests, Python integration tests, tutorial tests, sample tests and installation tests. Tests are configured in CMake and run by CTest.
To execute the unit and integration tests, simply run make check in the top build directory. The other tests (tutorials, samples, installation) are described in the following sections. All tests are executed on our GitLab-CI infrastructure.
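Individual tests can also be selected directly with CTest from the top build directory; -R and --output-on-failure are standard CTest flags, and the test name below is only illustrative:

```bash
# run only the tests whose name matches the regular expression,
# printing the full output of any test that fails
ctest --output-on-failure -R constraint_shape_based
```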
To run tests in parallel, configure the CTest module once for the current build folder with the following command:
```bash
cmake . -D ESPRESSO_CTEST_ARGS="-j$(nproc)"
```
Extra arguments may be passed to that variable, as long as they are separated by semicolons.
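For example, to run tests in parallel and also abort any test exceeding a 5 minute timeout (both are standard CTest flags, listed here purely as an illustration):

```bash
# arguments are separated by semicolons, including the value of --timeout
cmake . -D ESPRESSO_CTEST_ARGS="-j$(nproc);--timeout;300"
```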
C++ unit tests: testing individual C++ functions and classes.
To build and run the tests:
```bash
make -j$(nproc) check_unit_tests
```
To build and run a specific test with verbose output:
```bash
(cd src/script_interface/tests/; make Verlet_list_test && mpiexec -n 2 ./Verlet_list_test -l all)
```
Framework: Boost.Test
Store new unit tests in src/core/unit_tests/ (for core features) or in any other C++ submodule of src/. Tests usually follow this structure:
```cpp
/// @file /src/core/unit_tests/Particle_test.cpp

#include <boost/test/unit_test.hpp>

#include "core/utils/serialization/Particle.hpp"

BOOST_AUTO_TEST_CASE(comparison) {
  {
    Particle p, q;
    p.identity() = 1;
    q.identity() = 2;
    BOOST_CHECK(p != q);
  }
  {
    Particle p, q;
    p.identity() = 2;
    q.identity() = 2;
    BOOST_CHECK(p == q);
  }
}
```
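Assuming this example is registered as a unit test target named Particle_test, a single test case inside the binary can be selected with the standard Boost.Test --run_test option (the directory follows the file path given in the example above):

```bash
(cd src/core/unit_tests/; make Particle_test && ./Particle_test --run_test=comparison -l all)
```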
Mocked classes should be enclosed in a namespace and stored in src/core/unit_tests/mock/. Here is an example of a mocked Cell class:
```cpp
/// @file /src/core/unit_tests/mock/Cell.hpp

#include <cstddef>
#include <vector>

namespace Testing {
template <typename Particle> class Cell {
public:
  Cell() : n(0) {}

  std::size_t n;
  std::vector<Particle> part;
};
} // namespace Testing
```
which is then used as:
#include "mock/Cell.hpp"
using Cell = Testing::Cell<Particle>;
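A minimal sketch of a test case exercising this mock (the member values are purely illustrative):

```cpp
BOOST_AUTO_TEST_CASE(mock_cell) {
  Cell cell;
  // store one default-constructed particle and record the count
  cell.part.push_back(Particle{});
  cell.n = cell.part.size();
  BOOST_CHECK_EQUAL(cell.n, 1u);
}
```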
Configure tests in src/core/unit_tests/CMakeLists.txt using the syntax
```cmake
unit_test(NAME <mytest> SRC <mytest.cpp> [DEPENDS <target1>[, ...]] [NUM_PROC <N>])
```
where NUM_PROC instructs CTest to run the binary through MPI with N ranks. When NUM_PROC is not provided, the binary is executed normally. The use of more than one source file should be avoided in favor of the DEPENDS option.
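As an illustration, the parallel Verlet list test mentioned earlier might be registered like this (the DEPENDS target name is an assumption; use whatever core target the surrounding CMakeLists.txt links against):

```cmake
unit_test(NAME Verlet_list_test SRC Verlet_list_test.cpp DEPENDS espresso::core NUM_PROC 4)
```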
Python integration tests: testing the Python bindings and the numerical results of core features.
To build and run the tests:
```bash
make -j$(nproc) check_python              # run all tests
make -j$(nproc) check_python_gpu          # run only GPU tests
make -j$(nproc) check_python_skip_long    # run only quick tests
make -j$(nproc) check_python_parallel_odd # run only tests with 3 or more cores
```
To run a specific test case, do:
```bash
mpiexec -n 4 ./pypresso testsuite/python/test.py
```
The test file path can be followed by one of these modifiers:
- -v to print test names in real time
- TestClass to run only that particular unittest class
- TestClass.test_example to run only that particular unittest class method
- Test__axis_x to instantiate a class Test with parameters ["axis_x"] (only for generative test cases marked with the ARGUMENTS option in testsuite/python/CMakeLists.txt)
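For example, to run only the hollowcone test method of the constraint test shown further below on 2 MPI ranks:

```bash
mpiexec -n 2 ./pypresso testsuite/python/constraint_shape_based.py ShapeBasedConstraintTest.test_hollowcone
```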
Framework: unittest
Store new unit tests in testsuite/python/. Tests usually follow this structure:
```python
# /testsuite/python/constraint_shape_based.py
import unittest as ut
import unittest_decorators as utx
import numpy as np

import espressomd


@utx.skipIfMissingFeatures(["LENNARD_JONES_GENERIC"])
class ShapeBasedConstraintTest(ut.TestCase):

    box_l = 30.
    system = espressomd.System(box_l=3 * [box_l])

    def tearDown(self):
        self.system.part.clear()
        self.system.constraints.clear()

    def test_hollowcone(self):
        system = self.system
        system.time_step = 0.01
        system.cell_system.skin = 0.4
        # <...>


if __name__ == "__main__":
    ut.main()
```
Python decorators are used to skip tests when a condition is not met. They start with the symbol @ and can be stacked above a class declaration or a method declaration to disable it. The most commonly used decorators are:
- @utx.skipIfMissingFeatures(<features>) to skip a test when required features are not compiled in
- @utx.skipIfMissingGPU() to skip a GPU-based test when no GPU is available or when feature CUDA is not compiled in
- @ut.skipIf(<condition>, <message>) to create a custom condition
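As a sketch (the class name, feature and module-level switch are illustrative, not taken from an actual test file), stacked decorators might look like this:

```python
import unittest as ut
import unittest_decorators as utx

RUN_SLOW_TESTS = False  # hypothetical switch defined by the test module


@utx.skipIfMissingGPU()
@utx.skipIfMissingFeatures(["ELECTROSTATICS"])
@ut.skipIf(not RUN_SLOW_TESTS, "slow test skipped")
class MyGpuTest(ut.TestCase):

    def test_something(self):
        pass  # <...>
```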
Configure tests in testsuite/python/CMakeLists.txt using the syntax
```cmake
python_test(FILE <mytest.py>
            MAX_NUM_PROC <N>
            [LABELS <label1> [, ...]]
            [DEPENDENCIES <../dependency1.py>[, ...]])
```
where MAX_NUM_PROC instructs CTest to run the script through MPI with at most N ranks. The actual number of ranks used during the test is determined by CMake. Use LABELS gpu to mark tests that require a GPU, so as to avoid locking the GPU when tests run in parallel. Files listed in DEPENDENCIES are passed to configure_file().
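For instance, the constraint test above and a hypothetical GPU test (the second file name is only an example) could be registered as:

```cmake
python_test(FILE constraint_shape_based.py MAX_NUM_PROC 4)
python_test(FILE lb_gpu.py MAX_NUM_PROC 2 LABELS gpu)
```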
Tutorial, sample and benchmark tests: testing the Python scripts shipped with ESPResSo. samples/ contains Python scripts showcasing typical uses of ESPResSo. doc/tutorials/ contains Jupyter notebooks and bonus Python scripts used in teaching sessions. maintainer/benchmarks/ contains benchmarks used to check performance improvements of pull requests.
To build and run the tests:
```bash
make -j$(nproc) check_tutorials
make -j$(nproc) check_samples
make -j$(nproc) check_benchmarks
```
Tutorials and samples are designed for interactive use and cannot be imported like conventional Python modules. For example, Jupyter notebooks first have to be converted to Python scripts and then imported with the importlib module because of their non-standard filenames. Some scripts need to be imported together in the same Python session for them to work, while others need to access resources (e.g. .dat files) found in the same directory as the scripts. This last issue is solved by copying the complete samples/ directory to build/testsuite/scripts/samples/local_samples, the doc/tutorials/ directory to build/testsuite/scripts/tutorials/local_tutorials and the maintainer/benchmarks/ directory to build/testsuite/scripts/benchmarks/local_benchmarks.
Since importing a Python script causes its execution, the simulation runs upon import, at which point all global variables become accessible to the unittest classes, including the system object. However, some scripts can be slow to import or require mandatory command line arguments. To solve both issues, the scripts are edited first and saved to a new file with suffix _processed.py, which is the one actually imported by the testing script. During editing, global variables controlling the running time (number of integration steps, number of particles, target accuracy, etc.) are substituted with new values defined in the testing script, and sys.argv is modified to contain the command line arguments. Several GUI classes are also disabled during this step. For tutorials, hidden solutions are revealed, and if a hidden markdown cell contains only Python code, it is converted to a code cell and executed.
Here is a test template that loads sample.py while altering the values of two global variables (warm_steps=100, n_iterations=20), setting up command line arguments (--cpu 0.001) and replacing an infinite loop with a finite one:
```python
import unittest as ut
import importlib_wrapper


def disable_visualizer_GUI(code):
    breakpoint = "while True:"
    assert breakpoint in code
    code = code.replace(breakpoint, "for _ in range(5):", 1)
    return code


sample, skipIfMissingFeatures = importlib_wrapper.configure_and_import(
    "@SAMPLES_DIR@/sample.py", cmd_arguments=["--cpu", "0.001"],
    warm_steps=100, n_iterations=20, substitutions=disable_visualizer_GUI)


@skipIfMissingFeatures
class Sample(ut.TestCase):

    system = sample.system

    def test_something(self):
        self.assertLess(abs(sample.pressure - 1.0), 1e-3)
        # <...>


if __name__ == "__main__":
    ut.main()
```
Unlike the Python integration tests, where @utx.skipIfMissingFeatures(<features>) decorators have to be added manually to disable tests for missing features, tutorials and samples already contain espressomd.assert_features() statements, from which @skipIfMissingFeatures decorators are created automatically.
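For reference, such a statement at the top of a sample or tutorial script typically looks like the following (the feature list is just an example):

```python
import espressomd

# abort early with a clear message if required features were not compiled in
espressomd.assert_features(["LENNARD_JONES", "WCA"])
```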
When importing multiple scripts in the same test (for example, some tutorials have bonus scripts that don't necessarily call assert_features()), all subsequent imports are skipped after the first import failure.
Please note that numerical results of interest (sample.pressure in the previous example) need to be stored in global variables to be accessible. It is also important to format optional code cells in IPython/Jupyter notebooks as **markdown cells** to prevent them from running during import. Simply enclose them in triple backticks to preserve syntax highlighting:
Alternatively, you could start the visualizer using:
```python
from espressomd import visualization
visualizer = visualization.openGLLive(system)
visualizer.run()
```
Given the stochastic nature of some scripts (e.g. when P3M tuning parameters are not hardcoded and the numpy seed is not fixed), it is necessary to run new tests multiple times to ensure reproducibility of the numerical results being tested. Here is one way to do that in bash:
test_type="samples" # or test_type="tutorials"
test_name="dpd" # or test_name="02-charged_system-1"
make local_${test_type}
cd testsuite/scripts/${test_type}/
for i in {1..100}; do
rm -f local_${test_type/tutorials/${test_type}/${test_name%-[0-9]}}/${test_name}_processed.py
../../../pypresso test_${test_name}.py
if [ $? != "0" ]; then
echo "failure after ${i} trials"
break
fi
done
Store new tests in testsuite/scripts/samples/, testsuite/scripts/tutorials/ and testsuite/scripts/benchmarks/.
Configure tests in testsuite/scripts/samples/CMakeLists.txt and testsuite/scripts/tutorials/CMakeLists.txt using the syntax
```cmake
sample_test(FILE <mytest.py>
            [LABELS "<label1>[; ...]"]
            [SUFFIX "<suffix>"]
            [DEPENDENCIES <../dependency1.py>[, ...]])
```
for samples, or tutorial_test() for tutorials and their bonus scripts.
Use LABELS gpu to mark tests that require a GPU, so as to avoid locking the GPU when tests run in parallel (by default the first GPU device is used, so only one GPU job can run at a time). SUFFIX is used to generate multiple test scripts from a template test script. Files listed in DEPENDENCIES are passed to configure_file().
This example shows how to set up an LB test to use the CPU implementation once and the GPU implementation once:
```cmake
sample_test(FILE test_lbf.py SUFFIX cpu)
sample_test(FILE test_lbf.py SUFFIX gpu LABELS "gpu")
```
where SUFFIX is used to create a file test_lbf_with_cpu_processed.py and a file test_lbf_with_gpu_processed.py.
This example shows how to generate tests for various espressomd.shapes using a loop:
```cmake
foreach(shape wall;sphere;ellipsoid;cylinder;hollowcone)
  sample_test(FILE test_visualization_constraints.py SUFFIX ${shape})
endforeach(shape)
```
Sequential execution of dependent tests can be enforced with CTest fixtures:
```cmake
sample_test(FILE test_save_checkpoint.py)
sample_test(FILE test_load_checkpoint.py)
set_tests_properties(sample_save_checkpoint PROPERTIES FIXTURES_SETUP saved_checkpoint)
set_tests_properties(sample_load_checkpoint PROPERTIES FIXTURES_REQUIRED saved_checkpoint)
```
Installation tests: testing the installation of ESPResSo and its Python bindings.
There are two mechanisms: the CMake tests in testsuite/cmake/, which are rather simple (more details below), and the installation CI job, which compiles ESPResSo with all dependencies, installs it in a new folder, and then runs the python/tutorial/sample tests. This CI job is rather slow and set to manual mode; it is recommended to trigger it whenever a pull request creates a new submodule or changes the installation logic in the CMake build system.
To run the tests:
```bash
make check_cmake_install
```
Framework: custom Bash script testsuite/cmake/BashUnitTests.sh.
Here is a toy example to check if a file exists and if a Python module can be imported:
```bash
#!/usr/bin/env bash

# load bash unit testing library
source BashUnitTests.sh

function test_install() {
  assert_file_exists "@CMAKE_BINARY_DIR@/ipypresso"
}

function test_import() {
  local import_dir="@DESTDIR@/@CMAKE_INSTALL_PREFIX@/@Python_SITEARCH@"
  local instruction="import sys;sys.path.insert(0, '${import_dir}');import espressomd"
  assert_return_code "@CMAKE_BINARY_DIR@/pypresso" -c "${instruction}"
}

# run tests
run_test_suite
```
Store new tests in testsuite/cmake/ and make them executable with chmod +x test_<name>.sh to avoid the following CTest error message:
```
The following tests FAILED:
          1 - test_python_bindings (BAD_COMMAND)
```
Configure tests in testsuite/cmake/CMakeLists.txt using the syntax
```cmake
cmake_test(FILE <mytest.sh> [DEPENDENCIES <../dependency1.sh>[, ...]])
```
Files listed in DEPENDENCIES are passed to configure_file().
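For example, the Python bindings test from the error message above could be registered as follows (assuming it sources the shared BashUnitTests.sh library):

```cmake
cmake_test(FILE test_python_bindings.sh DEPENDENCIES BashUnitTests.sh)
```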