Description
A frequent source of annoyance is that recompiling our gigantic test source files takes quite a bit of time. Most of the tests, though, have a very similar structure:
In the EndToEnd tests, a contract is given, compiled and deployed, and then multiple functions are called on it with various arguments, checking for certain expected outputs.
In the NameAndType tests, a contract is given, compiled up to the type checking phase, and then checked against an expected list of warnings and errors (or none).
Neither type really needs to be a .cpp file with its own logic. Most of these tests can be specified by just a list of strings, and if that list lives in an external file, no recompilation is needed when only the test expectations are adjusted.
We can even go further than that: we might have an interactive test runner that asks the user whether to automatically correct test expectations when they fail, on a test-by-test basis. It displays the source, the inputs (in the case of EndToEnd tests), the actual values and the expected values, and waits for a y/n response before adjusting the values.
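A minimal sketch of such an interactive loop (all names here, including `TestCase` and `reviewFailures`, are hypothetical and not existing code) could look roughly like this: it only rewrites the stored expectation after an explicit "y".

```cpp
// Sketch of an interactive expectation updater. TestCase and its members
// are hypothetical; the real structure would come from the parsed test file.
#include <iostream>
#include <string>
#include <vector>

struct TestCase
{
	std::string name;
	std::string source;
	std::string call;        // e.g. "f(uint,bytes32): 0x123000, 456"
	std::string expectation; // e.g. "123, true"
	std::string actual;      // filled in by actually running the test
};

// Returns true if any expectation was changed and the file should be rewritten.
bool reviewFailures(std::vector<TestCase>& _tests)
{
	bool changed = false;
	for (TestCase& test: _tests)
	{
		if (test.actual == test.expectation)
			continue;
		std::cout << "Test: " << test.name << "\n"
		          << test.source << "\n"
		          << "Call:     " << test.call << "\n"
		          << "Expected: " << test.expectation << "\n"
		          << "Actual:   " << test.actual << "\n"
		          << "Accept actual value as new expectation? [y/n] " << std::flush;
		std::string answer;
		std::getline(std::cin, answer);
		if (answer == "y" || answer == "Y")
		{
			test.expectation = test.actual;
			changed = true;
		}
	}
	return changed;
}
```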
The only problem here might be the encoding of the inputs and outputs of the EndToEnd tests. For readability, we do not want them to be fully hex encoded, so the file format has to support some flexibility there. We might start with a simple version that only supports decimal and hex numbers (if auto-generated, we might want to check whether the hex version ends with many zeros or f's and only then choose hex) and extract all test cases that have such simple inputs and outputs.
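For the auto-generated case, the heuristic could look roughly like this (just a sketch: it uses a plain 64-bit value instead of the compiler's 256-bit type, and the thresholds are arbitrary):

```cpp
// Sketch of the "choose hex only if it looks like a round hex number"
// heuristic, i.e. if the hex representation ends in many zeros or f's.
#include <algorithm>
#include <cstdint>
#include <ios>
#include <sstream>
#include <string>

std::string formatValue(uint64_t _value)
{
	std::ostringstream hexStream;
	hexStream << std::hex << _value;
	std::string hex = hexStream.str();

	// Count trailing zeros and trailing f's separately.
	size_t trailingZeros = 0;
	while (trailingZeros < hex.size() && hex[hex.size() - 1 - trailingZeros] == '0')
		++trailingZeros;
	size_t trailingFs = 0;
	while (trailingFs < hex.size() && hex[hex.size() - 1 - trailingFs] == 'f')
		++trailingFs;
	size_t trailing = std::max(trailingZeros, trailingFs);

	// Prefer hex only if at least half of the digits are such trailing
	// zeros or f's (arbitrary threshold); otherwise fall back to decimal.
	if (hex.size() >= 4 && trailing * 2 >= hex.size())
		return "0x" + hex;
	return std::to_string(_value);
}
```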
I would propose a simple separator-based expectation file format (not YAML or JSON, because those could create problems with escaping, and indentation is always weird):
TestName
contract {
// source until separator
}
=====
f(uint,bytes32): 0x123000, 456 -> 123, true
g(string): "abc" -> X
=====
NextTest
// ...
The X signifies that a revert is expected.
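A rough sketch of how a single expectation line of this format could be parsed (the structure and helper names are made up, error handling is minimal, and quoted strings containing commas are not handled):

```cpp
// Sketch of parsing one expectation line of the proposed format, e.g.
//   f(uint,bytes32): 0x123000, 456 -> 123, true
//   g(string): "abc" -> X
#include <optional>
#include <string>
#include <vector>

struct FunctionCall
{
	std::string signature;              // e.g. "f(uint,bytes32)"
	std::vector<std::string> arguments; // kept as written in the file
	std::vector<std::string> results;   // empty if a revert is expected
	bool expectRevert = false;          // set if the result is "X"
};

namespace
{

std::string trim(std::string const& _s)
{
	size_t first = _s.find_first_not_of(" \t");
	size_t last = _s.find_last_not_of(" \t");
	return first == std::string::npos ? "" : _s.substr(first, last - first + 1);
}

std::vector<std::string> splitCommaSeparated(std::string const& _list)
{
	std::vector<std::string> items;
	size_t start = 0;
	while (start <= _list.size())
	{
		size_t comma = _list.find(',', start);
		if (comma == std::string::npos)
			comma = _list.size();
		std::string item = trim(_list.substr(start, comma - start));
		if (!item.empty())
			items.push_back(item);
		start = comma + 1;
	}
	return items;
}

}

std::optional<FunctionCall> parseCallLine(std::string const& _line)
{
	size_t colon = _line.find(':');
	size_t arrow = _line.find("->");
	if (colon == std::string::npos || arrow == std::string::npos || arrow < colon)
		return std::nullopt;

	FunctionCall call;
	call.signature = trim(_line.substr(0, colon));
	call.arguments = splitCommaSeparated(_line.substr(colon + 1, arrow - colon - 1));

	std::string results = trim(_line.substr(arrow + 2));
	if (results == "X")
		call.expectRevert = true;
	else
		call.results = splitCommaSeparated(results);
	return call;
}
```

Splitting the file itself is then just a matter of cutting at the `=====` separator lines, so adjusting an expectation never touches any .cpp file.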