As part of #3486, we should standardise the NameAndTypeResolution tests so that their test expectations can be upgraded easily. This will be very important for the transition to version 0.5.0, since that transition requires many updates to the expectations.
The idea would be to move each test into its own file that contains the source code together with the list of errors and warnings it is expected to generate.
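As a sketch of what such a file could look like (the exact layout is an open question; the `// ----` separator and the error line format below are just an assumption):

```
contract C {
    function f() public {
        uint x = "abc";
    }
}
// ----
// TypeError: Type literal_string "abc" is not implicitly convertible to expected type uint256.
// Warning: Unused local variable.
```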
We should also take this opportunity to give the test suite a new name while gradually moving the tests out. It should probably be named something like SyntaxTests (and the EndToEndTests should be renamed to SemanticsTests).
It is often beneficial to group similar tests. This can be done either by allowing multiple tests per file or by creating subdirectories for the test data.
The test runner should still use Boost.Test, because that lets us benefit from its XML reporting output, but it should build the test tree dynamically from the test files.
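A minimal sketch of how the test tree could be built dynamically using Boost.Test's manual registration API; the helpers `discoverTestFiles` and `runSyntaxTest` are hypothetical placeholders for the actual file discovery and test execution logic:

```cpp
#include <boost/test/included/unit_test.hpp>
#include <boost/filesystem.hpp>

#include <string>
#include <vector>

namespace fs = boost::filesystem;

// Hypothetical: collect all .sol files below the test data directory.
std::vector<fs::path> discoverTestFiles(fs::path const& _root)
{
    std::vector<fs::path> files;
    for (fs::recursive_directory_iterator it(_root), end; it != end; ++it)
        if (it->path().extension() == ".sol")
            files.push_back(it->path());
    return files;
}

// Hypothetical: run a single file-based test. The real implementation
// would compile the source and compare the produced errors against the
// expectations stored in the file; here it only checks the file exists.
void runSyntaxTest(fs::path const& _path)
{
    BOOST_REQUIRE(fs::exists(_path));
}

boost::unit_test::test_suite* init_unit_test_suite(int /*argc*/, char* /*argv*/[])
{
    using namespace boost::unit_test;
    test_suite* suite = BOOST_TEST_SUITE("SyntaxTests");
    // Register one Boost.Test case per test data file, named after the file,
    // so the XML report lists each file-based test individually.
    for (auto const& path: discoverTestFiles("test/libsolidity/syntaxTests"))
        suite->add(make_test_case(
            [path] { runSyntaxTest(path); },
            path.stem().string(),
            path.string(),
            0
        ));
    framework::master_test_suite().add(suite);
    return nullptr;
}
```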
There should be a way to run the tests which, for each failing test, interactively displays the test input, the multiset of expected errors and the multiset of actual errors, and lets the user either accept the changed expectations or reject them and skip to the next failing test. This mode could be its own executable.
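A minimal sketch of that interactive review loop; the `TestCase` structure is an assumption, and persisting the accepted expectations back into the test file is left to the caller:

```cpp
#include <iostream>
#include <set>
#include <string>

// Hypothetical representation of a single file-based test.
struct TestCase
{
    std::string source;
    std::multiset<std::string> expectedErrors;
};

// Print a labelled multiset of error strings.
void printErrors(std::string const& _label, std::multiset<std::string> const& _errors)
{
    std::cout << "--- " << _label << " ---\n";
    for (auto const& error: _errors)
        std::cout << error << '\n';
}

// Show input, expected and actual errors; return true if the user
// accepted the actual errors as the new expectations.
bool reviewFailure(TestCase& _test, std::multiset<std::string> const& _actualErrors)
{
    std::cout << "--- input ---\n" << _test.source << '\n';
    printErrors("expected", _test.expectedErrors);
    printErrors("actual", _actualErrors);
    std::cout << "Accept new expectations? [y/N] " << std::flush;
    std::string answer;
    std::getline(std::cin, answer);
    if (answer == "y" || answer == "Y")
    {
        _test.expectedErrors = _actualErrors;
        return true; // caller would write the updated file back to disk
    }
    return false; // skip to the next failing test
}
```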