I have started writing a more rigorous test suite in my fork to make testing somewhat easier.
How detailed do you think the tests should be? Do we need unit tests for every function of the script, or is it enough to use functional tests as before (and as already implemented)?
Do you have a "wishlist" for the test features? Currently, the test suite looks for all read files and references available in the test/data folder and tries to run the pipeline for each combination. I am thinking of adding an option to specify a test matrix of options and read/reference combinations (e.g. via a JSON file); see the sketch below.
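To make this concrete, here is a minimal sketch of what such a JSON-driven matrix could look like. Everything here is hypothetical: the matrix file name, its keys, and the `pipeline.sh` entry point are placeholders, not the project's actual layout.

```python
import itertools
import json
import subprocess
from pathlib import Path

# Hypothetical matrix file (test/matrix.json), e.g.:
# {
#   "reads": ["test/data/reads1.fastq"],
#   "references": ["test/data/ref1.fa"],
#   "options": [["--fast"], ["--sensitive", "--threads", "2"]]
# }
matrix = json.loads(Path("test/matrix.json").read_text())

# Run the pipeline once per read/reference/option combination.
for reads, ref, opts in itertools.product(
    matrix["reads"], matrix["references"], matrix["options"]
):
    cmd = ["./pipeline.sh", reads, ref, *opts]  # placeholder entry point
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```

Falling back to the current behavior (all combinations found in test/data) when no matrix file is given would keep the default workflow unchanged.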
I think something simple and robust would be good. The aim should be to test the general functionality of the program and report, non-exhaustively, that things look OK.
For me, an exhaustive test of every feature means putting a lot of effort into monitoring the tests and adding new tests for each feature. That probably exceeds the scope and time we can put into this. Simple tests can help users gain confidence that their installation and conda env are fine, without overwhelming them with info.
TODO
For filenames: the test should rename the files created by the test system to fixed names, to avoid having to change the test itself, e.g.:
mv someFancyName1 test1.bam.txt
mv someFancyName_dup2 test2.bam.txt
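A sketch of that rename step inside the test harness, reusing the example names above; the mapping is a placeholder and would need to be filled in from the pipeline's real output names:

```python
from pathlib import Path

# Hypothetical mapping from pipeline output names to the fixed names
# the test assertions expect.
renames = {
    "someFancyName1": "test1.bam.txt",
    "someFancyName_dup2": "test2.bam.txt",
}

for produced, expected in renames.items():
    src = Path(produced)
    if src.exists():
        # Rename to the canonical name so the test can compare outputs.
        src.rename(expected)
```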