Code test coverage should be measured and reported in CI #288

Open
briansmith opened this issue Oct 9, 2022 · 4 comments
Labels: help wanted, testing

Comments

@briansmith
Contributor

There is a lot of runtime feature detection and other conditional logic in this crate. AFAICT, when tests are run, which implementation gets picked is arbitrary. For example, on Linux it seems that only the getrandom syscall implementation is tested, and the file I/O fallback is not. Publishing a code coverage report would make it clear which code isn't being tested on which platforms.

There is a lot of code that is copied, modified, and pasted. This is understandable because some targets have slightly different APIs. My hope is that once coverage measurement is published, we'll see clearly which duplicated patterns we should factor out, to minimize the amount of uncovered code on difficult-to-test platforms (those lacking test runners).

Also, I expect having code coverage will facilitate more exhaustive testing, such as writing tests that exercise both the getrandom syscall branch and the file I/O fallback, e.g. by using ptrace or an equivalent, similar to what BoringSSL does.
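
A minimal sketch of that idea, substituting a seccomp filter for ptrace to make the getrandom syscall fail with ENOSYS on Linux. Everything here is illustrative: the test name is made up, the filter skips the usual arch check, and the expectation that the crate then reaches the file implementation is an assumption about its fallback logic:

```rust
#[cfg(all(test, target_os = "linux"))]
mod fallback_tests {
    use libc::{
        sock_filter, sock_fprog, BPF_ABS, BPF_JEQ, BPF_JMP, BPF_K, BPF_LD,
        BPF_RET, BPF_W, PR_SET_NO_NEW_PRIVS, PR_SET_SECCOMP,
        SECCOMP_MODE_FILTER, SECCOMP_RET_ALLOW, SECCOMP_RET_ERRNO, SYS_getrandom,
    };

    // Install a seccomp filter that fails getrandom(2) with ENOSYS,
    // as if we were running on a pre-3.17 kernel.
    fn deny_getrandom_syscall() {
        let filter = [
            // Load the syscall number (offset 0 of struct seccomp_data).
            sock_filter { code: (BPF_LD | BPF_W | BPF_ABS) as u16, jt: 0, jf: 0, k: 0 },
            // If it is getrandom, fall through to the ENOSYS return; else skip it.
            sock_filter { code: (BPF_JMP | BPF_JEQ | BPF_K) as u16, jt: 0, jf: 1, k: SYS_getrandom as u32 },
            // Return ENOSYS for getrandom.
            sock_filter { code: (BPF_RET | BPF_K) as u16, jt: 0, jf: 0, k: SECCOMP_RET_ERRNO | (libc::ENOSYS as u32 & 0xffff) },
            // Allow every other syscall.
            sock_filter { code: (BPF_RET | BPF_K) as u16, jt: 0, jf: 0, k: SECCOMP_RET_ALLOW },
        ];
        let prog = sock_fprog { len: filter.len() as u16, filter: filter.as_ptr() as *mut _ };
        unsafe {
            assert_eq!(libc::prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0), 0);
            assert_eq!(libc::prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog as *const sock_fprog), 0);
        }
    }

    #[test]
    fn file_fallback_when_syscall_unavailable() {
        deny_getrandom_syscall();
        let mut buf = [0u8; 32];
        // With the syscall blocked, this should exercise the file implementation.
        getrandom::getrandom(&mut buf).unwrap();
    }
}
```

One nice property of the seccomp approach: the filter installed via prctl applies only to the calling thread (and its children), so a test like this shouldn't disturb other tests running on separate threads.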

@newpavlov
Member

Are you proposing to measure coverage on a per-target basis? I am not sure it will be possible to accumulate coverage data for supported targets which run in separate CI jobs. Also, we cannot run tests in CI for all supported targets in the first place.

As for measuring code coverage, tarpaulin is quite a convenient tool for that.
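
For example (a hypothetical invocation, from memory of tarpaulin's CLI):

```sh
cargo install cargo-tarpaulin
# Line coverage with an lcov report that external services can ingest.
cargo tarpaulin --out Lcov --output-dir coverage
```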

@josephlr
Member

josephlr commented Nov 3, 2022

I think that ideally we would collect coverage metrics from a bunch of different runs on a bunch of different targets, and then have a way to "merge" all this coverage data. Given the limitations of some of our targets, some files certainly wouldn't be covered, but we would be able to see which lines of code are being hit by at least some of our tests.
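
As a sketch, per-target collection with cargo-llvm-cov could look like this, with one report per CI job (the target triples and file names are illustrative):

```sh
# Each job emits an lcov file for its own target...
cargo llvm-cov --target x86_64-unknown-linux-gnu --lcov --output-path lcov-linux.info
cargo llvm-cov --target x86_64-apple-darwin --lcov --output-path lcov-macos.info
# ...and a later step (or an external service) merges the per-target reports.
```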

@briansmith
Contributor Author

> Are you proposing to measure coverage on a per-target basis? I am not sure it will be possible to accumulate coverage data for supported targets which run in separate CI jobs.

In the ring CI we do collect code test coverage for multiple targets. We send it to codecov.io and then codecov.io merges it all for us automatically.
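
A sketch of the per-job upload step that implies, assuming GitHub Actions and the codecov action (the report file name is whatever the job produced; codecov.io merges all reports uploaded for the same commit):

```yaml
- uses: codecov/codecov-action@v3
  with:
    files: lcov-linux.info  # this job's report
    fail_ci_if_error: true
```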

@josephlr
Member

It might be a good idea to also incorporate branch coverage (in addition to line coverage) to make sure we are hitting alternative code paths: taiki-e/cargo-llvm-cov#8
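
With cargo-llvm-cov that would be something like the following sketch (branch coverage was a nightly-only flag at the time of that issue):

```sh
cargo +nightly llvm-cov --branch --lcov --output-path lcov.info
```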

@newpavlov added the help wanted label on Oct 16, 2024