
Track test coverage #132

Closed
1 of 5 tasks
briansmith opened this issue Feb 29, 2016 · 52 comments

@briansmith
Owner

briansmith commented Feb 29, 2016

  • Measure code coverage
    • x86 and x86-64 Linux.
    • armv6, armv7+neon, and AArch64: Some progress, but blocked on toolchain issues (see comments below).
    • Windows: No progress yet. This article makes it look like it might not be too painful to do.
  • Report code coverage. This is probably easiest to do with Coveralls. We should add the Coveralls badge to the README or somewhere else where people can find it.

This is related to #131.

See also https://boringssl-review.googlesource.com/#/c/7251/ and related BoringSSL work.

@briansmith
Owner Author

@frewsxcv did some work on this using gcov. There are some Rust-specific complications. See https://internals.rust-lang.org/t/disabling-gc-sections-when-test-is-specified/2163.

@frewsxcv
Contributor

FYI, I generated coverage using kcov, not gcov. Though, there had been some work done recently to get gcov working with Rust: rust-lang/rfcs#646


@pietro
Contributor

pietro commented Apr 15, 2016

I created a Travis entry to run kcov with GCC 5 on 64-bit Linux and stable Rust. You can see the test run here: https://coveralls.io/builds/5796536

Should I set up kcov on all Linux x86 builds?

@briansmith
Owner Author

All of this is awesome work! I have some questions:

  1. ring is very unusual in that a huge amount of the code is platform-specific. For example, there is one SHA-256 implementation for each target platform, each written in assembly language. Similarly, different platforms choose different parts of crypto/modes/gcm.c to use. This means that, to be accurate, the coverage reports for multiple platforms need to be combined together into one report. Is there any way of doing this?
  2. I noticed the data in the coveralls.io link above is incomplete. Is this because of the issues described at https://internals.rust-lang.org/t/disabling-gc-sections-when-test-is-specified/2163? If so, do you have any suggestions for how to resolve this?

@pietro
Contributor

pietro commented Apr 15, 2016

  1. I'll try to enable kcov on all Linux builds. Mac OS X and Windows will have to wait until Rust has built-in coverage tools. Coveralls does one report per Travis build. I added beta and nightly to my branch. Unfortunately the report for the nightly build wasn't uploaded. I'll try to fix that. The reports are here: https://coveralls.io/builds/5807150
  2. Passing -C link-dead-code to rustc via the RUSTFLAGS env var only works on beta and nightly. That's why the coverage for the builds in the link above is so different.

@briansmith
Owner Author

I'll try to enable kcov on all Linux builds.

That would be awesome!

Mac OS X and Windows will have to wait until Rust has built-in coverage tools.

Understood. Actually, I misspoke above. There is very little platform-specific code in ring, but there's lots of architecture-specific code. So if we could get the tests running on Linux AArch64 and Linux ARM, we'd cover pretty much all the code. (The random number generation is an exception.) I'll work on getting the test suite to run on ARM and AArch64, via emulators, on Travis CI.

Passing -C link-dead-code to rustc via the RUSTFLAGS env var only works on beta and nightly. That's why the coverage for the builds in the link above is so different.

A beta and/or nightly report is probably sufficient for everything, as there isn't (shouldn't be) rust-version-specific code in ring, at least currently.

In the report above, I see that it reported on crypto/curve25519/asm/x25519-asm-x86_64.S but not other asm code. Presumably that is because that file is not wrapped in Perl like the rest of the assembly language code is. So, later we'll need to do more work to handle the assembly language code.

@briansmith
Owner Author

A note for future improvements: gcov/kcov/etc. don't offer MC/DC coverage. We should look into some tools that provide that, after we get kcov (or whatever) working.
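
For context, MC/DC (modified condition/decision coverage) requires showing that each boolean condition in a decision independently affects the decision's outcome, which line or branch coverage alone doesn't guarantee. A minimal illustration (the function and test here are hypothetical, not ring code):

fn accept(key_ok: bool, nonce_ok: bool) -> bool {
    key_ok && nonce_ok
}

#[test]
fn mcdc_style_vectors() {
    // Line coverage is satisfied by the first call alone; MC/DC additionally
    // requires vectors where flipping each condition by itself changes the result.
    assert!(accept(true, true));
    assert!(!accept(false, true)); // key_ok independently affects the outcome
    assert!(!accept(true, false)); // nonce_ok independently affects the outcome
}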

@pietro
Contributor

pietro commented Apr 16, 2016

I'll create a PR with my changes to run kcov on nightly Rust on Linux x86 with GCC 5. I haven't been able to get the test executable built with a cross compiler yet. After that, running under user-mode QEMU should be easy.

@pietro
Contributor

pietro commented Apr 16, 2016

I got the tests to run on 32 bit ARM, with some failures though: https://travis-ci.org/pietro/ring/builds/123605148

The version of qemu on travis doesn't have 64 bit ARM. I'll look into using the newer travis environment for that.

I can't get kcov to submit to coveralls when doing an x86 32-bit build. I'm trying to compile kcov as a 32-bit binary too, to see if that works.

@pietro
Contributor

pietro commented Apr 16, 2016

32 bit kcov is waiting on 32 bit binutils: travis-ci/apt-package-safelist#2838

@briansmith
Owner Author

I got the tests to run on 32 bit ARM, with some failures though: https://travis-ci.org/pietro/ring/builds/123605148

I am fixing that now. The problem is that the code is using env!("OUT_DIR") to determine where the generated executable files are. However, that macro is evaluated at compile time, not at runtime, and it expands to an absolute path. In other words, the tests are trying to run executables at the locations where they exist on the host, instead of where they exist on the target.
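
For illustration, the difference looks like this in a crate that has a build script (the runtime fallback shown is only a sketch of the general idea, not the actual fix):

use std::path::PathBuf;

// env!("OUT_DIR") is expanded at compile time into an absolute path on the
// build host, so a cross-compiled test binary ends up looking for the helper
// executables where they lived on the host, not where they live on the target.
const HOST_OUT_DIR: &str = env!("OUT_DIR");

// One way to avoid baking in the host path is to consult the environment at
// run time and only fall back to the compile-time value (illustrative only).
fn out_dir() -> PathBuf {
    std::env::var_os("OUT_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from(HOST_OUT_DIR))
}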

@briansmith
Owner Author

I am fixing that now.

I believe my most recent commits 8b5a1a5, 4165289, and 14d1666 fix the failures you are seeing. I can't fully verify that, though, because the version of QEMU I'm using right now is unstable (I hope; otherwise there are bigger problems).

Now all the tests are in one executable, which should also improve the code coverage results.

@pietro
Contributor

pietro commented Apr 18, 2016

QEMU won't work unless you're using one of the 2.6.0 rc's because previous versions don't have support for getrandom(2) on arm and aarch64.

@briansmith
Owner Author

QEMU won't work unless you're using one of the 2.6.0 rc's because previous versions don't have support for getrandom(2) on arm and aarch64.

QEMU will print "unsupported syscall: 384" (or similar), but AFAICT things will keep going because the Rust code falls back to using /dev/urandom if the syscall fails. (However, now that I think about it, I might have forgotten to add /dev/urandom to my chroot, which might be the cause of the instability I was experiencing.)
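
The fallback pattern in question looks roughly like this (a sketch assuming the libc crate, not ring's actual rand implementation):

use std::fs::File;
use std::io::Read;

fn fill_random(dest: &mut [u8]) -> std::io::Result<()> {
    // Try the getrandom(2) syscall first (syscall number 384 on 32-bit ARM).
    let ret = unsafe {
        libc::syscall(libc::SYS_getrandom, dest.as_mut_ptr(), dest.len(), 0)
    };
    if ret >= 0 && ret as usize == dest.len() {
        return Ok(());
    }
    // Under an emulator/kernel without getrandom, fall back to /dev/urandom,
    // which therefore has to exist inside the chroot.
    File::open("/dev/urandom")?.read_exact(dest)
}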

@pietro
Contributor

pietro commented Apr 19, 2016

Compiling a 32-bit kcov is harder than I thought. I can't get 32-bit and 64-bit binutils installed side by side. kcov itself can build a 32-bit chroot, but that takes too long. I'll clean up my branch and submit the PR for running kcov on one of the 64-bit builds.

On the ARM side: 32-bit hangs on Travis. It may just be taking way too long to run the tests, so it gets killed after 10 minutes. I can't get ring to compile with the "native" cross toolchain on Trusty (see https://travis-ci.org/pietro/ring/jobs/124061285). I tried both armel and armhf. AArch64 either segfaults or I get a "tcg fatal error", which looks like a threading issue.

@pietro
Contributor

pietro commented Apr 19, 2016

It turns out binutils is not needed to compile kcov, so I'm skipping it on 32 bit cross builds. Still working on arm.

@pietro
Contributor

pietro commented Apr 27, 2016

QEMU will print "unsupported syscall: 384" (or similar) but AFAICT things will keep going because the Rust code falls back to using /dev/urandom if the syscall fails.

I wasn't able to run the tests for arm with the version of qemu on ubuntu precise. That version doesn't support aarch64 and unfortunately I wasn't able to run the tests on the version on ubuntu trusty. But I installed qemu 2.5 locally and successfully ran the tests for both arm and aarch64. I requested a backport for ubuntu trusty on launchpad but I'm not holding my breath. I'm thinking about doing it myself.

@briansmith
Owner Author

I wasn't able to run the tests for arm with the version of qemu on ubuntu precise. That version doesn't support aarch64 and unfortunately I wasn't able to run the tests on the version on ubuntu trusty. But I installed qemu 2.5 locally and successfully ran the tests for both arm and aarch64.

Awesome! I also did some similar work for 32-bit ARM (and maybe Aarch64), which resulted in the flurry of ARM-specific fixes. Maybe after your initial PR commits land, we can figure out a way to at least make it easy to do the coverage runs manually/locally by putting the commands into a script.

Thanks for your work on this. I am reviewing your PRs now.

@pietro
Contributor

pietro commented Apr 30, 2016

I backported QEMU 2.5 to Ubuntu Trusty and set up Travis to use it for the ARM tests. The debug builds on Rust stable for both 32-bit and 64-bit fail with memory issues. Two of the 32-bit builds timed out.

I copied and modified the travis build script to run the tests locally. I'll clean it up a bit and do a PR. I'll play with building a cross kcov for ARM. It might even work!

The ARM builds with QEMU for running the tests: https://travis-ci.org/pietro/ring/builds/126502065

@briansmith
Owner Author

The debug builds on Rust stable for both 32-bit and 64-bit fail with memory issues.

Yes, I saw this in my own experiments with it. I'm not sure what the problem is, yet. I'll do some experiments to see if the problem is in ring, qemu, or the rust stdlib/runtime.

Two of the 32-bit builds timed out.

There are some slow tests that are marked #[cfg(not(debug_assertions))]. Apparently they are too slow on ARM in the emulator and need to be handled specially too. In particular, I think we should use travis_wait on ARM emulator builds as described here: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received.
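
For reference, the gating in question looks like this (the test name and body here are hypothetical, not ring's actual tests):

// Compiled (and therefore run) only when debug_assertions is off, i.e. in
// release/optimized test builds; debug builds skip it entirely.
#[cfg(not(debug_assertions))]
#[test]
fn slow_exhaustive_check() {
    for i in 0u32..1_000_000 {
        assert_eq!(i.wrapping_add(1).wrapping_sub(1), i);
    }
}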

I copied and modified the travis build script to run the tests locally. I'll clean it up a bit and do a PR. I'll play with building a cross kcov for ARM. It might even work!

Even just getting the tests running on ARM, without coverage, would be a very useful PR.

@pietro
Contributor

pietro commented May 5, 2016

I think we should use travis_wait on ARM emulator builds

Unfortunately even 30 minutes is not enough: https://travis-ci.org/pietro/ring/builds/127967722

Even just getting the tests running on ARM, without coverage, would be a very useful PR.

Done: #171

@briansmith
Owner Author

Here's how the Rust libc crate does its testing on ARM and other targets: https://github.com/rust-lang/libc/blob/master/ci/run.sh.

@pietro
Contributor

pietro commented May 11, 2016

Doesn't look much different from what we're doing. They are using armhf and the packaged GCC. Should we consider that?

@briansmith
Owner Author

Doesn't look much different from what we're doing. They are using armhf and the packaged GCC. Should we consider that?

Yes, I think we should try to replicate their setup for now. IIRC, I originally chose the Linaro toolchain because I couldn't figure out how to get another one on Travis CI, and because some parts of BoringSSL used to need (a specific version of) the Linaro toolchain for some hack that no longer matters.

@pietro
Contributor

pietro commented May 12, 2016

armhf and aarch64 with ubuntu gcc packages on #174

I had to modify the makefiles to get ring to build on Android, but the tests segfault too. Should I do a PR with my changes? If so, should I run the Android tests on Travis?

@briansmith
Owner Author

I had to modify the makefiles to get ring to build on Android, but the tests segfault too.

OK, that's good to know. That almost definitely means I screwed up something in the Android-specific bits of ring. I'll try to figure it out.

Should I do a PR with my changes?

Yes please!

If so, should I run the Android tests on Travis?

Yes. That would be great.

@pietro
Contributor

pietro commented May 17, 2016

I got ARM Android to build and run successfully. I'm not submitting a PR yet because I hardcoded the Android API and GCC versions. Unless, of course, @briansmith thinks it's OK.

The changes to the build scripts are on pietro/ring@120cc92 and the test runs are on Travis.

I also think I know why my iOS cross builds were failing. We need to set up the correct perlasm flavour.

I tried cross-compiling to Windows using mingw-w64, since it's a Rust tier 1 platform. That sent me down a rabbit hole. The current mingw-w64 release lacks an intrinsic we need. The current RC has that intrinsic, but after building and installing it the build died because upstream BoringSSL doesn't support mingw-w64. I did some perlasm hacking to see if I could at least get it to build, but it failed to find some symbols. Someone who knows more Windows and assembly than I do could try to add mingw-w64 support upstream.

@briansmith
Owner Author

I got ARM Android to build and run successfully. I'm not submitting a PR yet because I hardcoded the Android API and GCC versions. Unless, of course, @briansmith thinks it's OK.

Yes, it's fine that way, especially because you didn't even change anything except the Travis config. Thanks!

I also think I know why my iOS cross builds were failing. We need to set up the correct perlasm flavour.

That sounds likely.

I tried cross-compiling to Windows using mingw-w64, since it's a Rust tier 1 platform. That sent me down a rabbit hole. The current mingw-w64 release lacks an intrinsic we need. The current RC has that intrinsic, but after building and installing it the build died because upstream BoringSSL doesn't support mingw-w64. I did some perlasm hacking to see if I could at least get it to build, but it failed to find some symbols. Someone who knows more Windows and assembly than I do could try to add mingw-w64 support upstream.

Could you put this in a public branch? If so, somebody could take it over.

The -gnu ABI on Windows isn't a high priority for me. Real software shipping on Windows (not just developer tools lazily ported to Windows) is built with MSVC anyway.

@briansmith
Owner Author

I also think I know why my iOS cross builds were failing. We need to set up the correct perlasm flavour.
That sounds likely.

See this: https://github.com/openssl/openssl/blob/592b6fb489c97be2b039ab671647aa58a472204a/Configurations/10-main.conf#L1570

It should be ios32 or ios64, depending on whether you're building for 32-bit or 64-bit.

@pietro
Contributor

pietro commented May 17, 2016

It should be ios32 or ios64, depending on whether you're building for 32-bit or 64-bit.

I was thinking about using macosx because that's what BoringSSL does: https://boringssl.googlesource.com/boringssl/+/master/crypto/CMakeLists.txt#3

@briansmith
Owner Author

If you look at the perlasm source, you will see:

  1. "macosx" is recognized (only) in x86/x86-64 stuff.
  2. "ios" (specifically, the literal string "ios") is matched in arm-xlate.pl.

I have the impression that Chromium doesn't actually use the BoringSSL asm code on iOS so I think we're a little bit in uncharted territory.

@pietro
Contributor

pietro commented May 18, 2016

Android build/tests on travis on #177

For iOS I'm setting up the perlasm flavour to:

  1. ios32 on armv7-apple-ios and armv7s-apple-ios
  2. ios64 on aarch64-apple-ios
  3. macosx on i386-apple-ios and x86_64-apple-ios (the emulators)

I'm also setting the -target, -arch, -isysroot, -mios-version-min and -mios-simulator-version-min clang options.

That's on pietro/ring@803bcd6
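
Expressed as code, that mapping would look roughly like this (a hypothetical helper; the actual build is driven by makefiles and Perl, so this is only a sketch):

// Maps a Rust target triple to the perlasm "flavour" passed to the
// *-xlate.pl scripts, following the list above.
fn ios_perlasm_flavour(target: &str) -> &'static str {
    if target == "aarch64-apple-ios" {
        "ios64"
    } else if target.starts_with("armv7") && target.ends_with("-apple-ios") {
        "ios32"
    } else {
        // i386-apple-ios and x86_64-apple-ios (the emulators) use the
        // x86/x86-64 perlasm path, which recognizes "macosx".
        "macosx"
    }
}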

After doing those changes and setting up RUSTFLAGS to set the mios-version-min I was able to build and run the tests for i386-apple-ios and x86_64-apple-ios. On i386 the c::test_i64_metrics and c::test_u64_metrics tests fail. I couldn't find anything about the alignment of the 32 bit x86 emulator.

ARM iOS builds fail because the syntax of crypto/curve25519/asm/x25519-asm-arm.S is not supported by the version of clang that comes with Xcode. I was able to find some information and fix some of the issues with CPP macros, but I still can't get it to build. I'm getting:

crypto/curve25519/asm/x25519-asm-arm.S:34:12: error: invalid operand for instruction
and sp,sp,#0xffffffe0

AArch64 iOS builds fail because upstream split the CPU detection code from crypto/cpu-arm.c into crypto/cpu-aarch64-linux.c, so we don't have OPENSSL_cpuid_setup.

That's all for today.

@briansmith
Owner Author

On i386 the c::test_i64_metrics and c::test_u64_metrics tests fail. I couldn't find anything about the alignment of the 32 bit x86 emulator.

See ring/src/c.rs, line 144 in 72db3b2:

#[cfg(all(test, target_arch = "x86", target_os = "linux"))]

Try changing the factor to 1 for that target.

ARM iOS builds fail because the syntax of crypto/curve25519/asm/x25519-asm-arm.S is not supported by the version of clang that comes with Xcode.

For now, we can disable the Curve25519 asm code on ARM iOS, falling back to the C code.

AArch64 iOS builds fail because upstream split the CPU detection code from crypto/cpu-arm.c into crypto/cpu-aarch64-linux.c, so we don't have OPENSSL_cpuid_setup.

For iOS, see:

#if defined(OPENSSL_APPLE)

At least for now, I think we need to define these on iOS AArch64:

  • OPENSSL_STATIC_ARMCAP_AES
  • OPENSSL_STATIC_ARMCAP_SHA1
  • OPENSSL_STATIC_ARMCAP_SHA256
  • OPENSSL_STATIC_ARMCAP_PMULL

Is the iOS compiler defining __ARM_NEON__ automatically? If not, we should change its configuration so that it does, but we can also define OPENSSL_STATIC_ARMCAP_NEON as a workaround if that's hard. On 32-bit iPhone.

Then in src/init.rs, use #[cfg(something)] to make init_once a no-op and to avoid the extern declaration of OPENSSL_cpuid_setup.
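
A rough sketch of that gating (the exact cfg predicate and layout are assumptions, not the final code):

// When OPENSSL_cpuid_setup exists (everything except statically-configured
// targets like iOS AArch64), call it exactly once.
#[cfg(not(all(target_os = "ios", target_arch = "aarch64")))]
pub fn init_once() {
    extern "C" {
        fn OPENSSL_cpuid_setup();
    }
    static INIT: std::sync::Once = std::sync::Once::new();
    INIT.call_once(|| unsafe { OPENSSL_cpuid_setup() });
}

// With the OPENSSL_STATIC_ARMCAP_* defines, the CPU capabilities are fixed at
// compile time, so initialization becomes a no-op and the extern declaration
// disappears entirely.
#[cfg(all(target_os = "ios", target_arch = "aarch64"))]
pub fn init_once() {}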

@briansmith
Owner Author

On 32-bit iPhone.

...I believe that we need to make sure that the compiler is defining __ARM_NEON__ or that we do the workaround with OPENSSL_STATIC_ARMCAP_NEON. I don't think any 32-bit iPhones have SHA1/SHA256/PMULL/AES, but I could very well be wrong.

@pietro
Contributor

pietro commented May 19, 2016

Try changing the factor to 1 for that target.

You mean 2, right? Because if I'm reading the code and the failure message correctly, 1 is the default and we're getting 4 for the alignment when we want it to be 8.

@briansmith
Owner Author

Try changing the factor to 1 for that target.
You mean 2, right? Because if I'm reading the code and the failure message correctly, 1 is the default and we're getting 4 for the alignment when we want it to be 8.

I suggest just changing it to the opposite of what it currently is.

@briansmith
Owner Author

I have noticed that for ARM and AArch64, Rust Nightly builds and Rust Stable + RELWITHDEBINFO builds succeed regularly. So it might have been a bug in Rust debug builds that is already fixed?

I've also noticed that sometimes AArch64 builds fail intermittently with an "Expected key=value format" error followed by a SIGSEGV. This appears to be a bug in QEMU, as evidenced by, for example, http://git.qemu.org/?p=qemu.git;a=commitdiff;h=886bc7a0492ff5d3b6c42157afff827b68595c34. Note that that commit might be fixing a different bug.

So, this all seems like good news insofar as it seems to mean we don't have bugs in ring causing these crashes.

@briansmith
Owner Author

briansmith commented May 20, 2016

I noticed that the Android stuff is 32-bit ARM. Is there anything blocking Aarch64? I guess with new phones AArch64 must be more common now.

Also, I'm thinking that once we have Android 32-bit and 64-bit builds, we may just disable the non-Android builds on Travis CI, or make them more targeted, e.g. at the Raspberry Pi 2 and other (IoT-oriented) dev boards.

@pietro
Contributor

pietro commented May 20, 2016

I noticed that the Android stuff is 32-bit ARM. Is there anything blocking Aarch64?

Nope, I picked one target to get it working in the first place. It's pretty easy to add new NDK/SDK installation targets. I'll do that in a PR later today.

QEMU 2.6 was just released. I'll run the ARM tests on it and if there's any difference I'll try to get it on travis.

I'll clean up and put all my iOS and mingw changes on specific branches so someone with more knowledge about those platforms can take a look.

@briansmith
Owner Author

Awesome! I noticed QEMU 2.6 has a "raspi2" target, so whenever we get it working, we should be able to convert the existing QEMU stuff to target that (hopefully using Raspberry-Pi-2 optimized build flags).

@pietro
Contributor

pietro commented May 20, 2016

I noticed that the Android stuff is 32-bit ARM. Is there anything blocking Aarch64?

I was too optimistic. Aarch64 Android is a tier 3 platform so rust doesn't provide a pre-compiled std for it. Also it doesn't look like Android provides a system image for it.

@briansmith
Owner Author

I was too optimistic. Aarch64 Android is a tier 3 platform so rust doesn't provide a pre-compiled std for it. Also it doesn't look like Android provides a system image for it.

No problem. I started a thread here: https://internals.rust-lang.org/t/suggestion-make-real-world-tier-1-platforms-at-least-tier-2/. Let's see what's decided.

@pietro
Contributor

pietro commented May 22, 2016

QEMU 2.6 was just released. I'll run the ARM tests on it and if there's any difference I'll try to get it on travis.

Since there's already a QEMU 2.6 package on Debian unstable, I went ahead and backported it to Ubuntu Trusty and uploaded it to my PPA. Let's keep an eye out for AArch64 failures.

@pietro
Contributor

pietro commented May 24, 2016

No problem. I started a thread here: https://internals.rust-lang.org/t/suggestion-make-real-world-tier-1-platforms-at-least-tier-2/. Let's see what's decided.

I was trying to figure out the SDK/NDK versions Rust uses and saw that the AArch64 NDK is installed on the buildbots. I was going to try using the Docker image the buildbots use to build a cross std, but it seems the Rust devs are OK with adding AArch64 Android, so I did a PR for that: rust-lang-deprecated/rust-buildbot#103

@pietro
Contributor

pietro commented May 24, 2016

I pushed the branch that has the work I did to support iOS and opened #184 to discuss it.

@briansmith
Owner Author

Good news! I switched from using the C code to the ASM code for poly1305, and suddenly the Stable Rust AArch64/ARM builds that were frequently failing (and thus in the "Allowed Failures" on Travis CI) are now passing regularly! This indicates that there's some problem in the C implementation of the poly1305 code and/or that I screwed up the FFI stuff in such a way that there was a buffer overflow or something. Still investigating.

Bad news! I feel like some of the AArch64/ARM stuff that was previously moved out of "Allowed Failures" has been less stable over the last few days. That started before the C -> ASM change for the poly1305 stuff, though the C -> ASM change may have worsened the effect. Will keep an eye on this.

@briansmith
Owner Author

Good news! I switched from using the C code to the ASM code for poly1305, and suddenly the Stable Rust AArch64/ARM builds that were frequently failing (and thus in the "Allowed Failures" on Travis CI) are now passing regularly! This indicates that there's some problem in the C implementation of the poly1305 code and/or that I screwed up the FFI stuff in such a way that there was a buffer overflow or something. Still investigating.

More good news! I found out which specific bug was causing the failures for ARM/AArch64 Rust Stable builds and I fixed it! (It actually seems to be irrelevant for current platforms since we switched to using the asm implementations of Poly1305, but it will probably help future uses of the C implementation.) The fix is 5d22c58. I also cleaned up the code a bit in other ways: aebb3b9 and 50c18bf.

With that, I found that the tests are always passing on all platforms, except some AArch64 builds time out. I fixed that with 0eadaf6, I think. Then I was able to remove the "Allow Failures" section from .travis.yml completely in 83219e8.

Everything looks good on Travis CI now, except...there's some problem with the Rust build infrastructure, causing build failures for builds using Rust Nightly. Hopefully that will be fixed when the next Rust Nightly build comes out.

@briansmith
Owner Author

I was trying to figure out the SDK/NDK versions Rust uses and saw that the AArch64 NDK is installed on the buildbots. I was going to try using the Docker image the buildbots use to build a cross std, but it seems the Rust devs are OK with adding AArch64 Android, so I did a PR for that: rust-lang-deprecated/rust-buildbot#103

This is now merged and there are now rust-std-nightly-aarch64-linux-android.tar.gz and similar at https://static.rust-lang.org/dist. So, if I understand correctly, we should be able to add aarch64-linux-android builds to Travis CI.

Also, @pietro, did you say that you already started some work on getting kcov to work on the Android or ARM builds? Is everything already landed?

@pietro
Contributor

pietro commented Jun 13, 2016

Sorry, I've been extremely busy lately. I cross-compiled kcov for ARM, but it uses CPU pinning so it won't work under user-mode QEMU.

@pietro
Contributor

pietro commented Jul 8, 2020

@briansmith I'm trying to get aarch64-unknown-linux-gnu builds going on native ARM on Travis. If I get it working I'll add kcov to them. Hopefully it won't be too hard.

@briansmith
Owner Author

briansmith commented Nov 25, 2020

PR #1124 adds code coverage measurement of the test suite to GitHub Actions for {x86_64,i686}-unknown-linux-musl and aarch64-unknown-linux-gnu (via QEMU), using Rust Nightly's new source-based code coverage. I integrated codecov.io into GitHub so that code reviews can make use of the code coverage information.

In the future, we should make the following improvements:

  • Add 32-bit ARM code coverage. As of 2020-11-25, I don't know of a 32-bit ARM target for Rust that has profiler_builtins available to do this.
  • Switch any -gnu targets for profiling to instead be -musl targets.
  • Enable code coverage measurement for the assembly language code.
  • Measure coverage for the Windows targets.
  • Measure coverage for Apple targets.
  • Enable branch coverage measurement. Branch coverage measurement is being added to LLVM: https://reviews.llvm.org/D84467.
  • Organize the coverage reports into a more useful form. Right now, all the code coverage information for running the test suite is combined into a single report. Ideally we'd separate things out so that we can verify that, for example, aead_tests provides complete code coverage for the ring::aead module, regardless of what other tests do. As another example, it would be useful to be able to look at code coverage on a per-target basis.
  • Automatically enforce the rules that new PRs don't land without test coverage for the changed/new code.
