Code Coverage Support #143
Comments
Hi @felipefoz, I don't claim this is a definitive answer, as there might be clever ways we could create test vectors that reference Caraya assertions/unit tests, but it seems to me that Caraya should be thought of more as a potential engine for a code coverage tool than as the tool that reports on code coverage itself. I'm happy to be convinced otherwise. What would be the workflow and hooks needed to make Caraya report on code coverage?

Like many others, I've used Caraya in a TDD setting. I don't know how representative this example is of typical Caraya usage, but take my MQTT Broker: I broke the specification down into 141 requirements. Each requirement is covered by {1 to N} assertions before I consider it fully covered, and I close a requirement only if the full set of assertions gives me a "pass" condition. I calculate code coverage as the ratio of requirements closed over the total requirements I started with. That's my metric. I suppose this could be turned into "test vector" file(s) that Caraya executes against, but it feels to me like this would be a brand new application that has Caraya as a dependency...
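For readers skimming the thread, here is a minimal sketch of that ratio. It is written in Python purely for illustration (Caraya and the projects discussed here are LabVIEW code), and the requirement IDs and data structure are hypothetical:

```python
# Hypothetical illustration only (Caraya itself is LabVIEW):
# coverage = closed requirements / total requirements, as described above.
requirements = {
    "REQ-001": True,   # True  = every assertion for this requirement passed ("closed")
    "REQ-002": False,  # False = still open
    "REQ-003": True,
}

closed = sum(requirements.values())
coverage = closed / len(requirements)
print(f"Requirement coverage: {coverage:.0%}")  # -> "Requirement coverage: 67%"
```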
Finally I had some time to think a little about it. This is a very interesting topic for me. I think the requirements approach is the way to pursue, because other metrics do not seem sufficient given the nature of LabVIEW (G) programming. With that in mind, it is clear that Caraya is not the target here; this other tool would have to interface with Caraya in some way. Taking the requirements approach, some points to consider could be:
Should this tool then be part of the Caraya toolchain? Re-reading what I've just written, I don't think so. Then, are we trying to reinvent the wheel? I will do some further research to see what is out there among open-source projects.
Hi @felipefoz, Sorry to revive this old thread, but I wanted to document that the new Test and Assert properties in Caraya 1.4.0+ could be used to build a quick code/requirement coverage report. This is intended for anyone finding this thread through a search engine and wondering what a potential solution might look like, in the same vein as what you suggested above.

The Test properties include a specific Test Requirement method that can link any test to a requirement ID. The requirement ID is attached to that test in the report class, as shown in the verbose display of the test report. When running the tests in a test suite, the output test report class contains all test results. The requirement-id property exposes those tags, which can easily be compared programmatically to the full list of requirements, giving a simple calculation of the code coverage. Of course, with a naming convention in the requirement ID, it would be easy to create sub-categories of code coverage. Note that this is in LV 2013; LV2020 Sets would make it more compact to calculate the test coverage through an intersection.
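To make the comparison concrete, here is a minimal sketch of the calculation described above. It is written in Python purely for readability (the actual implementation would be LabVIEW), and the requirement IDs and the way the tested IDs are gathered from the report are assumptions, not Caraya API:

```python
# Illustrative pseudocode for the comparison described above: intersect the
# requirement IDs tagged on executed tests with the full requirement list.
all_requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# Assumed: IDs collected from the requirement-id property of the executed tests
tested_requirements = {"REQ-001", "REQ-003"}

covered = all_requirements & tested_requirements      # set intersection
uncovered = all_requirements - tested_requirements
coverage = len(covered) / len(all_requirements)

print(f"Requirement coverage: {coverage:.0%}")        # -> 50%
print(f"Uncovered: {sorted(uncovered)}")              # -> ['REQ-002', 'REQ-004']
```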
Hi @francois-normandin, it is not really that old a thread. Thanks for that. I think this could satisfy most of the needs for test coverage, provided you have a good description of the requirements; unfortunately that is not always the case, but for now I don't see any other way, so this is definitely a solution to this issue for me. Needless to say, you are doing a great job maintaining this framework. Well done. I also foresee some other use cases for those properties. Regards,
Hi @felipefoz @francois-normandin. I have been upgrading the OpenG Toolkit unit tests to use Caraya, and these toolkits have a very simple code coverage model. What I think would be needed from Caraya (to a first order) is some linker info about which project source VIs are called by Caraya tests. From that, one could determine which project VIs are covered by tests. Let's look at how the OpenG Toolkit thinks about coverage (in very simple terms):

OpenG Toolkit Code Coverage Model
Simple Codecov json Report

This is then used to create a very simple codecov.json file:
OpenG Time Library Coverage Report:

```json
{
  "coverage": {
    "source/library/Periodic Trigger.vi": {
      "1": 1
    },
    "source/library/Tick Count (ms).vi": {
      "1": 1
    },
    "source/library/Wait (ms).vi": {
      "1": 1
    },
    "source/library/Wait Until Next ms Multiple.vi": {
      "1": 1
    }
  }
}
```
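As an illustration of how such a file could be produced, here is a short hypothetical sketch (not part of Caraya or the OpenG toolchain) that writes the same layout from a list of covered VIs; the VI paths are examples only:

```python
# Hypothetical sketch: write the simple codecov.json layout shown above from a
# list of VIs that the Caraya tests were found to call (e.g. via linker info).
import json

covered_vis = [
    "source/library/Periodic Trigger.vi",
    "source/library/Tick Count (ms).vi",
]

# "1": 1 marks the single nominal "line" of each covered VI as hit once
report = {"coverage": {vi: {"1": 1} for vi in covered_vis}}

with open("codecov.json", "w") as f:
    json.dump(report, f, indent=2)
```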
Coverage Badges and PR Comments

What's nice is that this can be used with Codecov to show badges and add comments to PRs showing changes in test coverage. Let's look at the OpenG Variant Data Library, since it has more tests and some missing coverage. Here is its code coverage report: https://app.codecov.io/gh/vipm-io/OpenG-Variant-Data-Library

And it integrates with PRs. The screenshot isn't very useful yet, but it will show more coverage reports in future PRs once there is a baseline for reference.
Hi everyone,
I know this may be a long shot, but has anyone thought about code coverage support in Caraya?
What challenges do we have?
I know that in the NI UTF, coverage is based on diagrams and on how many VIs are in the project; I read that in the forums.
Before someone argues that code coverage is not a good parameter, I would say I find it valuable, and it can be used for comparison in GitLab and GitHub during merge requests. That said, I do understand that code with 100% coverage is not bug-free, and code without 100% coverage is not necessarily buggy.
Regards,