function level coverage #780
I would also very much like to see such a feature added. Currently we use a home-brewed script that parses the normal coverage report and tries to extract per-function info; this has some issues, so "native" support from coverage.py would be great.
By now there's only one plugin for function coverage, pytest-func-cov, but it does not support file reporting.
I need some clarification about what people mean by "function-level coverage". The pytest-func-cov plugin considers a function tested if it is called directly by a test, but not if it is called indirectly. Is that what is meant? Or are indirect calls sufficient?
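To make the distinction concrete, a tiny hypothetical example (all names invented):

```python
# mylib.py
def helper():
    return 41

def compute():
    return helper() + 1   # helper() runs, but only via compute()

# test_mylib.py
def test_compute():
    assert compute() == 42   # only compute() is called directly by the test
```

Under the direct-call definition, only compute counts as tested; under a looser definition, helper counts too.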
The pytest-func-cov README shows a report with file names, so I'm not sure what is missing?
We are looking to complete our current coverage report with this data as well. It looks like pytest-func-cov's definition of function coverage (a function must be called directly from a test) differs from what other sources say: Atlassian, Wikipedia, and lcov's data format docs all define a function as covered if it is hit at any point during a test. For our particular use case, having the …
Being able to see what functions were covered by tests would be useful for me too, and being called indirectly should absolutely still count.
Anything you can share here, by chance?
This makes pytest-func-cov unusable for my purposes. Also, it needs to be reimplemented due to a design flaw, and isn't actively maintained. Plus it'd be better to have this feature directly in coverage.py itself or a coverage plugin, as opposed to something external (with its own separate data format / possibly no structured data format).
I have some thoughts about how to implement this, and have started some of the foundation work. I'm curious if people have ideas about how to present the data? Do you have examples of other coverage tools (possibly in other languages) that do a good job showing the results?
That's awesome, thanks @nedbat! A useful way of presenting this data for me would be something like:
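A hypothetical mock (names and numbers invented for illustration):

```text
Name                              Stmts   Miss  Cover
-----------------------------------------------------
mypkg/util.py::parse_config          12      0   100%
mypkg/util.py::Loader.load            9      4    56%
mypkg/app.py::main                   20     20     0%
-----------------------------------------------------
TOTAL                                41     24    41%
```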
...as well as an associated JSON representation. Even just a flat list of functions with their coverage would be a start. That said, just having this data collected in the first place would be useful to land before any presentation support. The harder part for users is getting this data collected at all; once it is collected, it's easier for us to pull it out of the coverage database ourselves in whatever format we need. So it might make things easier to ship the collection support first, with presentation support left for a follow-up release.
This is an interesting point, because the function-level data doesn't need to be stored in the database at all, just as the missing lines aren't in the database. The database captures the measured data; the reporting phase can determine what is missing, and could also calculate statistics per function. A middle ground could be something like a JSON report that includes function data.
Outputting JSON about function-level coverage would be great! Just to improvise something:
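(All field names below are improvised for illustration, not an actual coverage.py schema:)

```json
{
  "files": {
    "mypkg/util.py": {
      "functions": {
        "parse_config": {
          "executed_lines": 12,
          "missing_lines": 0,
          "percent_covered": 100.0
        },
        "Loader.load": {
          "executed_lines": 5,
          "missing_lines": 4,
          "percent_covered": 55.6
        }
      }
    }
  }
}
```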
And if passing …
If anyone wants to try some really hacked-together functionality as a work in progress: install this temporary branch of coverage.py:
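(A hedged guess at the install command, taking the branch name from the URL below; the original instructions may have differed:)

```bash
python -m pip install git+https://github.com/nedbat/coveragepy.git@nedbat/public-analysis
```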
After running coverage as usual, run regions.py: https://github.com/nedbat/coveragepy/blob/nedbat/public-analysis/lab/regions.py It will list functions and classes, with their total percentage. Keep in mind, this is temporary. I'm interested to hear if this is useful data.
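In a pytest project, the whole sequence might look like this (a sketch; regions.py is assumed to take no arguments):

```bash
coverage run -m pytest    # measure as usual
python regions.py         # after downloading it from the branch's lab/ directory
```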
Definitely useful, thank you! Some prior art just for completeness: lcov's genhtml produces a function coverage report, and gcovr does as well. (Here's a zip of the source files and the html output I used to produce those screenshots, and here's a post about those two tools, in case it's helpful.)
Thanks. I'm wondering how those tools cope with real projects with thousands of functions. How do they present the information in a useful way?
Looks like both allow you to view the functions in a single file at a time (and hopefully there aren't thousands of functions in a single file). In … Also looks like only …
I've implemented more real function and class reports. Please try them out: https://nedbatchelder.com/blog/202404/try_it_functionclass_coverage_report.html
I ran it on pynpc and got these reports. The project is small, and it is a little difficult to see which functions most need attention.
Thanks for the feedback. It might not be obvious, but the column headers are click-to-sort, so you can order the report by increasing coverage to put the 10 least-covered items at the top of the page. Also, I'll shortly be adding a checkbox to hide fully-covered items (#1384) which will help you focus on where work is needed. Does that help?
Oh, cool! 😁 This is perfect.
Yes, that would be fantastic. Thank you so much!
Any ideas what I can do to make the column sortability more apparent?
Up and down chevrons might be good choices, like ▲/△ and ▼/▽. Of course, having them one above the other is CSS black voodoo magic. 🪄
This is where I found the glyphs, if it helps.
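For illustration, one common pattern for such indicators (a sketch, not coverage.py's actual stylesheet):

```css
/* hypothetical: show a triangle only on the actively sorted column */
th[aria-sort] { cursor: pointer; }
th[aria-sort="ascending"]::after  { content: " ▲"; }
th[aria-sort="descending"]::after { content: " ▼"; }
```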
Yes, it is much too cluttered. Maybe just have one icon which changes shape depending on the sorting?
If it's simple and works! 🚀 🌔 Go for it!
Awesome progress!
Rather than clutter the UI with "Columns are sortable" text, instead ensure that tables are always shown sorted by one of the columns and that the header for that column shows the arrow indicating the sort state. Currently tables are already initialized sorted by the first column, but for some reason the arrow is hidden, so users miss this important UI cue. The w3.org "sortable table" example is some prior art here.
Always produced.
I wasn't sure about "Inner functions are not counted as part of their outer functions". I wonder if inner functions/classes should just be considered the same as other lines in the implementation of an (outer) function/class (at least by default).
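A tiny example of the ambiguity (hypothetical code):

```python
def outer():
    # Do the two lines of inner() count toward outer()'s total,
    # or is inner() a separate region with its own percentage?
    def inner():
        return 1
    return inner()
```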
Large projects have their code spread across many files in many subdirectories. To scale better, I've been thinking about whether coverage.py's views should more directly reflect a hierarchy like "directories contain files, files contain classes and functions, classes and functions contain lines". Each time you drill down the hierarchy, it both adds more detail and removes information you don't care about, kind of like zooming in when using a maps app. Additional "layers" could similarly be enabled to add more detail at the current "zoom level" (as with traffic in a maps app). This seems like one approach to scaling the current UI to better handle more code as well as more features of that code.

Concretely, "coverage html" could generate a tree of views that reflects the way the code is organized into subdirectories. The top-level report would show the stats for each file in the top-level directory, but if there is code in any subdirectory, the top-level report would only include a single entry for each subdirectory with its aggregate stats, and the files in that subdirectory would not be shown. You could then click a subdirectory to see the view for the files in that subdirectory. (This matches ….)

(Currently coverage.py provides a single view for file-level coverage that shows all files in all subdirectories combined together. To focus on files in a certain directory, you currently have to use the "filter" input, which doesn't scale as well.)

With this in mind, a new per-file view would be able to show the class- and function-level coverage for just the classes and functions in a given file, in addition to the current line-level view that is generated for each file. Finally, each directory-level view would provide a link to a report that would show the class- and function-level view for all classes and functions below that directory level (regardless of file).

Still thinking about this but didn't want to let any more time pass before I shared the rough thoughts.
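A rough sketch of the report tree this proposal implies (hypothetical file and directory names):

```text
htmlcov/
├── index.html             # files in the top-level directory, plus one
│                          #   aggregate row per subdirectory
├── functions.html         # flat function/class report for everything below
└── mypkg/
    ├── index.html         # files directly in mypkg/, plus aggregate rows
    │                      #   for mypkg's own subdirectories
    ├── functions.html     # functions/classes anywhere below mypkg/
    └── util_py.html       # per-file view: line-level detail plus the
                           #   classes and functions defined in that file
```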
I appreciate the ideas. One benefit of a flat view is that you can sort it by coverage to focus on what needs work no matter where it is.
Agreed. That's what I had in mind with:

> Finally, each directory-level view would provide a link to a report that would show the class- and function-level view for all classes and functions below that directory level (regardless of file).
So the top-level (root directory) report would link to a function/class-level coverage report that included all functions/classes (regardless of file or sub(sub,...)directory), preserving what's currently possible. The same could be done for all files (regardless of sub(sub,...)directory), again preserving what's currently possible. One idea I realized I omitted above: instead of separate views for class- and function-level coverage, would it be worth considering a combined "class and function coverage" view that merged the class and function data together, with two checkboxes to toggle each off individually if desired? (Or maybe something even more like "layers" in a maps app, but I'm not sure what.)
Just wanted to add: I'm guessing that reorganizing coverage.py's reports in the way I described is probably a lot more work than you were thinking of doing for this. Since coverage.py can already write lcov output, lcov's genhtml might serve as an escape hatch for more involved reports in the meantime.
Thanks for the pointer to genhtml. I'll experiment with that as an escape hatch for people who want more involved reports (once the lcov output has function data).
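A hedged sketch of that escape hatch, using commands that exist today (function data in coverage.py's lcov output is the part still pending):

```bash
coverage lcov -o coverage.lcov                      # write lcov-format data
genhtml coverage.lcov --output-directory lcov-html  # render HTML, including
                                                    #   genhtml's function view
# then open lcov-html/index.html in a browser
```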
This is a good point. I didn't realize the initial sorting isn't indicated. I've changed the report so that it's always sorted by something, and the sorting is indicated with more obvious triangles in commit a3dff72. Thanks.
(oops, forgot to mention this issue in the changelog, but) this is now released as part of coverage 7.5.0. Try it out!
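A minimal way to try it (assuming a pytest project; the exact link names in the report are from memory and may differ):

```bash
python -m pip install "coverage>=7.5"
coverage run -m pytest
coverage html
# htmlcov/index.html now links to the new function and class views
```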
I need to get info about what functions are tested. Currently there is only line coverage data.
If this tool can be enhanced to report that data, it would be great.