Add support for measuring unit test coverage difference in presubmits #25966
Given that we always merge into the target branch locally before testing, and we have continuous unit testing ... strawman:
Something similar was implemented once for a Google-internal project on Prow.
+1
/cc
This is a great idea! I'm in support.
Based on https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md#job-environment-variables, I have a couple of questions:
I don't think we need a list of changed files, because either way we need a baseline to compare to, and changing one file can change the coverage of an untouched file. Consider the trivial case of modifying a unit test file only.
Would e.g. removing a file or splitting it into separate files be something we expect the automation to pick up on and handle gracefully, or would it be hard enough to automate / rare enough to require /override?
Is there any specific reason for not using codecov? It's free for public GitHub repos and is very easy to set up. The Knative projects have been using it for a while and it worked pretty well. See the links below as an example:
I like that - it seems simpler to do - so hopefully also faster to act on. To do that, IIUC we only need to:
@BenTheElder - Do you have any pointers maybe? Would it be possible to upstream this potentially?
I would prefer to start simple and expand later. Even if we miss those cases initially, the other cases will already be a huge gain.
We do use golang test coverage underneath:
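For reference, a profile of the kind this tooling consumes can be produced with the standard Go toolchain; the package path, line numbers, and counts below are purely illustrative:

```
$ go test -coverprofile=coverage.out ./...

# coverage.out
mode: atomic
k8s.io/kubernetes/pkg/example/example.go:14.35,16.2 1 3
k8s.io/kubernetes/pkg/example/example.go:19.40,21.16 2 0
```

Each line records file:startLine.startCol,endLine.endCol, the number of statements in that block, and how many times the block was executed, which is what makes per-file filtering and diffing feasible.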
Would anyone be able to help with this, maybe? /help wanted
I think my question/concern is whether it's worth the effort to reinvent the wheel. To add a bit more context, Knative previously had an in-house …
And it looks like some Kubernetes projects are already using it: https://app.codecov.io/gh/kubernetes
I don't own our test-infra so I will let others chime in, but while I generally agree with not reinventing the wheel, there seem to be advantages to unifying the tooling (and we're using Prow-based tooling for everything).
I actually don't have it handy. I could dig it up, but IIRC it's just some very naive bash capturing the overall coverage value and comparing it, not really worth re-using, especially with so many lines of code. The concept of using continuously uploaded post-merge results to compare is good though (and presumably how e.g. codecov works?).
I haven't looked at codecov in some time. Our own existing coverage tooling is more about filtering out e.g. vendor and generated code, but it produces normal-looking Go coverage results, so if codecov can accept uploading filtered results from a prowjob, that could work. We also probably want to continue making only prowjobs required in k/k, for reasons like … It's also important that we control how …
It looks like it is possible to upload coverage files with codecov, and that's the expected implementation: https://about.codecov.io/blog/getting-started-with-code-coverage-for-golang/ So probably someone could update the unit test prowjobs to upload the final manipulated coverage files... But it doesn't appear that the uploader will set an exit code based on coverage dropping, so while it might gain us a visualization of modified files, it won't create the prow pass / fail we probably want ... (which comes with guarantees about being up to date with the latest code changes, something that is difficult to guarantee for externally posted statuses).

We already have a built-in treemap => source file coverage visualization of the current coverage in prow, FWIW.

At minimum, I think someone will need to build a tool that produces a pass/fail exit code on coverage changes. I don't think that requires a huge effort; we already have a command for diffing files: https://github.com/kubernetes/test-infra/blob/master/gopherage/cmd/diff/diff.go
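To make the intent concrete, here is a minimal sketch of such a pass/fail tool, assuming plain Go coverage profiles parsed with golang.org/x/tools/cover rather than the existing gopherage code; the file names and the tolerance flag are made up for illustration:

```go
package main

import (
	"flag"
	"fmt"
	"os"

	"golang.org/x/tools/cover"
)

// totalCoverage returns the number of covered and total statements
// across all files in a coverage profile.
func totalCoverage(path string) (covered, total int, err error) {
	profiles, err := cover.ParseProfiles(path)
	if err != nil {
		return 0, 0, err
	}
	for _, p := range profiles {
		for _, b := range p.Blocks {
			total += b.NumStmt
			if b.Count > 0 {
				covered += b.NumStmt
			}
		}
	}
	return covered, total, nil
}

func percent(covered, total int) float64 {
	if total == 0 {
		return 0
	}
	return 100 * float64(covered) / float64(total)
}

func main() {
	// Flag names and defaults are hypothetical.
	baseline := flag.String("baseline", "base-coverage.out", "profile from the target branch")
	current := flag.String("current", "pr-coverage.out", "profile from the PR run")
	tolerance := flag.Float64("tolerance", 0.1, "allowed drop in percentage points")
	flag.Parse()

	bCov, bTot, err := totalCoverage(*baseline)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	cCov, cTot, err := totalCoverage(*current)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	before, after := percent(bCov, bTot), percent(cCov, cTot)
	fmt.Printf("total statement coverage: %.2f%% -> %.2f%%\n", before, after)
	if after+*tolerance < before {
		fmt.Println("FAIL: coverage decreased beyond tolerance")
		os.Exit(1)
	}
}
```

A per-file or per-package comparison would follow the same pattern, just grouped by Profile.FileName instead of summed across all files.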
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale |
It would be great if we could have a presubmit that ensures that the test coverage of packages/files touched by a given PR is not decreasing (or is at least X%).
A bit of context:
We already have postsubmits that show us the test coverage, e.g.:
https://testgrid.k8s.io/sig-testing-canaries#ci-kubernetes-coverage-unit
https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/sig-testing/coverage.yaml#L127-L158
We're using the gopherage tooling to do that: https://github.com/kubernetes/test-infra/tree/master/gopherage
It already allows failing if the coverage of a given file is lower than X.
It would be useful if we could add functionality that would allow us to filter the report to only the files/packages that are touched in a given PR.
Given that the profiles that are produced contain the filename:
https://github.com/kubernetes/test-infra/blob/master/gopherage/cmd/junit/junit.go#L66
it effectively boils down to filtering them to only the ones touched in the PR.
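As an illustration, here is a minimal sketch of that filtering step. It assumes it runs in a presubmit with the repo checked out, that Prow's PULL_BASE_SHA environment variable (from the job environment variable docs linked above) is available, and that profile file names are import paths while git reports repo-relative paths (hence the suffix match); the coverage.out path is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"golang.org/x/tools/cover"
)

// changedFiles lists repo-relative paths touched in the PR by diffing the
// current checkout against the base SHA.
func changedFiles(baseSHA string) ([]string, error) {
	out, err := exec.Command("git", "diff", "--name-only", baseSHA, "HEAD").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	changed, err := changedFiles(os.Getenv("PULL_BASE_SHA"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	// "coverage.out" stands in for the profile produced by the unit test job.
	profiles, err := cover.ParseProfiles("coverage.out")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}

	for _, p := range profiles {
		for _, f := range changed {
			// Profile FileNames are import paths (k8s.io/kubernetes/pkg/...),
			// git paths are repo-relative (pkg/...), so match on suffix.
			if !strings.HasSuffix(p.FileName, f) {
				continue
			}
			covered, total := 0, 0
			for _, b := range p.Blocks {
				total += b.NumStmt
				if b.Count > 0 {
					covered += b.NumStmt
				}
			}
			if total > 0 {
				fmt.Printf("%s: %.1f%% of statements covered\n",
					p.FileName, 100*float64(covered)/float64(total))
			}
		}
	}
}
```

The same per-file totals could then be fed into a threshold or diff check to produce the pass/fail exit code discussed above.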
What I'm thinking about is:
I'm not sure what the best way to do the second one is, though, so I would appreciate some thoughts.
@kubernetes/sig-testing @chaodaiG @BenTheElder - for thoughts