Add best-effort coverage reports per test type #1915
Conversation
test integrations
Created or updated PR in integrations repository to test this version. Check elastic/integrations#10273
@@ -24,7 +24,6 @@ func setupCheckCommand() *cobraext.Command {
		Args: cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			err := cobraext.ComposeCommands(args,
				setupFormatCommand(),
We can make this change in a different PR: #1936
Ok, it is working now, I will open it for review. There is a working report at https://sonar.elastic.dev/component_measures?metric=new_conditions_to_cover&view=list&pullRequest=10273&id=elastic_integrations_AYu5LCaKQZlFqhqWIrk_
Perhaps we can review the files added to the coverage. We will also need to review the list of files ignored by SonarQube in the integrations repository, but that will be done in a different PR.
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			report, err := createCoverageReport(tt.rootPath, tt.packageName, tt.packageType, tt.testType, tt.results, tt.coverageFormat, tt.timestamp)
Isn't the timestamp mandatory for coverage reports using Cobertura?
Maybe, but no coverage report is generated by this method now; it uses the coverage included in the results, so the timestamp is not needed anymore. That is why I removed the parameter.
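For context, in the Cobertura XML format the timestamp is an attribute of the top-level coverage element, so it only matters when the XML is rendered. A minimal sketch of such a root element, using hypothetical type and field names rather than the actual ones in this repository:

```go
package coverage

import "encoding/xml"

// CoberturaCoverage is a hypothetical, trimmed-down root element of a
// Cobertura report; the real type used by elastic-package may differ.
type CoberturaCoverage struct {
	XMLName   xml.Name `xml:"coverage"`
	Timestamp int64    `xml:"timestamp,attr"` // set when the report is serialized
	LineRate  float32  `xml:"line-rate,attr"`
	Version   string   `xml:"version,attr"`
	// Packages, classes, and line entries would follow here.
}
```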
// GenerateBasePackageCoverageReport generates a coverage report where all files under the root path are
// marked as not covered. It ignores files under _dev directories.
func GenerateBasePackageCoverageReport(pkgName, rootPath, format string) (CoverageReport, error) {
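As a rough illustration of what the doc comment describes (not the actual implementation), the directory walk that collects every file outside _dev directories as uncovered could look like the sketch below; the helper name and the returned slice are hypothetical:

```go
package coverage

import (
	"io/fs"
	"path/filepath"
)

// listUncoveredFiles is a hypothetical helper: it walks rootPath and returns
// every regular file that a base report would mark as not covered, skipping
// development content under _dev directories.
func listUncoveredFiles(rootPath string) ([]string, error) {
	var uncovered []string
	err := filepath.WalkDir(rootPath, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() && d.Name() == "_dev" {
			return fs.SkipDir // ignore files under _dev directories
		}
		if !d.IsDir() {
			uncovered = append(uncovered, path) // every other file starts as "not covered"
		}
		return nil
	})
	return uncovered, err
}
```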
Could this function also solve this issue for pipeline tests? #1624
If all files present in the package are marked as uncovered, there will be (zero) coverage for those files without tests.
Just one concern regarding pipelines: if there is a coverage report created by pipeline tests in some package, with this change there would be lines marked as uncovered (where previously there was no information), which could cause a decrease in the coverage percentage. Would that be an issue?
> Could this function also solve this issue for pipeline tests? #1624

Probably yes. Tests will generate coverage for all files now, even if it is empty.

> If all files present in the package are marked as uncovered, there will be (zero) coverage for those files without tests.

Correct, if there are no tests, there is no coverage.

> Just one concern regarding pipelines: if there is a coverage report created by pipeline tests in some package, with this change there would be lines marked as uncovered (where previously there was no information), which could cause a decrease in the coverage percentage. Would that be an issue?

Yes, there can be a decrease in coverage in several cases. I wouldn't consider this an issue: before, the percentage was higher only because we were not reporting coverage for many files. For example, if pipeline tests cover 50 of the 100 lines previously reported and the base report adds 100 more uncovered lines, the percentage drops from 50% to 25% even though nothing is less tested. We will probably revisit what files we mark as covered or uncovered, and how.
err := fmt.Errorf("verifying test result failed: %w", err)
tr.ErrorMsg = err.Error()
return tr, nil
results, _ := rc.WithErrorf("verifying test result failed: %w", err)
This was previously a failure, and that's why the false positives test packages are failing.
Currently, the rendered XML shows this:
<error>verifying test result failed: test case failed: Expected results are different from actual ones</error>
but the test is expecting a <failure>:
<failure>test case failed: Expected results are different from actual ones:.*</failure>
It looks like this block should be kept:
if e, ok := err.(testrunner.ErrTestCaseFailed); ok {
	tr.FailureMsg = e.Error()
	tr.FailureDetails = e.Details
	return tr, nil
}
in order to report a failure here:
if tcf, ok := err.(ErrTestCaseFailed); ok {
This is interesting, because what we are doing now is quite similar: WithErrorf includes similar logic that should be setting the failure fields:
var tcf *ErrTestCaseFailed
if errors.As(err, &tcf) {
	rc.FailureMsg += tcf.Reason
	rc.FailureDetails += tcf.Details
	return []TestResult{rc.TestResult}, nil
}
Taking a look...
The condition was incorrect: the variable doesn't have to be a pointer. Apart from that, I adjusted where the "test case failed:" prefix is expected.
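A minimal, self-contained sketch of that fix, assuming the test case failure is wrapped as a value (the type below is simplified; only the Reason and Details fields mirror the ones mentioned above). With a value target errors.As matches; with a *ErrTestCaseFailed target it would not:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-in for the real ErrTestCaseFailed type.
type ErrTestCaseFailed struct {
	Reason  string
	Details string
}

func (e ErrTestCaseFailed) Error() string { return "test case failed: " + e.Reason }

func main() {
	// The runner wraps the test case failure before returning it.
	err := fmt.Errorf("verifying test result failed: %w",
		ErrTestCaseFailed{Reason: "Expected results are different from actual ones"})

	var tcf ErrTestCaseFailed // value target, matching how the error was wrapped
	if errors.As(err, &tcf) {
		fmt.Println("failure:", tcf.Reason) // rendered as <failure>, not <error>
	}
}
```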
💛 Build succeeded, but was flaky
cc @jsoriano
Nice!
This is a great improvement for the coverage reports 💪
Ensure that each executed test type marks some content as covered, so we can get an idea of the files covered by tests and of the test types executed.
Currently we report whether a test type has been executed by marking a line in a manifest as covered, but this fails if the manifest doesn't have enough lines. This change attempts to solve this kind of issue.
At the moment, the following files are fully or partially marked as covered if a specific type of test is executed:
- Benchmark configs.
- No need to cover dev files.
- The rest of non-development files are marked as not covered.
Fixes #1624.