
Add best-effort coverage reports per test type #1915

Merged · 22 commits · Jul 3, 2024

Conversation

@jsoriano (Member) commented Jun 18, 2024

Ensure that each executed test type marks some content as covered, so we can get an idea of the files covered by tests and the test types executed.

Currently we report whether a test type has been executed by marking a line in a manifest as covered, but this fails if the manifest doesn't have enough lines. This change attempts to solve this kind of issue.

At the moment, the following files are fully or partially marked as covered when a specific type of test is executed:

  • Pipeline tests:
    • No changes in this PR, we have coverage based on ES stats.
  • Asset tests:
    • All files under the kibana directory.
  • Policy tests:
    • All manifests.
    • All template files in the executed data stream.
  • Static tests:
    • Sample events.
    • Benchmark configs. No need to cover dev files.
  • System tests:
    • All manifests.
    • Fields files.

All other non-development files are marked as not covered.

Fixes #1624.
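
For illustration only, here is a minimal Go sketch of the best-effort idea described above: walk the package, skip development files under _dev, and mark every remaining file as covered or uncovered depending on what the executed test type touches. The names (fileCoverage, bestEffortCoverage, the predicate) are hypothetical and not the actual elastic-package API.

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	// fileCoverage is a simplified stand-in for an entry in a coverage report:
	// a package file plus a flag indicating whether the executed test type
	// exercised it.
	type fileCoverage struct {
		Path    string
		Covered bool
	}

	// bestEffortCoverage walks a package root, skips files under _dev
	// directories, and marks each remaining file according to the predicate.
	// Files the test type does not touch still appear in the report, as
	// uncovered, so every executed test type leaves a trace.
	func bestEffortCoverage(rootPath string, coveredByTest func(path string) bool) ([]fileCoverage, error) {
		var entries []fileCoverage
		err := filepath.WalkDir(rootPath, func(path string, d fs.DirEntry, err error) error {
			if err != nil {
				return err
			}
			if d.IsDir() {
				if d.Name() == "_dev" {
					return filepath.SkipDir // development files are ignored
				}
				return nil
			}
			entries = append(entries, fileCoverage{Path: path, Covered: coveredByTest(path)})
			return nil
		})
		return entries, err
	}

	func main() {
		report, err := bestEffortCoverage("./packages/example", func(path string) bool {
			// Example predicate resembling the system test case above:
			// manifests and fields files count as covered.
			return strings.HasSuffix(path, "manifest.yml") || strings.Contains(path, "/fields/")
		})
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, f := range report {
			fmt.Printf("%s covered=%v\n", f.Path, f.Covered)
		}
	}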

@jsoriano jsoriano self-assigned this Jun 18, 2024
@jsoriano (Member Author):

test integrations

@elasticmachine (Collaborator):

Created or updated PR in integrations repository to test this version. Check elastic/integrations#10273


@@ -24,7 +24,6 @@ func setupCheckCommand() *cobraext.Command {
		Args: cobra.NoArgs,
		RunE: func(cmd *cobra.Command, args []string) error {
			err := cobraext.ComposeCommands(args,
				setupFormatCommand(),
@jsoriano (Member Author):

We can make this change in a different PR: #1936


@jsoriano (Member Author) commented Jul 1, 2024

Ok, it is working now. I will open it for review. There is a working report at https://sonar.elastic.dev/component_measures?metric=new_conditions_to_cover&view=list&pullRequest=10273&id=elastic_integrations_AYu5LCaKQZlFqhqWIrk_

Perhaps we can review the files added to the coverage here; we will also need to review the list of files ignored by SonarQube in the integrations repository, but the latter will be done in a different PR.

@jsoriano jsoriano marked this pull request as ready for review July 1, 2024 15:25
@jsoriano jsoriano requested a review from a team July 2, 2024 08:43
internal/testrunner/testrunner.go (resolved)
internal/testrunner/coveragereport.go (resolved)
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			report, err := createCoverageReport(tt.rootPath, tt.packageName, tt.packageType, tt.testType, tt.results, tt.coverageFormat, tt.timestamp)
Contributor:

Is the timestamp not mandatory for coverage reports using Cobertura?

@jsoriano (Member Author):

Maybe, but no coverage report is generated by this method now; it uses the coverage already present in the results, so the timestamp is not used anymore. This is why I removed the parameter.


// GenerateBasePackageCoverageReport generates a coverage report where all files under the root path are
// marked as not covered. It ignores files under _dev directories.
func GenerateBasePackageCoverageReport(pkgName, rootPath, format string) (CoverageReport, error) {
Contributor:

Could this function also solve this issue for pipeline tests? #1624

If all files present in the package are marked as uncovered, there will be coverage (zero) for those files without tests.

Just one concern, also about pipelines: if there is a coverage report created by a pipeline test in some package, with this change there would be lines marked as uncovered (where previously there was no information), which could cause a decrease in the coverage percentage. Would that be an issue?

@jsoriano (Member Author):

> Could this function also solve this issue for pipeline tests? #1624

Probably yes. Tests will generate coverage for all files now, even if it is empty.

> If all files present in the package are marked as uncovered, there will be coverage (zero) for those files without tests.

Correct, if there are no tests, there is no coverage.

> Just one concern, also about pipelines: if there is a coverage report created by a pipeline test in some package, with this change there would be lines marked as uncovered (where previously there was no information), which could cause a decrease in the coverage percentage. Would that be an issue?

Yes, there can be a decrease in coverage in several cases. I wouldn't consider this an issue. Before, the coverage percentage was higher just because we were not reporting coverage for many files. We will probably revisit which files we mark as covered or uncovered, and how.
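
A small hypothetical sketch of the trade-off being discussed (these are not the actual CoverageReport types): overlaying the coverage observed by tests on top of a base report, where every file starts as uncovered, lowers the percentage for packages that previously reported only the tested files.

	package main

	import "fmt"

	// mergeCoverage overlays files covered by tests on top of a base report in
	// which every non-development file starts as uncovered.
	func mergeCoverage(base map[string]bool, covered []string) map[string]bool {
		merged := make(map[string]bool, len(base))
		for path, c := range base {
			merged[path] = c
		}
		for _, path := range covered {
			merged[path] = true
		}
		return merged
	}

	// percentCovered returns the covered percentage of a report.
	func percentCovered(report map[string]bool) float64 {
		if len(report) == 0 {
			return 0
		}
		covered := 0
		for _, c := range report {
			if c {
				covered++
			}
		}
		return 100 * float64(covered) / float64(len(report))
	}

	func main() {
		// Previously only the files touched by tests were reported.
		old := map[string]bool{"manifest.yml": true, "fields/base-fields.yml": true}
		fmt.Printf("old report: %.0f%%\n", percentCovered(old)) // 100%

		// With a base report, files without tests show up as uncovered, so the
		// percentage can decrease even though nothing got worse.
		base := map[string]bool{
			"manifest.yml":                false,
			"fields/base-fields.yml":      false,
			"agent/stream/stream.yml.hbs": false,
			"sample_event.json":           false,
		}
		merged := mergeCoverage(base, []string{"manifest.yml", "fields/base-fields.yml"})
		fmt.Printf("merged report: %.0f%%\n", percentCovered(merged)) // 50%
	}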

-	err := fmt.Errorf("verifying test result failed: %w", err)
-	tr.ErrorMsg = err.Error()
-	return tr, nil
+	results, _ := rc.WithErrorf("verifying test result failed: %w", err)
Contributor:

This was previously a failure; that's why the false-positive test packages are failing.

Currently, the rendered XML shows this:

	<error>verifying test result failed: test case failed: Expected results are different from actual ones</error>

but the test is expecting a <failure>:

	<failure>test case failed: Expected results are different from actual ones:.*</failure>

It looks like this block should be kept:

	if e, ok := err.(testrunner.ErrTestCaseFailed); ok {
		tr.FailureMsg = e.Error()
		tr.FailureDetails = e.Details
		return tr, nil
	}

in order to show a failure at this check?

	if tcf, ok := err.(ErrTestCaseFailed); ok {

@jsoriano (Member Author):

This is interesting, because what we are doing now is quite similar: WithErrorf includes similar logic that should be setting the failure fields:

        var tcf *ErrTestCaseFailed
        if errors.As(err, &tcf) {
                rc.FailureMsg += tcf.Reason
                rc.FailureDetails += tcf.Details
                return []TestResult{rc.TestResult}, nil
        }

Taking a look...

@jsoriano (Member Author):

The condition was incorrect: the variable doesn't have to be a pointer. Apart from that, I adjusted when the prefix "test case failed: " is expected.
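
For reference, a self-contained sketch of the errors.As pitfall mentioned here, with a simplified stand-in for ErrTestCaseFailed (the real type lives in the testrunner package and carries more fields):

	package main

	import (
		"errors"
		"fmt"
	)

	// ErrTestCaseFailed mimics the shape of the error discussed above: a struct
	// whose value (not pointer) implements the error interface.
	type ErrTestCaseFailed struct {
		Reason  string
		Details string
	}

	func (e ErrTestCaseFailed) Error() string { return "test case failed: " + e.Reason }

	func main() {
		err := fmt.Errorf("verifying test result failed: %w",
			ErrTestCaseFailed{Reason: "Expected results are different from actual ones"})

		// Incorrect: the chain wraps an ErrTestCaseFailed value, not a
		// *ErrTestCaseFailed, so this never matches and the result is reported
		// as an error instead of a failure.
		var ptr *ErrTestCaseFailed
		fmt.Println("pointer target matches:", errors.As(err, &ptr)) // false

		// Correct: declare a value of the type and pass its address.
		var tcf ErrTestCaseFailed
		if errors.As(err, &tcf) {
			fmt.Println("failure:", tcf.Reason) // matches
		}
	}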

@elasticmachine (Collaborator) commented Jul 2, 2024

💛 Build succeeded, but was flaky

Failed CI Steps

History

cc @jsoriano

@jsoriano jsoriano requested a review from mrodm July 2, 2024 18:11
@mrodm (Contributor) left a comment:

Nice!
This is a great improvement for the coverage reports 💪

Development

Successfully merging this pull request may close these issues:

  • Coverage for pipeline tests are not generated for all data streams in some scenarios