
Implement the check-result key and store the original result #3185

Open
psss opened this issue Sep 3, 2024 · 13 comments
Labels: area | check (Test check implementation) · command | tests (tmt tests command) · specification (Metadata specification: core, tests, plans, stories)
Milestone: 1.37


psss commented Sep 3, 2024

Similarly to the result key, it would be good to have a check-result key which would decide how the check results should affect the final result of a test, e.g. respect or ignore them. The original result of the test execution itself should be stored in results.yaml as well so that it can be inspected if needed.

There is an expectation that check results should, by default, affect the final result so that possible problems are revealed, for example, in the Testing Farm viewer, so the default value should probably be respect.
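A usage sketch of the proposal (hypothetical, the key is not implemented; name and values as suggested above):

```yaml
test: ./test.sh
check:
  - how: dmesg
# proposed: decide how check results affect the overall test result;
# 'respect' (the suggested default) lets check failures fail the test,
# 'ignore' keeps the original result and only records the check outcome
check-result: respect
```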

@happz happz added specification Metadata specification (core, tests, plans, stories) command | tests tmt tests command area | check Test check implementation labels Sep 3, 2024
@psss psss added this to the 1.37 milestone Sep 10, 2024
@dennisbrendel

General comment on the way the check works: it calls dmesg -c (or dmesg --read-clear), so it also clears the buffer, which can be unexpected for users. In addition, it is not always reliable, because some tests also clear the buffer during runtime.

Are there concerns about switching over to reading the journal instead, with a fallback to dmesg when no systemd-journald is available?
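The fallback being suggested could be sketched roughly like this (illustrative only, not tmt code; `kernel_log_command` is a made-up name, the journalctl/dmesg options are the standard ones):

```python
# Prefer the journal, which persists kernel messages and is not clobbered
# by a destructive `dmesg --read-clear`, falling back to plain dmesg.
import shutil

def kernel_log_command() -> list[str]:
    """Return a non-destructive command for reading kernel messages."""
    if shutil.which('journalctl'):
        # -k/--dmesg limits the journal to kernel messages of this boot
        return ['journalctl', '--dmesg', '--boot', '--no-pager']
    # note: plain `dmesg`, deliberately avoiding `-c`/`--read-clear`
    return ['dmesg']

print(kernel_log_command()[0])
```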

@martinhoyer

> General comment on the way the check works: it calls dmesg -c (or dmesg --read-clear), so it also clears the buffer, which can be unexpected for users. In addition, it is not always reliable, because some tests also clear the buffer during runtime.
>
> Are there concerns about switching over to reading the journal instead, with a fallback to dmesg when no systemd-journald is available?

Good point, thanks! Checks are an optional step and not necessarily dmesg-based. For example, a "journal" check plugin could be implemented.

That said, it was discussed before. I'd like to look into incorporating the coredumpctl tool as a test check.
In any case, this thread is not the right place for that discussion :)


martinhoyer commented Sep 27, 2024

@psss I'm wondering whether it wouldn't make sense to have an ignore option per check instead. It could look like:

```yaml
check:
  - how: avc
    enable: false
  - how: dmesg
    ignore: true
```

with a corresponding change to the `Check` class:

```diff
 class Check(...):
     how: str
     enabled: bool = field(
         default=True,
         is_flag=True,
         help='Whether the check is enabled or not.')
+    ignore: bool = field(
+        default=False,
+        is_flag=True,
+        help='When true, check failure does not affect result of the test.')
```

wdyt?
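For illustration, the way such an ignore flag could feed into the overall result might look like this (a minimal sketch with made-up names, not tmt's actual API):

```python
# Hypothetical: combine the original test result with its check results,
# skipping failures of checks marked `ignore`. Names are illustrative.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    outcome: str          # 'pass' or 'fail'
    ignore: bool = False  # when True, a failure is recorded but not counted

def overall_result(test_outcome: str, checks: list[CheckResult]) -> str:
    """Derive the final result from the test outcome and its checks."""
    for check in checks:
        if check.outcome == 'fail' and not check.ignore:
            return 'fail'   # a respected check failure overrides the test
    return test_outcome     # ignored failures leave the original result

checks = [
    CheckResult('avc', 'pass'),
    CheckResult('dmesg', 'fail', ignore=True),
]
print(overall_result('pass', checks))  # ignored dmesg failure: stays 'pass'
```

The original outcome of every check would still land in results.yaml; the flag only changes how it is counted.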


happz commented Sep 27, 2024

@martinhoyer Would it make sense to use the result key, similarly to how it is used in test metadata? I.e., if I wish to (temporarily) ignore a check, I'd set its result to pass, and so on. The benefit would be fewer new keys and concepts, and this ignore seems like a special case of result: https://tmt.readthedocs.io/en/stable/spec/tests.html#result

@martinhoyer

@happz hmm, not sure. Afaik we don't want to change the result of the check itself.
Another idea is to allow check.enabled to be true/false/ignore, which could be nice from the user's perspective, but I assume quite funky under the hood.

Let's see if @psss wants check-result or is open to this per-check setting idea in the first place :)


happz commented Sep 28, 2024

> @happz hmm, not sure. Afaik we don't want to change the result of the check itself.

True, wrong place; the check-result test key is the right one.

> Another idea is to allow check.enabled to be true/false/ignore, which could be nice from the user's perspective, but I assume quite funky under the hood.

That would be ugly, not just in the implementation but also in the specification.

> Let's see if @psss wants check-result or is open to this per-check setting idea in the first place :)

Per-check control can be added later, I believe, if needed.

@martinhoyer

> Per-check control can be added later, I believe, if needed.

I was thinking more about doing one or the other, i.e. either specifying it as part of the check object(s), or the original idea. I'm having a hard time visualizing how it would work with subresults, and with users who would just like a simple adjust rule to affect all the checks... so if you say it's best not to worry about per-check for now, I'm happy.
btw: any other ideas for the key name?


happz commented Sep 30, 2024

> > Per-check control can be added later, I believe, if needed.
>
> I was thinking more about doing one or the other, i.e. either specifying it as part of the check object(s), or the original idea. I'm having a hard time visualizing how it would work with subresults, and with users who would just like a simple adjust rule to affect all the checks...

That's why I would start with a single check-result test key, telling tmt how to interpret all test checks. Later we can add a per-check result key (for example, to match the test-level result key) telling tmt how to interpret that single check: "I want all checks to affect the outcome of the testing, except this dmesg check which I want only as an FYI, so I set it to result: info". It seems to me this per-check granularity could be added later, if needed; I don't see an immediate blocker we'd be creating now. Yes, it would make the interpretation more complicated, but we'll manage...

> so if you say it's best not to worry about per-check for now, I'm happy.

I for one would start with the simpler case; let's hear from others.

> btw: any other ideas for the key name?

check-result sounds good to me.

@martinhoyer

Thanks @happz! I'll work on it today.


psss commented Sep 30, 2024

I'd agree with @happz that introducing a new key like ignore, or mixing true/false/ignore under the enabled key, would not be a good idea. I would always suggest keeping things as consistent as possible.

Which leads to the interesting idea of not introducing a new key name check-result, but rather keeping result with basically the same (or a very similar) meaning, just putting it under the check field:

```yaml
test: test.sh
check:
  - name: avc
    result: xfail
```

This could be useful for covering expected avc failures with xfail, for example. I have to admit that this approach sounds tempting to me. It would be nicely packaged close to the check definition itself, and it would be consistent with the test result key as well.

I also see @happz's point: starting with a single "global" check-result also makes sense, although my feeling is that the use case of completely disabling all checks will not be that common.
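To make the xfail idea concrete, a per-check result key could be interpreted like this (a sketch only; the value names mirror the test-level result specification, the function itself is made up):

```python
# Illustrative only: map a raw check outcome through a `result` key,
# mirroring the test-level `result` semantics. Not tmt's actual code.
def interpret(outcome: str, result_key: str = 'respect') -> str:
    """Return the reported outcome after applying the interpretation key."""
    if result_key == 'xfail':
        # expected failure: fail becomes pass, and vice versa
        return {'fail': 'pass', 'pass': 'fail'}.get(outcome, outcome)
    if result_key in ('pass', 'info', 'warn', 'error', 'fail'):
        return result_key  # hard-coded outcome, whatever really happened
    return outcome         # 'respect': keep the original outcome

print(interpret('fail', 'xfail'))  # an expected AVC denial turns into 'pass'
```

The original, uninterpreted outcome would still be stored alongside, as requested in the issue description.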

@dennisbrendel

I would find it useful to have the setting per test. I am not an active tmt user currently, but I see our plans often have a mixture of tests where some would be expected to fail for some checks while others must not.

@happz

happz commented Oct 1, 2024

> I would find it useful to have the setting per test. I am not an active tmt user currently, but I see our plans often have a mixture of tests where some would be expected to fail for some checks while others must not.

That should be supported in any case; the debate above is mostly about whether the control should be per test, or even more granular, per test check. The most granular way, "per test check", seems to be getting the most support.

@dennisbrendel

> The most granular way, "per test check", seems to be getting the most support.

Yes :-)
