Support selecting tests by labels #108828

Open
serhiy-storchaka opened this issue Sep 2, 2023 · 4 comments
Labels
tests (Tests in the Lib/test dir), type-feature (A feature request or enhancement)

Comments

serhiy-storchaka commented Sep 2, 2023

Feature or enhancement

Has this already been discussed elsewhere?

No response given

Links to previous discussion of this feature:

No response

Proposal:

I propose adding support in libregrtest for selecting tests not only by name, but also by label. Some labels are set automatically when a test is decorated with the corresponding decorator; others can be set manually by the user.

For example, "requires_cpu" will mark tests decorated with @requires_resouce('cpu'), "impl_detail_cpython" will mark tests decorated with @cpython_only, "bigmemtest" will mark tests decorated with @bigmemtest(...). Tests related to pickling can be manually marked with label "pickletest".

Two options are added: --label=NAME to include only tests with the specified label, and --no-label=NAME to exclude tests with the specified label.

You can list all test cases with the specified label using the --list-cases option, and you can run only the tests with (or without) a specified label, as in the sketch below.
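
A hedged sketch of the proposed command line usage (--label and --no-label are part of this proposal, not existing libregrtest options; the label names are illustrative):

$ ./python -m test --label requires_cpu --list-cases   # list the test cases carrying the label
$ ./python -m test --no-label bigmemtest               # run everything except bigmem tests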

A mark() function is added in the test.support module for manually marking tests:

@test.support.mark('pickletest')
def test_pickling(self):
    ...

skipIf() and skipUnless() functions are also added, with an additional argument specifying the label. This makes it easier to create custom decorators, e.g.

requires_foo = test.support.skipUnless(has_foo, 'requires Foo', label='requires_foo')

If you simply decorate the test method or class, you can just combine decorators:

@unittest.skipUnless(has_foo, 'requires Foo')
@test.support.mark('requires_foo')
def test_with_foo(self):
    ...

Adding the label to the test class is equivalent to adding the label to every method of that class.
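For example, a class-level mark under the proposed API might look like this (the class and method names are illustrative):

@test.support.mark('pickletest')
class PickleCompatTests(unittest.TestCase):
    def test_roundtrip(self):
        # inherits the 'pickletest' label from the class decorator
        ...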

Currently this only works with decorated test classes and test methods. If a whole module is skipped via requires('gui'), the classes and tests in that module cannot be found by the "requires_gui" label. If a test requires, for example, the "network" resource but calls requires('network') instead of being decorated with @requires_resource('network'), it cannot be found by the "requires_network" label.

This is an experimental feature and may change in the future (for example, to support labels with values, or glob patterns for labels). I think some version of this feature will eventually be added to unittest.

Linked PRs

serhiy-storchaka added the type-feature (A feature request or enhancement) and tests (Tests in the Lib/test dir) labels on Sep 2, 2023
serhiy-storchaka added a commit to serhiy-storchaka/cpython that referenced this issue Sep 2, 2023
@serhiy-storchaka (Member, Author)

Added support for labels on modules. Now --label requires_gui finds a lot of Tkinter and IDLE tests.

vstinner commented Sep 4, 2023

I don't understand the use cases of this feature. Can you please give some concrete examples?

I'm not sure that I would use such a feature. Is it to count the tests with a specific label to produce statistics? Or to run only the tests with a given label?

For requires_cpu tests, we have buildbots running them. If we have no automated infra running a test frequently, maybe it's better to give up and even remove the test.

At least, I managed to use it :-)

$ ./python -m test -v test_tarfile --list-cases|wc -l
588
$ ./python -m test -v test_tarfile --label requires_gzip --list-cases|wc -l
98

So I see that the majority of test_tarfile tests don't use gzip, and 98 of them need gzip.

@serhiy-storchaka (Member, Author)

It is both for counting and for running.

There was a complaint that we may skip too many tests without knowing how many tests were skipped and for what reasons. Your counting feature answered part of this question. This issue is another answer, from a different angle. I myself wanted to see how many tests would be marked with requires_resource('walltime'), and to run them sequentially, before merging #108480. If we optimize tests in the future, we may find that some of them are fast enough to no longer need such decoration.

Another example -- bigmem tests. They are usually skipped on the buildbots, but it is useful to run them manually from time to time. They cannot be run in parallel; you need to run them sequentially, so it takes a long time to run them together with other tests, and their skip messages get lost in reports about the successful passes of other tests. This feature allows you to: 1) get a list of all bigmem tests, which are scattered across many files, and run them separately; 2) run only the bigmem tests directly (unfortunately there is still a lot of noise and overhead added). See the sketch below.
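
A hedged sketch of what that could look like (assuming the proposed --label option; -M is the existing option that sets the bigmem memory limit):

$ ./python -m test -M 8g --label bigmemtest --list-cases   # collect the bigmem tests scattered across many files
$ ./python -m test -M 8g --label bigmemtest                # run only the bigmem tests, sequentially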

Another example -- when working on pickle changes I want to run the pickling-related tests, which are scattered across many files, multiple times. Running all tests takes too much time (well, it is less now, after excluding the slowest tests, but still too much to simply stare at the screen while they run).

Another example -- when working on some general C API changes I wanted to run all C API tests, which are also scattered across many files.

Negative label matching can be used to search for tests to mark manually. For example, --match '*ickle*' --no-label pickletest gives a list of candidates for @mark('pickletest') that do not yet have this decorator, as sketched below.
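
For instance (a sketch assuming the proposed --no-label option; --match and --list-cases already exist):

$ ./python -m test --match '*ickle*' --no-label pickletest --list-cases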

serhiy-storchaka added a commit to serhiy-storchaka/cpython that referenced this issue Sep 11, 2023
@vstinner (Member)

What's the status of this issue?
