
Meta tests #5


Closed
asmeurer opened this issue Sep 11, 2020 · 5 comments

@asmeurer (Member) commented Sep 11, 2020

Given the complexity of some of the tests, it would be a good idea to test that they are actually testing what we expect. This entails two things:

  • Faking out modules with all the known errors and making sure that the corresponding test fails as expected
  • Faking out a module that doesn't give any errors

I don't actually know how to do the first one. Can pytest be used as a library, to run just a single test?
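For reference, pytest can be driven programmatically through pytest.main(), which accepts the same arguments as the command line; a minimal sketch, with a placeholder test node ID:

import pytest

# pytest.main() takes a list of CLI arguments and returns an exit code.
# A single test can be selected with the usual node-id syntax; the file and
# test names below are placeholders.
exit_code = pytest.main(["array_api_tests/test_creation_functions.py::test_arange", "-q"])
assert exit_code in (pytest.ExitCode.OK, pytest.ExitCode.TESTS_FAILED)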

For the second one, I don't know if we can actually do it. It would amount to writing a module that actually conforms to the spec. Such an endeavor might be out of scope for this project.

@rgommers (Member)
This sounds like a good idea. I've had this issue a lot in the past - how can I be sure that we haven't silently stopped running some tests?

Could the first one be achieved by putting

class MyCustomException(Exception):
    pass

def arange(*args, **kwargs):   # add the expected signature
    raise MyCustomException

def linspace(*args, **kwargs):
    raise MyCustomException

etc.

in a separate dummy_array_module file, specifying that as the module under test, running the suite in a separate process, and then parsing the summary of the test run to check for 0 failures, * skipped, N errors, with N equal to the number of MyCustomExceptions raised?

Maybe a bit too much work for just a meta-test, though. However, it would also give us an importable module with all the right function signatures, which I think we may need for other tests and for things like programmatically comparing the objects in the API standard vs. those in, say, numpy.
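A rough sketch of the separate-process step, assuming the suite picks up the module under test from an environment variable (the variable name and paths here are assumptions):

import os
import subprocess
import sys

# Run the suite in a fresh process against the dummy module and grab the
# final summary line, whose counts can then be compared against the number
# of stubbed functions.
env = dict(os.environ, ARRAY_API_TESTS_MODULE="dummy_array_module")
proc = subprocess.run(
    [sys.executable, "-m", "pytest", "-q", "array_api_tests/"],
    env=env, capture_output=True, text=True,
)
summary = proc.stdout.strip().splitlines()[-1]
print(summary)  # e.g. "3 failed, 2 skipped in 1.23s"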

@rgommers (Member)
Ah, I see you already have this in functions_stubs/. You could, in the meta test, copy that to a tmpdir and change every pass to an exception in the copied files on disk?
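Something along these lines could work for the tmpdir step (a sketch only; the stubs' location and the rewriting rule are assumptions):

import shutil
from pathlib import Path

def make_raising_stubs(tmp_path):
    # Copy the stub package and rewrite every function body from "pass" to a
    # raise, so that any test which actually calls the stub fails loudly.
    src = Path("array_api_tests/functions_stubs")  # assumed location of the stubs
    dst = tmp_path / "raising_stubs"
    shutil.copytree(src, dst)
    for f in dst.rglob("*.py"):
        body = f.read_text().replace("    pass", "    raise MyCustomException")
        f.write_text("class MyCustomException(Exception):\n    pass\n\n" + body)
    return dst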

@asmeurer (Member, Author)
A perhaps much simpler thing to test would be that none of the tests error. They should either pass or fail. Need to figure out how to do this in pytest.
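One way to check this is pytest's pytester fixture, which can run a suite in a subprocess and report the outcome counts; a sketch, assuming pytest_plugins = ["pytester"] is enabled in conftest.py and that the suite's location is as shown:

from pathlib import Path

def test_no_test_errors(pytester):
    # Run the real suite in a subprocess; parseoutcomes() returns the counts
    # from the final summary line ("passed", "failed", "errors", ...).
    suite = Path(__file__).parent / "array_api_tests"  # assumed layout
    result = pytester.runpytest_subprocess(str(suite))
    outcomes = result.parseoutcomes()
    assert "error" not in outcomes and "errors" not in outcomes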

@asmeurer (Member, Author)
We could require all test failures to be AssertionErrors with xfail (https://docs.pytest.org/en/latest/reference.html#pytest-mark-xfail-ref). That would require defensively calling every API function in a way that converts exceptions into AssertionErrors, but it would catch errors in the testing code itself.
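A defensive wrapper along those lines might look like this (the helper name and usage are illustrative, not something the suite defines):

from contextlib import contextmanager

@contextmanager
def raises_as_assertion(func_name):
    # Re-raise anything the array module throws as an AssertionError, so that
    # only genuine bugs in the test code itself surface as non-assertion errors.
    try:
        yield
    except AssertionError:
        raise
    except Exception as e:
        raise AssertionError(f"{func_name} raised {type(e).__name__}: {e}") from e

# usage inside a test, with xp being the module under test:
#     with raises_as_assertion("arange"):
#         out = xp.arange(0, 10)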

@honno (Member) commented Dec 22, 2021

I feel the test suite now does well with

  1. meta tests for the more awkward utils we use
  2. sanity checks with explanatory messages when test logic gets complicated

so I'll close this.

honno closed this as completed Dec 22, 2021