Meta tests #5
Given the complexity of some of the tests, it would be a good idea to test that they are actually testing what we expect. This entails two things.

I don't actually know how to do the first one. Can pytest be used as a library, to run just a single test?

For the second one, I don't know if we can actually do it. It would amount to writing a module that actually conforms to the spec. Such an endeavor might be out of scope for this project.
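For reference, pytest can be invoked programmatically with pytest.main(), which accepts the same arguments as the command line, including a single test node ID. A minimal sketch (the node ID below is hypothetical; substitute a real test from this suite):

```python
import pytest

def run_single_test(node_id: str) -> bool:
    """Run one test in-process and report whether it passed."""
    # pytest.main takes the usual command-line arguments and returns an
    # exit code; 0 means everything that was selected passed.
    exit_code = pytest.main(["-q", node_id])
    return exit_code == 0

if __name__ == "__main__":
    # Hypothetical node ID -- replace with an actual test in the suite.
    print(run_single_test("array_api_tests/test_signatures.py::test_example"))
```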
This sounds like a good idea. I've had this issue a lot in the past - how can I be sure that we didn't silently stop running some tests? Could the first one be achieved by putting … in a separate …? Maybe a bit too much work for just a meta-test, though. However, it would also give us an importable module with all the right function signatures, which I think we may need for other tests and for things like programmatically comparing the objects in the API standard vs in, say, …
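As an illustration of the signature-comparison idea, here is a rough sketch (not anything that exists in the repo) that checks a hypothetical stub module of the standard against an implementation using inspect:

```python
import inspect

def compare_signatures(stubs, impl):
    """Report spec functions that are missing from, or differ in, an implementation."""
    problems = []
    for name, stub_func in inspect.getmembers(stubs, inspect.isfunction):
        impl_func = getattr(impl, name, None)
        if impl_func is None:
            problems.append(f"{name}: missing from implementation")
            continue
        try:
            impl_sig = inspect.signature(impl_func)
        except (TypeError, ValueError):
            continue  # some builtins expose no introspectable signature
        if inspect.signature(stub_func) != impl_sig:
            problems.append(f"{name}: signature differs")
    return problems

# Hypothetical usage: spec_stubs would be the importable module of correct
# signatures mentioned above.
# import spec_stubs, numpy
# print(compare_signatures(spec_stubs, numpy))
```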
Ah, I just see you already have this in …
A perhaps much simpler thing to test would be that none of the tests error. They should either pass or fail. Need to figure out how to do this in pytest.
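One possible way to do this in pytest (just a sketch, using pytest's own pytester fixture, which has to be enabled via pytest_plugins) is a meta-test that runs the suite and asserts that no test finished with an error outcome; the inline test file below is only a placeholder for the suite's real tests:

```python
# Requires, e.g. in conftest.py:  pytest_plugins = ["pytester"]

def test_no_tests_error(pytester):
    # Write a throwaway test file into pytester's temporary directory.
    # In the real meta-test this would point at the suite's actual tests.
    pytester.makepyfile(
        """
        def test_passes():
            assert True

        def test_fails():
            assert False
        """
    )
    result = pytester.runpytest()
    outcomes = result.parseoutcomes()
    # Passing and failing are both acceptable; an "errors" count means some
    # test raised outside of an assertion.
    assert outcomes.get("errors", 0) == 0
```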
We could require all test failures to be AssertionErrors with xfail (https://docs.pytest.org/en/latest/reference.html#pytest-mark-xfail-ref). That would require defensively calling every API function in a way that converts exceptions into AssertionErrors, but would catch errors in the testing code itself.
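A minimal sketch of that combination (numpy stands in here for the array module under test, and the asarray call is only an example of "an API function"):

```python
import pytest
import numpy as xp  # stand-in for the array module under test

def call_defensively(func, *args, **kwargs):
    """Call an API function, converting any exception into an AssertionError."""
    try:
        return func(*args, **kwargs)
    except Exception as exc:
        raise AssertionError(f"{getattr(func, '__name__', func)} raised {exc!r}") from exc

# raises=AssertionError means an assertion failure is tolerated as xfail,
# while any other exception type is still reported as a regular failure,
# flagging bugs in the test code itself.
@pytest.mark.xfail(raises=AssertionError)
def test_asarray_shape():
    x = call_defensively(xp.asarray, [1, 2, 3])
    assert x.shape == (3,)
```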
I feel the test suite now does well with …, so I'll close this.