Change all xfail decorated tests currently xpassing so that they all fail #4516
Comments
@ricardoV94 @michaelosthege I created an issue for changing all the xfail decorated tests currently xpassing.
@DRabbit17 over in #4512 I'm adding this section to
My PR targets the if you want to resolve it for the
@michaelosthege thanks for the heads up. I thought that Is it actually not the case and it's better to focus on
@DRabbit17 thanks for asking - we should update that article with a big Please refer to https://github.com/pymc-devs/pymc3/wiki/Timeline which is I will also update again to reflect that we now have a
@DRabbit17 correction: with
@michaelosthege makes sense, thanks for clarifying and for pointing me to the timeline page, that's handy. I tried to investigate xpasses on Also, as far as I understood, issues in vNext
@DRabbit17 excellent question. In addition to the vNext
Thanks for sharing a list of suggestions, that's very handy. I spent some time getting a bit more context on the first point you mentioned, replacing the
Sorry that I went ahead with that first item - it was dealt with in #4549 already. I hope you can at least inspect the diff there to learn a thing or two anyway.
No prob, thanks for sharing the PR. I will take a look at the last bullet point then and, if still free, work on that.
Description of the problem
Some tests are expected to fail when certain conditions are met. Since this is expected, these tests are marked as expected to fail with the `@pytest.mark.xfail` decorator.

Some of the xfail tests do not always run under the same conditions. The conditions may change from run to run and, from time to time, produce unexpected passes rather than failures.

The `check_logcdf` check for the normal distribution is an example affected by this behavior (although in the current version of the codebase it is not xfailed, it will be). The non-deterministic component in this case is introduced by sampling the points to be tested from the distribution's value domain (`n_samples` set to 100).

https://github.com/pymc-devs/pymc3/blob/d248a0e94d67f12c67342c54f90f9b72752e1a1b/pymc3/tests/test_distributions.py#L892-L906
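To make the failure mode concrete, here is a minimal, hypothetical sketch (not the actual pymc3 test) of how randomly sampled input points can turn an xfail into an unexpected pass: the assertion only trips when the draw happens to hit the problematic region of the domain, so some runs fail as expected while others XPASS.

```python
import numpy as np
import pytest
import scipy.stats as st


@pytest.mark.xfail(reason="naive log(cdf(x)) underflows to -inf far in the tail")
def test_normal_logcdf_on_random_points():
    # Only 100 random points are checked; whether any of them lands far enough
    # in the left tail for norm.cdf to underflow to 0 (making log(0) = -inf)
    # depends on the draw, so the test sometimes fails and sometimes xpasses.
    points = np.random.uniform(-40, 40, size=100)
    naive_logcdf = np.log(st.norm.cdf(points))
    assert np.all(np.isfinite(naive_logcdf))
```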
Potential Solution
Change all xfail decorated tests so that they always fail, no matter what conditions are in place. Also change all `@pytest.mark.xfail` tests to `@pytest.mark.xfail(strict=True)`, so that any xfail test that unexpectedly passes results in an error in the test run.

A set of tests that should probably be checked are all the ones that use any sampling, i.e. any test running `check_logcdf`, `check_logp`, `check_selfconsistency_discrete_logcdf`, or `check_int_to_1` (and any other test using these functions). It is possible to check whether these tests ever pass by setting `n_samples=-1`, which runs the test for all possible cases in the domain.
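As a rough sketch of the proposed change (the test name below is hypothetical; `strict=True` and the `xfail_strict` ini option are standard pytest features):

```python
import pytest


# With strict=True, an unexpected pass (XPASS) is reported as a failure of the
# test run instead of being silently accepted.
@pytest.mark.xfail(strict=True, reason="known issue, see #4516")
def test_known_precision_issue():
    ...


# Alternatively, strict mode can be enabled for the whole test suite via
# pytest's xfail_strict option, e.g. in setup.cfg:
#
#   [tool:pytest]
#   xfail_strict = true
```

Either way, the `check_*` helpers above can still be run locally with `n_samples=-1` to see whether a given xfailed check ever passes over the full domain.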