[BUG] test_complexity fails on MacOS Monterey 12.0.1 #149

Closed
tuelwer opened this issue Jan 11, 2022 · 3 comments
tuelwer (Contributor) commented Jan 11, 2022

Hi, one of the tests failed on my machine. What is the problem here?

This issue is part of my JOSS review, openjournals/joss-reviews#3994.

Reproducing code example:

pytest test/property_tests/test_optimizers_complexity.py

Error message:

==================================================================== test session starts =====================================================================
platform darwin -- Python 3.7.10, pytest-5.4.3, py-1.8.0, pluggy-0.13.1
rootdir: /Users/tobi/sciebo/reviews/JOSS/pysindy/pysindy
plugins: hypothesis-6.35.0, lazy-fixture-0.6.3, cov-3.0.0
collected 5 items                                                                                                                                            

test/property_tests/test_optimizers_complexity.py F....                                                                                                [100%]

========================================================================== FAILURES ==========================================================================
______________________________________________________________________ test_complexity _______________________________________________________________________

    @given(
>       n_samples=integers(min_value=100, max_value=10000),
        n_features=integers(min_value=10, max_value=30),
        n_informative=integers(min_value=3, max_value=9),
        random_state=integers(min_value=0, max_value=2 ** 32 - 1),
    )
    @settings(max_examples=20, deadline=None)
    def test_complexity(n_samples, n_features, n_informative, random_state):
        """Behaviour test for complexity.
    
        We assume that more regularized optimizers are less complex on the same dataset.
E       hypothesis.errors.FailedHealthCheck: Examples routinely exceeded the max allowable size. (20 examples overran while generating 9 valid ones). Generating examples this large will usually lead to bad results. You could try setting max_size parameters on your collections and turning max_leaves down on recursive() calls.
E       See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.data_too_large to the suppress_health_check settings for this test.

test/property_tests/test_optimizers_complexity.py:20: FailedHealthCheck
------------------------------------------------------------------------- Hypothesis -------------------------------------------------------------------------
You can add @seed(52255670341713864011347036311399653498) to this test or run pytest with --hypothesis-seed=52255670341713864011347036311399653498 to reproduce this failure.
================================================================== short test summary info ===================================================================
FAILED test/property_tests/test_optimizers_complexity.py::test_complexity - hypothesis.errors.FailedHealthCheck: Examples routinely exceeded the max allowa...
================================================================ 1 failed, 4 passed in 2.40s =================================================================

PySINDy/Python version information:

>>> import hypothesis
>>> print(hypothesis.__version__)
6.35.0
>>> import sys, pysindy; print(pysindy.__version__, sys.version)
1.7.dev2+gba56ca8 3.7.10 (default, Feb 26 2021, 10:16:00) 
[Clang 10.0.0 ]
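
As the health-check message in the traceback suggests, this particular check can also be suppressed for just this test. Below is a minimal sketch of that workaround, assuming the strategies stay exactly as shown in the traceback above; it is the option Hypothesis itself points to, not a proposed fix for the underlying slowness:

from hypothesis import HealthCheck, given, settings
from hypothesis.strategies import integers

@given(
    n_samples=integers(min_value=100, max_value=10000),
    n_features=integers(min_value=10, max_value=30),
    n_informative=integers(min_value=3, max_value=9),
    random_state=integers(min_value=0, max_value=2 ** 32 - 1),
)
@settings(
    max_examples=20,
    deadline=None,
    # Suppress only the data_too_large health check, as the error message proposes.
    suppress_health_check=[HealthCheck.data_too_large],
)
def test_complexity(n_samples, n_features, n_informative, random_state):
    ...  # test body unchanged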
akaptano (Collaborator) commented:

I'm not sure. I've actually seen this error myself, but it's inconsistent: sometimes the test passes when I run it, and other times it fails with the same health-check error. I'll look into it, and thanks for your time!

akaptano (Collaborator) commented:

This appears to be a dumb bug caused by an old print statement in the file. I deleted it and now cannot reproduce your error, even when running pytest with the same seed ("pytest test/property_tests/test_optimizers_complexity.py --hypothesis-seed=52255670341713864011347036311399653498"). I also added @settings(max_examples=20, deadline=None) to the test_complexity_parameter function, since I was occasionally seeing inconsistent timing there as well.
I'll push this change in a moment; please go ahead and close this issue if it fixes the problem on your machine. Thanks!
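
For reference, the settings change described above would look roughly like the following; the @given strategy shown on test_complexity_parameter is a placeholder for illustration, not copied from the repository:

from hypothesis import given, settings
from hypothesis.strategies import integers

@given(random_state=integers(min_value=0, max_value=2 ** 32 - 1))  # placeholder strategy
@settings(max_examples=20, deadline=None)  # cap example count, disable the per-example deadline
def test_complexity_parameter(random_state):
    ...  # actual pysindy test body omitted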

akaptano reopened this Jan 13, 2022
tuelwer (Contributor, Author) commented Jan 17, 2022

Works now! Thanks for fixing!

tuelwer closed this as completed Jan 17, 2022