2 tests fail #802

Closed
yurivict opened this issue Aug 9, 2022 · 4 comments

yurivict commented Aug 9, 2022

Description

========================================================================================== FAILURES ==========================================================================================
_________________________________________________________________________ test_itercb_minimizer_class[leastsq-False] _________________________________________________________________________

method = 'leastsq', calc_covar = False

    @pytest.mark.parametrize("calc_covar", calc_covar_options)
    @pytest.mark.parametrize("method", fitmethods)
    def test_itercb_minimizer_class(method, calc_covar):
        """Test the iteration callback for all solvers."""
        if method in ('nelder', 'differential_evolution'):
            pytest.xfail("scalar_minimizers behave differently, but shouldn't!!")
    
        mini = Minimizer(residual, pars, fcn_args=(x, y), iter_cb=per_iteration,
                         calc_covar=calc_covar)
        out = mini.minimize(method=method)
    
>       assert out.nfev == 23
E       assert 21 == 23
E        +  where 21 = <lmfit.minimizer.MinimizerResult object at 0x9816362e0>.nfev

tests/test_itercb.py:76: AssertionError
____________________________________________________________________________________ test_manypeaks_speed ____________________________________________________________________________________

    @pytest.mark.flaky(max_runs=5)
    def test_manypeaks_speed():
        model = None
        t0 = time.time()
        for i in np.arange(500):
            g = Model(gaussian, prefix=f'g{i}')
            if model is None:
                model = g
            else:
                model += g
        t1 = time.time()
        pars = model.make_params()
        t2 = time.time()
        _cpars = deepcopy(pars)  # noqa: F841
        t3 = time.time()
    
        # these are very conservative tests that
        # should be satisfied on nearly any machine
>       assert (t3-t2) < 0.5
E       assert (1660084394.21768 - 1660084393.5360963) < 0.5

tests/test_manypeaks_speed.py:33: AssertionError
====================================================================================== warnings summary ======================================================================================
../../../../../../usr/local/lib/python3.9/site-packages/matplotlib/__init__.py:152
../../../../../../usr/local/lib/python3.9/site-packages/matplotlib/__init__.py:152
../../../../../../usr/local/lib/python3.9/site-packages/matplotlib/__init__.py:152
../../../../../../usr/local/lib/python3.9/site-packages/matplotlib/__init__.py:152
../../../../../../usr/local/lib/python3.9/site-packages/matplotlib/__init__.py:152
  /usr/local/lib/python3.9/site-packages/matplotlib/__init__.py:152: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    if LooseVersion(module.__version__) < minver:

../../../../../../usr/local/lib/python3.9/site-packages/setuptools/_distutils/version.py:346
../../../../../../usr/local/lib/python3.9/site-packages/setuptools/_distutils/version.py:346
../../../../../../usr/local/lib/python3.9/site-packages/setuptools/_distutils/version.py:346
../../../../../../usr/local/lib/python3.9/site-packages/setuptools/_distutils/version.py:346
../../../../../../usr/local/lib/python3.9/site-packages/setuptools/_distutils/version.py:346
  /usr/local/lib/python3.9/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
    other = LooseVersion(other)

tests/test_dual_annealing.py: 3 warnings
tests/test_itercb.py: 2 warnings
tests/test_max_nfev.py: 4 warnings
tests/test_model_saveload.py: 1 warning
  /disk-samsung/freebsd-ports/math/py-lmfit/work-py39/lmfit-1.0.3/lmfit/minimizer.py:2232: DeprecationWarning: dual_annealing argument 'local_search_options' is deprecated in favor of 'minimizer_kwargs'
    ret = scipy_dual_annealing(self.penalty, bounds, **da_kws)

tests/test_lineshapes.py::test_x_float_value[step]
tests/test_lineshapes.py::test_x_float_value[rectangle]
  <__array_function__ internals>:180: DeprecationWarning: Calling nonzero on 0d arrays is deprecated, as it behaves surprisingly. Use `atleast_1d(cond).nonzero()` if the old behavior was intended. If the context of this warning is of the form `arr[nonzero(cond)]`, just use `arr[cond]`.

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
===Flaky Test Report===

test_manypeaks_speed failed (4 runs remaining out of 5).
	<class 'AssertionError'>
	assert (1660084382.1485593 - 1660084381.4555016) < 0.5
	[<TracebackEntry /disk-samsung/freebsd-ports/math/py-lmfit/work-py39/lmfit-1.0.3/tests/test_manypeaks_speed.py:33>]
test_manypeaks_speed failed (3 runs remaining out of 5).
	<class 'AssertionError'>
	assert (1660084385.2885222 - 1660084384.6056588) < 0.5
	[<TracebackEntry /disk-samsung/freebsd-ports/math/py-lmfit/work-py39/lmfit-1.0.3/tests/test_manypeaks_speed.py:33>]
test_manypeaks_speed failed (2 runs remaining out of 5).
	<class 'AssertionError'>
	assert (1660084388.2647245 - 1660084387.5893734) < 0.5
	[<TracebackEntry /disk-samsung/freebsd-ports/math/py-lmfit/work-py39/lmfit-1.0.3/tests/test_manypeaks_speed.py:33>]
test_manypeaks_speed failed (1 runs remaining out of 5).
	<class 'AssertionError'>
	assert (1660084391.2338388 - 1660084390.564811) < 0.5
	[<TracebackEntry /disk-samsung/freebsd-ports/math/py-lmfit/work-py39/lmfit-1.0.3/tests/test_manypeaks_speed.py:33>]
test_manypeaks_speed failed; it passed 0 out of the required 1 times.
	<class 'AssertionError'>
	assert (1660084394.21768 - 1660084393.5360963) < 0.5
	[<TracebackEntry /disk-samsung/freebsd-ports/math/py-lmfit/work-py39/lmfit-1.0.3/tests/test_manypeaks_speed.py:33>]

===End Flaky Test Report===
================================================================================== short test summary info ===================================================================================
SKIPPED [2] tests/test_covariance_matrix.py:113: could not import 'numdifftools': No module named 'numdifftools'
SKIPPED [5] tests/test_covariance_matrix.py:156: could not import 'numdifftools': No module named 'numdifftools'
SKIPPED [1] tests/test_covariance_matrix.py:202: could not import 'numdifftools': No module named 'numdifftools'
SKIPPED [2] tests/test_model.py:1297: ConstantModel has not independent_vars.
SKIPPED [1] tests/test_nose.py:460: Pytest fails with multiprocessing
======================================================= 2 failed, 591 passed, 11 skipped, 2 xfailed, 22 warnings in 365.55s (0:06:05) ========================================================
A Minimal, Complete, and Verifiable example

n/a

Error message:

see above

Version information

lmfit 1.0.3
Python 3.9
FreeBSD 13.1
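
For reference, the two failing tests can be re-run in isolation with something like the following sketch (assumes pytest and lmfit's test dependencies are installed, and that the working directory is the top of the lmfit source tree):

```python
# Re-run only the two failing tests, verbosely.
import pytest

pytest.main([
    "tests/test_itercb.py::test_itercb_minimizer_class",
    "tests/test_manypeaks_speed.py::test_manypeaks_speed",
    "-v",
])
```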

newville (Member) commented Aug 9, 2022

@yurivict Thanks, but I'm not sure we're going to act on this. We don't really promise that a) all tests will always pass on all platforms and in all contexts, or b) that we view such failures as "actionable". Specifically, we do not test with FreeBSD and do not intend to do so. Tests are targeted at our CI systems and are used so that we have confidence that we have not broken expected behavior during development. While the results of running tests on other platforms are interesting, they are not necessarily something that we will address.

Do these failures present a problem for you? If so, can you explain what that problem is?

yurivict (Author) commented:

No, I just ran the tests once and 2 of them failed.

reneeotten (Contributor) commented:

The first failure is already fixed in 753db61; it turns out that with a change in SciPy the fit converges more quickly than before, which is where the failure originated.
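
(A hedged illustration of that kind of fix, not the actual change in 753db61: instead of pinning the solver to an exact number of function evaluations, only check that the count falls in a reasonable range, so the test survives convergence differences between SciPy releases.)

```python
# Sketch of an iteration-callback check that tolerates small
# convergence differences between SciPy releases.
import numpy as np
from lmfit import Minimizer, Parameters


def residual(pars, x, data):
    """Simple exponential-decay residual."""
    v = pars.valuesdict()
    return v['amp'] * np.exp(-x / v['decay']) - data


def per_iteration(pars, iteration, resid, *args, **kws):
    """Iteration callback; returning nothing lets the fit continue."""
    return None


x = np.linspace(0, 10, 101)
data = 4.0 * np.exp(-x / 2.5) + np.random.default_rng(0).normal(scale=0.05, size=x.size)

pars = Parameters()
pars.add('amp', value=1.0)
pars.add('decay', value=1.0)

mini = Minimizer(residual, pars, fcn_args=(x, data), iter_cb=per_iteration)
out = mini.minimize(method='leastsq')

# The exact count (21 vs 23 above) depends on the SciPy version,
# so assert only a loose range rather than an exact value.
assert 5 < out.nfev < 100
```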

The second one I already marked as "flaky" in 5891949 to try to avoid these failures; it really should be able to run within the time limit unless the hardware is very old...
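
(For anyone hitting the timing failure, here is a quick standalone check adapted from tests/test_manypeaks_speed.py; it times only the deepcopy step, which the test expects to finish in under 0.5 s.)

```python
# Time deep-copying the parameters of a 500-peak composite model,
# mirroring tests/test_manypeaks_speed.py.
import time
from copy import deepcopy

from lmfit import Model
from lmfit.lineshapes import gaussian

model = None
for i in range(500):
    g = Model(gaussian, prefix=f'g{i}')
    model = g if model is None else model + g

pars = model.make_params()

t0 = time.time()
_ = deepcopy(pars)
print(f"deepcopy of {len(pars)} parameters took {time.time() - t0:.3f} s")
```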

newville (Member) commented:

@reneeotten thanks, yeah, to me this says "lmfit runs fine on FreeBSD even though we don't test it". So, thanks @yurivict, but there is no further action needed here.
