Several of the docstring examples have numeric output that varies between runs, leading to doctest failures.
We should change the data and models in those examples (particularly PrePostFit and RegressionDiscontinuity) so their output is more stable, even if the examples become unrealistic or 'silly'. The docstring examples only need to illustrate basic usage and ensure the documentation stays in sync with the code; the main docs remain the place for realistic examples and detailed instruction.
The main goal is that doctests should always pass unless there has been a clear change in how a function works.
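To illustrate the principle (not taken from the CausalPy codebase), here is a toy docstring whose example uses deterministic inputs, so its numeric output can never drift between runs and the doctest always passes:

```python
import doctest

def scaled_sum(xs, scale=2):
    """Sum of `xs` multiplied by `scale`.

    Deterministic inputs keep the doctest output identical on every run:

    >>> scaled_sum([1, 2, 3])
    12
    """
    return scale * sum(xs)

# Run just this function's doctests and count failures.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(scaled_sum, "scaled_sum"):
    runner.run(test)
# runner.failures is 0: the output never changes, so the doctest is stable.
```

The same idea applies to the model examples: tiny fixed datasets and seeded sampling trade realism for output that only changes when the code's behaviour does.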
Looking into it in more detail, it looks like we are getting non-reproducible results. This seems to be because we only pass the sample kwargs into pm.sample, so the random_seed kwarg is not being passed into pm.sample_prior_predictive or pm.sample_posterior_predictive.
We can't simply unpack the provided kwargs into these function calls because they don't accept all the same kwargs as pm.sample.
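One way around this is to filter the kwargs down to what each callee actually accepts before unpacking. This is a minimal sketch using stand-in functions with hypothetical signatures (not the real pm.sample API) to show the filtering idea:

```python
import inspect

def filter_kwargs(func, kwargs):
    """Keep only the kwargs that `func` actually accepts."""
    accepted = inspect.signature(func).parameters
    return {k: v for k, v in kwargs.items() if k in accepted}

# Stand-ins for pm.sample and pm.sample_prior_predictive (hypothetical
# signatures for illustration only).
def sample(draws=1000, tune=1000, random_seed=None):
    return {"draws": draws, "random_seed": random_seed}

def sample_prior_predictive(samples=500, random_seed=None):
    return {"samples": samples, "random_seed": random_seed}

sample_kwargs = {"draws": 2000, "tune": 500, "random_seed": 42}

# Unpacking sample_kwargs directly into sample_prior_predictive would raise
# TypeError (it has no `tune` parameter); filtering first avoids that while
# still forwarding random_seed, so the prior draws become reproducible.
prior = sample_prior_predictive(**filter_kwargs(sample_prior_predictive, sample_kwargs))
```

Alternatively, random_seed could simply be extracted from the kwargs and passed explicitly to each of the three sampling calls.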