Remove automatic normalization in Multinomial and Categorical #5331 #5370
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##             main    #5370       +/-   ##
==========================================
+ Coverage   80.44%   83.48%    +3.03%
==========================================
  Files          82      132       +50
  Lines       14132    26113    +11981
==========================================
+ Hits        11369    21800    +10431
- Misses       2763     4313     +1550
==========================================
```
The failing test seems to be due to an invalid … (pymc/pymc/tests/test_idata_conversion.py, lines 590 to 591 in f12b1fe). That should either have a …
While at it, this PR should include a release note about the changed behavior in https://github.com/pymc-devs/pymc/blob/f12b1fe04e4c50d4060803ad32dfa9c158cdf073/RELEASE-NOTES.md?plain=1
Re-committed with your suggestions. @ricardoV94 it's not clear to me why use of the Beta distribution led to a failing test. Its support is [0, 1], so it won't produce negative values....
Yes, but the three independent Betas don't add up to 1, which is the other requirement.
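The constraint can be seen with a quick NumPy sketch (illustrative only; the `Beta(2, 2)` parameters and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three independent Beta(2, 2) draws: each lies in [0, 1],
# but nothing constrains their sum to equal 1.
independent = rng.beta(2.0, 2.0, size=3)
print(independent, independent.sum())

# A Dirichlet draw satisfies both simplex requirements:
# every component is in [0, 1] AND the components sum to 1.
simplex = rng.dirichlet(np.ones(3))
print(simplex, simplex.sum())
```

This is why a Dirichlet prior (rather than independent Betas) is the usual choice for the `p` parameter of a Multinomial.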
I think it's better to separate clearly the different expected behaviors across different tests:
Yeah, I agree! I'll get to writing these tests. Before I do, I have a question about suggestion 4:
What do you mean by symbolic here? Is this like the …
Yes. More directly you can simply use …
Looks great. I left some minor naming/testing suggestions.
Looks great, thanks for making the changes!
These are only suggestions. You'll have to commit them (this can be easily done via GitHub) for them to be "applied".
Apologies, I thought reviewing them would commit them. Will commit them now!
Ah! Sorry, it looks like there were some commit suggestions that GitHub had hidden. I'll need to commit these.
I think this might be another floating point error, so I have now changed the failing test to use `assert np.isclose(m.x.owner.inputs[3].sum().eval(), 1.0)`. test_distributions.py passed fine on my machine; not sure why it fails here.
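A small NumPy sketch of why the tolerance-based check is needed (the probability values here are illustrative, not from the actual test):

```python
import numpy as np

# Normalizing in reduced precision (float32) can leave a sum that is
# not exactly 1.0, so exact equality checks are fragile in tests.
p = np.array([0.1, 0.2, 0.7], dtype=np.float32)
p = p / p.sum()

# Exact comparison `p.sum() == 1.0` may or may not hold depending on
# rounding; a tolerance-based comparison is robust either way.
assert np.isclose(p.sum(), 1.0)
```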
You should also check locally with float32; that's often why tests fail in CI but not locally. You can do this by adding these lines at the very top of the test file:

```python
import aesara

aesara.config.floatX = "float32"
```
I've run test_distributions.py locally using this and they all pass now. Thanks for the heads up!
So, in the future, to stop the pre-commit failure I should run …
prior to committing? Do I need to run this locally and commit for this PR? Or has this been run already?
The easiest way is to install pre-commit, so it will run every time you try to create a commit: https://docs.pymc.io/en/latest/contributing/python_style.html
Yes, you still need to do that.
Looks good! Checking if the tests pass 🤞
Woooooooooooo
Thank you for all the help @ricardoV94! Even though it's been a small issue, I feel like I've learnt a lot.
Force-pushed from 4d915b0 to c26db8b (…utions; Co-authored-by: Ricardo Vieira <28983449+ricardoV94@users.noreply.github.com>)
@LukeLB just merged it. Congrats on your first contribution! Looking forward to your next one :D
This PR removes the silent normalisation of p-values passed to a distribution. Instead, a UserWarning is raised when p-values do not sum to 1.0, and normalisation is then applied. Examples are highlighted below:
In addition, after discussion with @ricardoV94, negative p-values now raise a ValueError:
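The behaviour described above can be sketched with a small, hypothetical NumPy helper (`check_p` is illustrative only, not the actual PyMC implementation):

```python
import warnings

import numpy as np


def check_p(p):
    """Hypothetical helper sketching the new behaviour:
    reject negative entries, warn-and-normalize otherwise."""
    p = np.asarray(p, dtype=float)
    if (p < 0).any():
        # Negative probabilities are rejected outright
        raise ValueError("p parameters must be non-negative")
    if not np.isclose(p.sum(), 1.0):
        # No more silent normalization: warn, then normalize
        warnings.warn("p values do not all sum to 1; normalizing", UserWarning)
        p = p / p.sum()
    return p
```

For example, `check_p([0.2, 0.2, 0.2])` emits a UserWarning and returns a normalized vector, while `check_p([-0.1, 1.1])` raises a ValueError.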
Changes in this PR