refactor pareto and laplace #4691
Conversation
It seems like the Pareto has been half refactored already here: #4593. It should be enough to remove the unused code. The tests in distributions_random look fine.

Ah thank you, I just noticed that.
pymc_dist = pm.Pareto
pymc_dist_params = {"alpha": 3.0, "m": 2.0}
expected_rv_op_params = {"alpha": 3.0, "m": 2.0}
reference_dist_params = {"alpha": 3.0, "scale": 2.0}
reference_dist_params should be "b" and "scale" I think; that's why the tests are failing.
Instead of seeded_numpy_distribution_builder, I should use seeded_scipy_distribution_builder to match the params, right?
Yes, you're right. Aesara is using scipy for the Pareto.
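A quick check of the parameter mapping (a sketch for illustration, not code from the PR): scipy.stats.pareto takes a shape parameter `b` and a `scale` keyword, which line up with PyMC's `alpha` and `m`.

```python
import numpy as np
from scipy import stats

# PyMC's Pareto(alpha, m) has pdf alpha * m**alpha / x**(alpha + 1) for x >= m;
# scipy.stats.pareto expresses the same density with b=alpha and scale=m.
alpha, m = 3.0, 2.0
ref = stats.pareto(b=alpha, scale=m)

x = 4.0
manual_pdf = alpha * m**alpha / x ** (alpha + 1)
print(np.isclose(ref.pdf(x), manual_pdf))  # the two densities agree
```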
pymc_dist_params = {"alpha": 3.0, "m": 2.0}
expected_rv_op_params = {"alpha": 3.0, "m": 2.0}
reference_dist_params = {"alpha": 3.0, "scale": 2.0}
size = 15
No need to specify size; 15 is already the default.
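Putting the review comments together, the corrected attributes might look like this. This is only a sketch: the surrounding test harness in test_distributions_random that consumes these attributes is assumed, not shown in this conversation.

```python
# Sketch of the corrected Pareto test attributes discussed above.
class TestParetoSketch:
    pymc_dist_params = {"alpha": 3.0, "m": 2.0}
    expected_rv_op_params = {"alpha": 3.0, "m": 2.0}
    # scipy's pareto names: "b" for the shape parameter, "scale" for m
    reference_dist_params = {"b": 3.0, "scale": 2.0}
    # no explicit `size` attribute; 15 is already the default per the review
```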
@farhanreynaldo I noticed we lost the default transform of the Pareto in #4593. I pushed a fix to this PR. Make sure to pull the changes to your local repo before pushing new commits!
pymc_dist_params = {"mu": 0.0, "b": 1.0}
expected_rv_op_params = {"mu": 0.0, "b": 1.0}
reference_dist_params = {"loc": 0.0, "scale": 1.0}
size = 15
size can be removed here too.
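For the Laplace case the mapping is direct (again just an illustrative sketch, not code from the PR): scipy.stats.laplace uses `loc` and `scale`, matching PyMC's `mu` and `b`.

```python
import numpy as np
from scipy import stats

# PyMC's Laplace(mu, b) has pdf exp(-|x - mu| / b) / (2 * b);
# scipy.stats.laplace expresses the same density via loc=mu, scale=b.
mu, b = 0.0, 1.0
ref = stats.laplace(loc=mu, scale=b)

x = 1.5
manual_pdf = np.exp(-abs(x - mu) / b) / (2 * b)
print(np.isclose(ref.pdf(x), manual_pdf))  # the two densities agree
```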
I'm sorry, I just realized that. You can simply import the op from there; no need to recreate it. I suggest you check whether I missed any other op that's already implemented there. The Truncated Normal and Wald seem to be there as well.
I am terribly sorry, I did not intend to do so. Should I run the following command before pushing new commits to the current branch?
I just saw the link you sent me, but the

I think I messed up the git merge once again...
e0c6b97 to fbd2103
@farhanreynaldo No problem. I'm not very good at git either :p I think I fixed the commit history in this branch. Let me know if there was anything else you wanted to change or if I can go ahead and merge?

Yes, you can merge it. Thank you so much for your help on this PR!

@farhanreynaldo Thanks so much for taking on this! Looking forward to your next PR!
* refactor pareto
* refactor laplace
* Reintroduce `Pareto` default transform

Co-authored-by: Farhan Reynaldo <farhanreynaldo@gmail.com>
Co-authored-by: Ricardo <ricardo.vieira1994@gmail.com>
This PR closes a subset of #4686.
The default shape of the RandomVariable class is still unclear to me, so I might need suggestions on that part. The tests_to_run parameter in test_distribution_random.py also baffles me; your input is much appreciated.