Add code example to Potential docstrings #6489
Conversation
Hi @Dhruvanshu-Joshi, thanks for creating this pull request! Also, thanks for your patience, it seems that our devs have been busy recently... (myself included)
Overall, it looks good! I gave some suggestions to tweak some sentences here and there. Let me know if you would like any assistance with this PR.
Edit: It seems that the pre-commit has not been run successfully; let us know if you would like any pointers regarding that.
pymc/model.py
Outdated
x = pm.Normal('x', mu=0, sigma=1)
y = pm.Normal('y', mu=x, sigma=1, observed=data)
constraint = x >= 0
potential = pm.Potential('x_constraint', -tt.log(tt.switch(constraint, 1, 1e-30)))
Suggested change:
-potential = pm.Potential('x_constraint', -tt.log(tt.switch(constraint, 1, 1e-30)))
+potential = pm.Potential('x_constraint', -pt.log(pt.switch(constraint, 1, 1e-30)))
We recently switched our backend to PyTensor!
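For reference, ``pt`` here is the conventional alias for the new backend, assuming the standard import:

    import pytensor.tensor as pt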
If we are trying to avoid using too many new concepts at once, can we use pm.math for the backend computation here instead of explicitly mentioning Theano tensors or PyTensor?
I am quite new to PyTensor and am facing errors when I try to run the suggestion made above.
Yes, that's a great idea. @ricardoV94 mentioned it below too
pymc/model.py
Outdated
potential = pm.Potential('x_constraint', -tt.log(tt.switch(constraint, 1, 1e-30)))

In this example, we define a constraint on x that it can be either greater than or equal to 0 and pass this constraint to the pm.Potential function.
Now the pm.Potential function takes in a string expression and a scalar value as argument. Hence we pass '-tt.log(tt.switch(constraint, 1, 1e-30))' as second argument which will return a scalar value depending on the constraint and will add to the likelihood function of the model.
Suggested change:
-Now the pm.Potential function takes in a string expression and a scalar value as argument. Hence we pass '-tt.log(tt.switch(constraint, 1, 1e-30))' as second argument which will return a scalar value depending on the constraint and will add to the likelihood function of the model.
+We pass ``-pt.log(pt.switch(constraint, 1, 1e-30))`` as second argument, which will return a scalar value depending on whether the constraint is met, and which will be added to the likelihood of the model.
pymc/model.py
Outdated
In this example, we define a constraint on x that it can be either greater than or equal to 0 and pass this constraint to the pm.Potential function.
Now the pm.Potential function takes in a string expression and a scalar value as argument. Hence we pass '-tt.log(tt.switch(constraint, 1, 1e-30))' as second argument which will return a scalar value depending on the constraint and will add to the likelihood function of the model.
This is done because PyMC3 uses an energy-based model, which means that the probability of a variable configuration is proportional to the negative of the energy of that configuration. So, the negative logarithm of the switch function ensures that the energy is high (i.e. close to negative infinity) when the constraint is not met, and low (i.e. close to zero) when the constraint is met.
Suggested change:
-This is done because PyMC3 uses an energy-based model, which means that the probability of a variable configuration is proportional to the negative of the energy of that configuration. So, the negative logarithm of the switch function ensures that the energy is high (i.e. close to negative infinity) when the constraint is not met, and low (i.e. close to zero) when the constraint is met.
+The negative logarithm of the switch function ensures that this component of the log-likelihood is high (i.e. close to 69, the value of ``-log(1e-30)``) when the constraint is not met, and low (i.e. close to zero) when the constraint is met.
I'm confused with this sentence. What do you mean by "probability of a variable configuration"?
Lastly, personally, I would avoid using the energy concept since it's similar to the likelihood, and focus on the log-likelihood since that's what PyMC works with. In the sense that I would avoid introducing too many new concepts at once, although the energy-based analogy is correct (to my understanding, and despite my little knowledge of physics 😅).
However, with that, the sentence would revert the direction of the logic if I'm not mistaken...
Answering the first question, 'probability of variable configuration' refers to the likelihood function. By this I simply mean the likelihood of a particular set of values for the variables that the model equation uses.
For the second part, no, it does not revert the direction of the logic... That's bad framing on my part 😅
What I meant by "close to negative infinity" was that the log function / log-likelihood will be close to negative infinity. The mathematical negative will thus be a large positive value. Energy is the negative logarithmic function that we have defined, and thus it will be high when the constraint is not met.
Can we re-frame this as:
The reciprocal exponential function of the negative logarithm is proportional to the likelihood function. When the constraint is met, the switch function returns 1, making the negative logarithm 0, and hence the likelihood function has no penalizing effect. When the constraint is not met, the switch function returns 1e-30 and the negative logarithm takes a high value (i.e. about 69), the reciprocal exponential function of which makes the likelihood function very small and successfully penalizes it.
Sorry, chiming in after much time. I'm not sure what you mean by "reciprocal exponential function of the negative logarithm"...
However, it looks like you have it figured out in some threads below! Perhaps you can focus on those suggestions rather than on my comments
Codecov Report
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #6489      +/-   ##
==========================================
+ Coverage   86.03%   91.91%     +5.88%
==========================================
  Files         148       89        -59
  Lines       27794    14936     -12858
==========================================
- Hits        23913    13729     -10184
+ Misses       3881     1207      -2674
Thanks for opening up the PR. This looks nice already. I have three notes:
First
I think the two examples should be merged into a single one since they both show an example of using Potential to impose a constraint. The first example was incorrect though, and the second one looks better if you make it strict, ``pt.log(pt.switch(cond, 1, 0))``, instead of using arbitrarily large or small values.
I would add a second example showing how Potential can be used to obtain an arbitrary prior and not just a binary constraint. Maybe something like this, which corresponds to a power-law prior for max_items:
with pm.Model():
    # p(max_items) = 1 / max_items
    max_items = pm.Uniform("max_items", lower=1, upper=100)
    pm.Potential('power_prior', pm.math.log(1 / max_items))
    n_items = pm.Uniform('n_items', lower=1, upper=max_items, observed=60)
Second
I would use ``pm.math.foo`` instead of ``pt.foo``.
Third
We should add a warning box saying that Potentials are only added to the model logp, and as such can only influence logp-based sampling, like the one used by ``pm.sample``. They do not affect forward sampling, which is used by ``sample_prior_predictive`` and ``sample_posterior_predictive``. A warning is always raised to alert users of this.
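A minimal sketch of that behavior (the model and names here are hypothetical):

    import pymc as pm

    with pm.Model() as model:
        x = pm.Normal("x", mu=0.1, sigma=1)
        # hard constraint: contributes 0 to the logp when x >= 0, -inf otherwise
        pm.Potential("x_constraint", pm.math.log(pm.math.switch(x >= 0, 1.0, 0.0)))

        idata = pm.sample()  # logp-based sampling: the potential is respected
        # forward sampling ignores the potential; a warning is raised about this
        prior = pm.sample_prior_predictive()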
pymc/model.py
Outdated
constraint = x + y < 1
potential = pm.Potential('constraint', constraint, my_model)
This example does not make sense like this. x + y < 1 is going to return a variable that evaluates to 0 or 1, which will be added to the model logp. It would make sense to add the ``log(constraint)`` instead.
However, I think it's a bit more readable for users to see a constraint with a switch, as that's usually how it crops up in examples. Something like ``pm.Potential("constraint", pt.log(pt.switch(constraint, 1, 0)))``.
But this is very similar to the example below, so just show one of them.
I understand how using the logarithm function makes more sense, and I also notice the repetitiveness in the two examples.
I will add a commit as soon as it is clear whether we are moving forward with PyTensor or using pm.math.
``pm.math`` is fine
pymc/model.py
Outdated
y = pm.Normal('y', mu=x, tau=1)

constraint = x + y < 1
potential = pm.Potential('constraint', constraint, my_model)
I wouldn't show the ``model`` argument being used. Most times it's not needed, as users add a Potential inside a model context.
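For illustration, a hypothetical sketch of the two call styles:

    import pymc as pm

    # common form: inside the context, the current model is found automatically
    with pm.Model() as my_model:
        x = pm.Normal("x", mu=0, sigma=1)
        y = pm.Normal("y", mu=x, sigma=1)
        pm.Potential("constraint", pm.math.log(pm.math.switch(x + y < 1, 1.0, 0.5)))

    # rarely needed: passing the model explicitly from outside the context
    pm.Potential("x_positive", pm.math.log(pm.math.switch(x >= 0, 1.0, 0.5)), model=my_model)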
* Add (mu, nu) parametrization * add test beta logp * Add relationship between alpha, beta and nu * change test structure * remove white space accidently added in docstring * remove sample size from docstring --------- Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
Currently on my mobile and can't properly review, but I want to leave a reminder for myself to review when I get the chance. It should follow numpydoc: https://numpydoc.readthedocs.io/en/latest/format.html. Examples in their own section below parameters and returns; warnings also as their own section; the spaces both before and after the parameter name and the parameter info... https://pymc-data-umbrella.xyz/en/latest/sprint/tutorials/docstring_tutorial.html#edit-the-docstring might also be helpful.
Do you think you could abbreviate the explanations a bit? I don't know if it's just me, but they feel a tad too long.
pymc/model.py
Outdated
constraint = x >= 0
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))

In this example, we define a constraint on x to be greater or equal to 0 via the ``pm.Potential`` function.
I would put the explanations above the respective examples
pymc/model.py
Outdated
x = pm.Normal('x', mu=0, sigma=1)
y = pm.Normal('y', mu=x, sigma=1, observed=data)
constraint = x >= 0
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
I think the absolute constraint with (1 vs 0) may be more educative than (1 vs 1e-30). I would perhaps show that, and mention that an alternative like (1 vs 0.5) would provide a soft constraint instead of a hard one.
I find the 1e-30 odd because it behaves like 0 in practice. The sampler will never accept a negative value for 'x', or perhaps once in a lifetime, which is an odd modelling choice.
We should change the mu of x to be 0.1 perhaps, so that the sampler doesn't start at an invalid point.
I understand that the absolute constraint with (1 vs 0) might be more educative than (1 vs 1e-30), but the problem with it is that it produces an error when trying to sample the model, because the logarithm of 0 is not defined. If having a working model that can be sampled without errors, alongside explaining the use of the Potential function, is not what is required of this code block, a constraint of 1 vs 0 absolutely makes more sense. So, to reconfirm: should I change the constraint?
Soft constraints are also very helpful. Should I make a separate code block for them, or add a commented line in the original code block with an explanation?
The logarithm of zero is well defined (at least in PyMC land) and leads to a -inf logp, which is exactly what you want to represent an impossible event in log scale. We use -inf explicitly in most of the logp definitions of distributions with constrained support.
About the soft constraint, whichever way you think is better. I have no preference.
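A quick sketch of that behavior (assuming the usual ``pt`` alias):

    import pytensor.tensor as pt

    # log(0) evaluates to -inf (with a runtime warning) instead of raising,
    # which is how an impossible event is represented on the log scale
    print(pt.log(0.0).eval())  # -inf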
When I try to sample using ``potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 0.0)))`` I face this error:

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     11 potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 0)))
     12
---> 13 trace = pm.sample(1000, tune=1000)

12 frames
/usr/local/lib/python3.8/dist-packages/aesara/tensor/elemwise.py in transform(r)
    600 def transform(r):
    601     # From a graph of ScalarOps, make a graph of Broadcast ops.
--> 602     if isinstance(r.type, (NullType, DisconnectedType)):
    603         return r
    604     if r in scalar_inputs:

AttributeError: 'float' object has no attribute 'type'
On defining the Potential function as ``potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1.0, 0.0)))`` I get this:

Interrupted

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     11 potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1.0, 0.0)))
     12
---> 13 trace = pm.sample(1000, tune=1000)

8 frames
/usr/local/lib/python3.8/dist-packages/pymc/step_methods/hmc/base_hmc.py in astep(self, q0)
    166 message_energy = (
    167     "Bad initial energy, check any log probabilities that "
--> 168     "are inf or -inf, nan or very small:\n{}".format(error_logp.to_string())
    169 )
    170 warning = SamplerWarning(

AttributeError: 'numpy.ndarray' object has no attribute 'to_string'
For it to have a valid initial energy with the hard constraint, you have to set the prior of x to not be centered at zero, like ``x = pm.Normal("x", mu=0.1, sigma=1)``, or set an initial value, like ``x = pm.Normal("x", mu=0, sigma=1, initval=0.1)``.
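Folding that fix into the running example, a sketch (``data`` is assumed to exist):

    with pm.Model() as model:
        x = pm.Normal('x', mu=0, sigma=1, initval=0.1)  # start at a valid point
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        constraint = x >= 0
        pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1.0, 0.0)))
        trace = pm.sample(1000, tune=1000)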
The negative sign was the problem. I think this is because if we keep the negative sign, the Potential function returns inf instead of -inf. From my understanding, the potential term should be somewhere close to 0 when the constraint is satisfied and very large when it is not. What I thought was that both inf and -inf could work as this large value. But when the Potential value becomes inf, for some reason I face the error "'numpy.ndarray' object has no attribute 'to_string'". So, is it wrong if the Potential value becomes inf?
The potential value is just added to the model logp, so it should be -inf (or very negative) when a constraint is violated, so that those draws are rejected. 0 won't have any effect, and positive values will make the proposals more likely to be accepted.
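A small sketch illustrating this (point values are hypothetical; ``compile_logp`` compiles the model's joint log-density):

    import pymc as pm

    with pm.Model() as m:
        x = pm.Normal("x", mu=0.1, sigma=1)
        pm.Potential("x_constraint", pm.math.log(pm.math.switch(x >= 0, 1.0, 0.0)))

    logp_fn = m.compile_logp()
    print(logp_fn({"x": 0.5}))   # finite: the potential contributes 0
    print(logp_fn({"x": -0.5}))  # -inf: such a draw is always rejected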
Alright👍
I shall include this in the docstring too.
Thanks for the help!
Thinking about it again, I like your original idea of adding a joint constraint. Perhaps as a third example? Something like...
pymc/model.py
Outdated
x = pm.Normal('x', mu=0, sigma=1)
y = pm.Normal('y', mu=x, sigma=1, observed=data)
constraint = x >= 0
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
I don't think it should be negated
Suggested change:
-potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
+potential = pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
pymc/model.py
Outdated
with pm.Model():
    # p(max_items) = 1 / max_items
    max_items = pm.Uniform("max_items", lower=1, upper=100)
    pm.Potential('power_prior', -pm.math.log(max_items))
They are equivalent, but maybe this one is more readable for users
Suggested change:
-pm.Potential('power_prior', -pm.math.log(max_items))
+pm.Potential('power_prior', pm.math.log(1/max_items))
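Indeed, log(1/a) == -log(a) for any positive a; a quick check with a hypothetical value:

    import numpy as np
    import pymc as pm

    a = 4.0
    assert np.isclose(pm.math.log(1 / a).eval(), (-pm.math.log(a)).eval())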
The mcmc module relies on the `"tune"` stat to figure out the number of tune/draw iterations post sampling. These changes remove reliance on any weird squeeze-cat magic. Co-authored-by: Virgile Andreani <armavica@ulminfo.fr>
* _joint_logp to pymc/logprob/joint_logprob.py * _get_scaling, _check_no_rvs, logp to logprob/joint_logprob.py * logcdf to logprob/abstract.py * ignore_logprob to logprob/utils.py
Co-authored-by: Ricardo Vieira <28983449+ricardov94@users.noreply.github.com>
Reorder arguments more logically [citation needed]
* adding function for mutable coords * Added coords_mutable * add coords_mutable * Make `coords_mutable` an explicit kwarg * Remove reference to undefined variable * Adding an if block --------- Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
* Build docs in simplified environment * Fix docs build * Remove private API section from docs
pymc/model.py
Outdated
.. warning::

    Potentials only influence logp-based sampling, like the one used by ``pm.sample``. Potentials modify the log-probability of the model by adding a contribution to the logp, which is used by sampling algorithms that rely on information about the observed data to generate posterior samples.
    Potentials are not applicable in the context of forward sampling because they don't affect the prior distribution itself, only the computation of the logp.
    Forward sampling algorithms generate sample points from the prior distribution of the model without taking the likelihood function into account. In other words, they do not use information about the observed data.
    Hence, Potentials do not affect forward sampling, which is used by ``sample_prior_predictive`` and ``sample_posterior_predictive``. A warning saying "The effect of Potentials on other parameters is ignored during prior predictive sampling" is always raised to alert users of this.
This should use a header following https://numpydoc.readthedocs.io/en/latest/format.html#warnings
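For reference, under that format the box becomes its own underlined section, e.g.:

    Warnings
    --------
    Potentials only influence logp-based sampling...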
pymc/model.py
Outdated
Have a look at the following example:

In this example, we define a constraint on x to be greater or equal to 0 via the ``pm.Potential`` function.
We pass ``pm.math.log(pm.math.switch(constraint, 1, 0))`` as second argument, which will return an expression depending on whether the constraint is met, and which will be added to the likelihood of the model.
The probability density that this model produces agrees strongly with the constraint that x should be greater than or equal to 0. All cases that do not satisfy the constraint are strictly excluded.

.. code:: python

    with pm.Model() as model:
        x = pm.Normal('x', mu=0, sigma=1)
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        constraint = x >= 0
        potential = pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1, 0.0)))

However, if we use ``pm.math.log(pm.math.switch(constraint, 1, 0.5))``, the potential again penalizes the likelihood when the constraint is not met, but with some deviations allowed. Here, the Potential function is used to pass a soft constraint.
A soft constraint is a constraint that is only partially enforced. The effect of this is that the posterior probability for the parameters decreases as they move away from the constraint, but does not become exactly zero.
This allows the sampler to generate values that violate the constraint, but with lower probability.

.. code:: python

    with pm.Model() as model:
        x = pm.Normal('x', mu=0.1, sigma=1)
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        constraint = x >= 0
        potential = pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1, 0.5)))

In the next example, Potential is used to obtain an arbitrary prior. This prior encodes the knowledge that the values of max_items are likely to be small rather than large.
The prior probability of max_items is defined using a Potential object whose value is the log of the inverse of max_items.
This means that larger values of max_items have a lower prior probability density, while smaller values of max_items have a higher prior probability density.
When the model is sampled, the posterior distribution of ``max_items`` given the observed value of ``n_items`` will be influenced by the power-law prior defined in the Potential object.

.. code:: python

    with pm.Model():
        # p(max_items) = 1 / max_items
        max_items = pm.Uniform("max_items", lower=1, upper=100)
        pm.Potential('power_prior', pm.math.log(1/max_items))

        n_items = pm.Uniform('n_items', lower=1, upper=max_items, observed=60)

In the next example, the soft_sum_constraint potential encourages x and y to have a small sum, effectively adding a soft constraint on the relationship between the two variables.
This can be useful in cases where you want to ensure that the sum of multiple variables stays within a certain range, without enforcing an exact value.
The larger the deviation of the sum from zero, the more negative the term ``-((x + y)**2)`` becomes, lowering the model log-probability. The sampler can still generate values with some deviation, but with lower probability; hence this is a soft constraint.

.. code:: python

    with pm.Model() as model:
        x = pm.Normal('x', mu=0.1, sigma=1)
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        soft_sum_constraint = pm.Potential("soft_sum_constraint", -((x + y)**2))

The potential value is incorporated into the model log-probability, so it should be -inf (or very negative) when a constraint is violated, so that those draws are rejected. 0 won't have any effect, and positive values will make the proposals more likely to be accepted.
All this should go at the bottom of the docstring and have a section header too. Ref: https://numpydoc.readthedocs.io/en/latest/format.html#examples
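For reference, numpydoc expects these at the bottom of the docstring as an underlined section, e.g.:

    Examples
    --------
    .. code:: python

        with pm.Model() as model:
            ...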
pymc/model.py
Outdated
var: PyTensor expression to be added to the model joint logp
model: The model object to which the potential function is added. If ``None`` is provided, the current model is used.
Suggested change:
-var: PyTensor expression to be added to the model joint logp
-model: The model object to which the potential function is added. If ``None`` is provided, the current model is used.
+var : PyTensor expression
+    Expression to be added to the model joint logp
+model : Model, optional
+    The model object to which the potential function is added. If ``None`` is provided, the current model is used.

A space is needed both before and after the colon (existing docstrings are generally not a good example to follow), and the first line should contain only ``name : type``; the description goes on the second and subsequent lines, indented.
* added n_zerosum_axes and added backwards compatibility for previous parameter name * fixed typo that caused the zerosum_axes param not to be saved correctly in the new param * adapted tests to use n_zerosum_axes --------- Co-authored-by: Michal Raczycki <michalraczycki@macbook-pro-michal.home>
) * added dim inference from xarray, deprecation warning and unittest for the new feature * fixed typo in warning * fixed accidental quotation around dim * fixed failing assertions * found and fixed cause of the failing test * changed the coords assertion according to suggested form * fixing mypy type missmatch * working on getting the test to work * removed typecasting to string on dim_name, was causing the mypy to fail * took care locally of mypy errors * Typo/formatting fixes --------- Co-authored-by: Michal Raczycki <michalraczycki@macbook-pro-michal.home> Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
* Make isort happy with imports order * Avoid shadowing the logprob fn with a local name * Deal with unused error names * Remove unreachable return * Remove unread variable * Fix type checking for variational/approximations * A few typing fixes
* added value check for step samplers * changing error message to be more informative * added test checking if variable not being in model.value_vars will trigger Value error * implemented changes from PR review * Fix rebase --------- Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
* Extract return part of `pm.sample` * Speed up `test_mcmc` * Group tests related to `pm.sample` return parameters * Consolidate tests of return options Removes a regression test added in pymc-devs#3821 because it took 14 seconds.
Hey @OriolAbril I have implemented all the suggested changes. Is there something else I can do to improve this PR, or is this good to merge?
Doc preview: https://pymcio--6489.org.readthedocs.build/projects/docs/en/6489/api/generated/pymc.Potential.html looks good, I haven't reviewed the code nor looked at the potential merge/rebase issues
@Dhruvanshu-Joshi the changes look good, but you have some conflicts (as you can see from the 162 changed files). Usually when this happens, the easiest solution is to open a new PR from the up-to-date main branch plus your final changes.
Hey @OriolAbril @ricardoV94, I have opened a new PR, #6559, for the same issue and referenced this PR and the issue there. Hope it solves all the conflicts.
Closing as it was superseded by #6559.
What is this PR about?
In this PR, I have provided a solution to close issue #6399.
The docstring for the Potential function is rather limited. I have added code examples to help readers understand it better, and focused on the nature of the arguments that are passed to this function, including an example for that as well.
Checklist
Documentation