
Add code example to Potential docstrings #6489

Closed · wants to merge 48 commits

Conversation

Member

@Dhruvanshu-Joshi Dhruvanshu-Joshi commented Jan 28, 2023

What is this PR about?
In this PR, I have provided a solution to close issue #6399.
The docstring for the Potential function is rather limited. I have added code examples to make it easier to understand. I also focused on the nature of the arguments passed to this function and have included an example for the same.

Checklist

Documentation

  • Add code examples and short explanations of them to the Potential docstrings.
  • In the first example, we define a constraint such that the probability density the model produces takes into account that the sum of x and y should be less than 1 (a sketch follows below).
  • In the second example, we emphasize that the var argument can be a PyTensor expression or a scalar variable. For that purpose, I use a function that returns a scalar variable. Had we not done so, we might have encountered an error.
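A rough sketch of that first example (names, priors, and the exact penalty are illustrative, not the final docstring code):

import pymc as pm

with pm.Model() as model:
    x = pm.Normal('x', mu=0, sigma=1)
    y = pm.Normal('y', mu=0, sigma=1)
    constraint = x + y < 1
    # log(1) = 0 when the constraint holds; log(0) = -inf when it is violated,
    # so violating draws are rejected during logp-based sampling
    pm.Potential('sum_constraint', pm.math.log(pm.math.switch(constraint, 1.0, 0.0)))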

Member

@larryshamalama larryshamalama left a comment


Hi @Dhruvanshu-Joshi, thanks for creating this pull request! Also, thanks for your patience, it seems that our devs have been busy recently... (myself included)

Overall, it looks good! I gave some suggestions to tweak some sentences here and there. Let me know if you would like any assistance with this PR.

Edit: It seems that the pre-commit has not been run successfully; let us know if you would like any pointers regarding that.

pymc/model.py Outdated
x = pm.Normal('x', mu=0, sigma=1)
y = pm.Normal('y', mu=x, sigma=1, observed=data)
constraint = x >= 0
potential = pm.Potential('x_constraint',-tt.log(tt.switch(constraint, 1, 1e-30)))
Member


Suggested change
potential = pm.Potential('x_constraint',-tt.log(tt.switch(constraint, 1, 1e-30)))
potential = pm.Potential('x_constraint', -pt.log(pt.switch(constraint, 1, 1e-30)))

We recently switched our backend to PyTensor!
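For reference, ``pt`` is the conventional alias for the new backend, replacing the old Theano-style ``tt`` import:

# old: import theano.tensor as tt
import pytensor.tensor as pt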

Member Author


If we are trying to avoid using too many new concepts at once, can we use pm.math for the backend computation here instead of explicitly mentioning the Theano tensor or PyTensor modules?
I am quite new to PyTensor and am facing errors when I try to run the suggestion made above.

Member


Yes, that's a great idea. @ricardoV94 mentioned it below too

pymc/model.py Outdated
potential = pm.Potential('x_constraint',-tt.log(tt.switch(constraint, 1, 1e-30)))

In this example, we define a constraint on x that it can be either greater than or equal to 0 and pass this constraint to the pm.Potential function.
Now the pm.Potential function takes in a string expression and a scalar value as argument. Hence we pass '-tt.log(tt.switch(constraint, 1, 1e-30))' as second argument which will return a scalar value depending on the constraint and will add to the likelihood function of the model.
Member


Suggested change
Now the pm.Potential function takes in a string expression and a scalar value as argument. Hence we pass '-tt.log(tt.switch(constraint, 1, 1e-30))' as second argument which will return a scalar value depending on the constraint and will add to the likelihood function of the model.
We pass ``-pt.log(pt.switch(constraint, 1, 1e-30))`` as second argument, which will return a scalar value depending on whether the constraint is met, and which will be added to the likelihood of the model.

pymc/model.py Outdated

In this example, we define a constraint on x that it can be either greater than or equal to 0 and pass this constraint to the pm.Potential function.
Now the pm.Potential function takes in a string expression and a scalar value as argument. Hence we pass '-tt.log(tt.switch(constraint, 1, 1e-30))' as second argument which will return a scalar value depending on the constraint and will add to the likelihood function of the model.
This is done because PyMC3 uses an energy-based model, which means that the probability of a variable configuration is proportional to the negative of the energy of that configuration. So, the negative logarithm of the switch function ensures that the energy is high (i.e. close to negative infinity) when the constraint is not met, and low (i.e. close to zero) when the constraint is met.
Member


Suggested change
This is done because PyMC3 uses an energy-based model, which means that the probability of a variable configuration is proportional to the negative of the energy of that configuration. So, the negative logarithm of the switch function ensures that the energy is high (i.e. close to negative infinity) when the constraint is not met, and low (i.e. close to zero) when the constraint is met.
The negative logarithm of the switch function ensures that this component of the log-likelihood is high (i.e. close to 69) when the constraint is not met, and low (i.e. close to zero) when the constraint is met.

I'm confused with this sentence. What do you mean by "probability of a variable configuration"?

Lastly, personally, I would avoid using the energy concept since it's similar to the likelihood, and focus on the log-likelihood since that's what PyMC works with. In the sense that I would avoid introducing too many new concepts at once, although the energy-based analogy is correct (to my understanding and despite my little knowledge of physics 😅).

However, with that, the sentence would revert the direction of the logic if I'm not mistaken...

Member Author


Answering the first question, 'probability of a variable configuration' refers to the likelihood function. By this I simply mean the likelihood of a particular set of values for the variables that the model equation uses.

For the second part, no it does not revert the direction of logic... That's bad framing on my part😅

What I meant from "close to negative infinity" was that the log function/ log-likelihood will be close to negative infinity. The mathematical negative will thus be a large positive value. Energy is the negative logarithmic function that we have defined and thus it will be high when the constraint is not met.

Can we re-frame this as:
The reciprocal exponential function of the negative logarithm is proportional to the likelihood function. When the constraint is met, the switch function returns 1, making the negative logarithm 0, and hence the likelihood function has no penalizing effect. When the constraint is not met, the switch function returns 1e-30 and the negative logarithm takes a high value (i.e. roughly 69), the reciprocal exponential function of which makes the likelihood function very small and successfully penalizes it.
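A quick numeric check of the penalty being discussed (note that ``pm.math.log`` is the natural log):

import numpy as np

np.log(1.0)            # 0.0 -> no penalty when the constraint is met
np.log(1e-30)          # about -69.08 -> large log-scale penalty when violated
np.exp(np.log(1e-30))  # 1e-30 -> the multiplicative effect on the likelihood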

Member


Sorry, chiming in after much time. I'm not sure what you mean by "reciprocal exponential function of the negative logarithm"...

However, it looks like you have it figured out in some threads below! Perhaps you can focus on those suggestions rather than on my comments

@codecov

codecov bot commented Feb 2, 2023

Codecov Report

Merging #6489 (639f471) into main (08e324c) will increase coverage by 5.88%.
The diff coverage is 92.51%.

Additional details and impacted files


@@            Coverage Diff             @@
##             main    #6489      +/-   ##
==========================================
+ Coverage   86.03%   91.91%   +5.88%     
==========================================
  Files         148       89      -59     
  Lines       27794    14936   -12858     
==========================================
- Hits        23913    13729   -10184     
+ Misses       3881     1207    -2674     
Impacted Files Coverage Δ
pymc/backends/arviz.py 96.39% <ø> (ø)
pymc/distributions/__init__.py 100.00% <ø> (ø)
pymc/distributions/censored.py 100.00% <ø> (ø)
pymc/distributions/simulator.py 87.23% <ø> (ø)
pymc/distributions/transforms.py 99.36% <ø> (ø)
pymc/distributions/truncated.py 99.29% <ø> (+64.78%) ⬆️
pymc/initial_point.py 100.00% <ø> (ø)
pymc/logprob/censoring.py 100.00% <ø> (+29.89%) ⬆️
pymc/logprob/mixture.py 98.35% <ø> (+63.18%) ⬆️
pymc/logprob/scan.py 97.35% <ø> (+77.77%) ⬆️
... and 107 more

Member

@ricardoV94 ricardoV94 left a comment


Thanks for opening up the PR. This looks nice already. I have three notes:

First

I think the two examples should be merged into a single one since they both show an example of using Potential to impose a constraint. The first example was incorrect though, and the second one looks better if you make it strict, ``pt.log(pt.switch(cond, 1, 0))``, instead of using arbitrarily large or small values.

I would add a second example showing how Potential can be used to obtain an arbitrary prior and not just a binary constraint. Maybe something like this, which corresponds to a power-law prior for max_items:

with pm.Model():
  # p(max_items) = 1 / max_items
  max_items = pm.Uniform("max_items", lower=1, upper=100)
  pm.Potential('power_prior', pm.math.log(1 / max_items))

  n_items = pm.Uniform('n_items', lower=1, upper=max_items, observed=60)

Second

I would use pm.math.foo instead of pt.foo.

Third

We should add a warning box saying that Potentials are only added to the model logp, and as such can only influence logp-based sampling, like the one used by ``pm.sample``.

They do not affect forward sampling, which is used by ``sample_prior_predictive`` and ``sample_posterior_predictive``. A warning is always raised to alert users of this.
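A minimal sketch of that behaviour (model and names are illustrative):

import pymc as pm

with pm.Model() as model:
    x = pm.Normal('x', mu=0, sigma=1, initval=0.1)
    pm.Potential('x_nonneg', pm.math.log(pm.math.switch(x >= 0, 1.0, 0.0)))
    idata = pm.sample()                   # logp-based: the potential is respected
    prior = pm.sample_prior_predictive()  # forward sampling: ignored, warning raised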

pymc/model.py Outdated
Comment on lines 2046 to 2047
constraint = x + y < 1
potential = pm.Potential('constraint', constraint, my_model)
Member


This example does not make sense like this. x + y < 1 is gonna return a variable that evaluates to 0 or 1, which will be added to the model logp. It would make sense to add the log(constraint) instead.

However, I think it's a bit more readable for users to see a constraint with a switch as that's usually how it crops up in examples. Something like pm.Potential("constraint", pt.log(pt.switch(constraint, 1, 0)))

But this is very similar to the example below, so just show one of them
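To see why the raw comparison alone is problematic, it can be evaluated directly (a small check, assuming the usual ``pt`` alias):

import pytensor.tensor as pt

x = pt.scalar('x')
y = pt.scalar('y')
constraint = x + y < 1
constraint.eval({x: 0.2, y: 0.3})  # True (i.e. 1): would add +1 to the logp
constraint.eval({x: 0.8, y: 0.9})  # False (i.e. 0): would add nothing
# log(switch(constraint, 1, 0)) instead gives 0 or -inf: a proper hard constraint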

Member Author


I understand how using the logarithm function makes more sense, and I also notice the repetitiveness in the two examples.
I will add a commit as soon as it is clear whether we are moving forward with PyTensor or using pm.math.

Member


pm.math is fine

pymc/model.py Outdated
y = pm.Normal('y', mu=x, tau=1)

constraint = x + y < 1
potential = pm.Potential('constraint', constraint, my_model)
Member


I wouldn't show the model argument being used. Most times it's not needed as users add a Potential inside a model context.

Dhruvanshu-Joshi and others added 6 commits February 3, 2023 18:31
* Add (mu, nu) parametrization

* add test beta logp

* Add relationship between alpha, beta and nu

* change test structure

* remove white space accidently added in docstring

* remove sample size from docstring

---------

Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
Member

@OriolAbril OriolAbril left a comment


Currently on my mobile and can't properly review, but I want to leave a reminder for myself to review when I get the chance. It should follow numpydoc: https://numpydoc.readthedocs.io/en/latest/format.html. Examples go in their own section below Parameters and Returns. Warnings also get their own section, and mind the space both before and after the colon separating the parameter name and type, and the indentation of the parameter info... https://pymc-data-umbrella.xyz/en/latest/sprint/tutorials/docstring_tutorial.html#edit-the-docstring might also be helpful

Member

@ricardoV94 ricardoV94 left a comment


Do you think you could abbreviate the explanations a bit? I don't know if it's just me, but they feel a tad too long.

pymc/model.py Outdated
constraint = x >= 0
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))

In this example, we define a constraint on x to be greater than or equal to 0 via the ``pm.Potential`` function.
Member

@ricardoV94 ricardoV94 Feb 5, 2023


I would put the explanations above the respective examples

pymc/model.py Outdated
x = pm.Normal('x', mu=0, sigma=1)
y = pm.Normal('y', mu=x, sigma=1, observed=data)
constraint = x >= 0
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
Member

@ricardoV94 ricardoV94 Feb 5, 2023


I think the absolute constraint with (1 vs 0) may be more educative than (1 vs 1e-30). I would perhaps show that, and mention that an alternative like (1 vs 0.5) would provide a soft constraint instead of a hard one.

I find the 1e-30 odd because it behaves like 0 in practice. The sampler will never accept a negative value for 'x', or perhaps once in a lifetime, which is an odd modelling choice.

We should change the mu of x to be 0.1 perhaps, so that the sampler doesn't start at an invalid point.
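For instance (a sketch; the 0.5 is just one possible soft value):

import pymc as pm

with pm.Model():
    x = pm.Normal('x', mu=0.1, sigma=1)  # mu=0.1 keeps the start point valid
    constraint = x >= 0
    # hard: log(0) = -inf, violations are always rejected
    pm.Potential('hard', pm.math.log(pm.math.switch(constraint, 1.0, 0.0)))
    # soft alternative: log(0.5) ~ -0.69, violations down-weighted, not forbidden
    # pm.Potential('soft', pm.math.log(pm.math.switch(constraint, 1.0, 0.5)))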

Member Author


I understand that the absolute constraint with (1 vs 0) might be more educative than (1 vs 1e-30), but the problem is that it produces an error when trying to sample the model, because the logarithm of 0 is not defined. If a working model that can be sampled without errors is not what is required of this code block, then a constraint of 1 vs 0 absolutely makes more sense. So to reconfirm, should I change the constraint?

Soft constraints are also very helpful. Should I make a separate code block for them, or a commented line in the original code block with an explanation?

Member

@ricardoV94 ricardoV94 Feb 5, 2023


Logarithm of zero is well defined (at least in PyMC land) and leads to -inf logp, which is exactly what you want to represent an impossible event on the log scale. We use -inf explicitly in most of the logp definitions of distributions with constrained support.

About the soft constraint, whichever way you think is better. I have no preference.
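Indeed, this evaluates without raising (NumPy may emit a divide-by-zero RuntimeWarning):

import pymc as pm

print(pm.math.log(0).eval())  # -inf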

Member Author


When I try to sample using potential = pm.Potential('x_constraint',-pm.math.log(pm.math.switch(constraint, 1, 0.0))) I face this error:

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     11 potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 0)))
     12
---> 13 trace = pm.sample(1000, tune=1000)

12 frames
/usr/local/lib/python3.8/dist-packages/aesara/tensor/elemwise.py in transform(r)
    600     def transform(r):
    601         # From a graph of ScalarOps, make a graph of Broadcast ops.
--> 602         if isinstance(r.type, (NullType, DisconnectedType)):
    603             return r
    604         if r in scalar_inputs:

AttributeError: 'float' object has no attribute 'type'

On defining the Potential function as potential = pm.Potential('x_constraint',-pm.math.log(pm.math.switch(constraint, 1.0, 0.0))) I get this:

Interrupted

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     11 potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1.0, 0.0)))
     12
---> 13 trace = pm.sample(1000, tune=1000)

8 frames
/usr/local/lib/python3.8/dist-packages/pymc/step_methods/hmc/base_hmc.py in astep(self, q0)
    166         message_energy = (
    167             "Bad initial energy, check any log probabilities that "
--> 168             "are inf or -inf, nan or very small:\n{}".format(error_logp.to_string())
    169         )
    170         warning = SamplerWarning(

AttributeError: 'numpy.ndarray' object has no attribute 'to_string'

Member


For it to have a valid initial energy with the hard constraint you have to set the prior of x to not be centered at zero like x = pm.Normal("x", mu=0.1, sigma=1) or set an initial value like x = pm.Normal("x", mu=0, sigma=1, initval=0.1)
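A sketch of the second option (model is illustrative), checking the starting point:

import pymc as pm

with pm.Model() as m:
    # initval keeps the starting point away from the x < 0 region,
    # where the hard constraint sends the logp to -inf
    x = pm.Normal('x', mu=0, sigma=1, initval=0.1)
    pm.Potential('x_constraint', pm.math.log(pm.math.switch(x >= 0, 1.0, 0.0)))

print(m.initial_point())  # {'x': array(0.1)}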

Member Author


The negative sign was the problem. I think this is because if we keep the negative sign, the Potential function returns inf instead of -inf. From my understanding, the potential term should be somewhere close to 0 when the constraint is satisfied and very large when the constraint is not satisfied. What I thought was that both inf and -inf could work as this large value. But when the Potential value becomes inf, for some reason I face the error "'numpy.ndarray' object has no attribute 'to_string'". So, is it wrong if the Potential value becomes inf?

Member

@ricardoV94 ricardoV94 Feb 7, 2023


The potential value is just added to the model logp, so it should be -inf (or very negative) when a constraint is violated, so that those draws are rejected. 0 won't have any effect, and positive values will make the proposals more likely to be accepted.
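This can be checked directly by compiling the model logp (a sketch of the hard-constraint model from above):

import pymc as pm

with pm.Model() as m:
    x = pm.Normal('x', mu=0.1, sigma=1)
    pm.Potential('x_constraint', pm.math.log(pm.math.switch(x >= 0, 1.0, 0.0)))

logp_fn = m.compile_logp()
print(logp_fn({'x': 0.5}))   # finite: the draw can be accepted
print(logp_fn({'x': -0.5}))  # -inf: the draw is always rejected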

Member Author


Alright👍
I shall include this in the docstring too.
Thanks for the help!

@ricardoV94
Member

ricardoV94 commented Feb 5, 2023

Thinking about it again, I like your original idea of adding a joint constraint. Perhaps as a third example?

Something like ``pm.Potential("soft_sum_constraint", -((x + y)**2))``

pymc/model.py Outdated
x = pm.Normal('x', mu=0, sigma=1)
y = pm.Normal('y', mu=x, sigma=1, observed=data)
constraint = x >= 0
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
Member


I don't think it should be negated

Suggested change
potential = pm.Potential('x_constraint', -pm.math.log(pm.math.switch(constraint, 1, 1e-30)))
potential = pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1, 1e-30)))

pymc/model.py Outdated
with pm.Model():
    # p(max_items) = 1 / max_items
    max_items = pm.Uniform("max_items", lower=1, upper=100)
    pm.Potential('power_prior', -pm.math.log(max_items))
Member


They are equivalent, but maybe this one is more readable for users

Suggested change
pm.Potential('power_prior', -pm.math.log(max_items))
pm.Potential('power_prior', pm.math.log(1/max_items))
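A one-line check of the equivalence:

import numpy as np

np.isclose(-np.log(42.0), np.log(1 / 42.0))  # True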

michaelosthege and others added 14 commits February 7, 2023 13:49
* Add (mu, nu) parametrization

* add test beta logp

* Add relationship between alpha, beta and nu

* change test structure

* remove white space accidently added in docstring

* remove sample size from docstring

---------

Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
The mcmc module relies on the `"tune"` stat to figure out the number of
tune/draw iterations post sampling.
These changes remove reliance on any weird squeeze-cat magic.

Co-authored-by: Virgile Andreani <armavica@ulminfo.fr>
* _joint_logp to pymc/logprob/joint_logprob.py
* _get_scaling, _check_no_rvs, logp to logprob/joint_logprob.py
* logcdf to logprob/abstract.py
* ignore_logprob to logprob/utils.py
twiecki and others added 10 commits February 15, 2023 21:25
Co-authored-by: Ricardo Vieira <28983449+ricardov94@users.noreply.github.com>
Reorder arguments more logically [citation needed]
* adding function for mutable coords

* Added coords_mutable

* add coords_mutable

* Make `coords_mutable` an explicit kwarg

* Remove reference to undefined variable

* Adding an if block

---------

Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
* Build docs in simplified environment

* Fix docs build

* Remove private API section from docs
pymc/model.py Outdated
Comment on lines 2093 to 2098
.. warning::

    Potential functions only influence logp-based sampling, like the one used by ``pm.sample``. Potentials modify the log-probability of the model by adding a contribution to the logp, which is used by sampling algorithms that rely on the information about the observed data to generate posterior samples.
    Potentials are not applicable in the context of forward sampling because they don't affect the prior distribution itself, only the computation of the logp.
    Forward sampling algorithms generate sample points from the prior distribution of the model, without taking the likelihood function into account. In other words, they do not use the information about the observed data.
    Hence, Potentials do not affect forward sampling, which is used by ``sample_prior_predictive`` and ``sample_posterior_predictive``. A warning saying "The effect of Potentials on other parameters is ignored during prior predictive sampling" is always raised to alert users of this.

pymc/model.py Outdated
Comment on lines 2040 to 2091
Have a look at the following examples:

In this example, we define a constraint on x to be greater than or equal to 0 via the ``pm.Potential`` function.
We pass ``pm.math.log(pm.math.switch(constraint, 1, 0))`` as second argument, which will return an expression depending on whether the constraint is met, and which will be added to the likelihood of the model.
The probability density that this model produces agrees strongly with the constraint that x should be greater than or equal to 0. All the cases that do not satisfy the constraint are strictly not considered.

.. code:: python

    with pm.Model() as model:
        x = pm.Normal('x', mu=0, sigma=1)
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        constraint = x >= 0
        potential = pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1, 0.0)))

However, if we use ``pm.math.log(pm.math.switch(constraint, 1, 0.5))``, the potential again penalizes the likelihood when the constraint is not met, but with some deviations allowed. Here, the Potential function is used to pass a soft constraint.
A soft constraint is a constraint that is only partially enforced. The effect of this is that the posterior probability for the parameters decreases as they move away from the constraint, but does not become exactly zero.
This allows the sampler to generate values that violate the constraint, but with lower probability.

.. code:: python

    with pm.Model() as model:
        x = pm.Normal('x', mu=0.1, sigma=1)
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        constraint = x >= 0
        potential = pm.Potential('x_constraint', pm.math.log(pm.math.switch(constraint, 1, 0.5)))

In this example, Potential is used to obtain an arbitrary prior. This prior distribution encodes the prior knowledge that the values of max_items are likely to be small rather than large.
The prior probability of max_items is defined using a Potential object with the log of the inverse of max_items as its value.
This means that larger values of max_items have a lower prior probability density, while smaller values of max_items have a higher prior probability density.
When the model is sampled, the posterior distribution of ``max_items`` given the observed value of ``n_items`` will be influenced by the power-law prior defined in the Potential object.

.. code:: python

    with pm.Model():
        # p(max_items) = 1 / max_items
        max_items = pm.Uniform("max_items", lower=1, upper=100)
        pm.Potential('power_prior', pm.math.log(1/max_items))

        n_items = pm.Uniform('n_items', lower=1, upper=max_items, observed=60)

In the next example, the soft_sum_constraint potential encourages x and y to have a small sum, effectively adding a soft constraint on the relationship between the two variables.
This can be useful in cases where you want to ensure that the sum of multiple variables stays within a certain range, without enforcing an exact value.
In this case, the larger the deviation, the more negative the term ``-((x + y)**2)`` becomes, lowering the model logp. The sampler can still generate values with some deviation, but with lower probability, hence this is a soft constraint.

.. code:: python

    with pm.Model() as model:
        x = pm.Normal('x', mu=0.1, sigma=1)
        y = pm.Normal('y', mu=x, sigma=1, observed=data)
        soft_sum_constraint = pm.Potential("soft_sum_constraint", -((x + y)**2))

The potential value is incorporated into the model log-probability, so it should be -inf (or very negative) when a constraint is violated, so that those draws are rejected. 0 won't have any effect, and positive values will make the proposals more likely to be accepted.
Member


All this should go at the bottom of the docstring and have a section header too. Ref: https://numpydoc.readthedocs.io/en/latest/format.html#examples

pymc/model.py Outdated
Comment on lines 2104 to 2105
var: PyTensor expression to be added to the model joint logp
model: The model object to which the potential function is added. If ``None`` is provided, the current model is used.
Member


Suggested change
var: PyTensor expression to be added to the model joint logp
model: The model object to which the potential function is added. If ``None`` is provided, the current model is used.
var : PyTensor expression
    Expression to be added to the model joint logp
model : Model, optional
    The model object to which the potential function is added. If ``None`` is provided, the current model is used.

a space is needed both before and after the colon (existing docstrings are generally not a good example to follow), and the first line should contain only ``name : type``; the description comes on the 2nd+ line, indented.
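Putting that together, the Parameters section would look roughly like this (summary line and signature shown only for context):

def Potential(name, var, model=None):
    """Add an arbitrary term to the model log-probability.

    Parameters
    ----------
    var : PyTensor expression
        Expression to be added to the model joint logp.
    model : Model, optional
        The model object to which the potential function is added.
        If ``None`` is provided, the current model is used.
    """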

michaelraczycki and others added 11 commits February 22, 2023 11:14
* added n_zerosum_axes and added backwards compatibility for previous parameter name

* fixed typo that caused the zerosum_axes param not to be saved correctly in the new param

* adapted tests to use n_zerosum_axes

---------

Co-authored-by: Michal Raczycki <michalraczycki@macbook-pro-michal.home>

* added dim inference from xarray, deprecation warning and unittest for the new feature

* fixed typo in warning

* fixed accidental quotation around dim

* fixed failing assertions

* found and fixed cause of the failing test

* changed the coords assertion according to suggested form

* fixing mypy type missmatch

* working on getting the test to work

* removed typecasting to string on dim_name, was causing the mypy to fail

* took care locally of mypy errors

* Typo/formatting fixes

---------

Co-authored-by: Michal Raczycki <michalraczycki@macbook-pro-michal.home>
Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
* Make isort happy with imports order

* Avoid shadowing the logprob fn with a local name

* Deal with unused error names

* Remove unreachable return

* Remove unread variable

* Fix type checking for variational/approximations

* A few typing fixes
* added value check for step samplers

* changing error message to be more informative

* added test checking if variable not being in model.value_vars will trigger Value error

* implemented changes from PR review

* Fix rebase

---------

Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
* Extract return part of `pm.sample`

* Speed up `test_mcmc`

* Group tests related to `pm.sample` return parameters

* Consolidate tests of return options

Removes a regression test added in pymc-devs#3821 because it took 14 seconds.

@Dhruvanshu-Joshi
Member Author

Hey @OriolAbril, I have implemented all the suggested changes. Is there something else I can do to improve this PR, or is this good to merge?

Member

@OriolAbril OriolAbril left a comment


Doc preview: https://pymcio--6489.org.readthedocs.build/projects/docs/en/6489/api/generated/pymc.Potential.html looks good, I haven't reviewed the code nor looked at the potential merge/rebase issues

@ricardoV94
Member

@Dhruvanshu-Joshi the changes look good but you got some conflicts (as you can see by the 162 changed files). Usually when this happens the easiest solution is to open a new PR from the up-to-date main branch + your final changes

@Dhruvanshu-Joshi
Member Author

Hey @OriolAbril @ricardoV94, I have opened a new PR #6559 for the same issue and have referenced this PR and the issue there. Hope it solves all the conflicts.

@OriolAbril
Member

closing as it was superseded by #6559

@OriolAbril OriolAbril closed this Mar 7, 2023