
Updating docstring according to suggestions
Dhruvanshu-Joshi committed Mar 5, 2023
1 parent 9ed1a06 commit 840436a
Showing 1 changed file with 46 additions and 36 deletions.
82 changes: 46 additions & 36 deletions pymc/model.py
@@ -2048,78 +2048,88 @@ def Potential(name, var, model=None):
The Potential function is used to add arbitrary factors (such as constraints or other likelihood components) to adjust the probability density of the model.
Warnings
--------
Potential functions only influence logp-based sampling, like the one used by ``pm.sample``.
Potentials modify the log-probability of the model by adding a contribution to the logp, which is used by sampling algorithms that rely on information about the observed data to generate posterior samples.
Potentials are not applicable in the context of forward sampling because they don't affect the prior distribution itself, only the computation of the logp.
Forward sampling algorithms generate sample points from the prior distribution of the model, without taking the likelihood function into account.
In other words, they do not use information about the observed data.
Hence, Potentials do not affect forward sampling, which is used by ``sample_prior_predictive`` and ``sample_posterior_predictive``.
A warning saying "The effect of Potentials on other parameters is ignored during prior predictive sampling" is always emitted to alert the user of this.
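
A minimal sketch of this behaviour (the model and variable names below are illustrative, not part of the API):

.. code:: python

    with pm.Model() as m:
        x = pm.Normal("x", mu=0, sigma=1)
        # The Potential rules out x < 0 for logp-based sampling only
        pm.Potential("x_nonneg", pm.math.log(pm.math.switch(x >= 0, 1, 0.0)))

        prior = pm.sample_prior_predictive()  # warns that Potentials are ignored; prior draws of x can be negative
        trace = pm.sample()                   # posterior draws of x respect x >= 0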
Parameters
----------
name : str
    Name of the potential variable to be registered in the model.
var : tensor-like
    Expression to be added to the model joint logp.
model : Model, optional
    The model object to which the potential function is added.
    If ``None`` is provided, the current model is used.
Returns
-------
var : tensor-like
    The registered, named model variable.
Examples
--------
Have a look at the following example:

In this example, we define a constraint on ``x`` to be greater or equal to 0 via the ``pm.Potential`` function.
We pass ``pm.math.log(pm.math.switch(constraint, 1, 0))`` as the second argument, an expression whose value depends on whether the constraint is met and which is added to the model likelihood.
The probability density that this model produces agrees strongly with the constraint that ``x`` should be greater than or equal to 0; cases that do not satisfy the constraint are not considered at all.
.. code:: python

    with pm.Model() as model:
        x = pm.Normal("x", mu=0, sigma=1)
        y = pm.Normal("y", mu=x, sigma=1, observed=data)
        constraint = x >= 0
        potential = pm.Potential("x_constraint", pm.math.log(pm.math.switch(constraint, 1, 0.0)))
However, if we use ``pm.math.log(pm.math.switch(constraint, 1, 0.5))``, the potential again penalizes the likelihood when the constraint is not met, but allows some deviation.
Here, the Potential function is used to pass a soft constraint.
A soft constraint is a constraint that is only partially enforced.
The effect of this is that the posterior probability for the parameters decreases as they move away from the constraint, but does not become exactly zero.
This allows the sampler to generate values that violate the constraint, but with lower probability.
.. code:: python

    with pm.Model() as model:
        x = pm.Normal("x", mu=0.1, sigma=1)
        y = pm.Normal("y", mu=x, sigma=1, observed=data)
        constraint = x >= 0
        potential = pm.Potential("x_constraint", pm.math.log(pm.math.switch(constraint, 1, 0.5)))
In this example, Potential is used to obtain an arbitrary prior.
This prior encodes the knowledge that values of ``max_items`` are likely to be small rather than large.
The prior probability of ``max_items`` is defined using a Potential object with the log of the inverse of ``max_items`` as its value.
This means that larger values of ``max_items`` have a lower prior probability density, while smaller values of ``max_items`` have a higher prior probability density.
When the model is sampled, the posterior distribution of ``max_items`` given the observed value of ``n_items`` will be influenced by the power-law prior defined in the Potential object.
.. code:: python

    with pm.Model():
        # p(max_items) = 1 / max_items
        max_items = pm.Uniform("max_items", lower=1, upper=100)
        pm.Potential("power_prior", pm.math.log(1 / max_items))

        n_items = pm.Uniform("n_items", lower=1, upper=max_items, observed=60)
In the next example, the ``soft_sum_constraint`` potential encourages ``x`` and ``y`` to have a small sum, effectively adding a soft constraint on the relationship between the two variables.
This can be useful in cases where you want to ensure that the sum of multiple variables stays within a certain range, without enforcing an exact value.
In this case, the larger the deviation, the more negative the value of ``-((x + y)**2)``, and the less likely the sampler is to accept such draws.
However, the sampler can still generate values with some deviation, just with lower probability, which is what makes this a soft constraint.
.. code:: python

    with pm.Model() as model:
        x = pm.Normal("x", mu=0.1, sigma=1)
        y = pm.Normal("y", mu=x, sigma=1, observed=data)
        soft_sum_constraint = pm.Potential("soft_sum_constraint", -((x + y)**2))
The potential value is incorporated into the model log-probability, so it should be ``-inf`` (or very negative) when a constraint is violated, ensuring that such draws are rejected.
A value of 0 has no effect, and positive values make the proposals more likely to be accepted.
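
For intuition, here is a rough sketch (assuming a PyMC version that provides ``Model.compile_logp``; variable names are illustrative) of how a hard constraint drives the log-probability to ``-inf``:

.. code:: python

    with pm.Model() as m:
        x = pm.Normal("x", mu=0, sigma=1)
        pm.Potential("x_constraint", pm.math.log(pm.math.switch(x >= 0, 1, 0.0)))

    logp_fn = m.compile_logp()
    logp_fn({"x": 1.0})   # finite: constraint satisfied
    logp_fn({"x": -1.0})  # -inf: constraint violated, so such draws are rejected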
"""
model = modelcontext(model)
