Changes to logprob computation between v4 and v5 #6394
-
Under the following setup:

```python
import numpy as np
import scipy.stats
import pymc as pm
import pytensor
import pytensor.tensor as at  # in v4 this was `aesara.tensor`

x_true = np.array([1, 2, 3, 4, 5])
y_obs = np.random.normal(2 * x_true)

nu = 5  # degrees of freedom; the original snippet omitted this required parameter

with pm.Model() as pmodel:
    x_hat = pm.Uniform("x_hat", 0, 1, shape=x_true.shape, transform=None)
    # The first version of the code sample was
    #   L = pm.StudentT("L", mu=x_hat, observed=y_obs)
    # but that was not actually what I ran:
    rv = pm.StudentT("L", nu=nu, mu=x_hat, observed=y_obs)
    L = pm.logprob.joint_logprob({rv: y_obs}, sum=True)  # this is what I actually had

assert isinstance(L, at.TensorVariable)
assert L.ndim == 0

# PyMC v4 returns the RV, but for .eval() we need the RV's value variable
vvar = pmodel.rvs_to_values[x_hat]

# compare the log-likelihood from PyMC with the one from SciPy
x_test = np.random.normal(x_true, scale=0.1)
actual = L.eval({vvar: x_test})
expected = scipy.stats.t.logpdf(x=x_test, df=nu, loc=y_obs, scale=1).sum()  # summed to match sum=True
np.testing.assert_almost_equal(actual, expected, 6)
```

But this raises an unused-input error. I also tried compiling a function:

```python
f = pytensor.function(inputs=[vvar], outputs=[L], on_unused_input="ignore")
actual = f(x_test)[0]
np.testing.assert_almost_equal(actual, expected, 6)
```
I'm asking this because it's currently blocking my PR to upgrade our …
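For context, one way to see why the compile step complains is to check whether `vvar` is among the true inputs of `L`'s graph at all (a diagnostic sketch using pytensor's `graph_inputs` helper):

```python
from pytensor.graph.basic import graph_inputs

# vvar never enters the graph of L above, so any function compiled
# with inputs=[vvar] is correct to flag it as unused
print(vvar in set(graph_inputs([L])))  # -> False
```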
-
L is a RV, not a logp expression. There's no place where the value variable enters the graph of the RV, hence the unused-input error is correct.
If you want the logp of L, call

```python
model.logp(vars=[L], sum=False)
```

(or `compile_logp` if you want a compiled function).
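A minimal sketch of both options, reusing `pmodel`, `rv`, `vvar`, and `x_test` from the question above (here `rv` plays the role of `L` in the reply; the point-dict calling convention of `compile_logp` is an assumption about the Model API):

```python
# Symbolic route: Model.logp substitutes value variables for the RVs,
# so vvar is a genuine input of the resulting graph and eval works
logp_graph = pmodel.logp(vars=[rv], sum=True)
actual = logp_graph.eval({vvar: x_test})

# Compiled route: Model.compile_logp returns a function of a point dict
# mapping value-variable names to values (assumed convention)
logp_fn = pmodel.compile_logp(vars=[rv], sum=True)
assert np.isclose(actual, logp_fn({"x_hat": x_test}))
```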