Use more readable `ignore_logprob` #6596
Conversation
…eping their interdependencies intact
Codecov Report

```
@@            Coverage Diff             @@
##             main    #6596      +/-   ##
==========================================
- Coverage   92.03%   91.54%   -0.50%
==========================================
  Files          92       92
  Lines       15539    15541       +2
==========================================
- Hits        14302    14227      -75
- Misses      1237     1314      +77
```
Is there a reason not to inline `assign_custom_measurable_outputs` inside `ignore_logprob`, since I believe it is now only used there?
That function is a bit more powerful: it can be used to toggle some outputs on and off as measurable. Although it's not being used anywhere else, I would keep it separate until it's proven it will never be needed.
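To make the distinction concrete, here is a minimal toy sketch of the relationship being discussed: a special-purpose `ignore_logprob` delegating to a more general helper that toggles which outputs count as measurable. The names mirror the PR, but the bodies are purely illustrative; the real PyMC helpers operate on PyTensor graph nodes, not plain dicts.

```python
# Illustrative sketch only -- NOT PyMC's actual implementation.

def assign_custom_measurable_outputs(node, measurable_outputs_fn):
    """General helper: attach a rule deciding which of a node's
    outputs are treated as measurable."""
    new_node = dict(node)  # shallow copy; the original node is untouched
    new_node["measurable_outputs_fn"] = measurable_outputs_fn
    return new_node


def ignore_logprob(node):
    """Special case of the helper above: mark *no* outputs as
    measurable, i.e. ignore this node's logprob contribution."""
    return assign_custom_measurable_outputs(node, lambda outputs: [])


rv_node = {"op": "normal", "outputs": ["y"]}
ignored = ignore_logprob(rv_node)
print(ignored["measurable_outputs_fn"](ignored["outputs"]))  # prints []
```

The general helper stays useful on its own because a caller can pass any `measurable_outputs_fn`, not just the "nothing is measurable" one.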
This PR has already been merged, but I still had some questions here and there 😅
```diff
@@ -294,3 +305,32 @@ def reconsider_logprob(rv: TensorVariable) -> TensorVariable:
     new_node.op = copy(new_node.op)
     new_node.op.__class__ = original_op_type
     return new_node.outputs[node.outputs.index(rv)]
+
+
+def ignore_logprob_multiple_vars(
```
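For intuition about why ignoring multiple *nested* variables must be done jointly, here is a toy sketch in plain Python. It is not the actual implementation (which rewrites PyTensor graphs); the function name and dict-based "variables" are hypothetical. The key point is that each copy must point at the copies of its parents, not the originals, which is what "keeping their interdependencies intact" means.

```python
# Illustrative sketch only -- NOT PyMC's actual implementation.

def ignore_logprob_multiple_vars_sketch(variables):
    """Return ignored copies of `variables`, rewiring each copy's
    parents to the other copies so interdependencies stay intact.
    Assumes `variables` is in topological order (parents first);
    each variable is a dict with 'name' and 'parents' keys."""
    replacements = {}
    new_vars = []
    for var in variables:
        new_var = {
            "name": var["name"],
            "ignored": True,
            # Point at the new parent copy when the parent was also replaced;
            # otherwise keep the original parent.
            "parents": [replacements.get(id(p), p) for p in var["parents"]],
        }
        replacements[id(var)] = new_var
        new_vars.append(new_var)
    return new_vars


a = {"name": "a", "parents": []}
b = {"name": "b", "parents": [a]}  # b depends on a
new_a, new_b = ignore_logprob_multiple_vars_sketch([a, b])
assert new_b["parents"][0] is new_a  # dependency rewired to the new copy
```

Ignoring `a` and `b` one at a time would leave the copy of `b` still pointing at the original, un-ignored `a`, which is exactly the inconsistency the joint utility avoids.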
Would it be good to add a test for `ignore_logprob_multiple_vars`?
The functionality is a core requirement of `stack_logprob`, and it was covered by this test when it was first introduced:

pymc/tests/logprob/test_tensor.py, lines 134 to 171 in c709abf
```python
@pytest.mark.parametrize("reverse", (False, True))
def test_measurable_make_vector_interdependent(reverse):
    """Test that we can obtain a proper graph when stacked RVs depend on each other"""
    x = pt.random.normal(name="x")
    y_rvs = []
    prev_rv = x
    for i in range(3):
        next_rv = pt.random.normal(prev_rv + 1, name=f"y{i}")
        y_rvs.append(next_rv)
        prev_rv = next_rv

    if reverse:
        y_rvs = y_rvs[::-1]

    ys = pt.stack(y_rvs)
    ys.name = "ys"

    x_vv = x.clone()
    ys_vv = ys.clone()
    logp = joint_logprob({x: x_vv, ys: ys_vv})
    assert_no_rvs(logp)

    y0_vv = y_rvs[0].clone()
    y1_vv = y_rvs[1].clone()
    y2_vv = y_rvs[2].clone()
    ref_logp = joint_logprob({x: x_vv, y_rvs[0]: y0_vv, y_rvs[1]: y1_vv, y_rvs[2]: y2_vv})

    rng = np.random.default_rng()
    x_vv_test = rng.normal()
    ys_vv_test = rng.normal(size=3)
    np.testing.assert_allclose(
        logp.eval({x_vv: x_vv_test, ys_vv: ys_vv_test}),
        ref_logp.eval(
            {x_vv: x_vv_test, y0_vv: ys_vv_test[0], y1_vv: ys_vv_test[1], y2_vv: ys_vv_test[2]}
        ),
    )
```
This PR just refactored it out, because it proved useful elsewhere (#6529). Having said that, we can definitely add a targeted test.
This PR replaces uses of `assign_custom_measurable_outputs` with the more readable `ignore_logprob`. It also refactors out a utility to ignore the logprob of multiple (potentially nested) variables consistently.

This PR is a spinoff of #6529.