Pointpriors #663
Conversation
Hey @bgctw !
Can you clarify why you want the .~ statements to be treated as a single log-prob in your case? You mention that your motivation is tempering; it's a bit unclear to me why varwise_logpriors are needed for this. And why is the Chain needed in this case? When I think of tempering in our context, I'm imagining altering the likelihood / prior weightings during sampling, not as a post-inference step.
Maybe even a short bit of pseudo-code outlining what you want to do with this could help!
From your initial motivation in #662, I feel like we can probably find alternative approaches that might be a bit simpler :)
My goal is to modify the log-density during sampling; I imagine putting in something similar to … Hence, I want to query the log-densities of the prior components as seen by a sampler that generated the samples in a Chains object. The single number provided by the total log-prior is not sufficient; I need the pointwise resolution, i.e. resolving also the individual components of the log-density of the prior.
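As a rough illustration of the use case described above (a sketch under assumptions, not code from this PR or the DynamicPPL API): tempering typically scales the prior and/or likelihood contribution by an inverse temperature, which requires access to the separate log-density components rather than a single total.

```julia
# Hypothetical sketch, not DynamicPPL API: temper a log-density by scaling
# the per-variable prior components with an inverse temperature β.
# `prior_components` is assumed to map variable names to their individual
# log-prior values, as requested in the comment above.
function tempered_logdensity(prior_components::Dict{String,Float64},
                             loglikelihood::Float64, β::Float64)
    # scale only the prior part; the likelihood enters unchanged
    return β * sum(values(prior_components)) + loglikelihood
end
```

With per-component priors available, one could also weight individual components differently rather than applying a uniform β.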
use loop for prior in example. Unfortunately it cannot be made a jldoctest, because it relies on Turing for sampling
Ah, gotcha; this was the aspect I was missing 👍
Makes sense 👍 Taking this into account, I'm wondering if maybe it would be better to just generalize the existing implementation in DynamicPPL.jl/src/loglikelihoods.jl (lines 2 to 5 at 24a7380).
We can just add a "switch" to it (or maybe just inspect the leaf context) to determine which logprobs we should keep around. AFAIK this should just require implementing a few method overloads.
Then we can just add alternatives to the following user-facing method in DynamicPPL.jl/src/loglikelihoods.jl (lines 230 to 257 at 24a7380).
So all in all, basically what you've already done, but just as part of the existing pointwise-likelihood machinery. Thoughts?
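The "switch" idea could look roughly like the following (a hypothetical sketch, not the actual DynamicPPL implementation; none of these type or function names exist in the package under these names):

```julia
# Hypothetical sketch of selecting which log-probs to accumulate based on
# the active leaf context, via dispatch on a small "target" type.
abstract type LogDensityTarget end
struct LikelihoodOnly <: LogDensityTarget end
struct PriorOnly      <: LogDensityTarget end
struct JointDensity   <: LogDensityTarget end

# `is_observation` distinguishes observe statements (likelihood)
# from assume statements (prior).
keep_logprob(::LikelihoodOnly, is_observation::Bool) = is_observation
keep_logprob(::PriorOnly,      is_observation::Bool) = !is_observation
keep_logprob(::JointDensity,   is_observation::Bool) = true
```

The same accumulation code could then serve pointwise likelihoods, pointwise priors, and pointwise joint densities, differing only in which statements it records.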
Trying to unify those two is a good idea. In fact, I originally started exploring/modifying based on that code. However, I did not get far with this.
I will attempt the implementation that you suggested, assuming that components of the prior are not resolved to the same detail as the components of the likelihood.
I pushed a new commit that integrates the two implementations. The hardest part was to create a single implementation serving both cases. Another issue is that I could not yet recreate the julia-repl block in the documentation of the function, because current Turing, which is required for sampling in the docstring, is not compatible with current DynamicPPL.
Lovely @bgctw ! I'll have a proper look at it a bit later today :)
In order for the user to select relevant information and to save processing time, it could be helpful to have two keyword arguments with defaults. Would these be reasonable?
by forwarding dot_tilde_assume to tilde_assume
I found a way to record single log-density prior components in dot-tilde statements: by forwarding dot_tilde_assume to tilde_assume.
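Conceptually, the forwarding amounts to treating each element of a dot-tilde statement as its own tilde statement. The following is a simplified sketch under the assumption of i.i.d. elements drawn from a univariate distribution; the real dot_tilde_assume/tilde_assume signatures additionally carry contexts and a VarInfo:

```julia
using Distributions  # for logpdf

# Simplified sketch: decompose `x .~ dist` into per-element contributions,
# so each element gets its own log-prior entry instead of one summed value.
function pointwise_dot_prior(dist::UnivariateDistribution, xs::AbstractVector)
    # one log-density per element, keyed like "x[1]", "x[2]", ...
    return Dict("x[$i]" => logpdf(dist, xs[i]) for i in eachindex(xs))
end
```

Summing the returned values recovers the single log-prob that the undecomposed dot-tilde statement would have contributed.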
Forwarding to …
Sorry, I was working on some changes in your branch and wanted to make a PR to yours, but doesn't seem like that works due to you being on a fork o.O (or maybe I'm just being stupid).
So instead I made a new PR over at #669. You can see the diff from yours to mine here: https://github.com/TuringLang/DynamicPPL.jl/pull/669/files/5842656154a5b2f9a0377c45a4d4438933971a11..8bd2085098208fc58d1e33bbe48ec56e7efcd691
EDIT: Did this because it was a bit easier to demonstrate what I had in mind rather than explaining it through a bunch of comments
I see. Your PR is based on an older version of this PR. What is the way forward now? Should I try to merge your changes into this PR, or should I try to implement my subsequent changes on top of your PR?
and avoid recording likelihoods when invoked with leaf-Likelihood context
…gdensities mostly taken from TuringLang#669
bgctw first forwarded dot_tilde_assume to get a correct vi and then recomputed it for recording component prior densities. Replaced this by the hack of torfjelde that completely drops vi and recombines the value, so that assume is called only once for each VarName.
pointwise_prior_logdensities in api.md docs
I transferred the developments in #669 to this PR, including the solution of dropping the updated vi.
Regarding this, I think it's worth preserving the …
Co-authored-by: Tor Erlend Fjelde <tor.github@gmail.com>
… on already used model Co-authored-by: Tor Erlend Fjelde <tor.github@gmail.com>
to work with literal models
The suggestions from code review introduced some errors in the tests, which I tried to fix. However, I did not succeed with the "pointwise_logdensities chain" testset. Could you please have another look at whether this is a problem of the test setup or of the tested functionality? Your test is more strict, because it compares to …
Ah yeah, it's failing because …
Fixed the test @bgctw :)
Codecov Report — Attention: patch coverage is …

```
@@            Coverage Diff             @@
##           master     #663      +/-   ##
==========================================
+ Coverage   75.93%   77.66%   +1.73%
==========================================
  Files          29       29
  Lines        3519     3587      +68
==========================================
+ Hits         2672     2786     +114
+ Misses        847      801      -46
```
Thanks @torfjelde for patiently guiding me through this process. |
Another maybe: I find it more convenient to work with the results of the pointwise functions when converted back into a Chains object:

```julia
function as_chains(lds_pointwise)
    Chains(stack(values(lds_pointwise); dims=2), collect(keys(lds_pointwise)))
end

chn = as_chains(logjoints_pointwise)  # from @testset "pointwise_logdensities chain"
names(chn)
get(chn, :x)[1] == logjoints_pointwise["x"]
```

One could even think of letting the pointwise functions return a Chains object directly. Since this would break the current interface of pointwise_logdensities …
dependent on Turing.jl
I just pushed a final change to the docstring of …
Of course! Glad to hear you found it useful :)
Hmm, I'm a bit uncertain about this. I do see your reasoning that it might be beneficial, but I think, at least at the moment, I'm reluctant to make this part of DynamicPPL 😕 Generally, we adopt features in DPPL once we feel there's sufficient need for them; atm, I don't think most people using the pointwise functions need this. But how about you convert that comment into an issue, so that a) we can keep track of the desired feature and see if there are other people who share the interest in it, and b) the current impl you are using can be discovered more easily by others? :)
I will do that after it's available on master.
Added it to the merge queue; thank you @bgctw ! |
Tackles #662: querying the log-density of components of the prior.
The implementation does not decompose the log-density of dot-tilde expressions, because a possible solution (first commit, but removed in the 3rd commit again) would need to decompose dot_assume, which is not under context control. However, I do need to pass computation to child-contexts, because I want to inspect log-density transformations by child-contexts. Therefore, I called it varwise_logpriors rather than pointwise_logpriors.
In addition, I decided on a different handling of a Chains of samples compared to pointwise_likelihoods, because I did not fully comprehend its different push!! methods, the different initializers for the collecting OrderedDict, and which applies under which conditions. Rather, I tried separating the concerns of querying densities for a single sample and applying that to a Chains object. I hope that the mutation of a pre-allocated array is OK here.