composing lmh and smc #17
Disclaimer: this may be just me not seeing a solution.
I'm currently trying to build an inference scheme combining LMH within SMC. I believe this is similar to what's called RM-SMC in this paper, cited in the docs.
Roughly it goes like this: each SMC particle carries the state of an LMH chain; at each SMC step the particles are weighted under the current, iteration-dependent likelihood and resampled; LMH sampling then continues from each surviving particle's state under the updated likelihood (a sketch follows below).
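To fix ideas, here is a minimal, library-agnostic sketch of that resample-move loop in plain OCaml. Nothing in it is dagger's API: the particle record, `log_likelihood`, `mcmc_step` and `resample` are all hypothetical stand-ins, with a random-walk MH kernel standing in for LMH.

```ocaml
(* Library-agnostic resample-move SMC sketch; every name below is a
   hypothetical stand-in, none of it is dagger's API. *)

type particle = { state : float; log_w : float }

(* Iteration-dependent (tempered) log-likelihood: beta grows with the
   SMC iteration, progressively sharpening the target. *)
let log_likelihood ~beta x = -. beta *. ((x -. 1.0) ** 2.0)

(* One random-walk MH move targeting the current tempered density;
   this plays the role of the LMH "move" step. *)
let mcmc_step ~beta x =
  let x' = x +. Random.float 1.0 -. 0.5 in
  let log_ratio = log_likelihood ~beta x' -. log_likelihood ~beta x in
  if log (Random.float 1.0) < log_ratio then x' else x

(* Multinomial resampling according to the normalized weights. *)
let resample ps =
  let total = List.fold_left (fun acc p -> acc +. exp p.log_w) 0.0 ps in
  let pick () =
    let u = Random.float total in
    let rec go acc = function
      | [ p ] -> p
      | p :: rest ->
          let acc = acc +. exp p.log_w in
          if u <= acc then p else go acc rest
      | [] -> assert false
    in
    go 0.0 ps
  in
  List.map (fun _ -> { (pick ()) with log_w = 0.0 }) ps

(* The loop: reweight under the new beta, resample, then rejuvenate
   each particle with a move targeting that same beta. *)
let rm_smc ~iterations ~n_particles =
  let schedule k = float_of_int k /. float_of_int iterations in
  let ps =
    ref (List.init n_particles (fun _ ->
             { state = Random.float 2.0 -. 1.0; log_w = 0.0 }))
  in
  for k = 1 to iterations do
    let beta = schedule k and beta_prev = schedule (k - 1) in
    (* incremental weight = tempered likelihood ratio *)
    ps :=
      List.map
        (fun p ->
          { p with
            log_w =
              p.log_w +. log_likelihood ~beta p.state
              -. log_likelihood ~beta:beta_prev p.state })
        !ps;
    ps := resample !ps;
    ps :=
      List.map (fun p -> { p with state = mcmc_step ~beta p.state }) !ps
  done;
  !ps
```

The open question below is how to make dagger's LMH play the role of `mcmc_step` in this loop.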
I'm facing the following difficulty: I don't know how to carry the MCMC state of a particle through the resampling step and then continue LMH sampling from that same state -- but with scoring functions that have changed after the SMC resampling. What I've tried is to pass an `Lmh_inference.t` computation as particle state across the resampling. But as far as I can see, when I define an `Lmh_inference.t` computation, the calls to `Lmh_inference.score` within it are already fixed, and I have no way to modify them after a resampling. This would give no way to update the likelihood on the fly. Is that correct?
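Concretely, the shape of model I mean is the following. To keep the snippet self-contained I write it against a stand-in signature: only `t` and `score` are actual `Lmh_inference` names; `return`, `( let* )` and `sample_gaussian` are assumptions about the rest of the API.

```ocaml
(* Stand-in for the slice of Lmh_inference used below: [t] and [score]
   are real names from this thread; the rest is assumed. *)
module type LMH = sig
  type 'a t
  val return : 'a -> 'a t
  val ( let* ) : 'a t -> ('a -> 'b t) -> 'b t
  val sample_gaussian : mean:float -> std:float -> float t
  val score : float -> unit t
end

module Fixed_model (Lmh : LMH) = struct
  open Lmh

  (* The likelihood term is a closure baked in when the computation is
     defined: once [model] exists, there is no handle left to rebind it
     from outside, e.g. after an SMC resampling. *)
  let model : float t =
    let* x = sample_gaussian ~mean:0.0 ~std:1.0 in
    let* () = score (Float.exp (-. ((x -. 1.0) ** 2.0))) in
    return x
end
```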
As I'm writing this, I could imagine parametrizing the log-likelihood passed to `score` using mutable state, e.g. references, which I would have to manage manually between SMC iterations. That seems clunky and potentially thread-unsafe.
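The workaround I have in mind would look roughly like this, with the same stand-in signature as above (repeated so the snippet stands alone). It relies on the computation re-running its continuations on each execution, as CPS/trace-MCMC implementations do, so that the reference is dereferenced at score time rather than at definition time.

```ocaml
(* Same hypothetical stand-in signature as in the previous sketch. *)
module type LMH = sig
  type 'a t
  val return : 'a -> 'a t
  val ( let* ) : 'a t -> ('a -> 'b t) -> 'b t
  val sample_gaussian : mean:float -> std:float -> float t
  val score : float -> unit t
end

module Tempered_model (Lmh : LMH) = struct
  open Lmh

  (* Inverse temperature, to be bumped manually between SMC iterations.
     Shared mutable state: clunky, and unsafe if particles are moved
     from several threads. *)
  let beta = ref 0.1

  (* [!beta] is read each time the computation is re-executed, not when
     [model] is defined, so updating the ref between SMC iterations
     re-tempers the likelihood on the fly. *)
  let model : float t =
    let* x = sample_gaussian ~mean:0.0 ~std:1.0 in
    let* () = score (Float.exp (-. !beta *. ((x -. 1.0) ** 2.0))) in
    return x

  (* Between SMC iterations the driver would do:  beta := next_beta  *)
end
```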
In the paper mentioned above there appears to be an elegant solution to this kind of composability -- does dagger also enjoy this? Is there a way to do it that I don't see? Do I have to switch the nesting around, i.e. call `Smc_inference.yield` inside the `Lmh` model?
Comments

I tried around a bit but have not found a way so far. I keep getting a conflicting requirement of returning both a …

I have been trying an approach to achieve the combination of LMH and SMC, but ultimately it failed; it may be interesting for reference. I'll sketch the setup in what follows: the idea was to set up an SMC loop where the particle state is an `Lmh.t`, which is propagated between SMC iterations. I then tried to add a final score to the LMH model that depends on the SMC iteration (this was possible in my use case, as the scoring only depended on the final output of the LMH model); in this way I tried to implement a stepwise sharpening of the likelihood over SMC runs. However, this did not turn out the way I expected. I expected that the model …