Description
I've realised that I had a misconception about how to compute the incremental marginal likelihoods at the end of each update in the presence of conditional resampling.
Chopin's book gives a formula for this,
but it's a bit clunky because it requires us to record whether resampling took place. Charles had a nice solution: he noted that the two cases coincide if we reset the weights to unity after resampling.
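For reference, here is a sketch of the two-case increment as I understand it (my notation, so treat it as a paraphrase rather than a quote from the book). Writing $w_t^n$ for the unnormalised weights,

$$
\ell_t^N \;=\;
\begin{cases}
\dfrac{\sum_{n=1}^N w_t^n}{\sum_{n=1}^N w_{t-1}^n} & \text{if no resampling took place at the end of step } t-1,\\[1.5ex]
\dfrac{1}{N}\sum_{n=1}^N w_t^n & \text{if resampling took place,}
\end{cases}
$$

with $L_t^N = L_{t-1}^N \,\ell_t^N$. If we reset the weights to unity after resampling then $\sum_n w_{t-1}^n = N$, so the first branch reduces to the second and a single expression covers both cases.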
Since this works, I've reverted to this approach in 731d206.
I'm still a bit uncomfortable with this, since really the weights after resampling should be 1/N. I'm not sure whether there is any use case where this distinction would actually matter, but it's worth thinking about. It's also worth noting that this requires an additional logsumexp, which it would be nice to avoid.
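To make the logsumexp point concrete, here is a minimal sketch of one step under the reset-to-unity convention (hypothetical function and variable names using numpy/scipy, not the repo's actual API):

```python
import numpy as np
from scipy.special import logsumexp

def smc_step(logw_prev, log_incr, log_Z, ess_frac=0.5):
    """One weight/evidence update with conditional resampling.

    logw_prev : (N,) log-weights from the previous step
                (all zeros, i.e. unit weights, right after a resampling step)
    log_incr  : (N,) log incremental weights log G_t(x_t^n)
    log_Z     : running estimate of log L_{t-1}
    """
    N = logw_prev.shape[0]
    logw = logw_prev + log_incr  # accumulate the incremental weights

    # Single-formula increment: log( sum_n w_t^n / sum_n w_{t-1}^n ).
    # The second logsumexp is the one it would be nice to avoid.
    log_Z = log_Z + logsumexp(logw) - logsumexp(logw_prev)

    # Conditional resampling based on the effective sample size.
    W = np.exp(logw - logsumexp(logw))
    ess = 1.0 / np.sum(W ** 2)
    if ess < ess_frac * N:
        idx = np.random.choice(N, size=N, p=W)  # multinomial resampling
        logw = np.zeros(N)                      # reset weights to unity
    else:
        idx = np.arange(N)

    return idx, logw, log_Z
```

Whenever the previous step resampled, the second logsumexp is trivially $\log N$, which is exactly the redundancy mentioned above.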