
Constructor for SBC_results object #76

Open · dmi3kno opened this issue Jan 18, 2023 · 4 comments

@dmi3kno commented Jan 18, 2023

SBC today is a combination of an SBC engine and a plotting package, which do not have to live under the same umbrella if the interface between them is clearly defined. I want to use the excellent rank plots (thank you @TeemuSailynoja!) without using SBC to run the calibration itself.

I would like to be able to construct a minimally acceptable SBC_results object from

  • a list of $M$ posterior objects (I prefer draws_arrays). The samples originate in coda format, i.e. they are fitted not with Stan but with some other software, so my objects might be missing some of the useful metrics like Rhat and ESS.
  • a list of $M$ simulated (true) values for each of the parameters.

I would ideally like to see something like

construct_SBC_results(draws_lst, values_lst)

which would return a (lean) results object that the plotting functions (e.g. plot_rank_hist(), plot_ecdf(), etc.) can accept.

@dmi3kno (Author) commented Jan 18, 2023

Iterating over the list of draws and the list of simulated values lets us calculate the ranks using SBC::calculate_ranks_draws_matrix(variables, dm) for each draws object, so we end up with a vector of $M$ ranks for each parameter. Then we need to run posterior::summarize_draws() on each draws element and reshape the results into a table which resembles the $stats tibble. Now, there's a lot of superfluous information which is not used by the plots today (Rhat, ESS, etc.). And there are some things which are necessary (like the z_score used in plot_contraction) but probably would not be available unless the constructor calculates them. I suspect most of this is sitting in compute_SBC().
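Roughly, this is what I have in mind (just a sketch in my own code, not the package API; I am assuming calculate_ranks_draws_matrix() returns a named vector of ranks and I am guessing how max_rank should be filled in):

# Sketch only: a hypothetical constructor iterating over M fits and M sets of true values
construct_SBC_results_sketch <- function(draws_lst, values_lst) {
  per_sim <- lapply(seq_along(draws_lst), function(i) {
    dm <- posterior::as_draws_matrix(draws_lst[[i]])
    truth <- posterior::as_draws_matrix(values_lst[[i]])
    ranks <- SBC::calculate_ranks_draws_matrix(truth, dm)
    stats <- posterior::summarize_draws(dm)
    # keep only the variables we have true values for
    stats <- stats[stats$variable %in% names(ranks), ]
    stats$sim_id <- i
    stats$rank <- ranks[stats$variable]
    stats$max_rank <- posterior::ndraws(dm)  # my guess at how max_rank should be set
    stats
  })
  do.call(rbind, per_sim)
}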

I am asking whether we can carve this out into a separate function (or functions) and export it, so that the results can be constructed without using the rest of the engine. This would have to be done anyway if the SBC plots were ever spun off into a separate package (like SBCplots) or merged into bayesplot, for example.

Your answer could be "write a custom backend for your sampler" (fmcmc in my case, but it could be my own sampler tomorrow). It's a good argument and worth doing for stable MCMC engines. However, there's still an argument for running the computation with my own engine (say, in stantargets), which has better support for analytical workflows. Most of my analysis right now (both in Stan and in R/fmcmc) is wrapped in targets, so my whole graph is re-executed at once. I understand that I could probably wrap SBC into targets, but that seems like overkill, given that each package has its own parallelization and caching solutions.

@martinmodrak (Collaborator)

I agree that more modularization would be sensible, and playing more nicely with targets is definitely something we'd like to have.

The good news is that I think that most of what you need is already implemented, although not very well documented.

Regarding the plots, all the plotting functions (and some more) accept at least SBC_results objects and data.frames (in a specific format) as inputs (via S3). The SBC_results implementation is in all cases just a thin wrapper over the data.frame implementations. So, for example, this works:

rank_df <- data.frame(variable = rep(c("A", "B"), each = 10), 
           rank = sample.int(10, size = 20, replace = TRUE) - 1,
           sim_id = rep(1:10, times = 2),
           max_rank = 10)

plot_ecdf(rank_df)
plot_rank_hist(rank_df)
plot_coverage(rank_df)
empirical_coverage(rank_df, width = 0.95)

If you for some reason really need the SBC_results object, then most of the heavy lifting is done in SBC_statistics_from_single_fit. The big sin there is that the function currently does not have default values for a lot of arguments, so you have to supply a lot of stuff repeatedly, but you can call it as:

dm <- posterior::example_draws()
true_values <- posterior::draws_matrix(mu = 1, tau = 3, `theta[1]` = 0.1, `theta[2]` = 0.5)
SBC_statistics_from_single_fit(dm, variables = true_values,
                               generated = list(), 
                               thin_ranks = 1,
                               ensure_num_ranks_divisor = 2,
                               dquants = NULL,
                               backend = NULL)

and get the statistics in the format used by SBC_results (you only need to additionally supply sim_id).
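To be concrete, combining the per-fit statistics into a $stats-like table could look roughly like this (just a sketch on top of the call above; the loop and sim_id handling are mine, not part of the package API):

fits <- list(posterior::example_draws(), posterior::example_draws())
true_values_lst <- list(
  posterior::draws_matrix(mu = 1, tau = 3, `theta[1]` = 0.1, `theta[2]` = 0.5),
  posterior::draws_matrix(mu = 1.5, tau = 2.2, `theta[1]` = 0.5, `theta[2]` = -1)
)

stats_lst <- lapply(seq_along(fits), function(i) {
  s <- SBC_statistics_from_single_fit(fits[[i]], variables = true_values_lst[[i]],
                                      generated = list(),
                                      thin_ranks = 1,
                                      ensure_num_ranks_divisor = 2,
                                      dquants = NULL,
                                      backend = NULL)
  s$sim_id <- i
  s
})
stats_df <- do.call(rbind, stats_lst)

# the combined data frame should then be directly usable by the plotting functions
plot_rank_hist(stats_df)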

Finally, you can construct a minimal SBC_results object from your fits and then have recompute_SBC_statistics do the work for you:

fits <- list(posterior::example_draws(), posterior::example_draws())
res_raw <- SBC_results(data.frame(sim_id = 1:2), fits,
                       default_diagnostics = data.frame(),
                       errors = list(NULL, NULL), NULL, NULL, NULL, NULL)

true_values <- posterior::draws_matrix(mu = c(1, 1.5), tau = c(3, 2.2), `theta[1]` = c(0.1, 0.5), `theta[2]` = c(-1, 0.5))

datasets <- SBC_datasets(true_values, generated = list(NULL, NULL))
res <- recompute_SBC_statistics(res_raw, datasets, backend = NULL)

Note that in both cases, I can use backend = NULL, because the backend is only used to get some default values of stuff like thin_ranks. For a fit object, anything that supports posterior::as_draws_matrix will just work.
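For draws coming from coda (your case, if I understand correctly), something along these lines should work - a sketch, I have not tried it with fmcmc output, and I may be wrong about posterior's converters accepting mcmc objects directly; a plain draws-by-variables matrix definitely works:

# fake "coda" output standing in for fmcmc / another sampler
raw <- matrix(rnorm(1000 * 2), ncol = 2, dimnames = list(NULL, c("mu", "tau")))
fit_coda <- coda::as.mcmc(raw)

# posterior should be able to convert coda objects; if not, feed it the raw matrix instead
dm <- posterior::as_draws_matrix(fit_coda)

true_values <- posterior::draws_matrix(mu = 0.3, tau = -0.1)
SBC_statistics_from_single_fit(dm, variables = true_values,
                               generated = list(),
                               thin_ranks = 1,
                               ensure_num_ranks_divisor = 2,
                               dquants = NULL,
                               backend = NULL)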

Some of the solutions are not ideal and potentially a bit fragile (the final code chunk relies on SBC_results not doing too many checks), so improvements are definitely needed. But is this enough to get you going for now?

@dmi3kno (Author) commented Jan 18, 2023

This is definitely very useful! Thank you.
Since "there's nothing more permanent than temporary", can we agree to open issues for

  • improving documentation
  • refactoring interfaces to "decouple" the logic

The first item is quite urgent, while the second can very well wait until the next big release.

@martinmodrak (Collaborator) commented Jan 19, 2023

I am happy to invest some energy into moving this forward, but I'd like to understand your use case better before delving into this. Are you interested only in producing the plots from computations done outside the package? Or is there something else you have in mind?

Note that I cannot "just" decouple the logic - compute_SBC is built in such a way that you can set keep_fits = FALSE, so the fits never get transferred from the individual workers and a list of all fits is never kept in memory. This matters because a) once you do a lot of simulations, they just cannot fit in memory at all, and b) if your simulations are fast, transferring the fits actually takes a huge chunk of your wall time. There definitely are ways to make this part of the package cleaner, but there are some complexities, so I need to better understand the possible use cases before jumping in.

Also, if you just need the plots and not the extra baggage our package has, bayesplot has an implementation of the ECDF plots, although with a quite different interface - to give an example, the following code produces two identical pictures (modulo style):

N_sims <- 500
true <- rnorm(N_sims)
fits <- matrix(rnorm(N_sims * 100, sd = 2), nrow = 100, ncol = N_sims)

bayesplot::ppc_pit_ecdf(true, fits, plot_diff = TRUE)

# Manually computing ranks, a bit ugly
SBC_ranks <- numeric(N_sims)
for(i in 1:N_sims) {
  SBC_ranks[i] <- SBC::calculate_ranks_draws_matrix(
    posterior::draws_matrix(x = true[i]),
    posterior::draws_matrix(x = fits[,i]))
}

rank_df <- data.frame(sim_id = 1:N_sims,
                      variable = "x",
                      rank = SBC_ranks,
                      max_rank = 100)

SBC::plot_ecdf_diff(rank_df)

Our implementation differs in minor aspects from the bayesplot one (most of those are minor tweaks/fixes that may at some point make it into bayesplot), so the plots will not always be identical, but the differences should not generally matter (e.g. the above code with N_sims = 10 will produce slightly different results).
