Reorganize the stan example repository #153

Open
yao-yl opened this issue Jun 29, 2019 · 13 comments
@yao-yl

yao-yl commented Jun 29, 2019

Summary:

Reorganize the Stan example repository and include predictive metrics, so that it can serve as a collective model set for benchmarking Bayesian computation.

Description:

Just as fake-data simulation can reveal the drawbacks of a given model, fake-model simulation (evaluating an algorithm across many models) is the ideal way to evaluate an inference algorithm. Unfortunately, it is not uncommon for research papers to use only a few linear regressions as the benchmark for a newly proposed sampling or approximate inference algorithm, partly because of the temptation of random-seed-hacking or model-hacking, and partly because there is no single large-scale collective model set for Bayesian computation in the first place.

Currently, the Stan example repo contains more than 270 unique models with data. Many of them come from real-world tasks and therefore give a more representative sample of the Bayesian computation problems users actually encounter.

Depending on the purpose, there are various metrics that the user can specify. To name a few:

  • Treating the Stan sampling result as the ground truth, how far does an approximate method diverge from HMC in the parameter space? If we store the HMC output (mean and sd of every parameter in each model), a developer of a new approximate method can easily compute such a divergence (see the sketch after this list).
  • Predictive capacity. When the approximate inference itself is considered part of the model that implicitly regularizes the posterior, we could also compare the predictive ability, or more specifically the elpd, of each model + inference combination. For some datasets an independent test set can be simulated; for others we can rely on cross-validation.
  • Replication of exact inference. As introduced by @seantalts et al. in the SBC paper, we can use the simulated posterior p-value as a metric for how exact a sampling algorithm is, or at least how unbiased an approximate inference is.
  • These 270 models represent a reasonable share of the model space (the space of high-dimensional distribution functions), so we should be able to quantify and characterize their posterior distributions with geometric metrics such as global/tail curvature. That would help researchers understand when and why their computation methods work or fail. We could also encode other meta-characterizations such as the number of parameters, the sample size, whether the model is hierarchical, whether it uses a centered parametrization, etc.
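As a concrete illustration of the first metric, here is a minimal sketch of a parameter-space divergence measured in absolute z-scores against a stored HMC reference. The file layout and names are hypothetical, and this is only one possible choice of divergence:

```python
# Sketch (hypothetical file names and layout): compare an approximate method's
# posterior means against stored HMC reference means/sds, parameter by parameter.
import json
import numpy as np

def parameter_space_divergence(reference_file, approx_means):
    """Mean absolute z-score of approximate means w.r.t. the HMC reference."""
    with open(reference_file) as f:
        ref = json.load(f)  # e.g. {"mu": {"mean": 0.1, "sd": 0.5}, ...}
    z = [
        abs(approx_means[name] - stats["mean"]) / stats["sd"]
        for name, stats in ref.items()
    ]
    return float(np.mean(z))

# Hypothetical usage:
# divergence = parameter_space_divergence(
#     "eight_schools_reference.json", {"mu": 0.3, "tau": 2.9})
```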

To this end, a few modifications are required:

  1. Currently, some of the input data are randomly generated (e.g.,). All inputs should be fixed so that different methods can be compared on the same data.

  2. Not all models are runnable with Stan sampling. Some are designed to illustrate parametrization tricks in Stan (e.g., centered parametrization in hierarchical regression), so we know their Stan sampling results are not exact. It is fine to keep them, but we should manually label them, since we will otherwise be using Stan sampling as the ground truth.

  3. Output the pointwise log-likelihood in the generated quantities block. We can then run cross-validation by calling LOO and compare the expected log predictive density (elpd) across methods (see the sketch after this list).

  4. Run Stan sampling on all these models, and record the posterior mean and sd of every parameter (excluding transformed parameters) along with loo-elpd.
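As a rough sketch of the cross-validation step in item 3, the elpd can be estimated from the stored pointwise log-likelihood draws. The version below uses naive importance sampling, without the Pareto smoothing that the loo package performs, and all names are hypothetical:

```python
# Naive LOO-based elpd from a (draws x observations) matrix of pointwise
# log-likelihoods, as produced by a generated quantities block. This skips the
# Pareto smoothing done by the loo package and is only meant as a sketch.
import numpy as np

def naive_elpd_loo(log_lik):
    log_lik = np.asarray(log_lik)        # shape: (n_draws, n_obs)
    n_draws = log_lik.shape[0]
    # elpd_loo_i = log(n_draws) - logsumexp(-log_lik[:, i])  (harmonic-mean IS)
    neg = -log_lik
    m = neg.max(axis=0)
    lse = m + np.log(np.exp(neg - m).sum(axis=0))
    elpd_i = np.log(n_draws) - lse
    return float(elpd_i.sum())
```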

SBC requires fake-data simulation, which might be non-trivial to automate. To start, I can make a pull request that implements these improvements.
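For the SBC metric mentioned above, a minimal sketch of the rank statistic for one scalar parameter is below. The helper functions are hypothetical placeholders, and the actual procedure in the SBC paper also thins the posterior draws to reduce autocorrelation:

```python
# Sketch of the SBC rank statistic: draw theta ~ prior, simulate data y | theta,
# fit the posterior, and record the rank of the prior draw among the posterior
# draws. Uniform ranks across replications indicate a calibrated (unbiased) sampler.
# `sample_prior`, `simulate_data`, and `fit_posterior` are hypothetical placeholders.
import numpy as np

def sbc_ranks(sample_prior, simulate_data, fit_posterior, n_replications=100):
    ranks = []
    for _ in range(n_replications):
        theta = sample_prior()
        y = simulate_data(theta)
        posterior_draws = fit_posterior(y)            # 1-D array of draws
        ranks.append(int(np.sum(posterior_draws < theta)))
    return np.array(ranks)  # compare to a discrete uniform on {0, ..., n_draws}
```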

Expected output:

  • For each model, there should be a fixed (i.e., non-random) input file.
  • The Stan file outputs the pointwise log-likelihood.
  • A separate file records the ground truth from Stan sampling, the corresponding elpd, and other meta-characterizations.

Current Version:

v2.19.1

yao-yl self-assigned this Jun 29, 2019
@breckbaldwin

I'd like to see efforts to make it as easy as possible for researchers to use the model/data repo and report results. That adds:

  1. Evaluation scripts that run over the whole repo and report results. Hopefully *nix shell scripts will work across Windows/Mac/Linux; I believe Windows has added *nix-style scripting, and otherwise we could require Cygwin to be installed.
  2. Documentation that makes this straightforward.

@axch

axch commented Jul 1, 2019

Re: input files: Rather than hardcoding synthetic inputs, I think it would be better to deliver a reproducible generator program (i.e., one that gives the same results when run with the same random seed). Advantages of a program:

  1. It's smaller, and therefore easier to download, store on disk, etc.
  2. It's self-documenting, making it perfectly clear where the data set came from (sampled from the prior, or with some modifications, or intentionally mis-specified, etc).
  3. A program can be parameterized, making it easy to do research on data sets that differ only in size, or number of (synthetic) regressors, or etc.
  4. If the repository needs to evolve, version differences in generator programs are much more interpretable than version differences in generated data sets.

We can also ship a canonical small output from the generator program, in case someone wants to verify that they aren't experiencing version skew.
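A minimal sketch of what such a generator could look like, assuming numpy and a fixed seed; the regression setup and all names are hypothetical examples, not a proposal for the repo's actual format:

```python
# Sketch of a reproducible, parameterized data generator: same seed, same data.
import json
import numpy as np

def generate_regression_data(n=100, p=5, seed=12345):
    rng = np.random.default_rng(seed)    # fixed seed => reproducible output
    x = rng.normal(size=(n, p))          # synthetic predictors
    beta = rng.normal(size=p)            # coefficients drawn from the prior
    sigma = rng.exponential()
    y = x @ beta + rng.normal(scale=sigma, size=n)
    return {"N": n, "P": p, "x": x.tolist(), "y": y.tolist()}

if __name__ == "__main__":
    # Ship a canonical small output alongside the script to detect version skew.
    with open("regression_n100_p5.json", "w") as f:
        json.dump(generate_regression_data(), f)
```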

@axch

axch commented Jul 1, 2019

Re: reproducibility: How feasible would it be (maybe some ways down the line) to provide a Kubernetes configuration (with associated Docker files, etc.) that stands up some cloud machines to rerun Stan on all the models, and possibly to run the researcher's method as well? It would be the simplest possible interface: put $ here, and get everything recomputed. That would also let researchers re-assess Stan on different versions of the models or data sets just by locally modifying the benchmark repository.

Just an idea to think about; I don't mean to create lots of extra work.

@bob-carpenter
Contributor

This is not a stan-dev/stan issue. Could you please move this to either (a) example-models or (b) stat_comp_benchmarks or if it's preliminary, to (c) design-docs. Or to Discourse for discussion until there's a plan for implementation.

@bob-carpenter
Contributor

Thanks for writing this up, Yuling. Until the issue gets moved, let me reply:

These 270 models represent a reasonable share of the model space

By what measure? I think we want to look at code coverage of our special functions and for model types. Are there any ODE models in there? Anything with more than a tiny amount of data?

Not all models are runnable for stan sampling.

I think we need to carefully distinguish programs and models. The model is a mathematical concept defining a joint density. The program is a Stan program. For a given model, some programs might work and others might not. For example, a centered parameterization might not work, but a non-centered parameterization might be able to sample with no problem. Now we could argue they're different models because they sample different parameters, so it's not exactly clear to me where to draw the line. For instance, the centered parameterization draws from p(alpha, sigma) and the non-centered from p(alpha_std, sigma). These are different joint densities, related through the transform alpha_std = alpha / sigma. Are these different models?

Output likelihood in the generated quantity block. We can then run cross-validation by calling LOO

Using an approximation seems risky as a gold-standard evaluation. How reliable is LOO?

Run Stan sampling on all these models, record the mean and sd of the posterior distribution for all parameters (excluding transformed parameters)

Don't we also want standard errors (i.e., from ESS) on both the mean and the posterior sd? That's how @betanalpha is testing in stat_comp_benchmarks. I like the idea of testing with actual squared error from simulated parameters, as it avoids the approximation of estimating ESS.
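For concreteness, a rough sketch of the standard errors in question, assuming an ESS estimate is already available; the formula for the sd is a normal-theory approximation, not necessarily what stat_comp_benchmarks does:

```python
# Rough sketch: Monte Carlo standard errors for the posterior mean and sd,
# given a 1-D array of posterior draws and an effective sample size estimate.
import numpy as np

def mcse_mean_and_sd(draws, ess):
    sd = np.std(draws, ddof=1)
    mcse_mean = sd / np.sqrt(ess)              # MCSE of the posterior mean
    mcse_sd = sd / np.sqrt(2.0 * (ess - 1.0))  # normal-theory approximation
    return mcse_mean, mcse_sd
```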

[@breckbaldwin] Evaluation scripts that run over all the repo and report results.

There is already a script in stat_comp_benchmarks that tests whether estimates fall within standard error.

[@breckbaldwin] Documentation that makes this straightforward.

Of course. The existing scripts are simple: just input a CSV of your draws and get the output. But interpreting that output assumes some statistical sophistication in understanding MCMC standard error, beyond what we'll be able to impart in the documentation for this tool.

[@axch] Rather than hardcoding synthetic inputs, I think it would be better to deliver a reproducible generator program

I agree. That was part of the original plan so that we could test scalability. The difficulty is always generating predictors for regressions (or any other unmodeled data). We also need a generator to do SBC, though that would most efficiently live in the model file itself, which complicates the issue of what the models should look like.

[@axch] How feasible would it be (maybe some ways down the line) to provide a Kubernetes configuration

I think this is critical as we're going to want to run this during algorithm development. I don't know anything about Kubernetes per se, but we're moving a lot of our testing to AWS with the help of some dev ops consultants. So @seantalts is the one to ask about that as he's managing the consultants.

@seantalts
Member

Re: Kubernetes: we don't have any plans for such a cluster right now, and we'll run out of EC2 credits this November, but we might still be able to pursue such a thing. My first question is about isolation: are Kubernetes jobs isolated enough to be a reliable benchmarking platform?

@betanalpha

betanalpha commented Jul 2, 2019 via email

@bob-carpenter
Contributor

bob-carpenter commented Jul 2, 2019 via email

@yao-yl
Author

yao-yl commented Jul 2, 2019

[@axch] input files: Rather than hardcoding synthetic inputs, I think it would be better to deliver a reproducible generator program (i.e., one that gives the same results when running with the same random seed).

Say we fix the random seed and generate the input in R (as is done now). Should we worry that R might change its random number generator in the future?

[@bob-carpenter] This is not a stan-dev/stan issue. Could you please move this to either (a) example-models or (b) stat_comp_benchmarks or if it's preliminary, to (c) design-docs. Or to Discourse for discussion until there's a plan for implementation.

It would be better to transfer this to example-models, which requires permission there. Could you give me write permission there, or could you help me transfer it?

[@bob-carpenter] By what measure? I think we want to look at code coverage of our special functions and for model types. Are there any ODE models in there? Anything with more than a tiny amount of data?

No, there are no ODE models and no large datasets. Nevertheless, it should still be better than running a few linear regressions? Ideally we would have better coverage of the model space, and users could also contribute to the model list.

[@bob-carpenter] For instance, the centered parameterization draws from p(alpha, sigma) and the non-centered from p(alpha_std, sigma). These are different joint densities, related through the transform alpha_std = alpha / sigma. Are these different models?

I agree. It is indeed a combination of model, parametrization, and sampling scheme. Fortunately, we have enough diagnostics to tell whether the Stan sampling result is adequate for a given example model under a given parametrization, and that is all I intended to say on this point.

[@bob-carpenter] But aren't predictions just different random variables with their own posteriors? I don't see how it's such a different use case. Unless you mean using something like 0/1 loss instead of our usual log loss.

I think it is the difference between estimating parameters (given the model) and predicting future outcomes. As an obvious caveat, wrong inference in a wrong model might predict better than correct inference.

@bob-carpenter
Contributor

bob-carpenter commented Jul 2, 2019 via email

yao-yl transferred this issue from stan-dev/stan Jul 3, 2019
@yao-yl
Author

yao-yl commented Jul 3, 2019

[@bob-carpenter] I don't understand the difference. A prediction is coded as a parameter in Stan (or a generated quantity if it's simple enough and can be factored). Either way it's a random variable we can sample in the posterior conditioned on observed data.

Right, predictions can be viewed as generated quantities of parameters, and parameters can also be interpreted as predictive quantities. But at the practical level, we can always treat Stan sampling as the true posterior distribution of the parameters, and therefore as the benchmark for all approximation methods, whenever Stan sampling passes all diagnostics: there can only be one truth. For prediction, it is unnecessary to match the exact posterior predictive distribution from Stan sampling, since it is not necessarily optimal in the first place. We have seen examples in which an approximation method gives a problematic posterior distribution for the parameters but still makes fine, or even better, predictions.

@bob-carpenter
Contributor

bob-carpenter commented Jul 3, 2019 via email
