Resume a `cmdstanpy.CmdStanModel.sample()` run into a `cmdstanpy.CmdStanMCMC` object after finishing the Python execution #365
Comments
A few workflow questions: what is the use case? Continuing warmup, or getting more draws in order to increase the precision of your estimates? In the latter case, we should offer guidance on just how much precision is possible and what needs to be done w/r/t CSV output config in order to get it. In order to start a bunch of chains to continue sampling, you need the right set of inputs from the previous run.
If you want to get more draws for more significant figures in your estimate, then we need a way to create a combined runset over all the samples. If all runs have the same number of draws, this should fly; we might want to relax the checks on the CSV header, which may be overly strict w/r/t config.
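This combining isn't cmdstanpy functionality, but as a sketch of what a "combined runset" could mean downstream: assuming `fit1` and `fit2` are `CmdStanMCMC` results from two runs of the same model and data with matching chain counts, ArviZ can stack their draws:

```python
import arviz as az

# Hypothetical: fit1 and fit2 are CmdStanMCMC objects from two runs of the
# same model and data, each with the same number of chains.
idata1 = az.from_cmdstanpy(posterior=fit1)
idata2 = az.from_cmdstanpy(posterior=fit2)

# Stack the second run's draws after the first run's, chain by chain.
combined = az.concat(idata1, idata2, dim="draw")

print(combined.posterior.dims)  # draw dimension is now the sum of both runs
```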
The use case would be just being able to restart from whatever state your model is at. For example, I could be running my model on a computer cluster for days, and if administrators suddenly say they have to shut down the cluster for whatever reason, I would like to be able to resume and continue on another machine. Also, as I said in my first post, when loading CmdStan files into ArviZ, one gets fewer elements than when loading a CmdStanMCMC object. Perhaps those extra attributes are redundant, but I think it is nicer to be able to resume exactly the same thing after sampling than to rely only on the files cmdstanpy stored on the hard drive. It could also happen that the suggestion I made through this issue is a stupid one :).
Thanks for the background. I don't think your request is stupid - just that it covers a lot of different situations. For the particular use case - clusters - that's a bit tricky, what with node and filesystem setup.
If I understand the request correctly, this seems like it would be a core CmdStan feature more than an interface request, correct?
Yes and no. Core CmdStan already provides the ability to start with a specified step size, metric, and initial parameter values. This means that restarting won't produce results identical to runs which ran for the same number of iterations without the stop/restart. If users are OK with this, then the interface should be able to do the output file munging / input data assembly. It would require keeping around the input data files as well as the CSV output files.
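For concreteness, a minimal sketch of such a manual restart using the pieces core CmdStan already exposes through cmdstanpy's `sample()` arguments (`step_size`, `metric`, `inits`); the file names and the single parameter `theta` are hypothetical placeholders:

```python
import cmdstanpy

# Assumed: a compiled model and a finished first run.
model = cmdstanpy.CmdStanModel(stan_file="model.stan")
fit = model.sample(data="data.json", chains=4, iter_warmup=1000, iter_sampling=1000)

# Carry over the adaptation results: per-chain step sizes and inverse metrics.
step_sizes = [float(s) for s in fit.step_size]
inv_metrics = [{"inv_metric": m.tolist()} for m in fit.metric]

# Per-chain inits from each chain's last draw. draws() has shape
# (iterations, chains, columns); shown here for one scalar parameter
# 'theta' - a real helper would loop over every model parameter.
last = fit.draws(concat_chains=False)[-1]  # shape: (chains, columns)
theta_col = fit.column_names.index("theta")
inits = [{"theta": float(last[c, theta_col])} for c in range(fit.chains)]

# Restart: skip warmup and reuse the tuned sampler settings.
fit2 = model.sample(
    data="data.json",
    chains=4,
    iter_warmup=0,
    adapt_engaged=False,
    step_size=step_sizes,
    metric=inv_metrics,
    inits=inits,
    iter_sampling=1000,
)
```

As noted above, `fit2` continues from the tuned sampler state but won't reproduce an uninterrupted run bit-for-bit.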
This was done in the R ecosystem with its own package, https://donaldrwilliams.github.io/chkptstanr/. I think we may similarly want to say this is out of scope for the basic cmdstanpy wrapper.
Agree - not going to do this in CmdStanPy.
Summary:
Is there a way to resume a `cmdstanpy.CmdStanModel.sample()` run into a `cmdstanpy.CmdStanMCMC` object after finishing the Python execution?

Description:
After running `cmdstanpy.CmdStanModel.sample()` on e.g. a cluster, one would like to be able to obtain the associated `cmdstanpy.CmdStanMCMC` object from the files generated during sampling, once these files are copied from the cluster onto the local machine. In this way, one could use e.g. `arviz` to obtain the same inference data from the `CmdStanMCMC` object that one would obtain if an arviz InferenceData object were created right after the `.sample()` execution finishes. At the moment, one can only load files generated by cmdstanpy by using arviz to read CmdStan CSV files (and in this way we miss some of the attributes that are loaded when directly loading a `CmdStanMCMC` object into arviz).

Also, another (more interesting) use case would be resuming the sampling from the last element in each chain, reusing the parameters already computed during warmup (which is usually the most time-consuming step).
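To make the two loading paths concrete, a small sketch (file names are hypothetical placeholders): `az.from_cmdstan` reads only what is in the CSV files, while `az.from_cmdstanpy` starts from a live `CmdStanMCMC` object and can attach additional attributes:

```python
import arviz as az
import cmdstanpy

# Path 1: load raw CmdStan CSV files copied back from the cluster.
# Only the information stored in the CSVs themselves is available.
idata_from_files = az.from_cmdstan(posterior=["output_1.csv", "output_2.csv"])

# Path 2: build the InferenceData from a live CmdStanMCMC object,
# which carries attributes beyond what the CSVs record.
model = cmdstanpy.CmdStanModel(stan_file="model.stan")
fit = model.sample(data="data.json")
idata_from_fit = az.from_cmdstanpy(posterior=fit)
```

(For the record, cmdstanpy releases after 0.9.68 added `cmdstanpy.from_csv()`, which rebuilds a fit object from such CSV files and covers the first half of this request.)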
Current Version:
0.9.68