best practice for snapshot_every_n_steps
#1283
Comments
Hi @ShoufaChen, thanks for the issue; we should update the documentation to explain this better. To answer your questions: it depends mainly on the size and composition of your state. If you're storing, e.g., an int representing an index or file offset, then it shouldn't be an issue. If your state is very large, involving, e.g., buffers of data for shuffling, then the overhead of creating a checkpoint and passing it through a multiprocessing queue may slow down training; this variable lets you decrease the frequency of checkpointing. If, e.g., you know you're checkpointing every 1000 steps, you can set this value to 1000.
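To make the trade-off concrete, here is a minimal sketch, assuming a toy plain-Python stand-in (`TinyStatefulLoader` is hypothetical, not torchdata's actual `StatefulDataLoader` internals): snapshotting means copying the loader's state, so a large shuffle buffer makes each snapshot expensive, and raising `snapshot_every_n_steps` amortizes that cost.

```python
import copy

class TinyStatefulLoader:
    """Toy loader that snapshots its state every `snapshot_every_n_steps`.

    Hypothetical stand-in to illustrate why snapshot frequency matters
    when the state is large; not the real torchdata implementation.
    """
    def __init__(self, data, snapshot_every_n_steps=1, buffer_size=0):
        self.data = list(data)
        self.n = snapshot_every_n_steps
        self.step = 0
        # A large shuffle-style buffer makes each snapshot costly to copy.
        self.buffer = [0] * buffer_size
        self.snapshots_taken = 0
        self._snapshot = None

    def __iter__(self):
        for item in self.data:
            self.step += 1
            if self.step % self.n == 0:
                # Deep-copying a big buffer is the overhead discussed above;
                # doing it every step (n=1) multiplies that cost.
                self._snapshot = copy.deepcopy(
                    {"step": self.step, "buffer": self.buffer}
                )
                self.snapshots_taken += 1
            yield item

loader = TinyStatefulLoader(range(10), snapshot_every_n_steps=5, buffer_size=1_000)
for _ in loader:
    pass
print(loader.snapshots_taken)        # 2 (at steps 5 and 10)
print(loader._snapshot["step"])      # 10
```

With `snapshot_every_n_steps=1` the same run would deep-copy the 1,000-element buffer ten times instead of twice, which is the data-loading burden being asked about.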
@andrewkho Hi, I'm wondering whether this arg affects loading the final state. For example, if a snapshot is only taken every `snapshot_every_n_steps`, can I still checkpoint at an arbitrary step in between? Does that mean I should always make my checkpointing interval divisible by `snapshot_every_n_steps`? Thank you.
@yzhangcs you can still request a checkpoint/state_dict at any time; the dataloader will load the last snapshot and "fast forward" the required steps to get to the correct point. In your example, the dataloader would load the state from step 4, then throw away one batch before continuing at step 5. Hope this helps! @divyanshk let's double check whether our docs can be improved.
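The "fast forward" behaviour described above can be sketched as follows; this is a simplified, hypothetical stand-in (`FastForwardLoader` is not torchdata's real implementation), showing state requested at step 5 with snapshots taken every 4 steps.

```python
class FastForwardLoader:
    """Toy loader illustrating snapshot + fast-forward resumption."""
    def __init__(self, data, snapshot_every_n_steps):
        self.data = list(data)
        self.n = snapshot_every_n_steps
        self.step = 0
        self._snapshot_step = 0

    def __iter__(self):
        for item in self.data[self.step:]:
            self.step += 1
            if self.step % self.n == 0:
                self._snapshot_step = self.step  # cheap snapshot point
            yield item

    def state_dict(self):
        # State can be requested at ANY step: record the last snapshot
        # plus how many steps must be fast-forwarded past it.
        return {"snapshot_step": self._snapshot_step,
                "fast_forward": self.step - self._snapshot_step}

    def load_state_dict(self, sd):
        # Restore the snapshot, then skip (throw away) the extra batches.
        self.step = sd["snapshot_step"] + sd["fast_forward"]

loader = FastForwardLoader(range(10), snapshot_every_n_steps=4)
it = iter(loader)
for _ in range(5):   # consume 5 batches; snapshot lands at step 4
    next(it)
sd = loader.state_dict()
print(sd)            # {'snapshot_step': 4, 'fast_forward': 1}

resumed = FastForwardLoader(range(10), snapshot_every_n_steps=4)
resumed.load_state_dict(sd)
print(next(iter(resumed)))  # 5 — resumes exactly where we left off
```

The one discarded batch (`fast_forward=1`) is the "throw away one batch before continuing at step 5" from the comment above, so checkpoint steps do not need to be divisible by `snapshot_every_n_steps`.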
@andrewkho thank you for the nice response!
@andrewkho It would be better to add an explanation of "fast forward" to the docs for anyone curious about it.
Hello,
Thank you for your awesome implementation of StatefulDataLoader.
I have a question about `snapshot_every_n_steps`. It seems there is not much detailed explanation of this argument. Would taking a snapshot very frequently (e.g., `snapshot_every_n_steps=1`) cause a data loading burden?
cc @andrewkho