[REQUEST] universal checkpoint for ZeRO - 1,2,3 #2921
Any plans to work on that, Tunji? We could have really used that feature in the current 80b m4 training, as we would like to add new parameters that were previously frozen and thus aren't in the optimizer. This also suggests a new interesting feature request: train a model with some pretrained frozen params, and then towards the end of training expand the optimizer to include the frozen params and unfreeze them to finetune the whole ensemble. Granted, one could train from the beginning with lr=0 for the frozen params, but that requires a lot more memory from the start. So this approach could save days to weeks of training time for a large model.
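In plain PyTorch, the unfreeze-and-expand part of that request looks roughly like the toy sketch below; the hard part, which this sketch does not address, is making ZeRO's sharded optimizer state survive such a change:

```python
# Toy illustration of the requested feature: start with some params frozen
# (not in the optimizer), later unfreeze them and extend the optimizer.
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8))

# Phase 1: the first layer is pretrained and frozen, so it never enters
# the optimizer and costs no optimizer-state memory.
for p in model[0].parameters():
    p.requires_grad = False

opt = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# Phase 2 (towards the end of training): unfreeze and register the
# previously frozen params as a new param group to finetune everything.
for p in model[0].parameters():
    p.requires_grad = True
opt.add_param_group({"params": list(model[0].parameters()), "lr": 1e-5})
```

Under ZeRO this is exactly the case the checkpoint format can't currently represent, since the sharded optimizer state would have to grow to cover the new param group.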
+1
Excuse me, can ZeRO stage 1 directly resume on a different number of cards, or do you need to set some parameters?
Does it mean DeepSpeed already supports automatically changing the world size for ZeRO-1,2,3 if I use …?
I'm attempting to enhance DeepSpeed by enabling it to support a dynamic world size. This is particularly for the setup involving AdamW, stage 3, and bf16. However, I'm uncertain about the level of complexity in comparison to expanding DeepSpeed's support for a universal dynamic world size across all optimizers and precisions. Could you provide some insights on this matter?
No, it's not. The current situation is confusing. As the heavy lifting to support the universal checkpoint has already been done, porting it to ZeRO should take significantly less effort than the initial work did, since all the components are already in place. So it's really about @tjruwase and his team finding time and prioritizing this effort to make it happen. Clearly it's very desirable to many users at this point.
Would it be possible, and easier, to convert DeepSpeed's checkpoint into Megatron-DeepSpeed format, change the world size, and then convert back to DeepSpeed's format? 😀
Hmm, there you convert from/to a TP/DP/PP topology. In ZeRO-3 you only have DP, so perhaps it might be possible, but the conversion won't find info on TP/PP and will probably fail, e.g. it'd expect a different set of shard files for TP and PP, which don't exist in ZeRO-3. Even if converting to the universal checkpoint worked, the tricky part would be moving to the new topology, as again the code is written for the Meg-DS 3D topology. But, as I have just explained, the ZeRO case is much simpler than TP/DP/PP, so it should be relatively easy to make it work with just ZeRO files.
I think it can be achieved by a single tool similar to zero_to_fp32.py.
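As a toy illustration of what such a tool would do for the pure-DP (ZeRO) case: merge the per-rank flat optimizer partitions, then re-split them for a new world size. Real checkpoints have more structure (padding, param groups), so treat this strictly as a sketch:

```python
# Toy sketch: re-shard flat ZeRO optimizer partitions for a new world size.
import torch

def repartition(flat_shards, new_world_size):
    # Concatenate the per-rank flat partitions back into one tensor,
    # then split it evenly across the new number of ranks. Real ZeRO
    # shards are padded to equal sizes, which this toy version ignores.
    merged = torch.cat(flat_shards)
    chunk = (merged.numel() + new_world_size - 1) // new_world_size
    return list(torch.split(merged, chunk))

old_shards = [torch.randn(10) for _ in range(4)]  # pretend: 4 old ranks
new_shards = repartition(old_shards, 2)           # re-shard for 2 ranks
assert torch.equal(torch.cat(old_shards), torch.cat(new_shards))
```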
I have implemented the conversion tool, and I now find myself faced with a minor question. In the `bf16_zero_*_optim_states.pt` files, the loss scaler is stored as `<deepspeed.runtime.fp16.loss_scaler.LossScaler object at 0x7f0733de5610>`. The address `0x7f0733de5610` doesn't serve any purpose, correct? Additionally, is there a need to scale the stored optimizer state (gradient and square of gradient for all trainable params) according to the old and new world size?
I'm just a contributor, so I am tagging @tjruwase, who hopefully will have the resources to address your questions, @GradientGuru.
@GradientGuru, saving loss_scaler as an object instead of state_dict is a bug. Please feel free to submit a PR. Thanks! |
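A minimal sketch of the shape such a fix could take, assuming the scaler exposes attributes like `cur_scale` (this is not a verified patch against DeepSpeed's `LossScaler`):

```python
# Hypothetical sketch of the fix: checkpoint the loss scaler's plain
# state instead of pickling the object. Attribute names are assumptions.
def loss_scaler_state(scaler):
    # Capture only plain-Python values so the checkpoint stays portable.
    return {
        "cur_scale": scaler.cur_scale,
        "cur_iter": scaler.cur_iter,
    }

def restore_loss_scaler(scaler, state):
    scaler.cur_scale = state["cur_scale"]
    scaler.cur_iter = state["cur_iter"]
```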
I'm not sure this is the right place to ask. We're researchers in a situation where we sometimes get access to a bunch of GPUs and sometimes we don't, and we're counting on this donated GPU time to train a large model that requires DeepSpeed ZeRO-2 even just to fit into the GPUs. We're trying to figure out how best to handle the changing world sizes that come with the above setting, as it looks like currently we would not be able to restore optimizer state from a checkpoint. I'm wondering what advice you have on how we could proceed?
It's the perfect place to ask. @tjruwase, is it possible to raise the priority of this item? This is a very critical requirement for users choosing DeepSpeed over other frameworks. Thanks a ton!
Didn't know about Megatron. Do you mean this library: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron.html? Or is it something else?
Or probably this one: https://github.com/microsoft/Megatron-DeepSpeed
Or this one? https://github.com/bigscience-workshop/Megatron-DeepSpeed. Both of the above are forks of the original from NVIDIA: https://github.com/NVIDIA/Megatron-LM.
@stas00, correct! I didn't know how to partially close an issue :) |
I updated the OP to note that stage 1 is done.
Also interested in stage 2 support. |
@zaptrem, stage 1/2 and bf16_optimizer are supported. Only stage 3 support is pending. @samadejacobs, @lekurile FYI |
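For illustration, a rough sketch of resuming one of the now-supported stages after converting the checkpoint with the conversion script; the `"checkpoint": {"load_universal": true}` config key and the `_universal` tag convention are assumptions, not verified API:

```python
# Rough sketch of resuming from a universal checkpoint with a different
# world size. Config key names, the tag, and paths are assumptions.
import torch
import deepspeed

model = torch.nn.Linear(512, 512)

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 1},
    # Ask the engine to load a world-size-agnostic (universal) checkpoint.
    "checkpoint": {"load_universal": True},
}

engine, _, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

# After converting the regular checkpoint with the conversion script,
# resume from the converted folder (the tag name is an assumption).
engine.load_checkpoint("checkpoints", tag="global_step1000_universal")
```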
We tried to use this last night, and the universal checkpoint conversion script failed because our DS checkpoint was missing `universal_checkpoint_info`. We commented out all references to that and converted it anyway, then got an error when we tried to restore from the newly converted universal checkpoint.
Hello @zaptrem, just as a clarification, are you using Megatron-DeepSpeed to create the checkpoint? In Megatron-DeepSpeed, when the checkpoint gets saved, there's a call to `state_dict[UNIVERSAL_CHECKPOINT_INFO] = _universal_checkpoint_info(model)`. If you're not using Megatron-DeepSpeed, you can try ensuring that the same universal checkpoint metadata gets stored in the `state_dict` of your checkpoint. Please share any questions or concerns. Thanks!
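For anyone not using Megatron-DeepSpeed, a minimal sketch of what attaching that metadata might look like when saving through the engine; the key name, the version field, and the idea of passing it via `client_state` are all assumptions rather than a verified recipe:

```python
# Hypothetical sketch: attach universal-checkpoint metadata when saving.
# Key and field names below are assumptions; Megatron-DeepSpeed sets this
# on the model state_dict at save time instead.
UNIVERSAL_CHECKPOINT_INFO = "universal_checkpoint_info"  # assumed key name

def save_with_universal_info(engine, save_dir):
    # `engine` is a DeepSpeedEngine returned by deepspeed.initialize(...).
    client_state = {
        UNIVERSAL_CHECKPOINT_INFO: {
            "universal_checkpoint_version": 0.2,  # assumed version value
        }
    }
    engine.save_checkpoint(save_dir, client_state=client_state)
```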
We're not training a language model. Is that just a fork of DeepSpeed, or something specifically for transformer LMs?
Instead of the universal checkpoint, I used the code from @tjruwase to convert a DeepSpeed checkpoint without TP and PP (128 ranks) to another DeepSpeed checkpoint (32 ranks). Discussion and comments are welcome.
@zaptrem, just wanted to check if this is something we can still help with? Thanks!
@rgtjf, thanks for sharing your usage of the conversion script. However, our plan is to focus on universal checkpointing, which is general, so that it replaces the conversion script. Are you able to work with us to make universal checkpointing work correctly for your scenario? I am looking at your report here. Thanks! |
@tjruwase A big thank you for your quick reply. I'd love to work with you to make universal checkpointing better. In my testing, I've found that merging the shards in file order isn't necessarily correct; looking forward to more insight.
In this example, I found that the correct order was neither alphabetical order nor numeric order.
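Purely to illustrate the ordering pitfall: lexicographic sorting of shard filenames does not match numeric rank order, and, per the comment above, even numeric order may not be the true merge order, which presumably has to come from checkpoint metadata. Filenames here are hypothetical:

```python
# Lexicographic vs numeric ordering of hypothetical shard filenames.
import re

names = [f"zero_pp_rank_{r}_mp_rank_00_optim_states.pt" for r in (1, 2, 10)]

# Plain string sort puts rank_10 before rank_1, because '0' < '_'.
lexicographic = sorted(names)

def numeric_key(name):
    # Sort by the rank number embedded in the filename instead.
    return int(re.search(r"pp_rank_(\d+)", name).group(1))

numeric = sorted(names, key=numeric_key)  # rank_1, rank_2, rank_10
```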
@xylian86, thanks for the icing on this cake :). I am delighted to close this issue. |
Is your feature request related to a problem? Please describe.
I think we now have all the components ready to do universal checkpoint in ZeRO - 1,2,3, like we had for BF16Optimizer.
The need is to be able to add or remove GPUs when resuming, i.e. to resume from a checkpoint with a different number of GPUs than the training started with.
Thank you.
Progress update:
@tjruwase