Hi, restoring NLP models can be complicated because there are potentially many different artifacts needed for the restore. This is why we created the .nemo file and .restore_from(). We mainly use the PyTorch Lightning checkpoints to auto-resume training. They can be used in other workflows outside of NeMo, but then the user needs to add a custom implementation for the restore. As for your question "I did not know where did the megatron-bert-cased_encoder_config.json come from?": it is one of the artifacts packed into the .nemo file, the saved configuration of the Megatron-BERT encoder needed to rebuild the model on restore. We will be updating the NeMo NLP documentation with this information in the near future. Sorry for the current inconvenience.
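The distinction above can be sketched in plain Python: a training checkpoint is a snapshot of the weights (for auto-resume), while a .nemo-style bundle also packs the configuration needed to reconstruct the model from scratch. All class and field names below are hypothetical illustrations, not NeMo's actual internals.

```python
import json

class TinyTagger:
    """Toy stand-in for a token classification model (hypothetical)."""
    def __init__(self, num_labels, hidden):
        self.num_labels = num_labels
        self.hidden = hidden
        self.weights = [0.0] * (num_labels * hidden)

def save_checkpoint(model, path):
    # A "checkpoint": weights only, like a mid-training snapshot.
    # Restoring from this alone requires knowing the constructor
    # arguments some other way.
    with open(path, "w") as f:
        json.dump({"weights": model.weights}, f)

def save_bundle(model, path):
    # A "bundle": weights plus the config needed to rebuild the model,
    # analogous to what a .nemo archive packs together.
    with open(path, "w") as f:
        json.dump({"config": {"num_labels": model.num_labels,
                              "hidden": model.hidden},
                   "weights": model.weights}, f)

def restore_bundle(path):
    # Self-contained restore: config rebuilds the model, weights fill it.
    with open(path) as f:
        blob = json.load(f)
    model = TinyTagger(**blob["config"])
    model.weights = blob["weights"]
    return model
```

This is why loading a bare .ckpt outside the original training script can fail: the checkpoint assumes the surrounding code already knows how to construct the model, whereas restore_from() on a .nemo has everything it needs.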
Describe your question
After training a token classification model, several .ckpt checkpoints were generated along with a .nemo file. Loading the .nemo works fine, but I tried to load a checkpoint via:
and it gives an error:
I unpacked the .nemo and found 5 files:
I did not know where the megatron-bert-cased_encoder_config.json came from.
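Since a .nemo file is a tar archive, its contents can be listed without NeMo installed. The snippet below builds a dummy stand-in archive to demonstrate; the member names are made up and will not match a real .nemo exactly.

```python
import io
import os
import tarfile
import tempfile

def list_nemo_members(path):
    """List member files inside a .nemo archive (a plain tar file)."""
    with tarfile.open(path, "r") as tar:  # "r" auto-detects compression
        return sorted(tar.getnames())

# Build a dummy stand-in archive; real .nemo contents differ.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "token_classification.nemo")
with tarfile.open(archive, "w") as tar:
    for name, payload in [
        ("model_config.yaml", b"name: token_classification\n"),
        ("model_weights.ckpt", b"\x00placeholder\x00"),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

print(list_nemo_members(archive))
# -> ['model_config.yaml', 'model_weights.ckpt']
```

Inspecting the archive this way shows which configs and tokenizer files were packed in at save time, which is where files like the encoder config JSON appear.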
Environment overview (please complete the following information)
Using the NeMo Docker container from NGC, at BRANCH = 'r1.0.0rc1'.