How to use pretrained transformer encoder layers from a trained T5 model in CANARY's transf_encoder layers? #9425
DeveshS1209 started this conversation in General
Replies: 1 comment
-
We don't support such a feature at the moment. You could potentially add it by writing a new module that wraps the T5 encoder and changing the encoder config. However, the init-from-pretrained method only works on NeMo models, so you'll have to manually load the state dict and copy the weights over.
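The manual route described above (load both state dicts, map matching parameters, copy weights) can be sketched as follows. This is a minimal sketch, not a supported workflow: the parameter names below are placeholders, not the real T5 or CANARY key names, so you would print both state dicts' keys and build the rename map by hand.

```python
import torch

def transplant(src_sd, dst_sd, rename):
    """Copy tensors from src_sd into dst_sd according to `rename`
    (dst_name -> src_name), skipping any shape mismatches."""
    copied, skipped = [], []
    for dst_name, src_name in rename.items():
        if src_name in src_sd and src_sd[src_name].shape == dst_sd[dst_name].shape:
            dst_sd[dst_name] = src_sd[src_name].clone()
            copied.append(dst_name)
        else:
            skipped.append(dst_name)
    return copied, skipped

# Toy dicts standing in for t5_model.state_dict() and
# canary_model.transf_encoder.state_dict() -- key names are hypothetical:
src = {"encoder.block.0.layer.0.SelfAttention.q.weight": torch.randn(8, 8)}
dst = {"layers.0.self_attn.q_proj.weight": torch.zeros(8, 8)}
copied, skipped = transplant(
    src, dst,
    {"layers.0.self_attn.q_proj.weight": "encoder.block.0.layer.0.SelfAttention.q.weight"},
)
# After transplanting, load the modified dict back into the target module,
# e.g. canary_model.transf_encoder.load_state_dict(dst)
```

The shape check matters because T5 and CANARY will generally use different hidden sizes; only copy a tensor when the shapes line up, and expect to skip (or project) the rest.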
-
Can anyone give insights on how to load the pre-trained transformer encoder layers of a T5 model into CANARY's transf_encoder layers?
Currently, I am loading only the conformer encoder layers as follows:
In the config file:

```yaml
init_from_pretrained_model: "pretrained_100h.nemo"
```

In trainer.py:

```python
pretrained_model = SpeechEncDecSelfSupervisedModel.restore_from(
    cfg.init_from_pretrained_model
)
print(pl.utilities.model_summary.summarize(pretrained_model))
```
How should I proceed from here?
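One way to continue from the restored model is to copy its encoder's state dict into the target model's encoder with `strict=False` and inspect what didn't match. The stand-in modules below are assumptions for illustration (the real objects would be `pretrained_model.encoder` and your ASR model's encoder, which this sketch assumes share the same architecture):

```python
import torch.nn as nn

# Stand-ins for the real NeMo modules -- in practice these would be
# pretrained_model.encoder and asr_model.encoder (names assumed):
pretrained_encoder = nn.Linear(4, 4)
target_encoder = nn.Linear(4, 4)

enc_sd = pretrained_encoder.state_dict()
# strict=False tolerates keys that exist in only one of the two models;
# always inspect the returned missing/unexpected key lists.
result = target_encoder.load_state_dict(enc_sd, strict=False)
print("missing:", result.missing_keys)
print("unexpected:", result.unexpected_keys)
```

If both lists come back empty, the encoders are structurally identical and every weight was copied; non-empty lists tell you exactly which layers still need manual mapping.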