[feature] add a frontend module in wespeaker and support wavlm #344
Conversation
b9b8fb2 to a85085f (Compare)
with torch.cuda.amp.autocast(enabled=configs['enable_amp']):
    features, _ = model.module.frontend(wavs, wavs_len)
I don't think it is necessary to add the amp context here; there is no PyTorch model involved.
solved
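For reference, a minimal sketch of how the call could look once the autocast context is only used when it matters. The variable names (`model`, `wavs`, `wavs_len`, `configs`) come from the diff above; the conditional branch for a trainable frontend such as WavLM is an assumption for illustration, not the PR's final code.

```python
import torch
from torch import nn


def extract_frontend_features(model: nn.Module, wavs, wavs_len, configs):
    """Sketch only: wrap the frontend in amp autocast only when it actually
    contains PyTorch parameters (e.g. a WavLM frontend). The conditional is
    an assumption, not the final code in this PR."""
    frontend = model.module.frontend  # DDP-wrapped model, as in the diff above
    has_params = any(True for _ in frontend.parameters())
    if has_params:
        # learnable frontend: mixed precision can still help here
        with torch.cuda.amp.autocast(enabled=configs['enable_amp']):
            return frontend(wavs, wavs_len)
    # pure signal-processing frontend (e.g. fbank): no PyTorch model involved,
    # so no autocast context is needed
    return frontend(wavs, wavs_len)
```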
def spec_aug(feats, num_t_mask=1, num_f_mask=1, max_t=10, max_f=8, prob=0.6):
    # feats batch: (B, T, F)
    # do spec_aug on all batch samples using the same group of params randomly
    # TODO (hongji): do spec_aug on each sample separately
I think you can directly use the implementation in https://pytorch.org/audio/master/generated/torchaudio.transforms.FrequencyMasking.html#torchaudio.transforms.FrequencyMasking
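A rough sketch of what that replacement could look like with torchaudio's built-in masking transforms. The parameter values (`max_f=8`, `max_t=10`, `prob=0.6`) and the transpose mirror the hand-written version above; how `prob` gates the augmentation is an assumption, not part of this PR.

```python
import torch
import torchaudio

# Single time/frequency masks, matching num_t_mask=1 and num_f_mask=1 above.
freq_masking = torchaudio.transforms.FrequencyMasking(freq_mask_param=8)
time_masking = torchaudio.transforms.TimeMasking(time_mask_param=10)


def spec_aug_torchaudio(feats: torch.Tensor, prob: float = 0.6) -> torch.Tensor:
    """feats: (B, T, F) fbank features, as in the comment above."""
    if torch.rand(1).item() > prob:
        return feats
    # torchaudio's masking transforms expect (..., freq, time)
    x = feats.transpose(1, 2)   # (B, T, F) -> (B, F, T)
    x = freq_masking(x)         # mask up to 8 consecutive mel bins
    x = time_masking(x)         # mask up to 10 consecutive frames
    # With 3D input a single mask is shared across the batch, which matches the
    # current behaviour; the per-sample TODO would need iid_masks=True and a
    # 4D (B, 1, F, T) input.
    return x.transpose(1, 2)    # back to (B, T, F)
```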
Good idea. I will try it later.
Hello @JiJiJiang, I have listed some comments. Besides, there seems to be no independent recipe with a run.sh.
Well done!
All pre-trained models and configs on the pretrained page can be loaded and used normally after this update!