Whisper large fine-tuning on WenetSpeech, multi-hans-zh #1483
Conversation
Good point. It would be great if you know of any experimental results or papers that use a multi-objective loss to fine-tune Whisper.
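For concreteness, here is a minimal PyTorch sketch of what such a multi-objective fine-tuning loss could look like, interpolating the decoder cross-entropy with an auxiliary CTC loss on the encoder output. The function name, tensor layouts, and the 0.3 weight are illustrative assumptions, not anything specified in this PR:

```python
import torch
import torch.nn.functional as F

def multi_objective_loss(
    dec_logits: torch.Tensor,   # (N, U, V) decoder logits over label tokens
    ctc_logits: torch.Tensor,   # (N, T, V) frame-level logits from the encoder
    targets: torch.Tensor,      # (N, U) label token ids, padded with -100
    target_lens: torch.Tensor,  # (N,) label lengths
    frame_lens: torch.Tensor,   # (N,) encoder output lengths
    ctc_weight: float = 0.3,    # illustrative interpolation weight
) -> torch.Tensor:
    # Decoder branch: ordinary cross-entropy over the label tokens.
    att_loss = F.cross_entropy(
        dec_logits.transpose(1, 2), targets, ignore_index=-100
    )
    # Auxiliary CTC branch on the encoder output; F.ctc_loss expects
    # (T, N, V) log-probs and a 1-D tensor of concatenated, unpadded targets.
    log_probs = ctc_logits.log_softmax(dim=-1).transpose(0, 1)
    ctc_targets = targets.masked_select(targets != -100)
    ctc_loss = F.ctc_loss(
        log_probs, ctc_targets, frame_lens, target_lens,
        blank=0,  # assumes index 0 is reserved for the CTC blank
    )
    # Interpolate the two objectives.
    return (1.0 - ctc_weight) * att_loss + ctc_weight * ctc_loss
```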
Have other people reported similar observations when fine-tuning Whisper on WenetSpeech? We likewise saw severe deletion errors while training Zipformer on WenetSpeech (see #1130); could this be a problem with the dataset?
Thanks! I'll look into it.
@marcoyang1998 You're correct, see wenet-e2e/WenetSpeech#54. One solution is to retrain with the new labels provided there, but for such colloquial scenarios there may be a better way to evaluate, such as down-weighting errors on modal particles. When people use ASR to add subtitles to their videos, it would clearly be more helpful if the model could automatically omit these colloquial words.
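As a rough illustration of that idea, here is a minimal sketch of a modal-particle-weighted character error rate. The particle set and the 0.2 cost are hand-picked assumptions, not values from WenetSpeech or icefall:

```python
# Characters treated as modal particles (illustrative, hand-picked set).
MODAL_PARTICLES = set("嗯啊呃呀哦吧呢啦嘛")

def weighted_cer(ref: str, hyp: str, particle_cost: float = 0.2) -> float:
    """Edit distance where errors on modal particles cost `particle_cost`
    instead of 1.0, normalized by the reference length."""
    def cost(ch: str) -> float:
        return particle_cost if ch in MODAL_PARTICLES else 1.0

    m, n = len(ref), len(hyp)
    # dp[i][j]: minimum weighted cost to align ref[:i] with hyp[:j].
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + cost(ref[i - 1])   # deletions
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + cost(hyp[j - 1])   # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = min(
                    dp[i - 1][j] + cost(ref[i - 1]),             # delete
                    dp[i][j - 1] + cost(hyp[j - 1]),             # insert
                    dp[i - 1][j - 1]
                    + max(cost(ref[i - 1]), cost(hyp[j - 1])),   # substitute
                )
    return dp[m][n] / max(m, 1)

# A deleted "嗯" now costs 0.2 instead of 1.0:
print(weighted_cer("嗯今天天气不错", "今天天气不错"))  # ~0.029 rather than ~0.143
```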
modified a default param.
removed unnecessary comments
minor updates
minor updates
egs/speechio/ASR/zipformer/icefall-asr-multi-zh-hans-zipformer-ctc-2023-10-24
Hi, thank you for your work! I went through the PR and left a comment and a few modifications; if you feel those are appropriate, then I think this PR is ready to merge.
LGTM, waiting for the CI tests to finish.
Thanks!
This PR adds:
TODOs for follow-up PRs: