Training setting problem #25
I managed to make some progress. I am now training on data from LibriSpeech train-clean-100 and train-clean-360 and testing on dev-clean. After 40k steps the SDR has reached only ~5. Is it possible that this is related to the batch size I am using (6)? Another question: what is the learning rate policy? Did you fix it at 1e-3 throughout the whole training, or did you update it? Thanks.
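For anyone trying to reproduce this, here is a minimal PyTorch sketch of the two learning-rate policies being asked about: a fixed Adam rate of 1e-3 (as in the default config) versus a step decay. The model, shapes, and decay numbers below are placeholders for illustration, not the authors' confirmed setup.

```python
import torch

# Hypothetical stand-in for the VoiceFilter network; shapes are illustrative.
model = torch.nn.Linear(601, 601)

# Fixed policy: Adam at 1e-3, as in the default config.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Assumed alternative (not confirmed by the authors): halve the learning
# rate every 100k steps in case training plateaus.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100_000, gamma=0.5)

for step in range(1000):
    x = torch.randn(6, 601)       # batch size 6, as in the question
    target = torch.randn(6, 601)
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()
    scheduler.step()              # omit this line for the fixed 1e-3 policy
```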
Hi, can you share your settings? I ran into the same situation. Thanks.
I also have the same question.
Hi,
Thank you for publishing your code!
I am encountering a training problem. As an initial phase, I have tried to train on only 1000 samples from the LibriSpeech train-clean-100 dataset. I am using the default configuration as published in your VoiceFilter repo. The only difference is that I used a batch size of 6 due to memory limitations. Is it possible that the problem is related to the small batch size that I use?
Another question is related to the generation of the training and testing sets. I have noticed that there is an option to use a VAD when generating the training set, but by default it is not used. What is the best practice: to use the VAD or not?
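For context, the VAD option in this kind of data generation is typically energy-based silence trimming applied to each utterance before mixing. A minimal sketch using librosa.effects.split (the top_db threshold here is an assumption, not the repo's confirmed default):

```python
import librosa
import numpy as np

def vad_trim(wav: np.ndarray, top_db: float = 20.0) -> np.ndarray:
    """Drop silent stretches and concatenate the voiced intervals.

    top_db is a hypothetical threshold: frames quieter than the signal
    peak by more than top_db dB are treated as silence.
    """
    intervals = librosa.effects.split(wav, top_db=top_db)
    return np.concatenate([wav[start:end] for start, end in intervals])

# Usage with librosa's bundled example clip; replace with a LibriSpeech file.
wav, sr = librosa.load(librosa.ex("trumpet"), sr=16000)
print(len(wav), len(vad_trim(wav)))
```

Trimming long silences makes the generated mixtures denser, but it also removes natural pauses the model would see at test time, which may be why it is left optional.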
I appreciate your help!