What a big mistake. Thanks for letting me know about this!
You're right: the depth parameter should be hp.depth.prenet here. As a result, the PreNet MLP ended up deeper than it was supposed to be (intended 80->256->256, but actually 80->256->256->256).
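To make the effect of the mix-up concrete, here is a minimal sketch of how a depth parameter typically determines a PreNet's layer dimensions. The function name and arguments are hypothetical, not the repo's actual code; it only illustrates that passing the encoder depth (3) instead of the prenet depth (2) adds one extra 256->256 layer:

```python
def prenet_dims(in_dim=80, hidden_dim=256, depth=2):
    """Return the layer dimension sequence for a PreNet MLP of the given depth.

    Each unit of `depth` contributes one Linear layer, so the sequence has
    depth + 1 entries: in_dim -> hidden_dim -> ... -> hidden_dim.
    """
    return [in_dim] + [hidden_dim] * depth

# Intended (hp.depth.prenet == 2):
print(prenet_dims(depth=2))  # [80, 256, 256]

# With the bug (hp.depth.encoder == 3 passed by mistake):
print(prenet_dims(depth=3))  # [80, 256, 256, 256]
```

The extra hidden layer adds parameters and one more nonlinearity, which is why it could subtly change train-time behavior even though the model still trains and runs.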
Typically, a model being deeper than intended doesn't affect performance much. In this case, however, I think it may have changed some train-time behavior. I can't guess whether the effect was positive or negative, so users may want to either fix this error before training or simply use the code with the error as-is.
@wookladin Could you please pin this issue? Making a bugfix commit would break the pre-trained model, and we don't have the budget to re-train it. So we should make sure users become aware of this bug, while crediting Heejo for reporting it.
@CODEJIN FYI, I don't have write access to this repo, so I'm asking Kang-wook to take care of it.
Hi, guys.
Thank you so much for sharing this code. I think I found a minor bug, so I'm reporting it.
https://github.com/mindslab-ai/cotatron/blob/38079aa2c95d647ec915ec6e8102ae5653623b78/modules/tts_decoder.py#L64-L65
I think the prenet depth parameter must be hp.depth.prenet, not hp.depth.encoder. Is that right? Please check it.
Thanks,
Heejo