
Result with other datasets #6

Open
tarepan opened this issue Jul 22, 2019 · 1 comment

Comments

tarepan commented Jul 22, 2019

Summary

I'd like to share my results from applying the Universal Vocoder to another dataset.

Thanks for your great library and the impressive results/demo.
Since you seem interested in other datasets (#2), I'll share my results. (If you're not interested, please feel free to ignore this!)

I forked this repository and applied it to another dataset, JSUT (about 10 hours of utterances from a single female Japanese speaker).
Although the model was trained on a single female speaker, it generalizes very well to out-of-domain test data (other female speakers, a male speaker, and even an English speaker).
Here is the resulting demo:
https://tarepan.github.io/UniversalVocoding

My impression is that RNN_MS (the Universal Vocoder) learns characteristics of the human mouth/vocal tract, which are independent of language. Very interesting.

I'd be glad if my results are useful for your further experiments.
Again, thanks for your great library.


bshall commented Jul 24, 2019

Hi @tarepan,

That's a great result, thanks for sharing.

It's really interesting that the out-of-domain English speaker is noticeably noisier than the out-of-domain Japanese speakers. I think it would be great to train a model on a dataset with multiple languages (like they do in the paper) and compare against that.
