Hi!
Great work you're doing here.
I've been testing your tool; it's easy to use and gives good results.
Since I'm looking for a tool to generate phonemized input for the VITS model (in ONNX format), I need to use the same tokenizer (phonemizer) that the model expects. I've found that your pretrained models already have the dictionary embedded in them. Can I ask where those dictionaries came from? In your Colab training example you use CUNY-CL/wikipron's, but I was wondering whether those are the ones you used originally or just in the example.
Thanks.
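To illustrate why the dictionaries need to match: once the model's embedded phoneme dictionary is extracted, input text must be encoded with exactly that symbol-to-id mapping, or the ONNX model receives the wrong ids. This is a minimal hypothetical sketch — the dictionary contents and the `encode` helper are illustrative, not this tool's actual API:

```python
# Hypothetical phoneme dictionary, assumed to mirror the symbol -> id
# mapping embedded in the pretrained VITS model. The actual symbols and
# ids depend on the dictionary the model was trained with.
phoneme_to_id = {"<pad>": 0, "h": 1, "ə": 2, "l": 3, "oʊ": 4}

def encode(phonemes):
    """Convert a phoneme sequence to the integer ids the model expects.

    Raises KeyError if a phoneme is missing from the dictionary, which
    is exactly the mismatch that occurs when phonemizer and model use
    different dictionaries.
    """
    return [phoneme_to_id[p] for p in phonemes]

print(encode(["h", "ə", "l", "oʊ"]))  # -> [1, 2, 3, 4]
```

A mismatched dictionary either raises a lookup error or, worse, silently maps phonemes to the wrong ids, which is why knowing the original dictionary's source matters.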