Step wise guidelines #128
The step-wise guideline will come soon, but for now you can refer to the other examples to prepare your own data.
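For data preparation, the existing examples in the repo index their audio with Kaldi-style `wav.scp` and `text` files. As a rough sketch (the function name and directory layout here are hypothetical; check the example recipe's prepare scripts for the exact format), an index for a dataset laid out as `<root>/<keyword>/<utt>.wav` could be built like this:

```python
import os

def prepare_index(root, wav_scp_path, text_path):
    """Hypothetical helper: write Kaldi-style wav.scp and text files
    for audio organized as <root>/<keyword>/<utt>.wav.
    Verify the exact expected format against the recipe scripts."""
    with open(wav_scp_path, "w") as wav_scp, open(text_path, "w") as text:
        for label in sorted(os.listdir(root)):
            label_dir = os.path.join(root, label)
            if not os.path.isdir(label_dir):
                continue
            for wav in sorted(os.listdir(label_dir)):
                if not wav.endswith(".wav"):
                    continue
                utt_id = f"{label}_{os.path.splitext(wav)[0]}"
                # wav.scp maps utterance id -> audio path
                wav_scp.write(f"{utt_id} {os.path.join(label_dir, wav)}\n")
                # text maps utterance id -> keyword label
                text.write(f"{utt_id} {label}\n")
```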
Thank you @mlxu995. I trained a model using the Google Speech Commands dataset and your code. But how do I use the model to recognize live words? Also, during stage 4,
For the last message, you can try setting atol=1e-5 (in export_onnx.py). Note that MDTC has a larger error range than DS-TCN because its final output is a summation of the outputs of multiple layers.
To use the model to recognize live words, you can follow these guidelines: https://github.com/wenet-e2e/wekws/blob/main/runtime/android/README.md
Thank you @mlxu995
@Reethuch You can try this web demo: https://www.modelscope.cn/studios/thuduj12/KWS_Nihao_Xiaojing/summary
Hi team,
This is awesome. I want to recognize a small set of keywords from live audio and print them, using my own training dataset. I don't understand the overall flow: what inputs (i.e., arguments) should I give, and what is the expected output?