[Need help] How to realize Syllable-level Voice Recognition with sherpa-onnx Open Vocabulary Keyword Spotting #920
After reading the issue you posted before, I know what you want. Actually, I already have a model trained with pinyin as the modeling unit on my machine; I will push it to Hugging Face. |
Thank you, I am eager to test it. |
What a pity, I can't find a realtime VAD for Mandarin syllables. If only there were a VAD that could detect the beginning of every Mandarin syllable (https://courses.washington.edu/chin342/ipa/syllables.html). |
@diyism Please follow the progress of this PR: https://github.com/k2-fsa/icefall/pull/1662 . It adds models trained with pinyin. The recognized results look like this; I think they are what you want. |
Using VAD + KWS is not a normal way to implement this feature. I am very surprised you have not implemented it in a year. Did you try following our suggestions to train a model based on pinyin? Did you run into trouble training the model? Anyway, I am adding the recipe now; see the comments above. |
I guess "VAD splitting" can prevent interference between syllables, such as "jiang3 you3 bo2" being recognized as "jiang3 bo2 bo2" (蒋伯伯), or "gai4 ge2" being recognized as "gai4 kuo4" (概括). Additionally, VAD can avoid issues caused by continuously sliding time windows to extract segments for recognition, like in whisper-timestamped realtime recognition. It can also prevent the problem of missing syllables: ['zhòng', 'diǎn', 'ne', 'xiǎng', 'kàn', 'sān', 'gè', 'wèn', 'tí'] missing the middle 'ne'. Human ears and VAD can clearly know that there is a syllable 'ne' in the middle. It seems that whisperX has the best VAD, but it's not for mandarin(https://github.com/m-bain/whisperX): Anyway, I'm looking forward to the progress of decode_pinyin.py: k2-fsa/icefall#1662 |
whisperX's VAD is also not a syllable-level VAD: the "That's" (in the whisperX README) has 2 syllables. Even if whisperX supported Mandarin, "./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/4.wav" wouldn't be separated syllable by syllable: the first 3 syllables "jiang3 you3 bo2" in 4.wav would be separated as "jiang3 you3" and "bo2". |
I was wrong: whisperX can separate the first 2 syllables of "./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/4.wav":
Maybe I can use whisperX as a "realtime syllable-level speech recognizer", but it is not fast enough:
sherpa-onnx-kws.py, on the other hand, is very fast, but it can't recognize the second syllable:
And whisperX also has the interference problem between syllables:
The first syllable's tone is wrong. sherpa-onnx-kws with 3 single-syllable wav files:
Currently, for the first 3 syllables of ./sherpa-onnx/sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/4.wav:
It's perfect for sherpa-onnx-kws with the 3 single-syllable wav files, |
There was a project about "Singing voice phoneme segmentation" (syllable segmentation) 6 years ago: |
There's a simple project from 5 years ago that can segment syllables (https://github.com/diyism/thetaOscillator-syllable-segmentation), |
I found a simple command, ~/miniconda3/bin/sherpa-onnx-keyword-spotter, which was installed by building sherpa-onnx, and I've tested it with:
the output:
It seems the timestamps are not correct, judging from the previous sox commands that split the 3 wav files which sherpa-onnx-keyword-spotter recognized perfectly:
The correct timestamps should be:
It seems the allosaurus project (https://github.com/xinjli/allosaurus) can produce IPAs and correct timestamps, but it missed the syllable "bei" (in the 5 syllables of "jiang3 you3 bo2 bei4 pai1"):
Is there any parameter that can improve the timestamp results? |
If I truncate the first syllables (jiang3 and you3) and lower "--keywords-threshold" to 0.08, it can recognize the second syllable:
But the timestamp results are still not correct. And using "--keywords-threshold=0.08" to parse the whole test_wavs/4.wav file, the result is still the wrong "蒋伯伯" (jiang3 bo2 bo2):
|
I've always been trying to use Sherpa to implement syllable-level speech recognition
(1. use a few pinyins to detect hotwords directly;
2. or send a long sequence of pinyins to an LLM (GPT or Claude) to convert it into the most appropriate Chinese sentence; a rough sketch of this idea is shown below)
(k2-fsa/sherpa-ncnn#177)
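Idea (2) could look roughly like the following. This is only a sketch: it assumes an OpenAI-style chat client, and the model name and prompt are placeholders rather than anything taken from this thread.

```python
# Hypothetical sketch of idea (2): convert a recognized pinyin sequence into a
# Chinese sentence with an LLM.  Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pinyins = "jiang3 you3 bo2 bei4 pai1 dao4"  # output of the syllable recognizer

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Convert the toned-pinyin syllable sequence into the most "
                    "plausible Chinese sentence. Reply with the sentence only."},
        {"role": "user", "content": pinyins},
    ],
)
print(resp.choices[0].message.content)
```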
I found that you released the sherpa-onnx Open Vocabulary Keyword Spotting in 2024-02 (https://k2-fsa.github.io/sherpa/onnx/kws/pretrained_models/index.html),
so I imagined I could use it to realize syllable-level speech recognition:
I've modified sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/keywords.txt to contain single-pinyin keyword entries.
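The exact file contents aren't reproduced on this page. A hypothetical single-syllable keywords.txt might look like the sketch below, assuming the wenetspeech KWS model's modeling units split each syllable into an initial and a toned final (as the "h uí, d á" examples later in this issue suggest); the token spellings must match the model's tokens.txt, and the text after @ is only the display form:

```
j iǎng @jiang3
y ǒu @you3
b ó @bo2
b èi @bei4
p āi @pai1
```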
Tested it with AHPUymhd's code (#760),
specifying sound_files = ["./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/4.wav"],
(jiang3 you3 bo2 bei4 pai1 dao4 ...)
the output:
jiang3/bo2/bo2
I understand that "bo2 bo2" (伯伯, uncle) is a more frequently used word than "you3 bo2", but syllable-level voice recognition needs to be future-proof and able to recognize any new word in the future as pinyins.
So I very carefully split ./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/4.wav so that each resulting WAV file contains only one syllable.
Now, if I run the Python code with sound_files = ["./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/jiang3.wav"], it correctly outputs "jiang3";
if I run it with "you3.wav", it outputs "you3";
and if I run it with "bo2.wav", it outputs "bo2".
It's perfect, even with other interfering pinyins (h uí, d á, ...) in the keywords.txt file (I am dreaming of adding all 1300 pinyins to it).
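For reference, a per-file test like this can be written roughly as follows. The sketch assumes the Python KeywordSpotter API used in the sherpa-onnx keyword-spotter examples; the onnx file names, paths, and threshold are illustrative and should be taken from the actual model directory.

```python
# Rough sketch: run the keyword spotter on one single-syllable wav file.
# Paths and parameters are illustrative, not copied from this thread.
import numpy as np
import soundfile as sf
import sherpa_onnx

d = "./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01"
kws = sherpa_onnx.KeywordSpotter(
    tokens=f"{d}/tokens.txt",
    encoder=f"{d}/encoder-epoch-12-avg-2-chunk-16-left-64.onnx",
    decoder=f"{d}/decoder-epoch-12-avg-2-chunk-16-left-64.onnx",
    joiner=f"{d}/joiner-epoch-12-avg-2-chunk-16-left-64.onnx",
    keywords_file=f"{d}/keywords.txt",  # the modified single-pinyin version
    keywords_threshold=0.1,
)

samples, sample_rate = sf.read(f"{d}/test_wavs/jiang3.wav", dtype="float32")

stream = kws.create_stream()
stream.accept_waveform(sample_rate, samples)
# append some trailing silence so the last frames get decoded
stream.accept_waveform(sample_rate, np.zeros(int(0.6 * sample_rate), dtype=np.float32))
stream.input_finished()

while kws.is_ready(stream):
    kws.decode_stream(stream)

# with the modified keywords.txt this should report the detected keyword, e.g. jiang3
print(kws.get_result(stream))
```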
So, I guess the sherpa-onnx Open Vocabulary Keyword Spotting is fully capable of perfectly recognizing Chinese mono-syllables,
but a method is needed to segment each syllable.
Maybe something like silero-vad can do it.
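A minimal sketch of that idea, using the off-the-shelf silero-vad from torch.hub to cut the waveform into speech segments that can then be fed to the keyword spotter one by one. Note that stock silero-vad is a speech/non-speech detector rather than a syllable segmenter, so the thresholds below are only guesses:

```python
# Rough sketch: split a wav into speech segments with silero-vad, then feed each
# segment to the keyword spotter (as in the earlier sketch).
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = utils

wav_path = "./sherpa-onnx-kws-zipformer-wenetspeech-3.3M-2024-01-01/test_wavs/4.wav"
wav = read_audio(wav_path, sampling_rate=16000)

segments = get_speech_timestamps(
    wav,
    model,
    sampling_rate=16000,
    min_speech_duration_ms=60,   # a syllable can be very short (guessed value)
    min_silence_duration_ms=30,  # split on very short pauses (guessed value)
)

for seg in segments:
    chunk = wav[seg["start"]:seg["end"]].numpy()
    print(seg["start"] / 16000, seg["end"] / 16000)
    # each chunk would then go through kws.create_stream() /
    # stream.accept_waveform(16000, chunk) as in the previous sketch
```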
Any idea?
@danpovey @csukuangfj @pkufool @marcoyang1998