
Add Streaming Zipformer-Transducer recipe for KsponSpeech #1651

Merged
merged 4 commits into k2-fsa:master from whsqkaak:feature/ksponspeech-recipe on Jun 16, 2024

Conversation

whsqkaak (Contributor)

KsponSpeech is a large-scale spontaneous speech corpus of Korean.
This corpus contains 969 hours of open-domain dialog utterances,
spoken by about 2,000 native Korean speakers in a clean environment.

All data were constructed by recording the dialogue of two people
freely conversing on a variety of topics and manually transcribing the utterances.

The transcription is a dual transcription consisting of orthography and pronunciation,
with disfluency tags for spontaneous speech phenomena such as filler words, repeated words, and word fragments.

The original audio data is stored as raw PCM files.
During preprocessing, each file is converted to FLAC and saved anew.
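
As an illustration, here is a minimal Python sketch of that conversion step. It assumes 16 kHz, 16-bit little-endian mono PCM; the file names are hypothetical, and the recipe's actual preprocessing code may differ:

```python
import numpy as np
import soundfile as sf


def pcm_to_flac(pcm_path: str, flac_path: str, sample_rate: int = 16000) -> None:
    # Raw PCM has no header, so the sample format must be supplied by hand.
    samples = np.fromfile(pcm_path, dtype="<i2")  # 16-bit little-endian mono
    sf.write(flac_path, samples, sample_rate, format="FLAC")


# Hypothetical file names, for illustration only.
pcm_to_flac("KsponSpeech_000001.pcm", "KsponSpeech_000001.flac")
```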

KsponSpeech is publicly available on an open data hub site of the Korean government.
The dataset must be downloaded manually.

For more details, please visit:

#### Streaming Zipformer-Transducer (Pruned Stateless Transducer + Streaming Zipformer)

Number of model parameters: 79,022,891, i.e., 79.02 M

##### Training on KsponSpeech (with MUSAN)

The CERs are:

| decoding method | chunk size | eval_clean | eval_other | comment | decoding mode |
|---|---|---|---|---|---|
| greedy search | 320ms | 10.21 | 11.07 | --epoch 30 --avg 9 | simulated streaming |
| greedy search | 320ms | 10.22 | 11.07 | --epoch 30 --avg 9 | chunk-wise |
| fast beam search | 320ms | 10.21 | 11.04 | --epoch 30 --avg 9 | simulated streaming |
| fast beam search | 320ms | 10.25 | 11.08 | --epoch 30 --avg 9 | chunk-wise |
| modified beam search | 320ms | 10.13 | 10.88 | --epoch 30 --avg 9 | simulated streaming |
| modified beam search | 320ms | 10.10 | 10.93 | --epoch 30 --avg 9 | chunk-wise |
| greedy search | 640ms | 9.94 | 10.82 | --epoch 30 --avg 9 | simulated streaming |
| greedy search | 640ms | 10.04 | 10.85 | --epoch 30 --avg 9 | chunk-wise |
| fast beam search | 640ms | 10.01 | 10.81 | --epoch 30 --avg 9 | simulated streaming |
| fast beam search | 640ms | 10.04 | 10.70 | --epoch 30 --avg 9 | chunk-wise |
| modified beam search | 640ms | 9.91 | 10.72 | --epoch 30 --avg 9 | simulated streaming |
| modified beam search | 640ms | 9.92 | 10.72 | --epoch 30 --avg 9 | chunk-wise |

Note: simulated streaming means feeding the full utterance during decoding using decode.py,
while chunk-wise means feeding a fixed number of frames at a time using streaming_decode.py.
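
For intuition, here is a toy Python sketch of the difference between the two modes. It is not the recipe's code: the pass-through "model" and the `chunk_frames` value are illustrative only.

```python
import numpy as np

# 10 seconds of fake 80-dim fbank features at a 10 ms frame shift.
features = np.random.randn(1000, 80).astype(np.float32)

# Simulated streaming (decode.py): the whole utterance is fed at once;
# the model still behaves causally via its attention masks.
full_out = features.copy()  # stands in for model(features)

# Chunk-wise streaming (streaming_decode.py): frames arrive in fixed
# chunks, e.g. 320 ms of audio is about 32 frames at a 10 ms frame shift.
chunk_frames = 32
chunks = [
    features[i : i + chunk_frames]  # stands in for a streaming forward call
    for i in range(0, len(features), chunk_frames)
]
chunked_out = np.concatenate(chunks)

# With this pass-through "model" the two modes agree exactly; a real
# streaming model can differ slightly, hence the small CER gaps above.
assert np.allclose(full_out, chunked_out)
```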


Collaborator

Could you upload the models to huggingface and put their links here?

Contributor Author

> Could you upload the models to huggingface and put their links here?

I uploaded the models to huggingface. See this link

Fixed in 3c970e7.

Collaborator

Could you replace it with a symlink if it is copied from the librispeech recipe?

Contributor Author

> Could you replace it with a symlink if it is copied from the librispeech recipe?

Fixed in ed9cc83.

@yfyeung (Collaborator)

yfyeung commented Jun 16, 2024

Hi @whsqkaak,

I am currently looking into Korean ASR datasets. Could you please share some information about the commonly used Korean ASR training datasets and popular benchmarks like CommonVoice and FLEURS?

Thanks in advance.

@csukuangfj (Collaborator)

Thank you for your contribution!

@csukuangfj merged commit c13c7aa into k2-fsa:master Jun 16, 2024
253 checks passed
@whsqkaak deleted the feature/ksponspeech-recipe branch June 17, 2024 01:15
@whsqkaak (Contributor Author)

whsqkaak commented Jun 17, 2024

> Hi @whsqkaak,
>
> I am currently looking into Korean ASR datasets. Could you please share some information about the commonly used Korean ASR training datasets and popular benchmarks like CommonVoice and FLEURS?
>
> Thanks in advance.

Hi, @yfyeung!

Open-source Korean ASR datasets are very rare.
For training Korean ASR models, people usually use the KsponSpeech or zeroth-korean datasets.

KsponSpeech is publicly available on an open data hub site of the Korean government, but as far as I know, only Koreans can use it.

The zeroth-korean dataset is free to use, but it is very small: 51.6 hours of transcribed Korean audio for training (22,263 utterances, 105 speakers, 3,000 sentences) and 1.2 hours of transcribed Korean audio for testing (457 utterances, 10 speakers).

You can also find some Korean ASR data in Common Voice and FLEURS.

@yujinqiu

@whsqkaak Only a streaming model? Can we have a non-streaming model?

@whsqkaak (Contributor Author)

> @whsqkaak Only a streaming model? Can we have a non-streaming model?

@yujinqiu
As soon as I have time, I plan to train other models, such as Zipformer.
I haven't trained a non-streaming model yet,
so if there are any models you recommend training, please let me know.

@yujinqiu

@whsqkaak A Korean non-streaming Zipformer2 model like this one (Thai) is what I want:
https://huggingface.co/yfyeung/icefall-asr-gigaspeech2-th-zipformer-2024-06-20/tree/main/exp
Typical use case:
with a non-streaming model, we can transcribe media files (movies, podcasts, etc.) to generate subtitles.
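
As a sketch of that use case: given (start, end, text) segments from any offline recognizer, the snippet below writes an SRT subtitle file. The segment data and the helper are hypothetical, not from this recipe.

```python
def srt_timestamp(seconds: float) -> str:
    # SRT uses HH:MM:SS,mmm timestamps.
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


# Made-up (start_sec, end_sec, text) segments; plug in real ASR output.
segments = [(0.0, 2.5, "첫 번째 자막"), (2.5, 5.0, "두 번째 자막")]

with open("subtitle.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(segments, 1):
        f.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")
```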

@whsqkaak (Contributor Author)

@yujinqiu I made a pull request. Please see #1664.

@yujinqiu

@whsqkaak I've tested the #1664 model; the non-streaming model works great. Thank you for your effort.
Are you interested in training a Japanese model? #1611
I've been waiting for that model for almost half a year.

@yuyun2000

> @whsqkaak I've tested the #1664 model; the non-streaming model works great. Thank you for your effort. Are you interested in training a Japanese model? #1611 I've been waiting for that model for almost half a year.

If you need it, I can give you a Japanese model that is not especially strong: a stream-stateless7 Zipformer.

@csukuangfj (Collaborator)

We all need it! You are welcome to open-source it!

@yuyun2000

Jun-guo, you never reply to me on QQ, but you reply this fast on GitHub.

@whsqkaak (Contributor Author)

> @whsqkaak I've tested the #1664 model; the non-streaming model works great. Thank you for your effort. Are you interested in training a Japanese model? #1611 I've been waiting for that model for almost half a year.

If I have time, I'll give it a try!

yfyeung pushed a commit to yfyeung/icefall that referenced this pull request Aug 9, 2024