
Is it possible to support real-time transcription with websockets? #25

Open

joaogabrieljunq opened this issue Feb 21, 2024 · 7 comments

@joaogabrieljunq
I would like to know if it is possible to use a CTranslate2-hosted model pipeline connected to a websocket service like Twilio to receive audio streams, similar to https://github.com/ufal/whisper_streaming or https://github.com/collabora/WhisperLive, which uses faster-whisper. Is this possible now, or how could it be implemented if I need to dive into the repository code?

I want to code and test this scenario to build a multi-client server that transcribes multiple audio streams at the same time on a GPU.
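For context, here is the kind of receive side I have in mind (just a sketch, assuming clients push raw 16 kHz mono float32 PCM over a websocket; the `websockets` library, the 5-second flush threshold, and the shared `transcription_queue` are placeholders, and the GPU worker that drains the queue is not shown):

```python
import asyncio

import numpy as np
import websockets

SAMPLE_RATE = 16000
transcription_queue: asyncio.Queue = asyncio.Queue()  # drained by a GPU worker (not shown)

async def handle_client(websocket):
    """Accumulate incoming PCM for one client and enqueue ~5 s buffers."""
    buffer = np.zeros(0, dtype=np.float32)
    async for message in websocket:
        # Assumes each message is raw 16 kHz mono float32 PCM bytes.
        chunk = np.frombuffer(message, dtype=np.float32)
        buffer = np.concatenate([buffer, chunk])
        if len(buffer) >= SAMPLE_RATE * 5:
            await transcription_queue.put(buffer)
            buffer = np.zeros(0, dtype=np.float32)

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```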

@shashikg
Owner

Hi @joaogabrieljunq

This project is more inclined toward offline ASR, though I have some plans to work on streaming ASR using Whisper in the future. Streaming ASR with Whisper is tricky, with lots of things to consider: latency, concurrency, and so on. I am very skeptical about whether Whisper will be beneficial for streaming ASR at all. But again, unless I try, I can't say.

But you might want to have a look at NVIDIA's RIVA models; they provide a very solid streaming ASR pipeline (comparable to many commercial streaming ASR services). https://docs.nvidia.com/deeplearning/riva/user-guide/docs/asr/asr-overview.html#streaming-recognition

@joaogabrieljunq
Author

joaogabrieljunq commented Feb 21, 2024

Thank you for your response @shashikg. I am trying to test something like https://github.com/ufal/whisper_streaming, but most streaming implementations don't have batch inference, and most batch-inference projects don't support audio streams. By the way, thanks for the NVIDIA RIVA models link; I'll definitely look into that as an alternative.

@amdrozdov

Hello @joaogabrieljunq, I did a small PR that allows in-memory chunks. With this mode you can do basic real-time ASR, but you will need to implement a custom VAD and hypothesis buffer on your side.
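Roughly, usage would look like the sketch below. The `load_model` / `transcribe_with_vad` calls follow the README; passing a NumPy array in place of a file path is what the PR is meant to enable, so check the PR diff for the exact argument it expects:

```python
import numpy as np
import whisper_s2t

# Load the CTranslate2 backend as in the README.
model = whisper_s2t.load_model(model_identifier="large-v2", backend="CTranslate2")

# ~5 s of 16 kHz mono float32 audio assembled from the incoming stream
# (placeholder noise here; in practice this comes out of your VAD / hypothesis buffer).
audio_chunk = np.random.randn(16000 * 5).astype(np.float32)

# Assumption: with the in-memory-chunks PR, an array can be passed where a
# file path is normally expected.
out = model.transcribe_with_vad(
    [audio_chunk],
    lang_codes=["en"],
    tasks=["transcribe"],
    initial_prompts=[None],
    batch_size=16,
)
print(out[0][0]["text"])  # first utterance of the first (and only) input
```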

@tjongsma

tjongsma commented Aug 16, 2024

Hi @amdrozdov,

I've been using your PR to do some basic streaming, and it seems to work pretty decently! The only issue I have is that on occasion it seems to only transcribe a tiny portion of the actual data. E.g. I'll have a chunk of 15 seconds of audio, and it will only transcribe one word (very quickly, in about 0.5s) while ignoring the rest of the chunk. If I then keep concatenating additional streaming data to the chunk at some point it will actually pick up the whole chunk. Any idea why this happens? Thanks for your work!

@amdrozdov

Hello @tjongsma! You need a VAD in front of the ASR, following this schema:
[Input stream] -> [VAD] (wait until one utterance of 3-6 s has accumulated) -> WhisperS2T (process all chunks). If you want to do batched/parallel processing, you will need to handle multiple connections in the [input stream] part and do multithreaded/batched VAD.
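A minimal sketch of that gate, assuming `webrtcvad` as the VAD (silero-vad would also work) and 16-bit mono PCM at 16 kHz in 30 ms frames; the 3-6 s bounds and the trailing-silence count are illustrative values, not tuned ones:

```python
import webrtcvad

SAMPLE_RATE = 16000
FRAME_MS = 30
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # bytes per 30 ms of 16-bit mono PCM

vad = webrtcvad.Vad(2)  # aggressiveness 0 (least) to 3 (most)

def utterances(frames, min_sec=3, max_sec=6, trailing_silence_frames=10):
    """Yield ~3-6 s utterances cut on trailing silence, ready for WhisperS2T."""
    speech, silence = bytearray(), 0
    for frame in frames:  # each frame: FRAME_BYTES of raw PCM
        if vad.is_speech(frame, SAMPLE_RATE):
            speech.extend(frame)
            silence = 0
        elif speech:
            silence += 1
        long_enough = len(speech) >= min_sec * SAMPLE_RATE * 2
        too_long = len(speech) >= max_sec * SAMPLE_RATE * 2
        if (long_enough and silence >= trailing_silence_frames) or too_long:
            yield bytes(speech)  # hand this utterance to the ASR worker
            speech, silence = bytearray(), 0
```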

@tjongsma

Thanks @amdrozdov! I'll try it out, that last part seems tricky to implement.

@tjongsma

Alright, back again. Since that seemed tricky, I disabled VAD and set batch_size=1 just to see if that would work first. Turns out it doesn't; the same issue remains. This isn't the case with e.g. faster-whisper and insanely-fast-whisper, even with the batched and/or VAD-enabled versions of those. Any clue as to what could be the issue? I would like to use WhisperS2T, as it is faster than both.
