Is it possible to support real-time transcription with websockets? #25
This project is more inclined toward offline ASR, though I have some plans to work on streaming ASR using Whisper in the future. Streaming ASR with Whisper is tricky, with lots of things to consider: latency, concurrency, and so on. I am very skeptical that Whisper will be beneficial for streaming ASR at all, but unless I try, I can't say. In the meantime, you might want to have a look at NVIDIA's RIVA models; they provide a very solid streaming ASR pipeline, comparable to many commercial streaming ASR services: https://docs.nvidia.com/deeplearning/riva/user-guide/docs/asr/asr-overview.html#streaming-recognition
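For context, here is a minimal sketch of what Riva's streaming recognition looks like from Python. It assumes the `nvidia-riva-client` package and a Riva ASR server on `localhost:50051`; the calls follow the published client examples, but exact names may vary between versions.

```python
# Minimal Riva streaming-recognition sketch. Assumes `pip install nvidia-riva-client`
# and a Riva ASR server listening on localhost:50051.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")
asr_service = riva.client.ASRService(auth)

streaming_config = riva.client.StreamingRecognitionConfig(
    config=riva.client.RecognitionConfig(
        encoding=riva.client.AudioEncoding.LINEAR_PCM,
        language_code="en-US",
        sample_rate_hertz=16000,
        max_alternatives=1,
    ),
    interim_results=True,  # emit partial hypotheses while audio is still streaming
)

def audio_chunks(path="audio.raw", chunk_size=4096):
    """Yield raw 16 kHz 16-bit PCM chunks; a file stands in for a live stream."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Each response carries interim or final transcripts for the stream so far.
for response in asr_service.streaming_response_generator(
    audio_chunks=audio_chunks(),
    streaming_config=streaming_config,
):
    for result in response.results:
        tag = "final" if result.is_final else "interim"
        print(f"[{tag}] {result.alternatives[0].transcript}")
```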
Thank you for your response @shashikg. I am trying to test something like https://github.com/ufal/whisper_streaming, but most streaming implementations don't have batch inference, and most batch-inference projects don't support audio streams. By the way, thanks for the link to NVIDIA's RIVA models; I'll definitely look into that as an alternative.
Hello @joaogabrieljunq, I did a small PR that allows in-memory chunks. With this mode you can do basic real-time ASR, but you will need to implement a custom VAD and hypothesis buffer on your side.
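A minimal sketch of how that mode might be used, assuming the PR lets 16 kHz float32 numpy arrays stand in for file paths in WhisperS2T's transcribe call; the exact signature comes from the PR, so treat the array-passing below as an assumption.

```python
# Rough sketch of feeding in-memory audio to WhisperS2T. Without the PR
# mentioned above, WhisperS2T expects file paths where the array is passed.
import numpy as np
import whisper_s2t

model = whisper_s2t.load_model(model_identifier="base", backend="CTranslate2")

# Pretend this buffer was filled from a live stream: mono float32 at 16 kHz.
chunk = np.zeros(16000 * 5, dtype=np.float32)  # 5 s of (silent) audio

out = model.transcribe(
    [chunk],               # assumption: the PR lets arrays stand in for paths
    lang_codes=["en"],
    tasks=["transcribe"],
    initial_prompts=[None],
    batch_size=1,
)
print(out[0])
```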
Hi @amdrozdov, I've been using your PR to do some basic streaming, and it seems to work pretty decently! The only issue I have is that on occasion it only transcribes a tiny portion of the actual data. E.g. I'll have a chunk of 15 seconds of audio, and it will transcribe only one word (very quickly, in about 0.5 s) while ignoring the rest of the chunk. If I then keep concatenating additional streaming data to the chunk, at some point it will actually pick up the whole chunk. Any idea why this happens? Thanks for your work!
Hello @tjongsma! You need VAD in front of the ASR so that the pipeline respects the following schema: audio stream → VAD → speech-only chunks → ASR → hypothesis buffer.
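A minimal sketch of that gating step, using the `webrtcvad` package as a stand-in for whatever VAD you choose; it expects 10/20/30 ms frames of 16-bit mono PCM at 8/16/32/48 kHz.

```python
# Sketch of gating audio with a VAD before it reaches the ASR model.
import webrtcvad

SAMPLE_RATE = 16000
FRAME_MS = 30
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 16-bit samples -> 2 bytes each

vad = webrtcvad.Vad(2)  # aggressiveness 0 (lenient) .. 3 (strict)

def speech_chunks(pcm: bytes):
    """Split a PCM byte stream into frames and keep only the voiced ones.

    Consecutive voiced frames are merged into one chunk, so the ASR model
    only ever sees segments that actually contain speech.
    """
    chunk = bytearray()
    for i in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        frame = pcm[i : i + FRAME_BYTES]
        if vad.is_speech(frame, SAMPLE_RATE):
            chunk.extend(frame)
        elif chunk:
            yield bytes(chunk)  # silence ends the current speech segment
            chunk = bytearray()
    if chunk:
        yield bytes(chunk)

# Usage: convert each voiced segment to float32 and hand it to the model.
# for segment in speech_chunks(stream_bytes):
#     transcribe(segment)
```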
Thanks @amdrozdov! I'll try it out; that last part (the hypothesis buffer) seems tricky to implement.
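For reference, whisper_streaming stabilizes output with a LocalAgreement policy: a word is committed only once two consecutive hypotheses over the growing audio buffer agree on it. A toy sketch of that idea:

```python
# Toy hypothesis buffer in the LocalAgreement style used by whisper_streaming.
class HypothesisBuffer:
    def __init__(self):
        self.committed = []   # words we will never revise again
        self.previous = []    # uncommitted tail of the last hypothesis

    def update(self, hypothesis: str) -> list[str]:
        """Feed the latest transcript of the growing buffer; return newly committed words."""
        words = hypothesis.split()[len(self.committed):]  # drop committed prefix
        newly_committed = []
        for prev, cur in zip(self.previous, words):
            if prev != cur:
                break  # the hypotheses diverge here; stop committing
            newly_committed.append(cur)
        self.committed.extend(newly_committed)
        self.previous = words[len(newly_committed):]
        return newly_committed

# buf = HypothesisBuffer()
# buf.update("hello wor")            # -> [] (no agreement yet)
# buf.update("hello world today")    # -> ["hello"]
# buf.update("hello world today is") # -> ["world", "today"]
```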
Alright, back again. Since it seemed tricky, I disabled VAD and set batch_size=1 just to see if that would work first. Turns out it doesn't; the same issue remains. This isn't the case with e.g. faster-whisper and insanely-fast-whisper, even with the batched and/or VAD-enabled versions of those projects. Any clue as to what could be the issue? I would like to use WhisperS2T as it is faster than both of those versions.
I would like to know if it is possible to use the CTranslate2-hosted model pipeline together with a websocket service like Twilio to receive audio streams, similar to https://github.com/ufal/whisper_streaming or https://github.com/collabora/WhisperLive, which uses faster-whisper. Is this possible now, or how could it be implemented if I need to dive into the repository code?
I want to code and test this scenario to build a multi-client server that transcribes multiple audio streams at the same time on a GPU.
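A rough shape of such a server, sketched with the `websockets` package (assuming a recent version whose handlers take a single connection argument); `transcribe_batch` is a hypothetical stand-in for a batched WhisperS2T/CTranslate2 call.

```python
# Sketch of a multi-client websocket transcription server. Each connection
# streams raw PCM; one shared loop drains every client's buffer and runs a
# single batched GPU inference per tick.
import asyncio
import websockets

client_buffers = {}  # websocket -> bytearray of audio not yet transcribed

async def handler(ws):
    client_buffers[ws] = bytearray()
    try:
        async for message in ws:  # each message: a chunk of raw PCM bytes
            client_buffers[ws].extend(message)
    finally:
        del client_buffers[ws]

async def batch_loop():
    while True:
        await asyncio.sleep(0.5)  # batching interval: latency vs. throughput
        ready = [(ws, bytes(buf)) for ws, buf in client_buffers.items() if buf]
        for ws, _ in ready:
            client_buffers[ws].clear()
        if not ready:
            continue
        # One batched forward pass over all clients' pending audio.
        texts = transcribe_batch([audio for _, audio in ready])  # hypothetical
        await asyncio.gather(
            *(ws.send(text) for (ws, _), text in zip(ready, texts)),
            return_exceptions=True,  # a client may have disconnected mid-tick
        )

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await batch_loop()

# asyncio.run(main())
```

The key design choice here is the shared batching loop: per-client latency is bounded by the tick interval, while the GPU sees one batched call per tick instead of one call per client.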