Websocket Support for Streaming Input and Output #320
hi @ChenghaoMou, thank you for looking into and trying LitServe 💜!! We support streaming, which can be enabled by adding `stream=True` when creating the server. Please let me know if it helps.
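For reference, a minimal sketch of what output streaming in LitServe looks like, assuming the documented `stream=True` flag and generator-style `predict`/`encode_response`; the token list stands in for real model output:

```python
import litserve as ls

class StreamAPI(ls.LitAPI):
    def setup(self, device):
        # Load the real model here; omitted in this sketch.
        self.device = device

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        # Yield pieces of the response as they are produced.
        for token in ["fake", "streamed", "tokens"]:
            yield token

    def encode_response(self, outputs):
        # Each yielded item becomes one chunk of the streamed response.
        for token in outputs:
            yield {"output": token}

if __name__ == "__main__":
    server = ls.LitServer(StreamAPI(), stream=True)
    server.run(port=8000)
```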
Thanks for the prompt response, @aniketmaurya! If I am reading the documentation right, the current streaming is only for the output, not the input. It feels more like server-sent events (one input and multiple outputs) than websockets (streaming both input and output). The difference shows up in, for example, speech to text: with a websocket, the client can send audio chunks as they are captured and receive partial transcripts back, rather than uploading the complete audio and waiting for the result.
I hope this makes sense.
Yes @ChenghaoMou, the current streaming uses server-sent events. Let's keep this open, and we can evaluate this feature based on requests from the community.
Come on, websockets are a basic feature. That was the first thing I needed after signing up for a PRO account. Currently, ports are opened only if there is a GET endpoint that returns 200. With FastAPI, you just need to enable both a GET endpoint and a WSS endpoint. As a community member, I'd say this is a crucial and basic element of software development for web & mobile apps. Studio AI has some Whisper examples, but they do it wrong: they just wrap Whisper with Streamlit, so it behaves like a REST API, or it just processes the whole audio file. As a perfectionist and a software developer with 20 years of experience, it made me a little bit sad :-D
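For illustration, the setup described above might look like the following FastAPI sketch; the routes and the echo logic are placeholders, with a plain GET endpoint to satisfy the port-opening health check and a websocket endpoint for the real two-way traffic:

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

# A plain GET endpoint that returns 200, so the platform's
# health check passes and the port is opened.
@app.get("/")
def health():
    return {"status": "ok"}

# The websocket endpoint carries the actual two-way traffic.
@app.websocket("/ws")
async def ws_handler(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            message = await websocket.receive_text()
            # Echo for illustration; real inference would go here.
            await websocket.send_text(message)
    except WebSocketDisconnect:
        pass
```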
🚀 Feature
Support websocket endpoints to allow two-way real-time data communication.
Motivation
Currently, requests are processed with the expectation that the data is complete and stateless. However, the input data isn't always available all at once for use cases like speech to text, text to speech, and audio/speech understanding, especially in time-sensitive situations. With the recent release of the Realtime API from OpenAI and a new family of voice AI models (ultravox, mini-omni, llama-omni, moshi), support for streaming input and output could benefit the community in many ways and unlock even more creative uses of AI models.
Pitch
Support streaming input and output with websocket or any other methods to allow real-time AI applications.
Alternatives
A typical FastAPI websocket implementation follows a standard template:
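A minimal sketch of that template, following the pattern in the FastAPI documentation; the `/transcribe` path and the incremental transcription step are placeholders:

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/transcribe")
async def transcribe(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            # Receive one audio chunk from the client as it is recorded.
            chunk = await websocket.receive_bytes()
            # Placeholder: run incremental inference on the chunk.
            partial_text = f"received {len(chunk)} bytes"
            # Send a partial transcript back immediately.
            await websocket.send_text(partial_text)
    except WebSocketDisconnect:
        pass
```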
However, this might make batching impossible, or at least complicated.
I am new to this repo, so if there is a workaround that hacks the server/spec/api to allow websockets, I am more than happy to contribute. If this is a duplicate or irrelevant, sorry for the trouble.
Thanks a million for open sourcing this awesome project. ❤️