Implement Dynamic Voice Response with Interruption Handling for LLM Outputs #14
Additionally, we aim to turn the AI assistant into an active meeting participant. During meetings, it should answer queries (using tools that retrieve information from internal systems, the web, and BI), recall points discussed earlier, and offer timely comments throughout the meeting. The assistant waits for a command containing its name before it starts interacting, and it should help maintain a summarized action plan and a list of topics already covered. The system should work reliably in voice-based meeting rooms.
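The wake-word gating and topic recall described above could be sketched roughly as follows. This is a minimal illustration, not the project's implementation: the wake word `"assistant"`, the `should_activate` helper, and the `MeetingMemory` class are all hypothetical names chosen for the example.

```python
import re

# Hypothetical wake word; the issue leaves the assistant's actual name open.
ASSISTANT_NAME = "assistant"

def should_activate(transcript: str, name: str = ASSISTANT_NAME) -> bool:
    """Return True when the transcript contains the wake word as a whole word."""
    return re.search(rf"\b{re.escape(name)}\b", transcript.lower()) is not None

class MeetingMemory:
    """Tiny stand-in for meeting memory: record topics so they can be recalled."""

    def __init__(self) -> None:
        self.topics: list[str] = []

    def note(self, topic: str) -> None:
        self.topics.append(topic)

    def recall(self) -> list[str]:
        return list(self.topics)
```

In a real pipeline, `should_activate` would run on each transcribed utterance from the meeting-room microphone, and only activating utterances would be forwarded to the LLM; the memory would likewise be fed by a summarization step rather than raw notes.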
We propose implementing a feature that allows the LLM to stream its response in smaller chunks (or a similar strategy), so that voice playback can begin as soon as the first chunks are available. If the user starts speaking over the response, playback pauses and the response flow is dynamically adjusted.
This enhancement aims to optimize both cost and processing time by avoiding the need to process or pay for the entire response when an interruption occurs.
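The interruption-and-cancellation behavior described above can be sketched as a consumer that abandons the token stream as soon as an interrupt fires. This is a simplified sketch under stated assumptions: `fake_llm_stream` stands in for a real streaming LLM API, and `speak_stream` stands in for handing chunks to a TTS engine; both names are invented for the example. Because the generator is abandoned on interruption, later chunks are never pulled from the model, which is where the cost and latency saving comes from.

```python
import threading
from typing import Iterator

def fake_llm_stream(prompt: str) -> Iterator[str]:
    # Stand-in for a streaming LLM API; real code would yield model tokens.
    for chunk in ["Sure, ", "here is ", "the summary ", "of the ", "meeting."]:
        yield chunk

def speak_stream(chunks: Iterator[str], interrupted: threading.Event) -> tuple[str, int]:
    """Play chunks as they arrive; stop as soon as the user interrupts.

    Returns the text actually spoken and the number of chunks consumed,
    so the saving from early cancellation is visible.
    """
    spoken: list[str] = []
    consumed = 0
    for chunk in chunks:
        if interrupted.is_set():
            break  # pause playback and stop pulling chunks from the model
        spoken.append(chunk)  # real code would hand this chunk to a TTS engine
        consumed += 1
    return "".join(spoken), consumed

def interrupt_after(chunks: Iterator[str], n: int, event: threading.Event) -> Iterator[str]:
    # Test harness: simulate the user starting to speak after n chunks.
    for i, chunk in enumerate(chunks):
        if i == n:
            event.set()
        yield chunk

interrupted = threading.Event()
stream = interrupt_after(fake_llm_stream("summarize the meeting"), 2, interrupted)
spoken, consumed = speak_stream(stream, interrupted)
# Only the first two chunks are spoken; the remaining ones are never generated.
```

In production, `interrupted` would be set by a voice-activity detector on the microphone channel, and the abandoned stream would also be cancelled server-side so no further tokens are billed.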
Key objectives:
- Begin voice playback as soon as the first response chunks are available.
- Pause playback and dynamically adjust the response flow when the user interrupts.
- Avoid processing, or paying for, response tokens that will never be spoken.
This feature would improve user experience and efficiency, especially in scenarios where immediate, responsive interaction is crucial.