Improve how text streaming is displayed #255
I am currently facing an issue with how to select the AI's ongoing reply for caching the message content. At present, Moxin renders all the messages in the store at once for the
@Guocork besides our offline conversations, I think a good starting point could be to add a character (e.g. ●) simulating the cursor at the end of the message (while streaming). If you're struggling with controlling how the text is updated on the screen, this could be a starting point that will help you identify when and where to update the message content. With that implemented we can check if pacing the text display is necessary or not.
I fixed that in PR #262
For the time being we'll close this with the improvement in #262
Problem
Currently the text streaming looks quite choppy compared to other LM platforms. The text is not displayed evenly or at a consistent pace, which makes the UI look janky.
Compare how the text is displayed in different apps:
Claude
The text is displayed line by line, with the Claude icon at the bottom simulating a cursor.
Screen.Recording.2024-09-19.at.3.32.03.PM.mov
ChatGPT (desktop)
The text seems to be mostly displayed word by word, with a "rounded cursor" at the current end of text.
Screen.Recording.2024-09-19.at.3.32.45.PM.mov
Moxin
Different-sized blocks of text are displayed at different speeds. This might not look as bad on this video, but it feels a lot worse on slower hardware.
Screen.Recording.2024-09-19.at.3.33.23.PM.mov
Proposed fix
We could take inspiration from other LM tools and implement pacing so that the text is displayed more evenly. We could do this line by line like Claude, word by word like ChatGPT, or some other way that introduces consistency.
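One possible shape for such pacing, sketched below under assumptions (the `StreamPacer` name and its methods are hypothetical, not Moxin's actual API): buffer the raw chunks as they arrive from the model, and reveal them one word at a time on a steady UI timer tick instead of rendering each uneven chunk directly.

```rust
// Hypothetical sketch of word-by-word pacing for streamed text.
// `StreamPacer` is an illustrative name, not part of Moxin's codebase.
use std::collections::VecDeque;

struct StreamPacer {
    pending: VecDeque<String>, // words waiting to be revealed
    displayed: String,         // text already shown on screen
}

impl StreamPacer {
    fn new() -> Self {
        Self { pending: VecDeque::new(), displayed: String::new() }
    }

    /// Called whenever a raw chunk arrives from the model;
    /// splits it into words (keeping the trailing spaces).
    fn push_chunk(&mut self, chunk: &str) {
        for word in chunk.split_inclusive(' ') {
            self.pending.push_back(word.to_string());
        }
    }

    /// Called on a fixed UI timer tick (e.g. every ~30 ms):
    /// reveals at most one buffered word per tick.
    fn tick(&mut self) -> &str {
        if let Some(word) = self.pending.pop_front() {
            self.displayed.push_str(&word);
        }
        &self.displayed
    }
}
```

Because the reveal rate is tied to the timer rather than to network chunk sizes, the perceived speed stays consistent even on slower hardware; if the buffer grows too large, the tick could reveal several words at once to catch up.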
Adding some type of cursor indication at the end of the text seems like a good idea to make clear that text is being streamed. This could also help with the issue described in #199. This could be something simple like temporarily adding a special character like the ChatGPT “●” (U+25CF) at the end of the text until the streaming has finished. Or it could be a separate icon beneath the text like in Claude (it could also be animated).
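The simple-character variant could look something like the sketch below (the `display_text` helper is hypothetical): append U+25CF to the text handed to the renderer only while streaming is in progress, leaving the stored message content untouched.

```rust
// Hypothetical sketch: show a "●" cursor only while streaming.
// `display_text` is an illustrative helper, not Moxin's actual API.
const STREAM_CURSOR: char = '\u{25CF}'; // ●

/// Returns the text to render: the message content, plus a trailing
/// cursor character while the model is still streaming.
fn display_text(content: &str, is_streaming: bool) -> String {
    if is_streaming {
        format!("{}{}", content, STREAM_CURSOR)
    } else {
        content.to_string()
    }
}
```

Keeping the cursor out of the stored content means the end-of-streaming detection from #199 only has to flip `is_streaming`, with no cleanup pass over the message text.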
Notes
Issue #199 describes how to detect when the model has finished streaming. There are also mechanisms already in place, used to hide the "Stop Streaming" button once streaming has finished.