
Improve how text streaming is displayed #255

Closed
joulei opened this issue Sep 19, 2024 · 4 comments
joulei commented Sep 19, 2024

Problem

Currently, the text streaming looks quite choppy compared to other LM platforms. The text is not displayed evenly or at a consistent pace, which makes the UI look janky.

Compare how the text is displayed in different apps:

Claude
The text is displayed line by line, with the Claude icon at the bottom simulating a cursor.

Screen.Recording.2024-09-19.at.3.32.03.PM.mov

ChatGPT (desktop)
The text seems to be mostly displayed word by word, with a "rounded cursor" at the current end of text.

Screen.Recording.2024-09-19.at.3.32.45.PM.mov

Moxin

Different-sized blocks of text are displayed at different speeds. This might not look as bad in this video, but it feels a lot worse on slower hardware.

Screen.Recording.2024-09-19.at.3.33.23.PM.mov

Proposed fix

We could take inspiration from other LM tools and implement pacing so the text is displayed more evenly. This could be line by line like Claude, word by word like ChatGPT, or some other scheme that introduces consistency.
Adding some type of cursor indication at the end of the text seems like a good idea to make clear that text is being streamed. This could also help with the issue described in #199. This could be something simple like temporarily adding a special character like the ChatGPT “●” (U+25CF) at the end of the text until the streaming has finished. Or it could be a separate icon beneath the text like in Claude (it could also be animated).
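To make the pacing idea concrete, here is a minimal Rust sketch of a word-level pacer: it buffers incoming stream chunks of any size and reveals at most one word per UI tick, so display speed no longer depends on how the model delivers tokens. The `WordPacer` name and `tick` cadence are illustrative, not Moxin's actual API, and for simplicity it assumes chunks end on word boundaries.

```rust
use std::collections::VecDeque;

// Hypothetical word-level pacer (not Moxin's API): buffers incoming
// chunks and releases one word per tick, decoupling display speed
// from the model's delivery rate.
struct WordPacer {
    pending: VecDeque<String>,
    shown: String,
}

impl WordPacer {
    fn new() -> Self {
        Self { pending: VecDeque::new(), shown: String::new() }
    }

    // Called whenever a chunk arrives from the model, of any size.
    // Simplification: assumes chunks end on a word boundary.
    fn push_chunk(&mut self, chunk: &str) {
        for word in chunk.split_inclusive(' ') {
            self.pending.push_back(word.to_string());
        }
    }

    // Called on a timer (e.g. every 30 ms); returns the text to render.
    fn tick(&mut self) -> &str {
        if let Some(word) = self.pending.pop_front() {
            self.shown.push_str(&word);
        }
        &self.shown
    }
}

fn main() {
    let mut pacer = WordPacer::new();
    pacer.push_chunk("Hello world, this ");
    pacer.push_chunk("is a test");
    // Each tick reveals at most one word, regardless of chunk size.
    assert_eq!(pacer.tick(), "Hello ");
    assert_eq!(pacer.tick(), "Hello world, ");
    println!("ok");
}
```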

Notes

Issue #199 describes how to detect when the model has finished streaming. Also, there are already mechanisms in place to hide the "Stop Streaming" button once streaming has finished.

Guocork commented Sep 20, 2024

I am currently facing an issue with selecting the AI's in-progress reply so I can cache its message content. At present, Moxin renders all the messages in the store at once for the chat_line rendering, so determining which message is the AI's current reply in the conversation becomes a challenge. So far, I haven't found a solution.
This is the set_text function
This is the render function

joulei commented Sep 25, 2024

@Guocork besides our offline conversations, I think a good starting point could be to add a character (e.g. ●) simulating a cursor at the end of the message while streaming. If you're struggling with controlling how the text is updated on screen, this starting point will help you identify when and where to update the message content (ChatLine - draw_walk()), and it'll also require you to pass a flag from ChatPanel to ChatLine indicating whether that's the end of the stream or more text is incoming.

With that implemented we can check if pacing the text display is necessary or not.
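The flag-passing idea above can be sketched roughly as follows. The `ChatPanel`/`ChatLine` types here are plain-Rust stand-ins for Moxin's Makepad widgets (the real update would happen inside draw_walk()), so treat the names and structure as assumptions rather than the actual implementation.

```rust
// Stand-ins for Moxin's ChatLine/ChatPanel widgets, not the real types.
struct ChatLine {
    text: String,
    is_streaming: bool, // set by the parent while tokens are arriving
}

impl ChatLine {
    // Analogous to the text update in draw_walk(): append the cursor
    // character (U+25CF) only while more text is expected.
    fn rendered_text(&self) -> String {
        if self.is_streaming {
            format!("{}●", self.text)
        } else {
            self.text.clone()
        }
    }
}

struct ChatPanel {
    lines: Vec<ChatLine>,
}

impl ChatPanel {
    // Called when a new chunk arrives for the last (in-progress) message.
    fn on_chunk(&mut self, chunk: &str) {
        if let Some(last) = self.lines.last_mut() {
            last.text.push_str(chunk);
            last.is_streaming = true;
        }
    }

    // Called once the model signals end-of-stream (see issue #199).
    fn on_stream_end(&mut self) {
        if let Some(last) = self.lines.last_mut() {
            last.is_streaming = false;
        }
    }
}

fn main() {
    let mut panel = ChatPanel {
        lines: vec![ChatLine { text: String::new(), is_streaming: false }],
    };
    panel.on_chunk("Hello");
    assert_eq!(panel.lines[0].rendered_text(), "Hello●");
    panel.on_stream_end();
    assert_eq!(panel.lines[0].rendered_text(), "Hello");
    println!("ok");
}
```

The key point is that only the panel knows when the stream ends, so the per-line widget just reads the flag during drawing.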

Guocork commented Sep 27, 2024

I fixed that in PR #262.

joulei commented Nov 18, 2024

For the time being, we'll close this with the improvement in #262.

@joulei joulei closed this as completed Nov 18, 2024