bug: Model Response Generation Sometimes Gets Stuck #1762
Labels
- category: model running
- type: bug
Jan version
0.5.10
Describe the Bug
Original report: https://discord.com/channels/1107178041848909847/1307915350406467664/1313071502358483025
The model occasionally gets stuck in the "Generating response..." state and never completes the generation. This appears to happen at random and is not specific to one model: it has been observed with both Qwen and Llama models.
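For anyone debugging this, a stall like this is easier to diagnose if the stream fails loudly instead of hanging forever. Below is a minimal TypeScript sketch of an inactivity watchdog around a streamed completion against an OpenAI-compatible local endpoint. This is not Jan's actual code: the endpoint URL, port, model id, and timeout value are all assumptions for illustration.

```ts
// Hypothetical sketch: abort a streamed completion if no chunk arrives
// within STALL_TIMEOUT_MS. Endpoint, port, and model id are placeholders.
const STALL_TIMEOUT_MS = 30_000; // arbitrary; tune to the model's speed

async function streamWithWatchdog(prompt: string): Promise<string> {
  const controller = new AbortController();
  let watchdog = setTimeout(() => controller.abort(), STALL_TIMEOUT_MS);
  const resetWatchdog = () => {
    clearTimeout(watchdog);
    watchdog = setTimeout(() => controller.abort(), STALL_TIMEOUT_MS);
  };

  // Assumed OpenAI-compatible local server; adjust URL/model to your setup.
  const res = await fetch("http://localhost:1337/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-7b-instruct", // placeholder model id
      stream: true,
      messages: [{ role: "user", content: prompt }],
    }),
    signal: controller.signal,
  });
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let output = "";
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      resetWatchdog(); // a chunk arrived, so the stream is still alive
      output += decoder.decode(value, { stream: true });
    }
  } finally {
    clearTimeout(watchdog);
  }
  return output;
}

streamWithWatchdog("Hello").then(console.log).catch((err) => {
  // An AbortError here means the stream stalled past the timeout,
  // which is exactly the "stuck at Generating response" symptom.
  console.error("generation stalled or failed:", err);
});
```

Wrapping the generation path this way would at least surface whether the backend stops emitting tokens or the UI stops consuming them.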
Steps to Reproduce
No reliable trigger has been identified. Roughly:
1. Start a chat with a local model in Jan (observed with both Qwen and Llama).
2. Send a prompt and wait for the response.
3. Intermittently, the response hangs at "Generating response..." and never finishes.
Screenshots / Logs
What is your OS?