Description
When "Force Reasoning: Inject reasoning prompt to encourage step-by-step thinking" is turned on, GLM 4.7 becomes very confused for me. I sent a prompt and a few follow-up prompts, and then the model questioned me hard about "ultrathink". It started doubting itself and deleting code it had already written, because from its perspective I had told it to "ultrathink really hard" multiple times (since the prompt injection happens on every message). I'm not sure whether the injected prompt is correct and the model itself is just confused, or whether there is an issue here in GLMProxy.
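If the repeated injection is the culprit, one possible mitigation is to inject the reasoning prompt only on the latest user message and strip copies injected on earlier turns. This is a hypothetical sketch, not GLMProxy's actual code: the `inject_once` function name, the role/content message format, and the prompt text are all assumptions.

```python
# Hypothetical sketch: ensure the reasoning prompt appears only on the
# latest user message. Message shape and prompt text are assumptions,
# not GLMProxy's real implementation.
REASONING_PROMPT = "\n\nultrathink really hard"  # assumed injected text

def inject_once(messages):
    cleaned = []
    for msg in messages:
        if msg["role"] == "user" and msg["content"].endswith(REASONING_PROMPT):
            # Strip a prompt that was injected on a previous turn, so the
            # model does not see the instruction repeated in its history.
            msg = {**msg, "content": msg["content"][: -len(REASONING_PROMPT)]}
        cleaned.append(msg)
    # Re-inject only on the most recent user message.
    for i in range(len(cleaned) - 1, -1, -1):
        if cleaned[i]["role"] == "user":
            cleaned[i] = {**cleaned[i],
                          "content": cleaned[i]["content"] + REASONING_PROMPT}
            break
    return cleaned
```

With this, a multi-turn history would carry the instruction exactly once, on the final user turn, instead of accumulating one copy per request.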