The underlying issue was a bug where message annotations were split across different assistant response messages. This was fixed by combining assistant messages into one. Ideally, UI messages should also form a simple user-assistant-user-assistant sequence to make rendering in the UI easy, and combining assistant messages achieves that. Many users reported that they had to put hacks in place to combine assistant messages themselves.
With this in mind, I have introduced parts on UI messages: #4670
I hope message parts are the way forward here. Please let me know if that addresses the limitations that you were facing, and if there are missing features in the new approach.
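As a rough illustration of the parts approach, here is a hedged sketch of what a combined assistant turn could look like. The part type names (`text`, `tool-invocation`), the tool name, and all payloads below are assumptions for illustration, not verbatim SDK types:

```typescript
// Hedged sketch of the "message parts" idea from #4670. All type and
// field names here are assumptions, not a copy of the actual SDK types.
type TextPart = { type: "text"; text: string };
type ToolInvocationPart = {
  type: "tool-invocation";
  toolInvocation: { toolName: string; args: unknown; result?: unknown };
};
type PartsUIMessage = {
  role: "user" | "assistant";
  parts: Array<TextPart | ToolInvocationPart>;
};

// A single assistant message keeps the tool call and the follow-up
// explanation as ordered parts, so the conversation stays a simple
// user-assistant-user-assistant sequence.
const assistantTurn: PartsUIMessage = {
  role: "assistant",
  parts: [
    {
      type: "tool-invocation",
      toolInvocation: {
        toolName: "getWeather", // invented example tool
        args: { city: "Berlin" },
        result: { tempC: 7 },
      },
    },
    { type: "text", text: "It is currently 7 °C in Berlin." },
  ],
};
```

The point of the sketch: ordering is preserved inside one message, so the UI can render the tool call and the text in sequence without stitching messages together.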
Description
Let's say the client is responding to a message with a tool-call and no content:
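The snippet from the original report is missing here; as a hedged stand-in (tool name, IDs, and payloads all invented), such a message looks roughly like:

```typescript
// Illustrative only, not the reporter's actual payload: an assistant
// message whose payload is a single tool invocation and no text content.
const toolCallMessage = {
  id: "msg_1",
  role: "assistant",
  content: "", // no text, only the tool call
  toolInvocations: [
    {
      state: "result",
      toolCallId: "call_1",
      toolName: "getWeather",
      args: { city: "Berlin" },
      result: { tempC: 7 },
    },
  ],
};
```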
Let's say the LLM responds with text to explain the tool result. The final message in `onFinish` in `streamText` looks as expected. So far so good. However, as the message gets streamed, something weird happens. The UI messages in the client go from:
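The report's snippet is missing here; as an illustrative stand-in (all names and payloads invented), the client initially holds an assistant UI message with only the tool invocation and empty content:

```typescript
// Illustrative stand-in, not the reporter's actual payload: the state
// of the client's UI messages before the explanation text streams in.
const messagesBefore = [
  {
    id: "msg_1",
    role: "assistant",
    content: "", // no text yet
    toolInvocations: [
      {
        state: "result",
        toolCallId: "call_1",
        toolName: "getWeather",
        args: { city: "Berlin" },
        result: { tempC: 7 },
      },
    ],
  },
];
```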
To having the resulting content in the same assistant message:
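Illustratively (same caveats: invented names and payloads), the streamed text ends up on the very message that already carries the tool invocation:

```typescript
// Observed behavior (illustrative): the new text is merged into the
// existing assistant message instead of starting a new one.
const messagesObserved = [
  {
    id: "msg_1",
    role: "assistant",
    content: "It is currently 7 °C in Berlin.", // streamed text landed here
    toolInvocations: [
      {
        state: "result",
        toolCallId: "call_1",
        toolName: "getWeather",
        args: { city: "Berlin" },
        result: { tempC: 7 },
      },
    ],
  },
];
```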
Instead, we should be getting a new uiMessage with only content and no toolInvocations, like this:
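A hedged sketch of that expected shape (IDs and text invented): the streamed text arrives as a second assistant message carrying only `content`:

```typescript
// Expected behavior (illustrative): two assistant messages, the second
// with only text content and no toolInvocations field at all.
const messagesExpected = [
  {
    id: "msg_1",
    role: "assistant",
    content: "",
    toolInvocations: [
      {
        state: "result",
        toolCallId: "call_1",
        toolName: "getWeather",
        args: { city: "Berlin" },
        result: { tempC: 7 },
      },
    ],
  },
  {
    id: "msg_2",
    role: "assistant",
    content: "It is currently 7 °C in Berlin.",
  },
];
```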
Code example
No response
AI provider
@ai-sdk/anthropic@1.1.5
Additional context
No response