
Tool messages with empty content get their content overwritten in streaming on subsequent calls #4659

Open
rclmenezes opened this issue Feb 2, 2025 · 2 comments
Labels: ai/ui, bug (Something isn't working)

@rclmenezes

Description

Let's say the client is responding to an assistant message that contains a tool call and no text content:

[
  {
    "role": "user",
    "content": "my request"
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "tool-call",
        "toolCallId": "call_pdVC17S6U1bYY5r5Hf6MxzWs",
        "toolName": "myTool",
        "args": {
          "foo": "bar"
        }
      }
    ]
  },
  {
    "role": "tool",
    "content": [
      {
        "type": "tool-result",
        "toolCallId": "call_pdVC17S6U1bYY5r5Hf6MxzWs",
        "toolName": "myTool",
        "result": {
          "dead": "beef"
        }
      }
    ]
  }
]

Say the LLM then responds with text explaining the tool result. The final message in onFinish in streamText will look something like:

[
  {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Text explaining it"
      }
    ],
    "id": "msg-e1uQNCCwPYems2vSFjXVw5dD"
  }
]
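For context, here is a minimal sketch of the server handler where this callback fires. The model id and the messages variable are placeholders, not taken from the report:

import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';

const result = streamText({
  model: anthropic('claude-3-5-sonnet-latest'), // placeholder model id
  messages, // the user/assistant/tool messages shown above
  onFinish({ response }) {
    // response.messages holds the assistant messages generated by this
    // call; in this scenario, the single text message shown above.
    console.log(JSON.stringify(response.messages, null, 2));
  },
});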

So far so good. However, as the message gets streamed, something weird happens. The UI messages on the client go from:

[
    {
        "role": "user",
        "content": "my request"
    },
    {
        "role": "assistant",
        "content": "",
        "toolInvocations": [...]
    }
]

to one where the streamed text is merged into the same assistant message:

[
    {
        "role": "user",
        "content": "my request"
    },
    {
        "role": "assistant",
        "content": "Text explaining it",
        "toolInvocations": [...]
    }
]

Instead, we should be getting a new uiMessage with only content and no toolInvocations, like this:

[
    {
        "role": "user",
        "content": "my request"
    },
    {
        "role": "assistant",
        "content": "",
        "toolInvocations": [...]
    },
    {
        "role": "assistant",
        "content": "Text explaining it",
        "toolInvocations": []
    }
]
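To make the ambiguity concrete, here is a hypothetical client component using the pre-parts UIMessage shape (the component itself is illustrative, not from the report). Once the follow-up text is merged into the assistant message that already carries toolInvocations, the UI can no longer tell whether the text came before or after the tool call:

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages } = useChat();
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
          {/* text and tool invocations sit side by side on one message,
              so their relative order is lost */}
          {m.toolInvocations?.map(ti => (
            <pre key={ti.toolCallId}>{JSON.stringify(ti, null, 2)}</pre>
          ))}
        </div>
      ))}
    </div>
  );
}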

Code example

No response

AI provider

@ai-sdk/anthropic@1.1.5

Additional context

No response

@rclmenezes rclmenezes added the bug Something isn't working label Feb 2, 2025
@rclmenezes (Author)

See #4591 (comment) for another example

@lgrammel lgrammel self-assigned this Feb 5, 2025
@lgrammel lgrammel added the ai/ui label Feb 5, 2025
@lgrammel (Collaborator) commented Feb 5, 2025

The underlying issue was a bug where message annotations were split between different assistant response messages. That bug was fixed by combining assistant messages into one. Ideally, UI messages should also form a user-assistant-user-assistant sequence to make rendering easy, and combining assistant messages achieves that. Many users reported that they had to put hacks in place to combine assistant messages themselves.

With this in mind, I have introduced parts on UI messages: #4670

Check out this example of using message parts with tool invocations: https://github.com/vercel/ai/blob/main/examples/next-openai/app/use-chat-tools/page.tsx

I hope message parts are the way forward here. Please let me know if that addresses the limitations that you were facing, and if there are missing features in the new approach.
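For readers landing here, a condensed sketch of the parts-based rendering from that example (simplified; see the linked page for the full version):

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages } = useChat();
  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {/* each assistant message is an ordered list of typed parts,
              so text that arrives after a tool call renders after it */}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <span key={i}>{part.text}</span>;
              case 'tool-invocation':
                return (
                  <pre key={i}>
                    {JSON.stringify(part.toolInvocation, null, 2)}
                  </pre>
                );
              default:
                return null;
            }
          })}
        </div>
      ))}
    </div>
  );
}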
