Description
When executing both frontend and backend tools in a single user request, CopilotKit throws an OpenAI API error related to tool call ID validation. This issue affects both the deprecated coagents-starter example and the newer with-langgraph-python example.
The error occurs when a user requests actions that require both server-side execution (e.g., get_weather) and frontend UI updates (e.g., change_color) in the same prompt.
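For reference, the backend tool in the LangGraph Python agent looks roughly like the sketch below (a minimal illustration using langchain_core's @tool decorator; the function body is a placeholder, not the example's actual implementation). The frontend change_color action is registered separately in the React app via useCopilotAction.

```python
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get the current weather for a given location."""
    # Placeholder body for illustration; the real example returns actual data.
    return f"The weather in {location} is sunny."
```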
Examples Affected
- coagents-starter (original example)
- with-langgraph-python (modernized example)
Steps to Reproduce
- Run this example
- In the chat interface, enter a prompt that triggers both frontend and backend tools:
Get the weather in SF, and change the color to brown
Expected Behavior
Both tools should execute successfully:
- The backend get_weather tool should fetch weather data for San Francisco
- The frontend change_color tool should update the UI color to brown
- The assistant should provide a response summarizing both actions
Actual Behavior
The request fails with an OpenAI API error:
openai.BadRequestError: Error code: 400 - {
'error': {
'message': "Invalid parameter: 'tool_call_id' of 'call_qhvIZb4R9Jzv59TZPYBPnDih' not found in 'tool_calls' of previous message.",
'type': 'invalid_request_error',
'param': 'messages.[26].tool_call_id',
'code': None
}
}
It seems that a message sequencing / ordering issue when generating the requests to the LLM is what triggers the error.
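For context, the constraint the API enforces is that a "tool" message is only accepted if its tool_call_id matches an entry in the tool_calls of the assistant message immediately before it. A sketch of a valid sequence for this prompt (the tool call IDs below are made up for illustration):

```python
# Valid ordering expected by the OpenAI chat completions API:
# an assistant message declaring tool_calls, then one tool message per call id.
valid_messages = [
    {"role": "user", "content": "Get the weather in SF, and change the color to brown"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {"id": "call_backend", "type": "function",
             "function": {"name": "get_weather", "arguments": '{"location": "SF"}'}},
            {"id": "call_frontend", "type": "function",
             "function": {"name": "change_color", "arguments": '{"color": "brown"}'}},
        ],
    },
    {"role": "tool", "tool_call_id": "call_backend", "content": "Sunny, 18C"},
    {"role": "tool", "tool_call_id": "call_frontend", "content": "Color changed to brown"},
]

# The failing request instead contains a tool message whose tool_call_id
# (e.g. 'call_qhvIZb4R9Jzv59TZPYBPnDih') has no matching entry in the
# preceding assistant message's tool_calls, which produces the 400 above.
```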
May be related to CopilotKit/CopilotKit#561
This does not look like a problem with the example per se, but rather with how it uses the CopilotKit SDK, or an issue in CopilotKit itself.
Thanks!