Replies: 4 comments
-
Currently I am doing something like this, but it is quite hacky. Not sure if there is a better way to do this:

```ts
function filterMessages<T extends IMessage[] | SavedMessage[]>(messages: T) {
  const _messages = [];
  let i = 0;
  while (i < messages.length) {
    const message = messages[i];
    _messages.push(message);
    // After a toolB invocation (either a saved tool message or a UI message
    // with tool invocations), skip the follow-up LLM message generated from
    // the tool result; otherwise advance one message at a time.
    if ('toolName' in message && message.toolName === 'toolB') {
      i += 2;
    } else if (
      'toolInvocations' in message &&
      message.toolInvocations?.some(
        (toolCall) => toolCall.toolName === 'toolB',
      )
    ) {
      i += 2;
    } else {
      i += 1;
    }
  }
  return _messages;
}
```
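For context, a minimal sketch of how this filter might be wired up on the client; the `ai/react` import path, the `/api/chat` endpoint, and casting `useChat`'s messages to `IMessage[]` are assumptions, not something stated in the thread:

```tsx
'use client';
import { useChat } from 'ai/react'; // '@ai-sdk/react' in newer SDK versions

// Hypothetical usage: filter the streamed messages before rendering so the
// LLM's follow-up text after a toolB invocation is never shown.
export function Chat() {
  const { messages } = useChat({ api: '/api/chat', maxSteps: 3 });
  const visibleMessages = filterMessages(messages as IMessage[]);

  return (
    <ul>
      {visibleMessages.map((m) => (
        <li key={m.id}>
          {m.role}: {m.content}
        </li>
      ))}
    </ul>
  );
}
```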
-
I even tried adding this to the prompt: "Just reply with an empty string after the toolB tool call", but no luck.
-
Currently not possible. Please file a feature request.
-
@lgrammel Will do! But in the meantime, is there a better way to do option B as you had suggested in #3798 (comment)?
-
Hello,

Currently I have a `useChat` hook and the corresponding API route.

My understanding is that since `maxSteps` is set to 3, whenever a tool is invoked the SDK sends the tool's result back to the LLM, and the client receives two back-to-back messages: one with the result of the tool invocation and the next with the string output the LLM generates from that result. This is the desired behavior for my use case, but only for `toolA`. Whenever the LLM decides to invoke `toolB`, I want to just return the output of the tool execution without also sending it to the LLM and getting a string response, essentially setting `maxSteps` to 1 for that tool. Is this possible in the AI SDK, and if so, are there recommended ways to do it? I am trying to do what is recommended in #3798 (comment), primarily option B. Thanks in advance!
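For reference, here is a minimal sketch of the kind of API route being described, assuming an AI SDK 4-style `streamText` call with `maxSteps` and two tools; the route path, model, provider import, and tool schemas are placeholders rather than the poster's actual code:

```ts
// app/api/chat/route.ts (assumed path)
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai'; // provider/model are assumptions
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    // maxSteps applies to the whole request; there is no per-tool setting,
    // which is why stopping after toolB alone needs a workaround.
    maxSteps: 3,
    tools: {
      toolA: tool({
        description: 'Tool whose result the LLM should summarize',
        parameters: z.object({ query: z.string() }),
        execute: async ({ query }) => ({ result: `lookup for ${query}` }),
      }),
      toolB: tool({
        description: 'Tool whose raw result should go straight to the client',
        parameters: z.object({ id: z.string() }),
        execute: async ({ id }) => ({ id, status: 'done' }),
      }),
    },
  });

  return result.toDataStreamResponse();
}
```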