Hello,
I'm working on function calling with my own models, but I'm having trouble getting the model's output parsed into a tool call. Currently, my model produces the following output:
However, when I call Text Generation Inference (TGI) for a chat completion, I receive an error saying that it couldn't parse the text.
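For context, this is roughly the kind of request I'm making; a minimal sketch assuming TGI's OpenAI-compatible `/v1/chat/completions` endpoint on localhost and a hypothetical `get_current_weather` tool (the base URL and tool definition are placeholders, not my real setup):

```python
# Sketch: chat completion with tools against TGI's OpenAI-compatible Messages API.
# The base_url and the get_current_weather tool are placeholders for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")  # TGI does not check the key

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # hypothetical tool
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                },
                "required": ["location"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="tgi",  # TGI serves a single model, so the name is just a placeholder
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message)
```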
Could you clarify how the model's output should be structured so that TGI can parse it into a tool call? Additionally, does this work with streamed output as well?
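On the streaming side, this is the consumption pattern I would expect to use, assuming tool-call deltas arrive in the usual OpenAI chunk format (arguments streamed as string fragments); `client` and `tools` are the same as in the sketch above:

```python
# Sketch: consuming a streamed response, assuming OpenAI-style deltas where
# function arguments arrive as string fragments across chunks.
stream = client.chat.completions.create(
    model="tgi",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
    stream=True,
)

arguments = ""
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        # Accumulate the fragments of the tool-call arguments.
        fragment = delta.tool_calls[0].function.arguments
        if fragment:
            arguments += fragment
    elif delta.content:
        # Plain text answer, no tool call.
        print(delta.content, end="")

if arguments:
    print("tool arguments:", arguments)
```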
Lastly, the model sometimes generates output that is not a function call. In those cases I would like to receive the plain text content instead of a tool call. Could you advise how to handle both scenarios effectively?
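In other words, what I would like to be able to do is branch on the (non-streamed) response roughly like this, assuming the OpenAI-style response shape and reusing `response` from the first sketch:

```python
# Sketch: handling both a tool call and a plain text answer on one response.
message = response.choices[0].message

if message.tool_calls:
    for call in message.tool_calls:
        # The chosen tool and the arguments the model produced for it.
        print("tool:", call.function.name, "arguments:", call.function.arguments)
else:
    # The model answered in plain text instead of calling a tool.
    print(message.content)
```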
Thank you!