Replies: 1 comment
Hi @bobbyo, thanks for reaching out 😊. Our team has actually discussed this before, i.e. whether or not to support returning the raw response. Currently we are not planning to expose a 'raw' option on the llm tool; as a workaround, you can combine the 'prompt' tool with a custom Python script tool to return the finish reason from a node. Docs for the prompt and Python tools are here: https://microsoft.github.io/promptflow/reference/tools-reference/prompt-tool.html
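For reference, a minimal sketch of what such a custom Python tool could look like, assuming the OpenAI Python SDK (v1+) with an `OPENAI_API_KEY` in the environment and the `@tool` import path shown in the promptflow Python tool docs; the function name and model are placeholders, and a real flow would typically pass a promptflow connection rather than rely on environment variables:

```python
from openai import OpenAI
from promptflow import tool


@tool
def chat_with_finish_reason(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Call the chat completions API directly so finish_reason is preserved."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    choice = response.choices[0]
    # Return a dict so downstream nodes can branch on finish_reason
    # ('stop', 'length', 'tool_calls', ...).
    return {
        "content": choice.message.content,
        "finish_reason": choice.finish_reason,
    }
```

The prompt text would typically come from an upstream 'prompt' tool node, and downstream nodes can then read the finish reason from the returned dict.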
-
Is it possible to determine the finish reason for an llm tool call?
For example, in a flow with an llm tool call, how can the next flow step determine whether the call finished due to 'stop', 'length', 'function_call', ...?
This is important for building reliable pipelines and for debugging invalid JSON responses, as a finish reason of 'length' often results in invalid JSON even when a JSON-mode response was requested:
"The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response." -- OpenAI API docs
(I know the 'finish_reason' is visible in the files under ~/.promptflow for local runs; however, I do not see any promptflow API way to determine the finish reason...)
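As an illustration of the guard the quoted docs recommend, here is a minimal sketch of a downstream check; the dict shape and field names are illustrative and assume an upstream step already exposes both the content and the finish reason:

```python
import json


def parse_if_complete(result: dict):
    """Only parse the model output as JSON when generation finished normally.

    A finish_reason of 'length' means the output was cut off, so parsing
    would likely fail or silently return truncated data.
    """
    if result["finish_reason"] != "stop":
        return None
    return json.loads(result["content"])
```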