[FR] Output Confidence Score #62
Comments
The OpenAI API does not return strings as the Prompt API does, but chat completion/chunk objects that contain the logprobs if the user has enabled them when creating the chat session. These objects also support reporting back multiple choices if the user has requested them. However, I think it is tedious to unbox the answer from these objects if you didn't request logprobs or additional choices.
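For reference, here is a trimmed TypeScript sketch of the response shape this comment describes, based on OpenAI's documented chat completion objects; the field names are OpenAI's, not the Prompt API's, and only the fields relevant to this discussion are shown:

```ts
// Trimmed sketch of OpenAI's chat completion response shape.
interface ChatCompletion {
  choices: Choice[]; // one entry per requested choice (`n`)
}

interface Choice {
  index: number;
  message: { role: "assistant"; content: string | null };
  // Present only when the request set `logprobs: true`.
  logprobs: { content: TokenLogprob[] | null } | null;
}

interface TokenLogprob {
  token: string;   // the emitted token
  logprob: number; // its log probability
  // Populated when the request set `top_logprobs: <n>`.
  top_logprobs: { token: string; logprob: number }[];
}

// The "unboxing" the comment mentions: even without logprobs or
// extra choices, the answer string sits two levels deep.
declare const completion: ChatCompletion;
const answer = completion.choices[0].message.content;
```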
Thanks for the extra context @christianliebel. I think we want to align with developer expectations as much as possible by reusing the object shapes they've seen elsewhere, although I would really prefer if we didn't have to use underscored names. I see three possibilities here:
I'm somewhat attracted by (2), but I don't know how web developers feel about it.
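The three options themselves are elided above, but purely as an illustration of the underscored-names concern, a web-style camelCase reshaping of the OpenAI-style fields might look like the following. Every name here is hypothetical and does not come from the Prompt API spec:

```ts
// Hypothetical, for illustration only: a camelCased alternative to the
// OpenAI-style shape above, keeping the plain string at the top level.
interface PromptResult {
  text: string; // the plain answer, easy to unbox
  tokens?: {
    token: string;
    logProb: number; // camelCase instead of `logprob`
    topLogProbs?: { token: string; logProb: number }[];
  }[];
}
```

Keeping `text` at the top level would preserve the easy path for callers that never opted into logprobs or multiple choices.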
+1 to option
Enable developers to filter LLM responses based on confidence. This could be achieved by providing a confidence score with each response, potentially derived from per-token log-likelihood. This would improve the reliability of LLM-powered applications by allowing developers to reject low-confidence outputs.
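As a sketch of how this could be used in practice: assuming a hypothetical option that surfaces per-token log-likelihoods (the `promptWithLogProbs` function and its return shape below are assumptions for illustration, not part of any spec), a developer could collapse them into a single confidence score, for example the geometric mean of the token probabilities, and reject outputs below a threshold:

```ts
// Hypothetical sketch: `promptWithLogProbs` and `ScoredResponse` are
// assumptions for illustration; the Prompt API does not expose this today.
interface ScoredResponse {
  text: string;
  tokenLogProbs: number[]; // one log probability per emitted token
}

declare function promptWithLogProbs(input: string): Promise<ScoredResponse>;

// Geometric mean of per-token probabilities, i.e. exp(mean log-prob).
// Maps the whole response to a single score in (0, 1].
function confidence(tokenLogProbs: number[]): number {
  if (tokenLogProbs.length === 0) return 0;
  const meanLogProb =
    tokenLogProbs.reduce((sum, lp) => sum + lp, 0) / tokenLogProbs.length;
  return Math.exp(meanLogProb);
}

async function promptOrReject(input: string, threshold = 0.5): Promise<string> {
  const { text, tokenLogProbs } = await promptWithLogProbs(input);
  if (confidence(tokenLogProbs) < threshold) {
    throw new Error("Low-confidence output rejected");
  }
  return text;
}
```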