feat: HuggingFaceAPIChatGenerator add token usage data
#8375
Why:

Adds token `usage` metadata to responses from `HuggingFaceAPIChatGenerator`. The `usage` dictionary in the response `meta` field has the following two keys, `prompt_tokens` and `completion_tokens`, matching the OpenAI token-counting format. This OpenAI-compatible token usage format, aside from its chat-generator interchangeability benefits, is needed for full support of Langfuse GENERATION token usage rendering in traces. See https://github.com/deepset-ai/haystack-private/issues/82 for more details.
What:

Adds a `usage` meta field with the keys `prompt_tokens` and `completion_tokens` to `HuggingFaceAPIChatGenerator`.

How can it be used:

Access the `usage` information in the message metadata.
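As a minimal sketch of reading the new field (the dictionary shape below is taken from the PR description; the model name and token counts are illustrative placeholders, and in real use the dictionary would come from a reply's `meta` attribute, e.g. `result["replies"][0].meta`):

```python
# Hypothetical reply metadata as described in this PR: a `usage` dict
# with OpenAI-style `prompt_tokens` and `completion_tokens` keys.
reply_meta = {
    "model": "some-hf-model",  # placeholder, not from the PR
    "usage": {"prompt_tokens": 38, "completion_tokens": 128},
}

# Read the token counts from the message metadata.
usage = reply_meta["usage"]
total_tokens = usage["prompt_tokens"] + usage["completion_tokens"]
print(total_tokens)  # → 166
```

Because the keys match the OpenAI format, downstream consumers (e.g. tracing integrations) can read the counts the same way regardless of which chat generator produced the reply.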
How did you test it:

Unit tests check the `usage` meta field and its contained `prompt_tokens` and `completion_tokens` keys in the reply messages.

Notes for the reviewer: