
[Feature Request]: Get Final Prompts or Token Count  #581

@promentol

Description


Is your feature request related to a problem? Please describe.

In order to bill end clients, we need to calculate the costs associated with each OpenAI request. Since one run issues many chat completion requests to OpenAI, it is currently not possible to determine exactly how many tokens a single indexing or querying process consumed.

Describe the solution you'd like

In general, for cost control and estimation purposes, it would be good to have either a final token count for an indexing process, or somewhere to store the final prompts (not the prompt templates) that were actually sent, both during indexing and during querying (local or global).
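Until something like this is exposed, one possible workaround is to accumulate the `usage` object that each OpenAI chat completion response already returns. The tracker class below is an illustrative sketch, not part of any library's API, and the per-1K-token prices are placeholder assumptions, not official OpenAI rates.

```python
from dataclasses import dataclass


@dataclass
class TokenUsageTracker:
    """Accumulates per-request token usage so a whole indexing or
    querying run can be costed as one total.

    Prices are illustrative placeholders; substitute the real rates
    for whatever model is configured.
    """
    prompt_tokens: int = 0
    completion_tokens: int = 0
    prompt_price_per_1k: float = 0.005      # assumed price, USD per 1K prompt tokens
    completion_price_per_1k: float = 0.015  # assumed price, USD per 1K completion tokens

    def record(self, usage: dict) -> None:
        # `usage` mirrors the shape of the `usage` field on an OpenAI
        # chat completion response: prompt_tokens / completion_tokens.
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    def cost(self) -> float:
        # Total cost of everything recorded so far.
        return (self.prompt_tokens / 1000 * self.prompt_price_per_1k
                + self.completion_tokens / 1000 * self.completion_price_per_1k)


# Example: record the usage of two hypothetical chat completion calls.
tracker = TokenUsageTracker()
tracker.record({"prompt_tokens": 1200, "completion_tokens": 300})
tracker.record({"prompt_tokens": 800, "completion_tokens": 150})
print(tracker.prompt_tokens, tracker.completion_tokens, tracker.cost())
```

Calling `tracker.record(response.usage.model_dump())` (or the equivalent dict) after every completion gives a per-run total without needing access to the final prompts themselves.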

Additional context

No response

Metadata



    Labels

    backlog: We've confirmed some action is needed on this and will plan it
    enhancement: New feature or request
    v3: Issues that we know should be closed with the v3 release in late 2025.
