Hi @ThePipeFixer — great question! The number in the bottom bar is a running lifetime usage counter for the session, not the size of the prompt about to be sent. Every time the model finishes a turn, the usage payload from the Responses API is added to `total_token_usage`, which keeps summing non-cached input and output tokens via `TokenUsage::blended_total` and `TokenUsageInfo::append_last_usage` (codex-rs/core/src/protocol.rs:650, codex-rs/protocol/src/protocol.rs:612). The TUI then renders that cumulative figure in bold (codex-rs/tui/src/bottom_pane/chat_composer.rs:1631), using the helper that merges each new usage report into the running tally (codex-rs/tui/src/chatwidget.rs:12968).
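To make the accumulation concrete, here is a minimal standalone sketch of the idea. The field names and struct shapes are assumptions for illustration only, not the actual codex-rs definitions — the real types live in the files linked above:

```rust
// Hypothetical sketch of the token-tally mechanism; field names are
// assumptions, not the actual codex-rs definitions.
#[derive(Default, Clone, Copy)]
struct TokenUsage {
    input_tokens: u64,        // total input tokens reported for the turn
    cached_input_tokens: u64, // portion of the input served from cache
    output_tokens: u64,       // tokens generated by the model
}

impl TokenUsage {
    /// Non-cached input plus output — the "blended total" idea.
    fn blended_total(&self) -> u64 {
        self.input_tokens - self.cached_input_tokens + self.output_tokens
    }
}

#[derive(Default)]
struct TokenUsageInfo {
    total: TokenUsage, // lifetime tally for the session
}

impl TokenUsageInfo {
    /// Fold the latest per-turn usage report into the running tally.
    fn append_last_usage(&mut self, last: &TokenUsage) {
        self.total.input_tokens += last.input_tokens;
        self.total.cached_input_tokens += last.cached_input_tokens;
        self.total.output_tokens += last.output_tokens;
    }
}

fn main() {
    let mut info = TokenUsageInfo::default();
    // Turn 1: 1_000 input tokens (400 cached) + 200 output.
    info.append_last_usage(&TokenUsage {
        input_tokens: 1_000,
        cached_input_tokens: 400,
        output_tokens: 200,
    });
    // Turn 2: 2_000 input tokens (1_500 cached) + 300 output.
    info.append_last_usage(&TokenUsage {
        input_tokens: 2_000,
        cached_input_tokens: 1_500,
        output_tokens: 300,
    });
    // Cumulative blended total: (3_000 - 1_900) + 500 = 1_600.
    println!("{}", info.total.blended_total());
}
```

The key point this illustrates is that the counter only ever grows: each turn's report is added to the totals, so the bottom bar reflects everything the session has consumed so far, not the next request's size.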
