
Erroneous tokens per min #2255

@gnaservicesinc

Description


What version of Codex is running?

codex-cli 0.21.0

Which model were you using?

gpt-5

What platform is your computer?

Linux 6.15.0-4-generic aarch64 aarch64

What steps can reproduce the bug?

Just ran /init for the first time in a repo in a folder.
🖐 stream disconnected before completion: Rate limit reached for gpt-5 in organization org-4186rp37ux4fzmyzgp01lbvp on tokens per min (TPM): Limit 40000000, Used 40000000, Requested 19304. Please try again in 28ms. Visit https://platform.openai.com/account/rate-limits to learn more.

Token usage: total=18006 input=16800 (+ 70656 cached) output=1206 (reasoning 896)
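For reference, the figures in the error don't line up with the session's own accounting: the rate-limit message claims the full 40,000,000 TPM budget is already used, while this session reports well under 100k tokens. A minimal sanity check of that arithmetic, using only the numbers quoted above:

```python
# Sanity check of the figures quoted above (all values copied from the
# rate-limit error and the token usage line; nothing else is assumed).
tpm_limit = 40_000_000   # "Limit 40000000" from the rate-limit error
tpm_used = 40_000_000    # "Used 40000000" from the rate-limit error
requested = 19_304       # "Requested 19304" from the rate-limit error

session_input = 16_800   # from "Token usage: ... input=16800"
session_cached = 70_656  # "+ 70656 cached"
session_output = 1_206   # "output=1206"

session_total = session_input + session_cached + session_output
print(f"session total (incl. cached): {session_total}")               # 88662
print(f"share of claimed TPM usage: {session_total / tpm_used:.4%}")  # ~0.2217%
```

Unless other workloads in the same org were consuming the rest of the budget at that moment, the Used figure looks implausible for a single /init run, which is what the title refers to.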

What is the expected behavior?

No response

What do you see instead?

No response

Additional information

No response
