
[Bug]: Faulty calculation of custom pricing #3165

Closed · r4881t opened this issue Jul 18, 2024 · 1 comment · Fixed by #3175

r4881t (Contributor) commented Jul 18, 2024

Describe the bug

With the OpenAI gpt-4o-mini model, I went ahead and set the price param as:

import os

model_4o = "gpt-4o-mini"
llm_config_4o = {
    "config_list": [{"model": model_4o, "api_key": os.environ["OPENAI_API_KEY"]}],
    "stream": False,
    "temperature": 0,
    "cache_seed": 20,
    "timeout": 300,
    "max_tokens": 3 * 1024,
    "price": [0.00015, 0.0006],
}

The documentation says the price should be the price per 1k tokens. However, I soon noticed that the cost it reports is quite high. So I dug deeper and found that the code committed via #2902 contains a bug, specifically at:

return n_input_tokens * price_1k[0] + n_output_tokens * price_1k[1]
where it multiplies the per-1k price by the raw number of tokens, giving a value that is 1000x too high. As a workaround I changed the price in my llm config to be the price per token, but the code needs to be updated so that people don't face this issue.
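For reference, here is a minimal sketch of the corrected calculation, assuming the documented per-1k-token semantics (the function name and signature are illustrative, not the actual code from #2902):

def cost_from_price_1k(n_input_tokens, n_output_tokens, price_1k):
    # price_1k = [input price per 1k tokens, output price per 1k tokens],
    # so scale the raw token counts down by 1000 before multiplying.
    return (n_input_tokens / 1000) * price_1k[0] + (n_output_tokens / 1000) * price_1k[1]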

Steps to reproduce

No response

Model Used

No response

Expected Behavior

The cost should be computed treating the configured price as the price per 1k tokens.
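For illustration, with price = [0.00015, 0.0006] and a hypothetical completion using 1,000 input tokens and 500 output tokens, the expected cost is 1.0 * 0.00015 + 0.5 * 0.0006 = 0.00045 USD, whereas the current code returns 1000 * 0.00015 + 500 * 0.0006 = 0.45 USD, a 1000x overestimate.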

Screenshots and logs

No response

Additional Information

No response

r4881t added the bug label Jul 18, 2024
yiranwu0 (Collaborator) commented:

Yes, thanks! I will submit a PR for this.
