
[Bug]: litellm.cost_calculator.py::response_cost_calculator - Returning None #5610

Closed
okhat opened this issue Sep 10, 2024 · 6 comments · Fixed by #5618
Labels
bug Something isn't working

Comments


okhat commented Sep 10, 2024

What happened?

This is an extension of #5597, which I can't re-open.

The fix by @krrishdholakia was great, but it doesn't yet cover most Databricks models, only one. Can we consider a more general fix? This is less pressing now because we only get a warning, not an error.

When using model=databricks/databricks-meta-llama-3-1-70b-instruct, the error complains about databricks/meta-llama-3.1-70b-instruct-082724. The list of databricks/* models in https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json is a good reference.
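
For what it's worth, one quick way to check which of the two names is actually mapped (a minimal sketch; it assumes the public litellm.model_cost dictionary, which LiteLLM loads from the JSON file above):

```python
import litellm

# Check whether each model name has an entry in LiteLLM's cost map.
for name in (
    "databricks/databricks-meta-llama-3-1-70b-instruct",
    "databricks/meta-llama-3.1-70b-instruct-082724",
):
    info = litellm.model_cost.get(name)
    if info:
        print(name, "-> mapped, input_cost_per_token =", info.get("input_cost_per_token"))
    else:
        print(name, "-> not mapped; cost calculation will warn and return None")
```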

Relevant log output

05:50:22 - LiteLLM:WARNING: cost_calculator.py:839 - litellm.cost_calculator.py::response_cost_calculator - Returning None. Exception occurred - This model isn't mapped yet. model=databricks/meta-llama-3.1-70b-instruct-082724, custom_llm_provider=databricks. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.
Traceback (most recent call last):
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/utils.py", line 4936, in get_model_info
    raise ValueError(
ValueError: This model isn't mapped yet. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/cost_calculator.py", line 819, in response_cost_calculator
    response_cost = completion_cost(
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/cost_calculator.py", line 749, in completion_cost
    raise e
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/cost_calculator.py", line 730, in completion_cost
    ) = cost_per_token(
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/cost_calculator.py", line 219, in cost_per_token
    return databricks_cost_per_token(model=model, usage=usage_block)
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/llms/databricks/cost_calculator.py", line 30, in cost_per_token
    model_info = get_model_info(model=base_model, custom_llm_provider="databricks")
  File "/opt/anaconda3/envs/jun2024_py310/lib/python3.10/site-packages/litellm/utils.py", line 5017, in get_model_info
    raise Exception(
Exception: This model isn't mapped yet. model=databricks/meta-llama-3.1-70b-instruct-082724, custom_llm_provider=databricks. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.


okhat added the bug label on Sep 10, 2024

okhat commented Sep 10, 2024

As a suggestion, shouldn't the warning above occur only once per run? On subsequent requests we shouldn't see the same warning about the same issue again.

More generally, many people will ultimately use models that have not been added to the cost calculator. Should we really bombard them with warnings over that, on every request? I'm a lot less familiar than you guys are, but it seems like cost calculation should just return None silently if not available. There's nothing too urgent about the lack of cost calculation that demands a long and loud warning.
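
A sketch of the warn-once idea (a hypothetical helper, not existing LiteLLM code): remember which messages have already been logged in this process and skip repeats.

```python
import logging

logger = logging.getLogger("LiteLLM")
_seen = set()

def warn_once(message: str) -> None:
    # Hypothetical helper: emit each distinct warning message at most once per run.
    if message not in _seen:
        _seen.add(message)
        logger.warning(message)
```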


krrishdholakia commented Sep 10, 2024

@okhat open to suggestions here

The motivation was to make LiteLLM more observable. Since we support returning the calculated response cost in the response (response._hidden_params["response_cost"]), this error shows up to explain why the cost might not be getting calculated.
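
For context, a minimal sketch of reading that value from a completion response (the Databricks model name is just an example, and credentials are assumed to be configured):

```python
import litellm

# Read the cost LiteLLM attaches to the response, when it could be computed.
response = litellm.completion(
    model="databricks/databricks-meta-llama-3-1-70b-instruct",
    messages=[{"role": "user", "content": "hello"}],
)
cost = response._hidden_params.get("response_cost")  # None when the model isn't in the cost map
print("response_cost:", cost)
```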

Possible ideas (open to your suggestions too):

  • disable all logging via litellm._logging._disable_debugging (a logging-level sketch follows this list)
  • expose a flag to disable cost calculation behaviour / cost calculation warning messages
  • reduce the size of the error (don't show the traceback, just the error message)
  • remove the warning and let the caller decide how to handle it (can't do this: since the response cost returns None, the error would be swallowed if not logged somewhere)
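
In the meantime, one workaround sketch using only the standard logging module (the logger name "LiteLLM" matches the prefix in the log output above; note this silences LiteLLM warnings process-wide, which may be more than you want):

```python
import logging

# Raise the LiteLLM logger above WARNING so the cost-calculation
# warning (and other LiteLLM warnings) are no longer emitted.
logging.getLogger("LiteLLM").setLevel(logging.ERROR)
```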

krrishdholakia commented:

looks like we already have support for treating it as a debug error - i'll move to just doing this @okhat
[Screenshot: 2024-09-10 at 9:37 AM]

krrishdholakia commented:

Will also add the missing databricks model prices

krrishdholakia commented:

Databricks has moved their pricing to DBUs. So what we can do is store the DBU information and apply a default conversion rate (which can be overridden by the user).
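
A hedged sketch of that idea (the field names and the default rate below are illustrative placeholders, not LiteLLM's actual schema): store DBUs consumed per token for a model, multiply by a USD-per-DBU rate, and let a user-supplied rate override the default.

```python
# Illustrative sketch only: field names and the default rate are made up.
DEFAULT_USD_PER_DBU = 0.07  # placeholder default; a user-supplied rate should win

def databricks_cost(prompt_tokens: int, completion_tokens: int,
                    dbu_per_input_token: float, dbu_per_output_token: float,
                    usd_per_dbu: float | None = None) -> float:
    """Convert DBU-based pricing into a USD cost estimate."""
    rate = usd_per_dbu if usd_per_dbu is not None else DEFAULT_USD_PER_DBU
    dbus = prompt_tokens * dbu_per_input_token + completion_tokens * dbu_per_output_token
    return dbus * rate
```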


okhat commented Sep 10, 2024

All of these sound good to me! Thanks a ton @krrishdholakia

krrishdholakia added a commit that referenced this issue Sep 11, 2024
* fix(cost_calculator.py): move to debug for noisy warning message on cost calculation error

Fixes #5610

* fix(databricks/cost_calculator.py): Handles model name issues for databricks models

* fix(main.py): fix stream chunk builder for multiple tool calls

Fixes #5591

* fix: correctly set user_alias when passed in

Fixes #5612

* fix(types/utils.py): allow passing role for message object

#5621

* fix(litellm_logging.py): Fix langfuse logging across multiple projects

Fixes issue where langfuse logger was re-using the old logging object

* feat(proxy/_types.py): support adding key-based tags for tag-based routing

Enable tag based routing at key-level

* fix(proxy/_types.py): fix inheritance

* test(test_key_generate_prisma.py): fix test

* test: fix test

* fix(litellm_logging.py): return used callback object