
[Bug]: AWS Bedrock Nova error #7181

Open
jleatham opened this issue Dec 11, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@jleatham

What happened?

LiteLLM proxy setup with a few models:

Public Model Name:     amazon.nova-pro-v1:0                                Provider: bedrock_converse 
Public Model Name:     us.anthropic.claude-3-5-sonnet-20241022-v2:0        Provider: bedrock
...

Other models work, but Nova does not.
The error is the same with a python litellm script as well as with open-webui.
Not sure why the provider is different; I added both models the same way using the Add Model UI.

Relevant log output

Uh-oh! There was an issue connecting to amazon.nova-pro-v1:0.
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=amazon.nova-pro-v1:0
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Received Model Group=amazon.nova-pro-v1:0
Available Model Group Fallbacks=None
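
For reference, the error above asks for an explicit provider in the model string. A minimal sketch of the kind of direct python litellm call that carries the provider prefix (assuming AWS credentials and region are set in the environment; this is an illustration, not the exact script used):

import litellm

# Sketch: the "bedrock/" prefix is how litellm infers the provider,
# which is what the BadRequestError above says is missing.
response = litellm.completion(
    model="bedrock/amazon.nova-pro-v1:0",
    messages=[{"role": "user", "content": "Hello from Nova"}],
)
print(response.choices[0].message.content)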

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.54.1

Twitter / LinkedIn details

No response

@jleatham jleatham added the bug Something isn't working label Dec 11, 2024
@CrazyShipOne

Try replacing the model name with 'us.amazon.nova-pro-v1:0'.

@jleatham
Author

The Amazon model (to my knowledge) differs from Anthropic and Llama in that its modelId does not use the us. prefix. I tried anyway and got the following error:

Uh-oh! There was an issue connecting to us.amazon.nova-pro-v1:0.
{'error': '/chat/completions: Invalid model name passed in model=us.amazon.nova-pro-v1:0. Call `/v1/models` to view available models for your key.'}

Here is the model API guidance from AWS Console:

{
  "modelId": "amazon.nova-pro-v1:0",
  "contentType": "application/json",
  "accept": "application/json",
  "body": {
    "inferenceConfig": {
      "max_new_tokens": 1000
    },
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "text": "this is where you place your input text"
          }
        ]
      }
    ]
  }
}
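
For comparison, a rough boto3 sketch of that same request against the Bedrock runtime (the region and credentials here are assumptions; this was not run as part of this issue):

import json
import boto3

# Sketch of the AWS Console guidance above, sent through the Bedrock runtime API.
client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

body = {
    "inferenceConfig": {"max_new_tokens": 1000},
    "messages": [
        {"role": "user", "content": [{"text": "this is where you place your input text"}]}
    ],
}

response = client.invoke_model(
    modelId="amazon.nova-pro-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)
print(json.loads(response["body"].read()))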

The us. prefix test above was run on litellm version 1.56.4. I also tried with the regular amazon.nova-pro-v1:0 model and got this error from the proxy server:

17:41:42 - LiteLLM Proxy:ERROR: proxy_server.py:3492 - litellm.proxy.proxy_server.chat_completion(): Exception occured - litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=amazon.nova-pro-v1:0
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Received Model Group=amazon.nova-pro-v1:0
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
Traceback (most recent call last):
  File "/usr/local/lib/python3.13/site-packages/litellm/proxy/proxy_server.py", line 3381, in chat_completion
    responses = await llm_responses
                ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 833, in acompletion
    raise e
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 809, in acompletion
    response = await self.async_function_with_fallbacks(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 2803, in async_function_with_fallbacks
    raise original_exception
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 2619, in async_function_with_fallbacks
    response = await self.async_function_with_retries(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        *args, **kwargs, mock_timeout=mock_timeout
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 2981, in async_function_with_retries
    raise original_exception
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 2887, in async_function_with_retries
    response = await self.make_call(original_function, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 2990, in make_call
    response = await response
               ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 958, in _acompletion
    raise e
  File "/usr/local/lib/python3.13/site-packages/litellm/router.py", line 926, in _acompletion
    response = await _response
               ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/litellm/utils.py", line 1202, in wrapper_async
    raise e
  File "/usr/local/lib/python3.13/site-packages/litellm/utils.py", line 1056, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/litellm/main.py", line 411, in acompletion
    _, custom_llm_provider, _, _ = get_llm_provider(
                                   ~~~~~~~~~~~~~~~~^
        model=model, api_base=completion_kwargs.get("base_url", None)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 353, in get_llm_provider
    raise e
  File "/usr/local/lib/python3.13/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 330, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
    ...<8 lines>...
    )
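
For reference, a minimal sketch of a router entry that would give get_llm_provider an explicit provider via a bedrock/ prefix (model names are taken from this issue; whether this matches what the Add Model UI actually writes is an assumption):

from litellm import Router

# Sketch: map the public model group name to a provider-prefixed litellm model.
# "bedrock/amazon.nova-pro-v1:0" is an assumption about the intended routing,
# not a confirmed fix for the Add Model UI behaviour.
router = Router(
    model_list=[
        {
            "model_name": "amazon.nova-pro-v1:0",
            "litellm_params": {"model": "bedrock/amazon.nova-pro-v1:0"},
        }
    ]
)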
