[BUG] Authentication Error When Using OpenAI Compatible LLMs - Generic error message #2647

@carvalhomm

Description

When configuring CrewAI to use an OpenAI-compatible LLM provider (not OpenAI itself), the framework incorrectly attempts to validate API keys against OpenAI's authentication servers regardless of the specified base_url. This results in authentication failures with error code 401 even when valid credentials for the alternative provider are supplied.

Steps to Reproduce

  1. Install the required dependencies:

langchain==0.3.17
langchain-community==0.3.16
langchain-core==0.3.33
crewai==0.100.0
crewai-tools==0.33.0

  2. Create a Crew using an OpenAI-compatible LLM instance as the agent's LLM (sabia-3, for example).

  3. Don't configure any OpenAI credential (API key), since no OpenAI model is being used.

  4. Set the "planning" parameter of the "Crew" to True and leave "planning_llm" as None.

Expected behavior

CrewAI should respect the base_url parameter and send authentication requests to the specified provider's endpoint rather than OpenAI's servers.

Actual Behavior
CrewAI (via LiteLLM) attempts to validate the API key against OpenAI's servers regardless of the specified base_url, causing authentication failures.

Screenshots/Code snippets

from crewai import LLM, Agent, Crew, Process, Task

agent = Agent(
    role="ROLE",
    goal="GOAL",
    backstory="BACKSTORY",
    llm=LLM(
        model="openai/sabia-3",
        temperature=0.7,
        base_url="https://chat.maritaca.ai/api",
        api_key="SABIA_API_KEY",
    ),
)

Crew(
    tasks=[
        Task(
            description="TASK DESCRIPTION",
            expected_output="EXPECTED OUTPUT",
            agent=agent,
        )
    ],
    agents=[agent],
    process=Process.sequential,
    planning=True,
    cache=True,
    memory=False,
    verbose=True,
).kickoff()

Operating System

Windows 11

Python Version

3.12

crewAI Version

0.100.0

crewAI Tools Version

0.33.0

Virtual Environment

Venv

Evidence

raise AuthenticationError(
litellm.exceptions.AuthenticationError: litellm.AuthenticationError: AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: asd43bvc**************************xadv. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Possible Solution

Passing a "planning_llm" to the "Crew" resolves the error. The bigger problem is that I spent over three days trying to figure out why my Crew was communicating with the OpenAI API when I had explicitly told it to use another, OpenAI-compatible LLM. The error message shown when an OpenAI model is not in use needs to change, to keep users from losing their minds.

One solution would be to add a clearer note to the documentation about the dependency between the "planning" parameter and the "planning_llm" parameter, since many users don't run a crew with OpenAI.

Another would be to change the error message so that it is clearer about what the error is really about.

Additional context

...
