
[BUG] Gemini Model Fails with "Invalid response from LLM call - None or empty." When Using Specific System Template #2417

Closed
iniwap opened this issue Mar 20, 2025 · 3 comments
Labels
bug Something isn't working

Comments


iniwap commented Mar 20, 2025

Description

When using the Gemini model through OpenRouter with a specific system template containing detailed HTML inline style requirements, the CrewAI agent fails with the error "Invalid response from LLM call - None or empty." This issue does not occur with the Deepseek or Qwen models.

Steps to Reproduce

  1. Configure a CrewAI agent with the following settings:
    • model: Gemini (via OpenRouter)
    • system_template: (Provide the specific system template that causes the issue, including the detailed HTML inline style requirements)
    • prompt_template: "" or "{{.Prompt}}"
    • response_template: "{{ .Response }}"
  2. Run the CrewAI agent with a task that triggers the LLM call.
  3. Observe the "Invalid response from LLM call - None or empty." error.
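As a dependency-free illustration of how the placeholders in the templates above are meant to behave, here is a minimal sketch. The `render` helper is hypothetical and is not crewAI's actual substitution code; it only demonstrates the Go-style `{{ .Slot }}` placeholder convention used in the configuration.

```python
def render(template: str, slot: str, value: str) -> str:
    """Substitute a Go-style {{ .Slot }} (or {{.Slot}}) placeholder with a value."""
    return (template
            .replace("{{ ." + slot + " }}", value)
            .replace("{{." + slot + "}}", value))

# The template strings from the reproduction steps above:
prompt_template = "{{.Prompt}}"
response_template = "{{ .Response }}"

# Render the prompt template against an example task prompt.
prompt = render(prompt_template, "Prompt", "Design a mobile-friendly webpage")
print(prompt)  # -> Design a mobile-friendly webpage
```

Either spacing variant of the placeholder (`{{.Prompt}}` or `{{ .Prompt }}`) appears in the report, so the sketch handles both.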

Expected behavior

The Gemini model should process the system template and generate a valid response, similar to Deepseek and Qwen models.

Screenshots/Code snippets

No special code; the error occurs with the configuration described above.

Operating System

Windows 11

Python Version

3.11

crewAI Version

102

crewAI Tools Version

none

Virtual Environment

Venv

Evidence

Received None or empty response from LLM call.
An unknown error occurred. Please check the details below.
Error details: Invalid response from LLM call - None or empty.
An unknown error occurred. Please check the details below.
Error details: Invalid response from LLM call - None or empty.
Traceback (most recent call last):
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 243, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in invoke
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 166, in _invoke_loop
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 140, in _invoke_loop
answer = self._get_llm_response()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 217, in _get_llm_response
raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 243, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in invoke
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 166, in _invoke_loop
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 140, in _invoke_loop
answer = self._get_llm_response()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 217, in _get_llm_response
raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\xx\Desktop\x\src\x\main.py", line 26, in run
AutowxGzh().crew().kickoff(inputs=inputs)
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\crew.py", line 576, in kickoff
result = self._run_sequential_process()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\crew.py", line 683, in _run_sequential_process
return self._execute_tasks(self.tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\crew.py", line 781, in _execute_tasks
task_output = task.execute_sync(
^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\task.py", line 302, in execute_sync
return self._execute_core(agent, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\task.py", line 366, in _execute_core
result = agent.execute_task(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 258, in execute_task
result = self.execute_task(task, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 258, in execute_task
result = self.execute_task(task, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 257, in execute_task
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 243, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in invoke
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 166, in _invoke_loop
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 140, in _invoke_loop
answer = self._get_llm_response()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 217, in _get_llm_response
raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\xx\Desktop\x\src\x\main.py", line 178, in
x()
File "C:\Users\xx\Desktop\x\src\x\main.py", line 174, in x
run(inputs)
File "C:\Users\xx\Desktop\x\src\x\main.py", line 28, in run
raise Exception(f"An error occurred while running the crew: {e}")
Exception: An error occurred while running the crew: Invalid response from LLM call - None or empty.

Possible Solution

None

Additional context

  • This issue only occurs with the Gemini model. Deepseek and Qwen models work as expected.
  • The system template contains detailed HTML inline style requirements.
  • The issue persists even when simplifying the system template.
  • The OpenRouter API key has been confirmed valid and the account has sufficient quota.
  • The network connection has been confirmed stable.
@iniwap iniwap added the bug Something isn't working label Mar 20, 2025
devin-ai-integration bot added a commit that referenced this issue Mar 20, 2025
@Vidit-Ostwal (Contributor) commented:

Can you share the specific system_template? I will try to reproduce the entire bug.


iniwap commented Mar 21, 2025

@Vidit-Ostwal This parameter causes the error whenever it is filled, regardless of the content: "", "{{ .System }}", or even 'Design a mobile-friendly webpage'.


iniwap commented Mar 21, 2025

I found the cause: the template must use the "<|start_header_id|>xx<|end_header_id|>xx<|eot_id|>" labels. Without them, Gemini returns an empty response.
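The workaround described above can be sketched as follows. The helper and the template content are hypothetical; the token names are taken verbatim from the comment, and the exact placement crewAI expects may differ.

```python
def wrap_section(header: str, body: str) -> str:
    """Wrap a template section in the header labels named in the comment above."""
    return f"<|start_header_id|>{header}<|end_header_id|>{body}<|eot_id|>"

# Example: wrap the system slot so the Gemini route returns a non-empty response.
system_template = wrap_section("system", "{{ .System }}")
print(system_template)
# -> <|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>
```

These tokens are the Llama-family chat delimiters; presumably OpenRouter's prompt-formatting path for this route expects them when custom templates are supplied.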

@iniwap iniwap closed this as completed Mar 21, 2025