Description
When using a Gemini model through OpenRouter with a system template that contains detailed HTML inline-style requirements, the CrewAI agent fails with the error "Invalid response from LLM call - None or empty." The same configuration works with Deepseek and Qwen models.
Steps to Reproduce
Configure a CrewAI agent with the following settings (a minimal reproduction sketch follows the list):
model: Gemini (via OpenRouter)
system_template: the specific system template that triggers the issue, including the detailed HTML inline-style requirements
prompt_template: "" or "{{.Prompt}}"
response_template: "{{ .Response }}"
Run the CrewAI agent with a task that triggers the LLM call.
Observe the "Invalid response from LLM call - None or empty." error.
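For reference, a minimal reproduction sketch along these lines; the Gemini model slug, agent role/goal, task text, and the inline-style rules are placeholders rather than the reporter's exact values:

```python
import os
from crewai import Agent, Crew, Task, LLM

# Hypothetical OpenRouter/Gemini configuration (model slug is an assumption).
llm = LLM(
    model="openrouter/google/gemini-2.0-flash-001",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

agent = Agent(
    role="HTML layout writer",
    goal="Produce HTML that follows strict inline-style rules",
    backstory="Writes mobile-friendly HTML snippets.",
    llm=llm,
    # Placeholder for the detailed HTML inline-style requirements.
    system_template="{{ .System }}\nAll HTML must use inline styles only.",
    prompt_template="{{ .Prompt }}",
    response_template="{{ .Response }}",
)

task = Task(
    description="Design a mobile-friendly webpage section.",
    expected_output="An HTML fragment using inline styles only.",
    agent=agent,
)

# Raises "Invalid response from LLM call - None or empty." with Gemini via
# OpenRouter, but completes normally with Deepseek or Qwen models.
Crew(agents=[agent], tasks=[task]).kickoff()
```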
Expected behavior
The Gemini model should process the system template and generate a valid response, similar to Deepseek and Qwen models.
Screenshots/Code snippets
no special code
Operating System
Windows 11
Python Version
3.11
crewAI Version
102
crewAI Tools Version
none
Virtual Environment
Venv
Evidence
Received None or empty response from LLM call.
An unknown error occurred. Please check the details below.
Error details: Invalid response from LLM call - None or empty.
An unknown error occurred. Please check the details below.
Error details: Invalid response from LLM call - None or empty.
Traceback (most recent call last):
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 243, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in invoke
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 166, in _invoke_loop
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 140, in _invoke_loop
answer = self._get_llm_response()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 217, in _get_llm_response
raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 243, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in invoke
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 166, in _invoke_loop
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 140, in _invoke_loop
answer = self._get_llm_response()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 217, in _get_llm_response
raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\xx\Desktop\x\src\x\main.py", line 26, in run
AutowxGzh().crew().kickoff(inputs=inputs)
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\crew.py", line 576, in kickoff
result = self._run_sequential_process()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\crew.py", line 683, in _run_sequential_process
return self._execute_tasks(self.tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\crew.py", line 781, in _execute_tasks
task_output = task.execute_sync(
^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\task.py", line 302, in execute_sync
return self._execute_core(agent, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\task.py", line 366, in _execute_core
result = agent.execute_task(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 258, in execute_task
result = self.execute_task(task, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 258, in execute_task
result = self.execute_task(task, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 257, in execute_task
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agent.py", line 243, in execute_task
result = self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in invoke
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 166, in _invoke_loop
raise e
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 140, in _invoke_loop
answer = self._get_llm_response()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\AppData\Local\Programs\Python\Python311\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 217, in _get_llm_response
raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\xx\Desktop\x\src\x\main.py", line 178, in
x()
File "C:\Users\xx\Desktop\x\src\x\main.py", line 174, in x
run(inputs)
File "C:\Users\xx\Desktop\x\src\x\main.py", line 28, in run
raise Exception(f"An error occurred while running the crew: {e}")
Exception: An error occurred while running the crew: Invalid response from LLM call - None or empty.
Possible Solution
None
Additional context
This issue only occurs with the Gemini model. Deepseek and Qwen models work as expected.
The system template contains detailed HTML inline style requirements.
The issue persists even after simplifying the system template.
Confirmed that the OpenRouter API key is valid and the account has sufficient quota.
Confirmed that the network connection is stable. A direct-call sketch for isolating the provider is included below.
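To help narrow down whether the empty completion comes from the provider or from CrewAI's template handling, a hedged debugging sketch along these lines calls the same model directly, bypassing the agent templates. The model slug and prompts are placeholders, and it assumes crewai.LLM.call accepts a list of chat messages:

```python
import os
from crewai import LLM

# Same OpenRouter/Gemini model, called without agent templates (slug assumed).
llm = LLM(
    model="openrouter/google/gemini-2.0-flash-001",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

reply = llm.call([
    {"role": "system", "content": "<the HTML inline-style system template>"},
    {"role": "user", "content": "Design a mobile-friendly webpage section."},
])

# An empty or None reply here would point at the provider/model, not at
# CrewAI's system_template / prompt_template / response_template handling.
print(repr(reply))
```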
@Vidit-Ostwal This parameter causes the error whenever it is filled, regardless of its content ("", "{{ .System }}", or 'Design a mobile-friendly webpag').