
Commit 08f6969

fix: Add warm_up() calls to tests after ChatGenerator tool initialization changes (#9965)
* In tests call agent.warm_up() where needed
* Add reno
* Update wording in reno note
* Update reno note
1 parent 6c78f10 commit 08f6969

File tree

4 files changed: +30 -0 lines changed

Lines changed: 26 additions & 0 deletions

@@ -0,0 +1,26 @@
+---
+upgrade:
+  - |
+    Agent and LLMMessagesRouter components now require explicit warm_up() calls before run() when used in standalone
+    mode (outside of a Pipeline) with tools. This change was introduced by PR #9942, which added warm_up() methods to
+    ChatGenerator components for tool initialization.
+
+    **Who is affected:** Users running Agent or LLMMessagesRouter standalone with ChatGenerators that have tools.
+
+    **Action required:** Call warm_up() before run() or run_async():
+
+    .. code-block:: python
+
+        # Agent with tools
+        agent = Agent(llm=OpenAIChatGenerator(model="gpt-4"), tools=[my_tool])
+        agent.warm_up()  # Required in standalone mode
+        result = agent.run(messages=[ChatMessage.from_user("Hello")])
+
+        # LLMMessagesRouter with tools
+        router = LLMMessagesRouter(generator=OpenAIChatGenerator(model="gpt-4"), tools=[my_tool])
+        router.warm_up()  # Required in standalone mode
+        result = router.run(messages=[ChatMessage.from_user("Hello")])
+
+    **Note:** This requirement only applies to standalone usage. When these components are used within a Pipeline,
+    warm_up() is called automatically. We are reconsidering this design choice - see issue #9966 for discussion
+    about automatically calling warm_up() at runtime instead of requiring explicit calls.
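For comparison with the note above, here is a minimal sketch of the Pipeline case, in which warm_up() does not need to be called explicitly because Pipeline.run() warms up its components. It mirrors the constructor call shown in the reno example (including the llm=/tools= arguments and the placeholder my_tool), so treat it as an illustration rather than part of this commit:

    from haystack import Pipeline
    from haystack.components.agents import Agent
    from haystack.components.generators.chat import OpenAIChatGenerator
    from haystack.dataclasses import ChatMessage

    # my_tool is a placeholder for an existing haystack Tool instance
    agent = Agent(llm=OpenAIChatGenerator(model="gpt-4"), tools=[my_tool])

    pipeline = Pipeline()
    pipeline.add_component("agent", agent)

    # No explicit agent.warm_up() here: Pipeline.run() warms up its components
    result = pipeline.run({"agent": {"messages": [ChatMessage.from_user("Hello")]}})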

test/components/agents/test_agent_breakpoints.py

Lines changed: 1 addition & 0 deletions
@@ -534,6 +534,7 @@ def test_live_resume_from_tool_invoker(self, tmp_path, weather_tool):
         )

         try:
+            agent.warm_up()
             agent.run(messages=[ChatMessage.from_user("What's the weather in Berlin?")], break_point=agent_breakpoint)
         except BreakpointException:
             pass

test/components/routers/test_llm_messages_router.py

Lines changed: 1 addition & 0 deletions
@@ -206,6 +206,7 @@ def test_live_run(self):
         )

         messages = [ChatMessage.from_user("Hello")]
+        router.warm_up()
         result = router.run(messages)
         print(result)

test/tools/test_pipeline_tool.py

Lines changed: 2 additions & 0 deletions
@@ -297,6 +297,7 @@ def test_live_pipeline_tool(self):
         )

         # Let the Agent handle a query
+        agent.warm_up()
         result = agent.run([ChatMessage.from_user("Who was Nikola Tesla?")])

         assert len(result["messages"]) == 5  # System msg, User msg, Agent msg, Tool call result, Agent mgs
@@ -343,6 +344,7 @@ async def test_live_async_pipeline_tool(self):
         )

         # Let the Agent handle a query
+        agent.warm_up()
         result = await agent.run_async([ChatMessage.from_user("Who was Nikola Tesla?")])

         assert len(result["messages"]) == 5  # System msg, User msg, Agent msg, Tool call result, Agent mgs
