web_search_preview tool call is not limited by max_turns option #783

Closed
jihun-im-open opened this issue May 29, 2025 · 5 comments
Labels
bug Something isn't working

Comments

@jihun-im-open

Please read this first

Yes

Describe the bug

max_turns of run_streamed() is ignored for the web_search_preview tool

Debug information

  • Agents SDK version: v0.0.15
  • Python version: Python 3.11

Repro steps

https://gist.github.com/jihun-im-open/b1351fb44320f83d72d50ab9f98a50da

Expected behavior

A MaxTurnsExceeded error should be raised
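
For reference, a minimal sketch of the kind of repro in the gist above, assuming the standard openai-agents API (`Agent`, `WebSearchTool`, `Runner.run_streamed`) and the `MaxTurnsExceeded` exception; the linked gist is the actual script:

```python
import asyncio

from agents import Agent, Runner, WebSearchTool
from agents.exceptions import MaxTurnsExceeded

agent = Agent(
    name="searcher",
    instructions="Research the question thoroughly using web search.",
    tools=[WebSearchTool()],
)

async def main() -> None:
    # max_turns=1: any additional turn should raise MaxTurnsExceeded.
    result = Runner.run_streamed(agent, input="Compare the latest LLM releases.", max_turns=1)
    try:
        async for _event in result.stream_events():
            pass
    except MaxTurnsExceeded:
        print("max_turns enforced")
    else:
        # Reported behavior: the run completes even after several web searches,
        # because all server-side searches count as part of a single turn.
        print("completed without MaxTurnsExceeded")

asyncio.run(main())
```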

@jihun-im-open jihun-im-open added the bug Something isn't working label May 29, 2025
@rm-openai
Collaborator

All the web_search_preview calls happen on the server in one "turn", so this is expected. Is your goal to have at most 1 call to search?

@jihun-im-open
Author

I encountered more than 6 sets of the following web_search_call events, and they were all treated as a single turn:

  • response.web_search_call.in_progress
  • response.web_search_call.searching
  • response.web_search_call.completed

Here is the full response stream:
https://gist.github.com/jihun-im-open/2468aca698f6dd34b226156008fa23bd

It would be great if max_turns could also cap the number of web_search_call.* events, because web_search_call can fall into a near-infinite loop as well.

@rm-openai
Collaborator

Gotcha. We can't do this today, because max_turns is a local (SDK) parameter, while the web searches happen in the API. An alternative approach is to set max_tokens instead, which forces the model to use fewer tokens and hence make fewer web searches.
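
A rough sketch of that alternative, assuming `ModelSettings.max_tokens` is available in this SDK version:

```python
from agents import Agent, ModelSettings, WebSearchTool

# Capping output tokens indirectly limits how many searches the model can chain.
agent = Agent(
    name="searcher",
    instructions="Answer using web search.",
    tools=[WebSearchTool()],
    model_settings=ModelSettings(max_tokens=1024),
)
```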

@jihun-im-open
Author

Thanks @rm-openai ,

I see that we can't control the API side from the SDK.

I solved this problem by counting ResponseWebSearchCallInProgressEvent events and canceling RunResultStreaming if a threshold is exceeded.
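
Roughly, the workaround looks like this (a sketch only; it assumes stream_events() wraps the raw Responses API events and that RunResultStreaming.cancel() is available, and the threshold and prompt are illustrative):

```python
import asyncio

from agents import Agent, Runner, WebSearchTool
from openai.types.responses import ResponseWebSearchCallInProgressEvent

MAX_WEB_SEARCHES = 3  # illustrative budget

agent = Agent(
    name="searcher",
    instructions="Research the question using web search.",
    tools=[WebSearchTool()],
)

async def main() -> None:
    result = Runner.run_streamed(agent, input="Summarize today's AI news.")
    searches = 0
    async for event in result.stream_events():
        # Raw Responses API events arrive wrapped as "raw_response_event" items.
        if event.type == "raw_response_event" and isinstance(
            event.data, ResponseWebSearchCallInProgressEvent
        ):
            searches += 1
            if searches > MAX_WEB_SEARCHES:
                # Stop the run once the search budget is exceeded.
                result.cancel()
                break
    print(f"web searches observed: {searches}")

asyncio.run(main())
```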

@rm-openai
Collaborator

Makes sense. I'll also look into adding more configurability to the API. Thanks for the feedback.
