Description
Feature Request
We can enable streaming for SelectorGroupChat's built-in selector by introducing an option on SelectorGroupChat, e.g., model_client_stream, so that the model client is used in streaming mode: it will call create_stream rather than create.
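A minimal sketch of how this could look, assuming the option is named model_client_stream as suggested above (the final parameter name and placement may differ):

```python
# Sketch only: model_client_stream on SelectorGroupChat is the proposed
# option, not an existing parameter.
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import SelectorGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")
writer = AssistantAgent("writer", model_client=model_client)
critic = AssistantAgent("critic", model_client=model_client)

team = SelectorGroupChat(
    [writer, critic],
    model_client=model_client,
    model_client_stream=True,  # proposed: selector calls create_stream instead of create
)
```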
As a next step, we can enable streaming of orchestration events through run_stream so that the streaming output is visible to consumers of run_stream. Issue here: #6161
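Consuming the stream would then look roughly like this (continuing from the sketch above; with the proposed option enabled, the selector's streamed tokens would also surface as events here):

```python
import asyncio

async def main() -> None:
    # `team` is the SelectorGroupChat from the sketch above.
    async for event in team.run_stream(task="Write a short poem about autumn."):
        print(event)

asyncio.run(main())
```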
--- Below is the original bug report ---
What happened?
Describe the bug
Some llm models only support stream = True. The assistant agent supports this very well by setting model_client_stream = True. But the OpenAIChatCompletionClient does not allow to pass stream = True to it. Therefore, it's not very possible to use llm models which only supports stream = True.
To Reproduce
```
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'code': 'invalid_parameter_error', 'param': None, 'message': 'This model only support stream mode, please enable the stream parameter to access the model. ', 'type': 'invalid_request_error'}, 'id':
```
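The streaming entry point does exist on the client itself; a minimal sketch of calling it directly, assuming an OpenAI-compatible endpoint and a placeholder model name:

```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")  # substitute the stream-only model
    async for chunk in client.create_stream([UserMessage(content="Hello", source="user")]):
        if isinstance(chunk, str):
            print(chunk, end="", flush=True)  # incremental token chunks
        else:
            print("\nfinish reason:", chunk.finish_reason)  # final CreateResult

asyncio.run(main())
```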
Which packages was the bug in?
Python AgentChat (autogen-agentchat>=0.4.0)
AutoGen library version.
Python dev (main branch)