OpenAI Responses API Plugin #4192
```python
# Event dispatch inside the streaming loop: each Responses API event type is
# routed to its handler; handlers that produce a chat chunk assign it to
# `chunk`, which is then forwarded on the event channel.
if isinstance(event, ResponseErrorEvent):
    self._handle_error(event)

if isinstance(event, ResponseCreatedEvent):
    self._handle_response_created(event)

if isinstance(event, ResponseOutputItemDoneEvent):
    chunk = self._handle_output_items_done(event)

if isinstance(event, ResponseTextDeltaEvent):
    chunk = self._handle_response_output_text_delta(event)

if isinstance(event, ResponseCompletedEvent):
    chunk = self._handle_response_completed(event)

if chunk is not None:
    self._event_ch.send_nowait(chunk)
```
Should we set verbosity to 'low' or None and disable thinking by default? This mainly applies when working with the GPT-5 family models. Otherwise, the time to first text token tends to be quite slow, which can make it feel like the LLM request is hanging.
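For reference, here's a minimal sketch of what those defaults look like at the raw Responses API level. `reasoning.effort` and `text.verbosity` are OpenAI SDK parameters for the GPT-5 family; how (or whether) the plugin should expose them is exactly the open question:

```python
# Sketch only: "low verbosity, no extended thinking" against the OpenAI
# Responses API directly. How the LiveKit plugin would surface these knobs
# is an assumption, not settled in this thread.
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-5",
    input="Say hello in one short sentence.",
    reasoning={"effort": "minimal"},  # minimize thinking before the first token
    text={"verbosity": "low"},        # keep the reply short
)
print(response.output_text)
```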
The alternative would be streaming reasoning tokens as LLM chunks (the user could set a flag to let the LLM "speak out" the reasoning tokens), or we stream thinking chunks inside the LLM plugin but only send them as part of transcripts without passing them into TTS. cc @theomonnom let me know your thoughts
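A rough sketch of the flag-gated option, assuming a hypothetical `speak_reasoning` flag and a separate transcript channel (both names are illustrative, not the plugin's actual API):

```python
# Hypothetical: gate reasoning deltas behind a speak_reasoning flag.
# `self._opts.speak_reasoning`, `self._transcript_ch`, and `_make_chunk` are
# illustrative names; the event type string is from the Responses streaming
# API, assuming reasoning summaries are enabled on the request.
if event.type == "response.reasoning_summary_text.delta":
    if self._opts.speak_reasoning:
        # treat reasoning text like normal output; it will reach TTS
        self._event_ch.send_nowait(self._make_chunk(event.delta))
    else:
        # transcript-only: shown to the user as text, never spoken
        self._transcript_ch.send_nowait(event.delta)
```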
```python
elif isinstance(tool, llm.ProviderTool):
    schemas.append(tool.to_dict())
```
We should manually check every ProviderTool individually here (FileSearch, ...).
llm.ProviderTool doesn't have a to_dict() method.
Just updated: checking against openai.tools.OpenAITool works.
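For illustration, the per-tool dispatch presumably looks something like this; the Responses API tool payload types are real, but the field names on the plugin classes (e.g. `vector_store_ids`) are assumptions:

```python
# Hedged sketch: check each provider tool class individually and map it to
# its Responses API tool payload. Plugin attribute names are assumptions.
if isinstance(tool, openai.tools.WebSearch):
    schemas.append({"type": "web_search"})
elif isinstance(tool, openai.tools.FileSearch):
    schemas.append(
        {"type": "file_search", "vector_store_ids": tool.vector_store_ids}
    )
elif isinstance(tool, openai.tools.CodeInterpreter):
    schemas.append({"type": "code_interpreter", "container": {"type": "auto"}})
```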
woohoo! nice work @tinalenguyen
To use:

```python
llm = openai.responses.LLM()
```

The tools included:

- `openai.tools.WebSearch()`
- `openai.tools.FileSearch()`
- `openai.tools.CodeInterpreter()`
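Putting it together, a hedged end-to-end sketch; whether provider tools are passed through the agent's `tools` list alongside function tools is an assumption based on this thread, not confirmed API:

```python
# Illustrative only: assumes provider tools can be mixed into the agent's
# tools list, which this thread suggests but does not confirm.
from livekit.agents import Agent
from livekit.plugins import openai

agent = Agent(
    instructions="You are a helpful voice assistant.",
    llm=openai.responses.LLM(),
    tools=[
        openai.tools.WebSearch(),
        openai.tools.CodeInterpreter(),
    ],
)
```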