Conversation

Member

@tinalenguyen tinalenguyen commented Dec 8, 2025

To use:

llm = openai.responses.LLM()

The tools included:

openai.tools.WebSearch()
openai.tools.FileSearch()
openai.tools.CodeInterpreter()
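
A minimal usage sketch of how these pieces might fit together in a LiveKit agent (the Agent/AgentSession wiring and the placement of the provider tools are assumptions for illustration, not dictated by this PR):

from livekit.agents import Agent, AgentSession
from livekit.plugins import openai

agent = Agent(
    instructions="You are a helpful assistant.",
    tools=[
        openai.tools.WebSearch(),        # server-side web search
        openai.tools.FileSearch(),       # retrieval over uploaded files
        openai.tools.CodeInterpreter(),  # sandboxed code execution
    ],
)

session = AgentSession(
    llm=openai.responses.LLM(),  # the Responses API LLM added in this PR
)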

@tinalenguyen tinalenguyen requested a review from a team December 8, 2025 07:59
Comment on lines +203 to +219
# the event classes are mutually exclusive, so dispatch with elif;
# chunk starts as None so the final send below is always well-defined
chunk: llm.ChatChunk | None = None

if isinstance(event, ResponseErrorEvent):
    self._handle_error(event)
elif isinstance(event, ResponseCreatedEvent):
    self._handle_response_created(event)
elif isinstance(event, ResponseOutputItemDoneEvent):
    chunk = self._handle_output_items_done(event)
elif isinstance(event, ResponseTextDeltaEvent):
    chunk = self._handle_response_output_text_delta(event)
elif isinstance(event, ResponseCompletedEvent):
    chunk = self._handle_response_completed(event)

if chunk is not None:
    self._event_ch.send_nowait(chunk)
Contributor

Should we set verbosity to 'low' or None and disable thinking by default? This mainly applies when working with the GPT-5 family of models. Without those defaults, the time to first text token tends to be quite slow, which can make it feel like the LLM request is hanging.
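
For context, the underlying Responses API exposes both knobs directly; the open question is what this plugin should default them to. A sketch against the plain OpenAI SDK (not this plugin's constructor):

from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "minimal"},  # keep thinking time to a minimum
    text={"verbosity": "low"},        # shorter output, faster first token
    input="Say hello.",
)
print(response.output_text)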

Contributor
@toubatbrian toubatbrian Dec 9, 2025

The alternative would be streaming reasoning tokens as LLM chunks (the user can set whether to let the LLM "speak out" reasoning tokens with a flag). Or we stream thinking chunks inside the LLM plugin but only send them as part of transcripts, without passing them into TTS. cc @theomonnom let me know your thoughts
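
A hypothetical sketch of that flag-based routing (the function and field names are illustrative, not the plugin's actual API):

def route_reasoning_delta(delta: str, speak_reasoning: bool) -> dict:
    """Decide where a streamed reasoning token goes."""
    if speak_reasoning:
        # option 1: treat reasoning like normal text; TTS will voice it
        return {"text": delta, "transcript_only": False}
    # option 2: keep reasoning in the transcript but never send it to TTS
    return {"text": delta, "transcript_only": True}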

Comment on lines 225 to 226
elif isinstance(tool, llm.ProviderTool):
    schemas.append(tool.to_dict())
Member

We should check each ProviderTool individually here (FileSearch, ...).
llm.ProviderTool doesn't have a to_dict() method.
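
One way to spell out the per-tool dispatch described above (the tool classes come from this PR's description; the schema fields are assumptions modeled on the Responses API tool types):

def to_provider_schema(tool) -> dict:
    # dispatch on each concrete tool class instead of a generic to_dict()
    if isinstance(tool, openai.tools.WebSearch):
        return {"type": "web_search"}
    if isinstance(tool, openai.tools.FileSearch):
        # assumes the tool object carries its vector store ids
        return {"type": "file_search", "vector_store_ids": tool.vector_store_ids}
    if isinstance(tool, openai.tools.CodeInterpreter):
        return {"type": "code_interpreter", "container": {"type": "auto"}}
    raise ValueError(f"unsupported provider tool: {tool!r}")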

Member Author

Just updated: checking against openai.tools.OpenAITool works.

@tinalenguyen tinalenguyen merged commit ce2df30 into main Jan 8, 2026
16 of 18 checks passed
@tinalenguyen tinalenguyen deleted the tina/openai-responses-plugin branch January 8, 2026 01:16
Member
@davidzhao

woohoo! nice work @tinalenguyen
