fix: chat completions API now also passes tools along #1167
Conversation
Force-pushed from 63943d6 to eb9b723.
Summary: Removes the legacy …
Notable changes: …
Review: Nice sweep: the codebase is leaner and tool logic is now in one place 👍.
Other than those nits, the changes look solid!
Force-pushed from ca2c197 to f6a0a3a.
Force-pushed from 8e3c9ab to 9346410.
…ns.rs (#1177)

The main motivator behind this PR is that `stream_chat_completions()` was not adding the `"tools"` entry to the payload posted to the `/chat/completions` endpoint. This PR (1) refactors the existing logic for building up the `"tools"` JSON out of `client.rs` and into `openai_tools.rs`, and (2) updates both the Responses API client (`client.rs`) and the chat completions client (`chat_completions.rs`) to use it.

Note this PR alone is not sufficient to get tool calling working via chat completions: that is done in #1167.

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1177).
* #1167
* __->__ #1177
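For illustration, here is a minimal sketch of what a shared tools-JSON builder in `openai_tools.rs` could look like; the function name and the tool schema below are assumptions for the sake of the example, not the actual implementation:

```rust
use serde_json::{json, Value};

/// Hypothetical sketch: build the `"tools"` array in one place so both the
/// Responses API client and the chat-completions client can reuse it.
pub fn create_tools_json(include_local_shell: bool) -> Vec<Value> {
    let mut tools = Vec::new();
    if include_local_shell {
        // The shape of this tool entry is assumed for illustration.
        tools.push(json!({
            "type": "function",
            "function": {
                "name": "shell",
                "description": "Run a shell command and return its output.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "command": { "type": "array", "items": { "type": "string" } }
                    },
                    "required": ["command"]
                }
            }
        }));
    }
    tools
}
```

Both `client.rs` and `chat_completions.rs` would then attach the same array to their request payloads, which keeps the two endpoints from drifting apart.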
Force-pushed from 47c9aaf to b7d9e36.
Summary: This PR adds end-to-end support for streaming tool/function calls: the chat-completions client now accumulates …
Notes: • Introduces …
Review: Overall, a solid improvement; just minor clean-ups needed 👍
Summary: Adds full streaming-tool-call support to the Rust core: …
Review: A nice step toward parity with the new Chat Completions features; the code is clear and well-structured.
Otherwise looks solid; LGTM once the nits above are addressed.
Force-pushed from e5f9b9c to 0b88045.
Update what we log to make `RUST_LOG=debug` a bit easier to work with.

[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/1196).
* #1167
* __->__ #1196
Prior to this PR, there were two big misses in `chat_completions.rs`:

- `stream_chat_completions()` was only including items of type `ResponseItem::Message` when building up the `"messages"` JSON for the `POST` request to the `chat/completions` endpoint. This PR fixes things by ensuring the other variants (`FunctionCall`, `LocalShellCall`, and `FunctionCallOutput`) are included as well.
- In `process_chat_sse()`, we were not recording tool calls and were only emitting items of type `ResponseEvent::OutputItemDone(ResponseItem::Message)` to the stream. Now we introduce `FunctionCallState`, which is used to accumulate the deltas of type `tool_calls` so we can ultimately emit a `ResponseItem::FunctionCall` when appropriate (see the sketch after this list).

While function calling now appears to work for chat completions in my local testing, I believe there are still edge cases that are not covered and that this codepath would benefit from a battery of integration tests. (As part of that further cleanup, we should also work to support streaming responses in the UI.)
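As a rough illustration of the accumulation pattern described above, here is a minimal sketch; the field names and the shape of the incoming deltas are assumptions, not the actual `FunctionCallState` definition:

```rust
/// Hypothetical sketch of accumulating streamed `tool_calls` deltas into a
/// single function call that can be emitted once the stream signals completion.
#[derive(Default)]
struct FunctionCallState {
    name: Option<String>,
    arguments: String, // JSON arguments arrive as string fragments
    call_id: Option<String>,
    active: bool,
}

impl FunctionCallState {
    /// Fold one SSE delta into the accumulated state. The name and call id
    /// typically arrive once; the argument string arrives in fragments.
    fn apply_delta(&mut self, name: Option<&str>, args_fragment: Option<&str>, id: Option<&str>) {
        self.active = true;
        if let Some(n) = name {
            self.name.get_or_insert_with(|| n.to_string());
        }
        if let Some(a) = args_fragment {
            self.arguments.push_str(a);
        }
        if let Some(i) = id {
            self.call_id.get_or_insert_with(|| i.to_string());
        }
    }
}
```

Once the SSE stream reports the tool call as finished, the accumulated state would be converted into a `ResponseItem::FunctionCall` and emitted as a `ResponseEvent::OutputItemDone`.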
The other important part of this PR is some cleanup in `core/src/codex.rs`. In particular, it was hard to reason about how `run_task()` was building up the list of messages to include in a request across the various cases. I like to think things are a bit cleaner now, where (a sketch of how these compose follows the list):

- `zdr_transcript` (if present) contains all messages in the history of the conversation, which includes function call outputs that have not been sent back to the model yet
- `pending_input` includes any messages the user has submitted while the turn is in flight that need to be injected as part of the next `POST` to the model
- `input_for_next_turn` includes the tool call outputs that have not been sent back to the model yet
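A rough sketch of how that bookkeeping could compose into the payload for the next request; the function name and the generic item type stand in for the real `codex.rs` code and `ResponseItem` type, so this is an illustrative assumption rather than the actual implementation:

```rust
/// Hypothetical sketch: assemble the items to send on the next POST.
/// `T` stands in for the crate's real `ResponseItem` type.
fn build_turn_input<T: Clone>(
    zdr_transcript: Option<&[T]>,
    pending_input: Vec<T>,
    input_for_next_turn: Vec<T>,
) -> Vec<T> {
    match zdr_transcript {
        // With ZDR, replay the full conversation history (which already
        // includes the unsent function-call outputs), then append any
        // newly queued user input.
        Some(history) => history.iter().cloned().chain(pending_input).collect(),
        // Otherwise send only the unsent tool-call outputs plus the input
        // the user submitted while the turn was in flight.
        None => input_for_next_turn.into_iter().chain(pending_input).collect(),
    }
}
```

The appeal of this split is that each of the three sources has exactly one job, so the request-building logic no longer has to special-case how a message ended up queued.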