feat: Add responses API #373
Conversation
Cool. I tried this against an existing project, and I think something might be wrong with the `ResponseFormat` type.

I am guessing that in chat.rs,

```rust
pub enum ResponseFormat {
    /// The type of response format being defined: `text`
    Text,
    /// The type of response format being defined: `json_object`
    JsonObject,
    /// The type of response format being defined: `json_schema`
    JsonSchema {
        json_schema: ResponseFormatJsonSchema,
    },
}
```

might need to be...

```rust
pub enum ResponseFormat {
    /// The type of response format being defined: `text`
    Text,
    /// The type of response format being defined: `json_object`
    JsonObject,
    /// The type of response format being defined: `json_schema`
    JsonSchema(ResponseFormatJsonSchema),
}
```

So that the inner fields deserialize correctly.
Thanks for testing it @twitchax and good catch, I think you're right. I updated it to use a newtype variant as you suggested.

fwiw, I hand generated most of this as I couldn't find any generators that produced nice output. So with how complex the API is, I do worry there might be other subtle cases like this. I suppose if there are more issues like this they can be fixed as encountered though.
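For reference, a minimal sketch of why the newtype variant matters, assuming an internally tagged serde representation (the `name` field and the attribute placement here are assumptions for illustration, not the crate's exact definitions):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct ResponseFormatJsonSchema {
    pub name: String,
}

#[derive(Debug, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ResponseFormat {
    Text,
    JsonObject,
    // With a newtype variant, serde deserializes the remaining fields of
    // the tagged object straight into the inner struct, so
    // {"type": "json_schema", "name": "x"} parses. The struct-variant form
    // would instead require a nested "json_schema" key inside the object.
    JsonSchema(ResponseFormatJsonSchema),
}

fn main() {
    let parsed: ResponseFormat =
        serde_json::from_str(r#"{"type": "json_schema", "name": "x"}"#).unwrap();
    println!("{parsed:?}");
}
```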
@samvrlewis, yeah, I agree. Not sure how to fully exercise it, honestly, lol.
@samvrlewis, your changes fixed this issue with parsing the `ResponseFormat`.
Actually, getting another issue, which just seems like a serialization problem. When I switch models, it looks like there is an extra output type that comes back without a `status`.
Huh, yeah, there is no status.

Unless I'm misunderstanding them, the docs seem to say that it should have one though. Maybe "populated when items are returned via API" means "not in response to a request"? 🤷♂️ In any case, I pushed an update to make the field optional so it doesn't break deserialization. Thanks for the ongoing testing @twitchax!
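A minimal sketch of the shape of that fix (the item type and field names here are placeholders, not the crate's actual definitions):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct OutputItem {
    pub id: String,
    /// The API does not always populate this, so deserialization must
    /// tolerate its absence; it is also skipped when serializing `None`.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub status: Option<String>,
}
```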
Yeah, I am guessing it is only present when items are returned via the API.
It looks like some of the newly added tool types are missing.
Ah yeah, looks like OpenAI added a bunch of new tools last week: https://platform.openai.com/docs/changelog! Have updated the PR to include them, and updated the example to use the same MCP example from the above link.
Adds support for the OpenAI responses API
Thought about it some more and I think for the responses types it probably makes less sense to export them all at the root of the types crate, as there are so many duplicated (but subtly different) types and naming becomes confusing. Have instead made everything available through `types::responses`.
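In practice, consumers would then disambiguate duplicated names at the use site rather than via `Responses*` prefixes; a hedged sketch (the exact paths and type names are assumptions for illustration):

```rust
// Hypothetical imports under the nested layout: the chat-side type stays
// at the root while the responses-side type lives in `types::responses`.
use async_openai::types::Role;
use async_openai::types::responses::Role as ResponsesRole;
```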
LGTM after that change. Had to bump a few types, but all my tests still pass, so... 👍? Lol. Haven't tried the MCP stuff, but I bet it works. May try it in a few weeks. If you're interested, I'm currently using this for https://github.com/twitchax/triage-bot, which is maybe 70% there? Mostly works, but needs some tweaking.
woohoo, thanks for trying! 🎉
It seems to work for at least the simple example I have in the code now that uses https://mcp.deepwiki.com/mcp. Hopefully for more complicated cases it works too.
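For anyone following along, a hedged sketch of the raw request shape that kind of example exercises, per OpenAI's remote MCP docs (the model name and input text are placeholders; the crate's typed builders wrap an equivalent payload):

```rust
use serde_json::json;

// Builds a responses-API request carrying a remote MCP tool entry, as
// documented by OpenAI for remote MCP servers.
fn mcp_request_body() -> serde_json::Value {
    json!({
        "model": "gpt-4.1",
        "tools": [{
            "type": "mcp",
            "server_label": "deepwiki",
            "server_url": "https://mcp.deepwiki.com/mcp",
            "require_approval": "never"
        }],
        "input": "Which transports does the MCP spec define?"
    })
}
```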
Looks cool! How well does SurrealDB work for retrieving context on demand in that? At work we have a somewhat similar service that tries to associate incidents with recent pull requests, but it doesn't work very well, as it's really hard to give it enough context to let it figure things out. Your approach of doing the initial triaging seems a lot more promising. How is it working for you?
Nice. I haven't used it in a real context yet, but I like SurrealDB. I'd argue that any DB with full-text search would work fairly well? I think there is more opportunity to "agentize" some of these things, or get more context into the hands of the LLM. My approach right now uses a two-phase system. First phase (using …).

Even if my bot-server has access to the same data, I like the idea of just having a separate process run an MCP server so I don't have to deal with all of the back-and-forth function calls in the bot code. Looks like Rust is getting some love, so an MCP server may be pretty painless to just drop in. Definitely some promising results, but I'm trying to push the envelope on what is possible.

In your case, I think that's exactly what remote MCP is for. Instead of gathering a bunch of context that will likely eat up tokens, give the model a tool it can call to fetch what it needs on demand.
@samvrlewis, do you know the best way to respond with type `function_call_output`? https://platform.openai.com/docs/guides/function-calling?api-mode=responses

Maybe a new enum option needs to be added to the input types? Maybe, due to the explosion of options, the input should just accept arbitrary JSON?
There are a lot of possible input items in the responses API. Ideally it'd be nice to have strict types, but for now we can use a custom user-defined JSON value.
Ooh, yeah, there are a lot of input items that aren't there right now. 😬 Would be nice to have these all strictly typed, but for now I've done as you suggested and added a custom input item that accepts a user-defined JSON value. Added another example that uses it for function calling.

Thanks for the prompts on MCP, btw! Definitely something to explore a bit further when I have some time. I do worry about how well the model would work with a codebase of significant complexity though, if it's needing to navigate around file by file. I haven't had much success using coding tools like Cursor against big repos; I usually need to give them a tighter context to get any good output. Though to be fair, I'm usually focused on generating code; maybe just reading/understanding code would be easier?
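A hedged sketch of what replying to a function call looks like at the JSON level, per the function-calling guide linked above (the helper function and its parameters are illustrative, not the crate's API):

```rust
use serde_json::{json, Value};

// Builds a `function_call_output` input item of the shape shown in
// OpenAI's function-calling guide; a custom/untyped input item can carry
// this payload until a strict type exists for it.
fn function_call_output(call_id: &str, output: &str) -> Value {
    json!({
        "type": "function_call_output",
        "call_id": call_id,
        "output": output
    })
}
```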
+1, mileage varies a ton for the "agent mode" stuff like Cursor and Copilot Agent. I have the same experience with them as you: limiting them to tests or small refactors appears to work well; everything else tends to fold and produce bad architecture.

For my purposes, MCP is going to be a big deal, so I've been poking at it a little more than others might. Thanks for entertaining my random observations while I poke! Happy to help out at some point, but not certain how you and other maintainers feel about stop-gaps like the custom input item.
Thank you so much @samvrlewis and @twitchax for your contributions!
Thank you for doing all the heavy lifting by hand typing types and testing. Appreciate all the hard work!
Your design choice to nest types inside `types::responses` is a good one!
* feat: Add responses API
  Adds support for the OpenAI responses API.
* feat: Add custom input item
  There are a lot of possible input items in the responses API. Ideally it'd be nice to have strict types, but for now we can use a custom user-defined JSON value.

(cherry picked from commit c2f3a6c)
Adds support for the OpenAI responses API.
Doesn't have support for streaming yet.
Due to types being reexported from the root of the crate, there are a few types with existing names (but different shapes) that I've exported with `Responses` prefixes to avoid ambiguous types (for example: `ResponsesFilePath`, `ResponsesRole`). Happy for feedback if there's a different way that this would be preferred.

(edit: realised later it would probably be cleaner just to not export all the types at the root, as there's too much duplication.)