Add GPT-4-Turbo/Vision + Updated GPT-3.5-Turbo models #406
Conversation
In addition to the models, I'd love to see the "response_format" and "seed" properties added to the ChatCompletionCreateRequest class in order to use these new features. That may require some conditional code to allow them only for the models that support them, if that turns out to be the case.
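As a rough sketch, the two requested properties might look like this on the request class. The property and attribute names here are assumptions for illustration, not the library's final API; the underlying `response_format` and `seed` fields are the ones the OpenAI Chat Completions API defines.

```csharp
using System.Text.Json.Serialization;

public class ChatCompletionCreateRequest
{
    // ... existing properties elided ...

    // Hypothetical: serializes to the API's "response_format" field,
    // e.g. { "type": "json_object" } to enable JSON mode.
    // Nullable so it is omitted from the request when unset.
    [JsonPropertyName("response_format")]
    public ResponseFormat? ResponseFormat { get; set; }

    // Hypothetical: serializes to the API's "seed" field, used for
    // best-effort deterministic sampling on supported models.
    [JsonPropertyName("seed")]
    public int? Seed { get; set; }
}

// Hypothetical wrapper matching the API's { "type": "..." } shape.
public class ResponseFormat
{
    [JsonPropertyName("type")]
    public string? Type { get; set; }
}
```

Making both properties nullable lets the serializer drop them entirely for models that don't support them, which may reduce the need for conditional per-model logic.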
I've added the
Cool. I'll do some digging to see if I can help.
Looks like incompatible models return an error. Is conditional logic still necessary in this case? If so, would the preferred approach be to throw an exception informing the user that the model is incompatible with this property? I've also added a
I mean, the FunctionCall property in ChatMessage is always defined too, and it throws an exception if it's null, which is literally all of the time except when GPT is calling a function, the property's sole reason for existing. So I don't see why we'd do it a different way here: the library mainly uses always-defined properties that throw an exception when they aren't needed, unless the mere existence of the property causes difficulties.
Why not make this an enum with two values: Unspecified (default) and … ? My reasoning is that there are likely to be additional format types in the future, and it avoids magic-string proliferation.
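The enum suggestion could look roughly like this. The names below are illustrative, not the library's final API; the "text" and "json_object" strings are the response format types the OpenAI API currently documents.

```csharp
// Hypothetical enum replacing a raw "response_format" string.
public enum ResponseFormatType
{
    // Default: omit "response_format" from the serialized request entirely.
    Unspecified = 0,

    // Serialized as { "type": "text" }.
    Text,

    // Serialized as { "type": "json_object" } (JSON mode).
    JsonObject,
}
```

A converter (or a switch in the request-building code) would map each enum value to its wire string, so new format types added by the API become a one-line enum addition rather than another magic string scattered through callers.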
I made some changes; if everyone is happy, I will merge this tomorrow.
I messed up a bit while resolving merge conflicts (actually, I blame Visual Studio). Please let me know if you see something odd.