Regarding the issue of calling the Gpt_4_vision_preview model #426
Comments
The Vision API hasn't been implemented yet in the betalgo/openai SDK. You'll have to wait for it.
Thank you so much for responding to my issue. I really appreciate your help and the work you're doing on this open-source project.
Why doesn't your code work? Because the currently implemented chat completion request handles the content property as text (a string), while the vision endpoint needs it handled as a JSON array. In the request sent to OpenAI, your content will not be a JSON array but a plain string containing a lot of escaped characters.
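To make the difference concrete, here is a minimal Python sketch (not the C# SDK code) of the two payload shapes. The text-only request carries `content` as a string, while the vision request carries it as an array of typed parts, per the OpenAI chat completions format. The image URL is a made-up placeholder. The last step shows what happens when the array is forced through a string-typed field: it becomes escaped JSON text, which is the "weird characters" problem described above.

```python
import json

# Text-only request: "content" is a plain string, which is what the
# SDK's current chat completion request serializes correctly.
text_only = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {"role": "user", "content": "Are you a vision model?"},
    ],
}

# Vision request: "content" must be a JSON array of typed parts
# (a text part plus one or more image_url parts).
with_image = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # Hypothetical URL for illustration only.
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        },
    ],
}

# If a string-typed content field is all the SDK offers, the array can
# only be stuffed in as serialized JSON text -- the API then sees one
# long string full of escaped quotes, not a structured array.
flattened_content = json.dumps(with_image["messages"][0]["content"])
print(type(flattened_content).__name__)  # prints "str"
```

Running this shows `flattened_content` starting with `[{\"type\": ...` once embedded in the outer request, which is why the model never receives a usable image part.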
Oh, I see. Then I'll wait for the API update.
When I use the Models.Gpt_4_vision_preview model, the responses I get are always strange. For example, when I ask "Are you a vision model?", the reply I get is "No, I am not, I am a large language model." I suspect that although I specified Models.Gpt_4_vision_preview, it is not the model that is ultimately being called.