
Regarding the issue of calling the Gpt_4_vision_preview model #426

Closed
yolo-lxf opened this issue Nov 16, 2023 · 7 comments
Labels
Ready for next version This issue solved and waiting for next release

Comments

@yolo-lxf

When I use the Models.Gpt_4_vision_preview model, the responses I get are always strange. For example, when I ask "Are you a vision model?", the reply I get is "No, I am not, I am a large language model." I suspect that although I specified Models.Gpt_4_vision_preview, it is not the model that is ultimately being called.

@belaszalontai
Contributor

Have you tried this?

[screenshot]

I think the Vision model is intended for analyzing images, not chat. The response is definitely coming from the requested vision model, even if we don't like the answer.

@belaszalontai
Contributor

And if you provide an image to this model, then even though the question was not related, the answer looks promising :)

[screenshot]

@yolo-lxf
Author

Yes, I have tried it, but the photos I pass through the method never correspond to the analysis results it gives. Could it be that there is an error in the way I pass the photos?
This is my code block:
[screenshot of code]

@belaszalontai
Contributor

belaszalontai commented Nov 16, 2023

The Vision API hasn't been implemented yet in the betalgo/openai SDK. You have to wait for it.

@yolo-lxf
Author

Thank you so much for responding to my issue. I really appreciate your help and the work you're doing on this open-source project.

@belaszalontai
Contributor

Why doesn't your code work? Because the currently implemented chat completion request handles the content property as text (a string), while the Vision API needs it to be a JSON array. In the request sent to OpenAI, your content will not be a JSON array but a simple string containing a lot of weird characters.
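For context, the request body that OpenAI's chat completions endpoint expects for `gpt-4-vision-preview` has a `content` field that is an array of parts (text plus image references), not a single string. A minimal sketch of that payload shape in Python (the image URL and prompt text below are placeholders, not from this thread):

```python
import json

# Sketch of the OpenAI chat completions request body for gpt-4-vision-preview.
# The key point from the discussion above: "content" must be a JSON ARRAY of
# typed parts, not a plain string. A client library that serializes content
# as a string cannot produce this shape.
payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [  # array of parts, not a string
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    "max_tokens": 300,
}

# This is the JSON body that would be POSTed to
# https://api.openai.com/v1/chat/completions with an Authorization header.
body = json.dumps(payload)
```

If a client instead serializes the message content as one string, the array above is flattened into escaped text, which matches the "simple string containing a lot of weird characters" behavior described here, and the model never receives the image as an image.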

@yolo-lxf
Author

Oh, I see. Then I'll wait for the API update.

@kayhantolga kayhantolga added the Ready for next version This issue solved and waiting for next release label Dec 5, 2023
@kayhantolga kayhantolga added this to the 7.4.2 milestone Dec 5, 2023

3 participants