OpenAI API v1/models returns nothing that v1/internal/model/list does #5675
Comments
For me, going to /v1/models returns gpt-3.5-turbo even though I'm loading Mixtral 8x7B through the ExLlamav2_HF loader.
For me, no matter what model is loaded, /v1/models always returns the following: {"object":"list","data":[{"id":"gpt-3.5-turbo","object":"model","created":0,"owned_by":"user"},{"id":"text-embedding-ada-002","object":"model","created":0,"owned_by":"user"}]}
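The response quoted above can be inspected offline; a minimal Python sketch (the payload is copied verbatim from that response) showing that only the two hard-coded ids ever come back, regardless of the loaded model:

```python
import json

# The payload /v1/models returns no matter which model is loaded
# (copied verbatim from the response quoted above).
payload = json.loads(
    '{"object":"list","data":['
    '{"id":"gpt-3.5-turbo","object":"model","created":0,"owned_by":"user"},'
    '{"id":"text-embedding-ada-002","object":"model","created":0,"owned_by":"user"}'
    ']}'
)

# Pull out just the model ids, the way a client integration would.
ids = [m["id"] for m in payload["data"]]
print(ids)  # → ['gpt-3.5-turbo', 'text-embedding-ada-002']
```

Note that neither id corresponds to a model the server can actually serve.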
Looks like the controller is calling
Fixes bug oobabooga#5675: /v1/models lists dummy models instead of the loaded models in the OpenAI extension.
Is that really an issue? It seems more like intentional behavior to me. IMHO it is intentional that the endpoint mocks OpenAI's model response. There is an internal endpoint for a list of (internal) models, which was also mentioned here; I've also asked myself why that is the case. Otherwise they could perhaps have simply combined them.
For true OpenAI API support it is necessary. When using continue.dev, it queries the OpenAI endpoint and adds those two dummy models, which do not work. With this change it grabs the real models and adds them to continue.dev, ready to go. So I am not 100% sure the change is necessary, as the problem can be worked around; I'm just not sure why you would include the endpoint at all if all it does is list dummy models.
A reason it wouldn't be an issue is if an integration requires the OpenAI model names (the dummy ones) from that endpoint in order to work.
This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
Unless that has actually been fixed (or there's a legitimate reason for the apparent misbehavior of the API), that bot has no business being here; can that useless thing be disabled, please?
Describe the bug
This is a bug, right?
Is there an existing issue for this?
Reproduction
Try it on http://127.0.0.1:5000/docs and compare the returned results, or access the API through scripts or whatever.
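The comparison above can be scripted; a sketch using only the standard library, assuming the server is running locally on port 5000 and that /v1/internal/model/list returns a {"model_names": [...]} payload (the shapes here are assumptions based on this thread, so adjust for your version):

```python
import json
import urllib.request

def extract_model_ids(body: dict) -> list:
    """Normalize either endpoint's payload to a plain list of model names.

    Assumed shapes: /v1/models wraps models as {"data": [{"id": ...}, ...]}
    (OpenAI schema), while /v1/internal/model/list is assumed to return
    {"model_names": [...]}.
    """
    if "data" in body:
        return [m["id"] for m in body["data"]]
    return body.get("model_names", [])

def fetch_model_ids(base: str, path: str) -> list:
    """Query a running server, e.g. base='http://127.0.0.1:5000'."""
    with urllib.request.urlopen(base + path) as resp:
        return extract_model_ids(json.load(resp))

# Against a live server, compare the two lists:
#   fetch_model_ids("http://127.0.0.1:5000", "/v1/models")
#   fetch_model_ids("http://127.0.0.1:5000", "/v1/internal/model/list")
```

If the bug is present, the first call yields only the dummy ids while the second lists the models actually installed.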
The Continue VSCode/Codium extension, for example, is completely blind to what models I actually have when I try the auto-detect option; I had to add things manually in its config.json.
Screenshot
No response
Logs
.
System Info
.