
OpenAI API v1/models returns nothing that v1/internal/model/list does #5675

Closed
1 task done
TiagoTiago opened this issue Mar 9, 2024 · 8 comments
Labels
bug Something isn't working stale

Comments

@TiagoTiago

Describe the bug

The OpenAI-compatible /v1/models endpoint returns none of the models that /v1/internal/model/list does. This is a bug, right?

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Try it on http://127.0.0.1:5000/docs and compare the returned results, or access the API through scripts.

The Continue VSCode/Codium extension, for example, is completely blind to the models I actually have when I try the auto-detect option; I had to add them manually in its config.json.
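To compare the two endpoints from a script, something like the following should work (a sketch using only the stdlib, assuming the default API address 127.0.0.1:5000; the exact shape of the internal listing may differ):

```python
import json
import urllib.request

BASE = "http://127.0.0.1:5000"  # default text-generation-webui API address

def fetch_json(path, base=BASE):
    """GET an endpoint and return the parsed JSON body."""
    with urllib.request.urlopen(base + path) as resp:
        return json.load(resp)

def model_ids(listing):
    """Extract model ids from an OpenAI-style model list response."""
    return [m["id"] for m in listing["data"]]

if __name__ == "__main__":
    try:
        # The OpenAI-compatible listing vs. the webui-internal one
        print("/v1/models:", model_ids(fetch_json("/v1/models")))
        print("/v1/internal/model/list:", fetch_json("/v1/internal/model/list"))
    except OSError as err:
        print("webui API not reachable:", err)
```

On an affected install, the first line prints only the dummy OpenAI ids while the second lists the models actually available to the webui.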

Screenshot

No response

Logs

.

System Info

.
@TiagoTiago TiagoTiago added the bug Something isn't working label Mar 9, 2024
@egarma

egarma commented Mar 10, 2024

For me, /v1/models returns chatgpt3.5 even though I'm loading mixtral8x7b through the exllamav2-hf loader.

@egSat

egSat commented Mar 11, 2024

For me, no matter what model is loaded, /v1/models always returns the following:

{
  "object": "list",
  "data": [
    {"id": "gpt-3.5-turbo", "object": "model", "created": 0, "owned_by": "user"},
    {"id": "text-embedding-ada-002", "object": "model", "created": 0, "owned_by": "user"}
  ]
}

@jonnysowards

Looks like the controller is calling response = OAImodels.list_dummy_models() on line 148 of extensions/openai/script.py; it should be calling response = OAImodels.list_models(). I am AFK at the moment, but I should be able to open a PR with the fix when I have a minute.
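If that reading is right, the change described above would be a one-liner in the /v1/models handler (a sketch of the described edit, not the verbatim upstream code):

```diff
 # extensions/openai/script.py, /v1/models handler
-response = OAImodels.list_dummy_models()
+response = OAImodels.list_models()
```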

jonnysowards added a commit to jonnysowards/text-generation-webui that referenced this issue Mar 18, 2024
Fixes bug oobabooga#5675 that lists dummy models instead of loaded models in OpenAI extension. v1/models.
@ghost

ghost commented Mar 19, 2024

Is that really an issue? It looks like intentional behavior to me: IMHO the endpoint deliberately mocks OpenAI's model response, while a separate internal endpoint provides the list of actually available models, as mentioned above. I've wondered about that myself, though; otherwise they could simply have combined the two.

@jonnysowards

jonnysowards commented Mar 19, 2024

For true OpenAI API compatibility it is necessary. When using continue.dev, it queries the OpenAI endpoint and adds those 2 dummy models, which do not work. With this change it grabs the real models and adds them to continue.dev, ready to go. That said, I am not 100% sure the change is strictly necessary, since it can be worked around; I'm just not sure why you would include the endpoint at all if all it does is list dummy models.

@jonnysowards

One reason it might not be an issue: some integrations may require the OpenAI model names (the dummy ones) from that endpoint in order to work.

@github-actions github-actions bot added the stale label May 18, 2024

This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

@TiagoTiago
Author

Unless that has actually been fixed (or there's a legit reason for the apparent misbehavior of the API), that bot has no business being here; can that useless thing be disabled please?
