
[Bug]: supports_vision Method Incorrectly Returns False for Vision-Capable Models #7454

Open
githubuser16384 opened this issue Dec 28, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@githubuser16384

What happened?

Although the groq/llama-3.2-11b-vision-preview model supports vision, running print(litellm.supports_vision(model="groq/llama-3.2-11b-vision-preview")) returns False. The same incorrect False result occurs for vision-capable models from several other providers.
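
A minimal reproduction (assumes litellm v1.55.12 is installed; the model name is the one from the report above):

```python
import litellm

# groq/llama-3.2-11b-vision-preview accepts image inputs, so this
# should print True, but on v1.55.12 it prints False.
print(litellm.supports_vision(model="groq/llama-3.2-11b-vision-preview"))
```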

Relevant log output

No response

Are you an ML Ops Team?

No

What LiteLLM version are you on?

v1.55.12

Twitter / LinkedIn details

No response

@githubuser16384 githubuser16384 added the bug Something isn't working label Dec 28, 2024