The Feature

Currently, the following snippet, which works for ollama/llava, does not work for ollama/llama3.2-vision:11b (recently added to Ollama). It interprets the byte string as text and gives this:

message=Message(content="It looks like you've provided a large, encoded string. Unfortunately, I don't have the capability to decode or interpret it directly.\n\nHowever, based on the structure and content of the string, it appears to be a serialized data format, possibly in a binary or text-based encoding scheme such as JSON or XML.\n\nIf you could provide more context about what this string represents (e.g., is it a compressed file, an image, or some other type of data?) or what you're trying to achieve with this data, I may be able to help you better. Alternatively, if you can decode the string yourself and provide a human-readable representation, I'd be happy to assist with any questions or tasks related to the decoded data!", role='assistant')

Do you have a timeframe for when support for this will be added to litellm?
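The snippet itself is not included above, but a minimal sketch of the kind of call described — an OpenAI-style vision message with a base64 data URL, which ollama/llava handles as an image — might look like this (the file contents, prompt text, and `build_vision_messages` helper are illustrative assumptions, not the original code):

```python
import base64


def build_vision_messages(image_bytes: bytes, prompt: str) -> list:
    """Build an OpenAI-style vision message carrying a base64 data URL.

    This is the payload shape litellm accepts for vision models; with
    ollama/llama3.2-vision:11b the base64 string is reportedly treated
    as plain text instead of decoded as an image.
    """
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                },
            ],
        }
    ]


# Illustrative bytes only; in practice this would be a real image file.
messages = build_vision_messages(b"\x89PNG...", "What is in this image?")

# The actual request would then be, e.g.:
# import litellm
# response = litellm.completion(
#     model="ollama/llama3.2-vision:11b", messages=messages
# )
```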
Motivation, pitch
Without this change, LiteLLM does not support all models on Ollama, despite what is stated on your website :D
Twitter / LinkedIn details
https://www.linkedin.com/in/davidtfoster/