High-level API for multimodality #928
Comments
I'll consider adding the multi-modal args to the high-level API. For now, I can definitely implement an Obsidian chat handler and maybe abstract out some of the Llava stuff to make it easier to make this generic in the future. Also, thank you for bringing that model to my attention; I definitely have to give it a try now!
This seems like an apt place to ask: how can I supply a local image file to use with a multimodal model? It looks like the example assumes the image is being hosted at a URL somewhere.
@JoshuaFurman good question, I'll update the docs to include this. It works the same as with the OpenAI API:

```python
import base64

def image_to_base64_data_uri(file_path):
    with open(file_path, "rb") as img_file:
        base64_data = base64.b64encode(img_file.read()).decode('utf-8')
    return f"data:image/png;base64,{base64_data}"

# Replace 'file_path.png' with the actual path to your PNG file
file_path = 'file_path.png'
data_uri = image_to_base64_data_uri(file_path)
```

Then just pass that in place of the HTTP URL.
Oh fantastic, thank you! And this can be done without running the server, right? I'm looking to add this directly into an application.
Yup, you just need to pass the chat handler directly to `Llama()`.
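For the non-server route, a minimal sketch assuming a LLaVA-style model; the file paths are placeholders and `messages` is the list sketched above:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Placeholder paths: point these at your local GGUF model and CLIP projector.
chat_handler = Llava15ChatHandler(clip_model_path="path/to/mmproj-model-f16.gguf")
llm = Llama(
    model_path="path/to/llava-v1.5-7b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,  # a larger context leaves room for the image embedding
)

# Run the chat completion directly in-process, no server involved.
response = llm.create_chat_completion(messages=messages)
print(response["choices"][0]["message"]["content"])
```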
Great, thanks so much! Happy Thanksgiving :)
Thanks, this works.
Closed this by mistake, though it will be solved in #1147.
Is your feature request related to a problem? Please describe.
The current high-level implementation of multimodality relies on a specific prompt format.
Describe the solution you'd like
Models like Obsidian work with the llama.cpp server and have a different format. It would be nice to have a high-level API for multimodality in llama-cpp-python, so that `image`/`images` can be passed as an argument after initializing `Llama()` with all the paths to the required extra models, without relying on a pre-defined prompt format such as `Llava15ChatHandler`.
Describe alternatives you've considered
Alternatively, a custom prompt format class that supports images could be implemented, where the prompt string is passed as an argument.
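For illustration only, a rough sketch of what the requested interface could look like; none of the image-related arguments below exist in llama-cpp-python, they are hypothetical placeholders for the idea described above:

```python
from llama_cpp import Llama

# Hypothetical API sketched from this feature request: `clip_model_path`
# on the constructor and `images` on the completion call are illustrative,
# not part of the current library.
llm = Llama(
    model_path="obsidian-3b.Q5_K_M.gguf",        # placeholder model path
    clip_model_path="mmproj-obsidian-f16.gguf",  # placeholder projector path
)

output = llm.create_completion(
    prompt="USER: <image>\nWhat is in this picture?\nASSISTANT:",  # any prompt format
    images=["photo.png"],  # proposed: pass local image paths directly
    max_tokens=128,
)
print(output["choices"][0]["text"])
```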