MultiModal does not work with Next.js Frontend and FastAPI backend #65
@Laktus by coincidence, I just added a vercel/ai compatible StreamingResponse in the last release, see https://github.com/run-llama/create-llama/blob/main/templates/types/streaming/fastapi/app/api/routers/vercel_response.py
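A minimal sketch of what such a response looks like on the wire. This assumes the Vercel AI SDK's data stream protocol, where each text part is sent as `0:<JSON string>\n` and each data part as `2:<JSON array>\n`; the function names here are illustrative, not the actual names used in create-llama's `vercel_response.py`.

```python
import json

# Assumed Vercel AI SDK data stream wire format:
#   text part:  0:<json-encoded string>\n
#   data part:  2:<json-encoded array>\n
def encode_text_part(token: str) -> str:
    """Encode one streamed text token as a protocol line."""
    return f"0:{json.dumps(token)}\n"

def encode_data_part(items: list) -> str:
    """Encode a list of structured data items as a protocol line."""
    return f"2:{json.dumps(items)}\n"

def stream_tokens(tokens):
    """Generator suitable for wrapping in
    fastapi.responses.StreamingResponse(gen, media_type="text/plain")."""
    for t in tokens:
        yield encode_text_part(t)
```

On the frontend, `useChat` from vercel/ai can consume a response streamed in this format directly.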
@marcusschiesser That looks awesome, thanks! I think someone still needs to modify the template to work with it, though. Or is it already integrated in the last release? The template is missing the ability to handle the incoming data from …
@Laktus The template is part of create-llama since npx create-llama@0.1.0 - it was just updated in npx create-llama@0.1.1 - what are you missing?
@marcusschiesser I will try integrating my changes to the backend for handling the data parameter and will add a message if it works. Thanks for the update!
@marcusschiesser Thanks for any help.
@Laktus, the problem is that in Python, you have to use the …

So I would start replacing …

Details about using it are here: …

If you like, you're welcome to post a diff of your code here.
@marcusschiesser But this saves the images into the vector DB, or not? I don't want to populate the vector DB with the image information; I only want to attach the image to one message. In the vision docs of OpenAI (https://platform.openai.com/docs/guides/vision) you can see the following possibilities of the completion API.
The …
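For reference, the shape of such a request as described in OpenAI's vision guide: a single user message mixing `text` and `image_url` content parts, so the image is attached to one message rather than indexed anywhere. The helper name and URL below are placeholders.

```python
# Build one multi-modal chat message per OpenAI's vision guide:
# a user message whose content is a list of text and image_url parts.
def build_vision_message(prompt: str, image_url: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }
```

The resulting dict can be passed in the `messages` list of `client.chat.completions.create(...)` with a vision-capable model; the image never touches the vector DB.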
@Laktus yes, this is a current issue of the Python version. We're working on aligning the multi-modal capabilities of the Python and the Typescript version. Once that's done, we will add image upload support to the FastAPI backend |
@marcusschiesser Is there any update on when this will be implemented? |
Not yet, we'll need multi-modal support in the Python framework first |
Hi,
I wanted to implement custom evaluation logic. Realizing that only the Python implementation of LlamaIndex supports
QuestionGenerator,
I thought it would be more reasonable to use the FastAPI backend + Next.js frontend setup. I managed to pass the image data to the backend by extending the handleSubmit of useChat (see vercel/ai#725). However, I don't know how to duplicate the functionality of StreamData in the FastAPI backend.
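For the backend side, a minimal sketch of reading that extra payload. This assumes the extended handleSubmit sends the image as a data URL in a `data.imageUrl` field alongside `messages`; both the field name and the shape are hypothetical and should be adjusted to whatever the frontend actually sends.

```python
from typing import Optional

# Hypothetical payload shape sent by the extended handleSubmit:
#   {"messages": [...], "data": {"imageUrl": "data:image/png;base64,..."}}
def extract_image_url(body: dict) -> Optional[str]:
    """Return the image data URL from the chat request body, if any."""
    data = body.get("data") or {}
    return data.get("imageUrl")
```

In a FastAPI route, the parsed JSON body (e.g. via `await request.json()`) can be passed to this helper, and the extracted image attached to the current chat message before calling the LLM.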
Can you make this example work out of the box, or provide some further documentation on how to implement this? Currently, multi-modality does not work without multiple changes.
Thanks for taking the time to read my request.