How to use Gemini with txtai #843
Comments
Hello, first off thank you for the kind words! What kind of error are you receiving?
Before jumping into the error I'm facing, let me share my setup: I'm using a Docker container running Python 3.12.2 on macOS. After setting everything up and running the code, it fails with the following error:

```
Exception: Invalid Message passed in - {'content': "Where is one place you'd go in Washington, DC?", 'role': 'prompt'}. File an issue https://github.com/BerriAI/litellm/issues
```
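For context on the `'role': 'prompt'` part of that message: LiteLLM only accepts standard chat roles, while the raw string prompt was wrapped with an internal `prompt` role. A minimal illustration (the accepted-role set here is an assumption based on typical chat-completion APIs):

```python
# Sketch of why LiteLLM rejects the message: the raw string prompt was
# wrapped with a "prompt" role, which is not a standard chat role.
prompt = "Where is one place you'd go in Washington, DC?"
message = {"role": "prompt", "content": prompt}  # what LiteLLM received

# Roles a chat-completion API typically accepts (assumption, for illustration)
valid_roles = {"system", "user", "assistant", "tool"}

print(message["role"] in valid_roles)  # → False, hence the Invalid Message error
```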
Looks like you're running into the same issue as this: #841. The original LLM pipeline was designed to work with raw prompts (i.e. with chat templates manually applied). Over time this has changed and chat messages are now the default. The issue above added a new `defaultrole` parameter to cover raw prompts. Nonetheless, here are the options in the meantime.

Option 1, pass chat messages directly:

```python
llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}])
```

Option 2, pass a raw prompt and set a default role:

```python
llm("Where is one place you'd go in Washington, DC?", defaultrole="user")
```

I'm not exactly sure whether Gemini models generate embeddings and/or whether they are supported by LiteLLM, though.
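The effect of `defaultrole` can be sketched with a small helper that normalizes a raw string into the chat-message format of option 1; this is an illustration, not txtai's actual implementation:

```python
def to_messages(prompt, defaultrole="user"):
    """Normalize a raw string prompt into the chat-message format (illustrative)."""
    if isinstance(prompt, str):
        return [{"role": defaultrole, "content": prompt}]
    # Already a list of chat messages: pass through unchanged
    return prompt

print(to_messages("Where is one place you'd go in Washington, DC?"))
# → [{'role': 'user', 'content': "Where is one place you'd go in Washington, DC?"}]
```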
Absolutely; thank you so much for the helpful hint! 🙏 🕺 The resource shared in the previous comment was a game-changer. It successfully guided me through running the Gemini LLM pipeline and generating embeddings using Gemini. Here's a snippet of the code that worked seamlessly with the Gemini LLM model:
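The original snippet isn't captured in this thread; below is a minimal sketch of such a setup, assuming `txtai` is installed, `GEMINI_API_KEY` is set, and that `gemini/gemini-pro` is a valid LiteLLM model string for your account (the model name and helper are assumptions):

```python
import os

def build_gemini_llm(model="gemini/gemini-pro"):
    # Lazy import so the sketch reads standalone; requires `pip install txtai`
    from txtai.pipeline import LLM
    return LLM(model, method="litellm")

# Only reach out to the API when credentials are available
if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    llm = build_gemini_llm()
    print(llm([{"role": "user", "content": "Where is one place you'd go in Washington, DC?"}]))
```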
Additionally, I’ve prepared four more samples for other models: VertexAI, Mistral, Cohere, and AWS Bedrock. Anyone searching for examples of these models can find them right here.
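Those samples aren't reproduced here; as a rough guide, the pattern stays the same across providers and only the LiteLLM model string (plus provider credentials) changes. The identifiers below follow LiteLLM's `provider/model` convention and are illustrative, so verify them against the LiteLLM provider docs:

```python
# Illustrative LiteLLM model strings per provider (assumptions; verify
# exact names against the LiteLLM provider documentation)
MODEL_STRINGS = {
    "gemini": "gemini/gemini-pro",
    "vertexai": "vertex_ai/gemini-pro",
    "mistral": "mistral/mistral-small",
    "cohere": "cohere/command-r",
    "bedrock": "bedrock/anthropic.claude-v2",
}

for provider, model in sorted(MODEL_STRINGS.items()):
    print(f"{provider}: {model}")
```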
Once again, thank you for your support and guidance! 🙌 🎉
I'm glad this worked! One minor thing: you shouldn't need [...]. I appreciate you documenting how to do this with all the model endpoints you did, thank you!
This issue has helped improve the documentation for the [...]. Below is a proposed PR to add these embedding usages to the documentation.
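A minimal embeddings example along those lines might look like the following, assuming txtai's LiteLLM vector support and an illustrative Gemini embedding model name (both assumptions to verify):

```python
import os

def build_gemini_embeddings(model="gemini/text-embedding-004"):
    # Model string is illustrative; check LiteLLM for supported embedding models
    from txtai import Embeddings
    return Embeddings(path=model, method="litellm")

# Only index/search when credentials are available
if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    embeddings = build_gemini_embeddings()
    embeddings.index(["US tops 5 million confirmed virus cases"])
    print(embeddings.search("virus cases", 1))
```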
I've been exploring the possibilities of using Google Gemini with `txtai`, but I haven't found any references to Gemini in the documentation yet.

Is there a way to embed text in `txtai` using Gemini? The documentation references other LLMs, but Gemini seems missing.

Here's a snippet of what I've attempted using the `litellm` method, though I haven't had any success so far:

I'd appreciate the guidance if anyone has insights or knows of any documentation or examples on using Gemini with `txtai`.
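The attempted snippet isn't captured in this thread; as a purely hypothetical reconstruction of that kind of attempt (the model string, setup, and helper name are all assumptions):

```python
import os

def ask_gemini(question, model="gemini/gemini-pro"):
    # Hypothetical attempt: pass a raw string prompt straight to the pipeline
    from txtai.pipeline import LLM
    llm = LLM(model, method="litellm")
    return llm(question)

if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    print(ask_gemini("Where is one place you'd go in Washington, DC?"))
```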