Enhanced Docs: LLM Embedding Examples #844
Conversation
Since it's my first PR for this library, I'm open to suggestions or improvements. If everything looks good as it is, that's awesome! Feel free to share thoughts on organizing or listing the examples on the Examples page.
Hello. Thank you for wanting to improve the documentation here! This is great! What if this was a notebook instead? Then we could also publish a corresponding blog article like the others, which would give it more visibility. It could be titled something like "Getting started with LLM APIs", with a subtitle of "Generate embeddings and run LLMs with Gemini, VertexAI, Mistral, Cohere and AWS Bedrock". Would it make sense to add OpenAI and Claude given those are the two most popular? In a notebook, I believe the common code could also be consolidated into two functions, one for the LLM and one for the embeddings. Then there could be a section per provider that passes the model path and initializes any keys / other calls. Additionally, I don't believe any of the …
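For illustration, here is a minimal sketch of what those two consolidated helpers might look like. The function names, model paths and environment variables are assumptions for this example rather than code from the notebook, and it assumes txtai's litellm-backed `LLM` and `Embeddings` support:

```python
# Hypothetical helpers consolidating the common code: one for the LLM, one for embeddings.
# Each provider section would then only set its API key and pass its model path.
import os

from txtai import Embeddings, LLM

def run_llm(path, prompt):
    """Runs a prompt against an API-backed LLM (routed through litellm by txtai)."""
    llm = LLM(path)
    return llm(prompt)

def build_embeddings(path, texts):
    """Builds an embeddings index using an API-backed vector model."""
    embeddings = Embeddings(path=path, method="litellm")
    embeddings.index(texts)
    return embeddings

# Example provider section (Gemini) - key name and model paths are illustrative
os.environ["GEMINI_API_KEY"] = "..."
print(run_llm("gemini/gemini-pro", "Summarize txtai in one sentence"))

index = build_embeddings("gemini/text-embedding-004", ["first document", "second document"])
print(index.search("first", 1))
```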
I've already moved it to a notebook, and you can check it out here: While I plan to add other models like OpenAI, Claude, and Groq later, here's a sneak peek at the current Table of Contents for the notebook.
Looking great! Did you want to wait to merge until the other models were added?
One other thing with Groq: does it have an embeddings API? I see it's using a local embeddings model.
I've got another version of the notebook. Feel free to check it out here: Interestingly, Groq and Claude don't have their own embedding models. Instead, they refer to other models:
In the config, I've defaulted both options to Interestingly, the final outputs show how different text embedding searches can be across models. The notebook provides a handy comparison, helping us understand which text embedding best suits a given context.
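As a rough illustration of that kind of comparison (the model paths below are placeholders, not necessarily the ones used in the notebook), the same data can be indexed with two API-backed embedding models and queried side by side:

```python
# Sketch: index the same data with two API-backed embedding models and compare
# the top search result. Model paths are illustrative; provider API keys must be set.
from txtai import Embeddings

data = [
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed",
    "Maine man wins $1M from $25 lottery ticket"
]

for path in ("mistral/mistral-embed", "cohere/embed-english-v3.0"):
    embeddings = Embeddings(path=path, method="litellm")
    embeddings.index(data)

    # Without content storage, search returns (id, score) tuples; ids map back to rows
    uid, score = embeddings.search("public health story", 1)[0]
    print(f"{path}: {data[uid]} ({score:.4f})")
```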
What if for the two that don't have embeddings APIs, you just use Voyage? Or there can be a note saying there is no embeddings API for this provider but they suggest "...". The other notebooks all install txtai from GitHub vs PyPI. I see the note about using a specific version, but the flip side is that pinning code to a specific version misses important security updates. I've always taken the approach of upgrading and fixing problems as they arise (usually they're caught during the GitHub Actions build). Once again, this is really cool and I really appreciate it. Once this PR is merged, I'll get a corresponding article up with a note crediting you. Would you prefer me to link to your GitHub profile or LinkedIn profile in those articles?
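If the Voyage route is taken, a sketch of that combination could look like the following. The model identifiers and environment variable names are assumptions, and the install line mirrors how the other notebooks pull txtai from GitHub:

```python
# Sketch: a provider without an embeddings API (e.g. Anthropic/Claude) paired with
# Voyage for embeddings, both routed through litellm by txtai.
# Install from GitHub as the other notebooks do:
#   pip install git+https://github.com/neuml/txtai
import os

from txtai import Embeddings, LLM

os.environ["ANTHROPIC_API_KEY"] = "..."   # LLM provider key
os.environ["VOYAGE_API_KEY"] = "..."      # embeddings provider key

llm = LLM("anthropic/claude-3-haiku-20240307")
embeddings = Embeddings(path="voyage/voyage-2", method="litellm")

embeddings.index(["txtai is an all-in-one embeddings database"])
print(embeddings.search("what is txtai", 1))
print(llm("Answer in one sentence: what is an embeddings database?"))
```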
Here is the updated version of the notebook: A couple of notes:
All sounds great and thanks again! I'll merge and create an article crediting you (linking to your GitHub profile). I may make a few minor edits but otherwise everything is looking good. I'll follow up here once everything is ready. It will take me a few days before I get to this though.
@igorlima Thank you for this contribution! This PR has been merged and the following articles published. https://dev.to/neuml/getting-started-with-llm-apis-2j89 |
I saw this post on LinkedIn, too, and I must say that I liked it! Thanks for sharing.
Thank you for putting this together. I'm sure it will gain more and more traction as time goes on. It's a solid resource! |
This PR enriches the documentation with an array of embedding examples. These examples illustrate how to generate embeddings using well-known LLMs within the `txtai` library. This documentation enhancement was sparked by an issue seeking guidance on integrating Gemini with `txtai`. The hope is that these examples make someone else's journey smoother and more enjoyable!