Replies: 1 comment
-
(This is all "as I understand"...) An "embedding" model cuts your plaintext notes into little chunks, then turns each chunk into a vector (a long list of numbers, roughly), which allows for fancy searching. I say fancy because I don't really understand it. But through some combination of the embedding model (what you've seen), your text, and the other text around the text being examined, a database of these vectors is built. Then, when you search for "road rage," for example, that query, too, is turned into a vector so that the previously generated database can be searched. The results returned are the vectors in the database nearest to your "road rage" search, say, "angry driver."

Next, the "Smart Chat" models are the generative-AI side of things, like ChatGPT's GPT-4o. This is distinct from the embedding models. These models are provided "context" (your previous chat history, your last question, and maybe some of your notes) and then generate a response based on absurd statistics I really don't understand.

The connection between the embedding model and the generative AI models is that the vector database the embedding model created is used to search for context, which is then provided to the AI model. So, say you ask, "Using my notes, when did that road rage incident occur?" The embedding model vectorizes your question, searches its database, finds some number of relevant-looking notes, then passes them off to the generative AI model. It uses this context and tries to come back with a reasonable response, like, "You spoke of encountering an angry driver May 3rd."

Gemini is not an embedding model, which is why you don't see it under "Notes embedding model." It is the large language model that makes sense of the context provided by whichever embedding model is used.
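To make the "nearest vector" idea concrete, here's a minimal, illustrative sketch (not the plugin's actual code, and the vectors are made up; a real embedding model would produce them). Notes are stored as vectors, a query is embedded the same way, and the notes are ranked by cosine similarity; the top hits become the "context" handed to the chat model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings -- in reality the embedding model generates these.
note_vectors = {
    "angry driver on May 3rd": [0.9, 0.1, 0.2],
    "grocery list": [0.1, 0.8, 0.3],
}
query_vector = [0.85, 0.15, 0.25]  # hypothetical embedding of "road rage"

# Rank notes by similarity to the query; top results go to the LLM as context.
best = max(note_vectors, key=lambda k: cosine_similarity(query_vector, note_vectors[k]))
print(best)  # the "angry driver" note ranks highest
```

The point is just that "road rage" and "angry driver" end up near each other in vector space even though they share no words, which is why this search feels "fancy" compared to plain text matching.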
-
Hello, please forgive me if this is a stupid question to most of you, but I don't understand how to set up the plugin. I read that this works with Gemini. I was looking for Gemini under "Notes Embedding Model" and "Blocks Embedding Model," but I don't see it listed there as an option. I do see it listed under "Smart Chat," though. So why are different models listed for different functions? And if I understand correctly, I can only use Gemini in this plugin to chat with, but not for any of the other functions, is that right?