Hi, are there any plans for word embeddings? #56
Hi, I would like to add the llama.cpp PR here for reference. I just noticed they merged the embedding function.
Hi @hlhr202! 👋 Thanks for bringing this to our attention. The code here doesn't look hard at all to port! We will add it to the repo, since it makes sense to have a way for people to extract embeddings. But I'd like to understand (just to satisfy my curiosity): why are the LLaMA embeddings useful? Is this the same thing as regular word embeddings from any other model? That is, capturing the semantics of a word as a vector to allow computing similarity metrics? Do you have a use case for extracting the embeddings that would help us understand the possibilities better? 😄 Not saying this is a requirement for the PR, I just want to learn if there are different use cases for this that I'm not aware of.
Please check out #72. I implemented some code to extract embeddings, but we still need to validate whether the results are correct and decide how best to expose this across our different levels of API.
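As a purely hypothetical illustration of what "exposing this at the different levels of API" could look like, here is a sketch of an embedding call. None of these type or method names (`Model`, `Session`, `embed`) are taken from llama-rs or from #72; they only show one possible shape for such an API.

```rust
// Hypothetical sketch only: these types and method names are NOT the actual
// llama-rs API from #72; they just illustrate one possible surface.
struct Model;   // stand-in for a loaded LLaMA model
struct Session; // stand-in for an inference session

impl Session {
    /// Feed the prompt and return a single fixed-length embedding vector.
    fn embed(&mut self, _model: &Model, _prompt: &str) -> Vec<f32> {
        // A real implementation would run the forward pass and read the
        // embedding tensor; here we just return a placeholder vector.
        vec![0.0; 4096] // 4096 is the hidden size of the 7B model
    }
}

fn main() {
    let model = Model;
    let mut session = Session;
    let embedding = session.embed(&model, "an example sentence");
    println!("embedding dimension: {}", embedding.len());
}
```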
Yes, computing semantic similarity is quite useful in many cases. It allows us to search for semantically similar sentences using a natural-language query.
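For readers following along, the similarity metric discussed here is typically cosine similarity between two embedding vectors. Below is a minimal, self-contained Rust sketch; the function name and the use of plain `f32` slices are assumptions for illustration, not part of llama-rs.

```rust
/// Cosine similarity between two embedding vectors.
/// Returns a value in [-1.0, 1.0]; higher means more semantically similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0;
    }
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy vectors standing in for embeddings extracted from two sentences.
    let query = vec![0.1_f32, 0.7, 0.2];
    let candidate = vec![0.05_f32, 0.65, 0.3];
    println!("similarity = {}", cosine_similarity(&query, &candidate));
}
```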
@setzer22
It looks like everything works well, but the resulting similarity is quite different from OpenAI's text-embedding-ada-002.
It seems llama.cpp has not implemented embeddings yet. I tried to print the embedding vectors, but they came back with size 0.
@setzer22 Sorry, I reopened this ticket because I noticed some changes in llama.cpp. I have tested a few examples on 7B Alpaca, but the results are not very accurate (not sure if that is caused by the small model size). What I noticed in llama.cpp is that they do not use any end token as the representation of the sentence embedding; they pass all the prompt tokens into the eval function, but always get back a fixed-length vector.
Another trick I found, though I'm not sure their implementation makes sense... I guess they just remove the additional vector items, and I don't even know whether they drop the right part, which is quite weird. I will keep following the issue over the next few weeks. I'm going to run a test on the 30B model to see whether the semantic accuracy is better than 7B Alpaca.
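To make the "all prompt tokens in, fixed-length vector out" behavior described above concrete, here is a minimal Rust sketch of two common ways to collapse per-token hidden states into a single sentence embedding (mean pooling, or keeping only the last token's state). This is only an illustration of the general technique; it is not taken from llama.cpp or llama-rs, and the function names are made up for the example.

```rust
/// Per-token hidden states: one Vec<f32> of length `hidden_size` per prompt token.
/// Collapse them into a single fixed-length sentence embedding by mean pooling.
fn mean_pool(token_embeddings: &[Vec<f32>]) -> Vec<f32> {
    let hidden_size = token_embeddings[0].len();
    let mut pooled = vec![0.0_f32; hidden_size];
    for token in token_embeddings {
        for (acc, value) in pooled.iter_mut().zip(token) {
            *acc += *value;
        }
    }
    let n = token_embeddings.len() as f32;
    pooled.iter_mut().for_each(|v| *v /= n);
    pooled
}

/// Alternative: use only the hidden state of the final prompt token.
fn last_token(token_embeddings: &[Vec<f32>]) -> Vec<f32> {
    token_embeddings.last().expect("prompt must not be empty").clone()
}

fn main() {
    // Three fake token states with a hidden size of 4, standing in for model output.
    let states = vec![
        vec![1.0, 0.0, 0.0, 0.0],
        vec![0.0, 1.0, 0.0, 0.0],
        vec![0.0, 0.0, 1.0, 0.0],
    ];
    println!("mean pooled: {:?}", mean_pool(&states));
    println!("last token:  {:?}", last_token(&states));
}
```

Either way, the per-token states are reduced to one vector whose length depends only on the model's hidden size, which matches the fixed-length output observed above.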
This should now be sorted / understandable with #273. Let me know if there's anything else.
Noob here, excuse me for my stupid feature request.
I noticed that someone in llama.cpp is working on word embeddings from the hidden layers. I'm just asking whether there is any possibility of implementing an embedding mode for llama-rs? Thanks.
What I found is this commit.