Add Embedding Related Functionality #133
Conversation
…ms to be done with them atm)
…emory for sequence ids
this looks good. one question about consolidating some unsafe.
Hm, the doc-test that I 'fixed' fails here. EDIT: Ah, seems to be a Windows vs. Linux issue: rust-lang/rust-bindgen#1966
fix that and I'll add windows to the test CI so I don't break it again.
thanks for the quick turnaround. Just cut a new release.
Recently, `llama.cpp` has added the ability to generate embeddings using BERT-related models. There have been some issues, namely differences between the embeddings generated in Python and the ones coming from `llama.cpp`, but with ggerganov/llama.cpp#5796 being merged it finally seems to be stable.

This PR implements the functionality to let a user make use of these new features, and it adds an example based loosely on the `llama.cpp` embeddings example. Most of the code was translated from either `llama.cpp` or `llama-cpp-python` (specifically, the `add_sequence` method in `LLamaBatch`).

A few additional changes snuck in as well: replacing a `println!` call with a `tracing` call so users can filter it out, alongside a fix for a failing doc-test.
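For context, the `add_sequence`-style batching described above can be sketched roughly as follows. This is a hedged, self-contained illustration, not the crate's actual `LLamaBatch` API: the `Batch` struct and its fields are hypothetical stand-ins for the C-side buffers that the real wrapper manages. The core bookkeeping is the same, though: each token gets a position, a sequence id, and a flag saying whether logits (and thus embeddings) should be computed for it.

```rust
/// Hypothetical, simplified stand-in for a llama.cpp-style batch.
/// The real type wraps raw C buffers; plain `Vec`s are used here
/// purely to make the logic visible.
#[derive(Debug, Default)]
struct Batch {
    tokens: Vec<i32>,
    positions: Vec<i32>,
    seq_ids: Vec<i32>,
    logits: Vec<bool>,
}

impl Batch {
    /// Append an entire token sequence under one sequence id.
    /// When `logits_all` is false, only the final token requests logits,
    /// which is the usual setup for pooled sequence embeddings.
    fn add_sequence(&mut self, tokens: &[i32], seq_id: i32, logits_all: bool) {
        let last = tokens.len().saturating_sub(1);
        for (pos, &tok) in tokens.iter().enumerate() {
            self.tokens.push(tok);
            self.positions.push(pos as i32);
            self.seq_ids.push(seq_id);
            self.logits.push(logits_all || pos == last);
        }
    }
}

fn main() {
    let mut batch = Batch::default();
    // Token ids are placeholders; a real caller would tokenize input text first.
    batch.add_sequence(&[101, 2023, 102], 0, false);
    println!("{:?}", batch.logits);
}
```

The design point this mirrors is that for embedding workloads you typically only need the model's output at one position per sequence, so flagging just the last token avoids computing (and copying out) logits for every token in the prompt.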