SEHException on Tokenize model. #791
Comments
What if using
I get the same error. I found this thread: ggerganov/llama.cpp#6007
Maybe it's an upstream issue. Does the problem appear when you use another model?
@AsakusaRinne Sorry for the delay. I checked with the model leliuga/all-MiniLM-L12-v2-GGUF and it works.
I think we can close this now and wait for llama.cpp to add support for this model. Thanks for the help, @AsakusaRinne!
Description
Hello! Thanks for the great project!
I ran into an issue when I try to tokenize text that contains the newline character '\n'.
I have no idea how to debug this (I have no experience debugging native code).
Maybe this issue should be routed to the llama.cpp project?
I hit this when I try to build a pipeline: PDF document -> textual representation -> (error). Is there any workaround to fix this?
Thanks!
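Roughly, the failing step looks like the sketch below. The model path and context size are placeholders, and the LLamaSharp calls shown (ModelParams, LLamaWeights.LoadFromFile, CreateContext, Tokenize) are my assumption of the usual API for a recent version, so treat it as illustrative rather than an exact repro:

```csharp
using System;
using LLama;
using LLama.Common;

// Placeholder path and parameters; adjust to your local setup.
var parameters = new ModelParams("path/to/model.gguf")
{
    ContextSize = 512
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);

// Tokenizing text without '\n' works for me; including the newline
// is what triggers the SEHException described above.
var tokens = context.Tokenize("first line\nsecond line");
Console.WriteLine($"Token count: {tokens.Length}");
```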
Here is the console log: