When using llama-cpp-rs alongside the whisper-rs crate in the same project, the application crashes during model loading. #263

The issue still occurs even when the whisper model and the llama model are not executed at the same time. The program output is as follows:
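For context, a minimal sketch of the kind of program that triggers the report, assuming the `llama-cpp-2` and `whisper-rs` crates as dependencies; the API calls (`LlamaBackend::init`, `LlamaModel::load_from_file`, `WhisperContext::new_with_params`) and the model paths are illustrative and may differ between crate versions:

```rust
// Hypothetical reproduction: both crates bundle their own copy of the
// ggml runtime, and the reporter sees a crash as soon as both native
// libraries are linked into one binary and a model is loaded.
use llama_cpp_2::llama_backend::LlamaBackend;
use llama_cpp_2::model::{params::LlamaModelParams, LlamaModel};
use whisper_rs::{WhisperContext, WhisperContextParameters};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the whisper model first (placeholder path).
    let _whisper = WhisperContext::new_with_params(
        "models/ggml-base.en.bin",
        WhisperContextParameters::default(),
    )?;

    // Load the llama model (placeholder path). Per the report, the crash
    // happens during model loading, before either model is ever run.
    let backend = LlamaBackend::init()?;
    let _llama = LlamaModel::load_from_file(
        &backend,
        "models/llama-model.gguf",
        &LlamaModelParams::default(),
    )?;

    Ok(())
}
```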
Comments

I wonder if this is the GGUF backend not being shared. I don't think I'll be fixing this in the foreseeable future, but thank you for the report! Would you know if the two libraries can be used together from C++?
I'm not very familiar with the underlying implementations. I'm trying out other solutions, and if I find the cause, I'll share it here.
I noticed that whisper.cpp has a talk-llama example, so the two should be usable together: https://github.com/ggerganov/whisper.cpp/blob/master/examples/talk-llama
@jiabochao Did you find a solution? This would be of much interest to me as well.
No, not yet. I think a different whisper Rust library might solve the problem; I found another one that looks like it might be better maintained, but I haven't tested it yet.
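Until the conflict between the two bundled native libraries is resolved, one workaround consistent with the discussion above is process isolation: keep whisper and llama in separate binaries so the two copies of the native runtime never share an address space. A minimal sketch, where `whisper-worker` is a hypothetical companion binary (built from a crate that depends only on whisper-rs) that prints a transcript to stdout:

```rust
use std::process::Command;

/// Run transcription in a child process so the whisper native library
/// never loads into the same address space as llama's.
/// `whisper-worker` is a hypothetical binary that links only whisper-rs.
fn transcribe_in_child(audio_path: &str) -> std::io::Result<String> {
    let output = Command::new("whisper-worker").arg(audio_path).output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
```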