Xcode projects no longer build with both whisper.cpp and llama.cpp #1887
Comments
@1-ashraful-islam curious if you have an idea on the easiest path, since you were doing this previously in #1701. I was thinking you could fork llama.cpp and whisper.cpp and modify the Swift package dependencies and exclusions so that they both reference the same set of ggml sources, but is there an easier path of building the whisper and llama frameworks independently? I haven't wrapped my head around the SPM / Xcode ecosystem yet.
Sorry for the late reply; I don't know of a better way to resolve this issue. I banged my head against this problem before and got nowhere until I separated ggml out as a dependency in both whisper and llama. I would suggest doing the fork and reverting the mentioned commits, until someone figures out a better approach.
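For illustration, here is a minimal sketch of what the forked llama.cpp manifest could look like under that approach, with ggml factored out into its own shared package. The package URL, excluded file list, and build settings are assumptions for the sketch, not the actual upstream Package.swift; the whisper.cpp fork would mirror the same layout.

```swift
// swift-tools-version:5.5
// Hypothetical manifest for a llama.cpp fork that consumes ggml as a shared
// package instead of compiling its own vendored copy.
import PackageDescription

let package = Package(
    name: "llama",
    platforms: [.macOS(.v12), .iOS(.v14)],
    products: [
        .library(name: "llama", targets: ["llama"])
    ],
    dependencies: [
        // Assumed standalone ggml package shared by both forks (placeholder URL).
        .package(url: "https://github.com/your-org/ggml.git", branch: "main")
    ],
    targets: [
        .target(
            name: "llama",
            dependencies: [.product(name: "ggml", package: "ggml")],
            path: ".",
            // Leave the vendored ggml sources out of the build so that only the
            // shared ggml package provides those symbols.
            exclude: ["ggml.c", "ggml-alloc.c", "ggml-backend.c", "ggml-metal.m"],
            sources: ["llama.cpp"],
            publicHeadersPath: "spm-headers",
            cSettings: [.define("GGML_USE_METAL")]
        )
    ],
    cxxLanguageStandard: .cxx11
)
```

With both forks depending on the same ggml package, SwiftPM builds ggml once, so an app that links whisper and llama together no longer ends up with two copies of the ggml symbols.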
I actually just found a better way!! Here's how you can do it:
Repeat the steps for whisper. @ggerganov, do you think this info would be useful to include somewhere?
Wow, thanks! This solved the duplicate symbol errors for me. Being new to the whole Apple/Swift landscape, I don't think I would have figured this solution out any time soon.
Hm, great! I haven't tried it, but since it seems to work for @RoryMB, this might be the way to do it. We can add a link to your comment in all relevant examples.
One thing to note here: it seems like GGMLMetalClass is selected from either whisper.framework or llama.framework randomly. This is what I get when I run the application target:
At the moment I haven't seen this cause any issues for either transcription or LLM use. If I run into any issues in the future, I will add notes here.
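As a quick way to see which binary the runtime actually picked, the sketch below asks the Objective-C runtime for the image that defines GGMLMetalClass; it assumes only the class name mentioned above and uses standard runtime APIs:

```swift
import Foundation
import ObjectiveC

// Look up the (duplicated) class by name and report which loaded image,
// e.g. whisper.framework or llama.framework, ended up providing it.
if let cls = NSClassFromString("GGMLMetalClass"),
   let image = class_getImageName(cls) {
    print("GGMLMetalClass loaded from:", String(cString: image))
} else {
    print("GGMLMetalClass is not registered with the Objective-C runtime")
}
```

Which duplicate the runtime keeps is undefined behavior, which matches the observation that the winner appears to change between runs.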
@1-ashraful-islam: |
I have the same problem. A project compiled with the CMake llama.cpp dependency works perfectly, and so does a project with only the whisper.cpp dependency. But when compiled with both dependencies at once, the LLM functionality breaks at runtime. Different models cause different errors; for example, loading
running exactly the same code right after adding whisper.cpp as a library. With another model I get this error:
Original issue description:
As of these two commits:
3ffc83d
ggerganov/llama.cpp@df334a1
Xcode projects that depend on both whisper.cpp and llama.cpp fail to build with the following error:
Based on the comments in the accompanying pull requests, I see that there is good reason for the commits, so I wonder if there is any alternative solution?
Thanks
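For context, whether the packages are added through a Package.swift manifest or Xcode's package dependencies UI, the failing setup amounts to one target depending on both upstream repositories, roughly as in the sketch below; the URLs, product names, and branch pins are assumptions for illustration:

```swift
// swift-tools-version:5.5
// Hypothetical app manifest that pulls in both whisper.cpp and llama.cpp.
// Because each package builds its own copy of the ggml sources, linking both
// products into one target produces duplicate ggml symbols.
import PackageDescription

let package = Package(
    name: "SpeechAndChatApp",
    platforms: [.macOS(.v12), .iOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/ggerganov/whisper.cpp.git", branch: "master"),
        .package(url: "https://github.com/ggerganov/llama.cpp.git", branch: "master")
    ],
    targets: [
        .executableTarget(
            name: "SpeechAndChatApp",
            dependencies: [
                .product(name: "whisper", package: "whisper.cpp"),
                .product(name: "llama", package: "llama.cpp")
            ]
        )
    ]
)
```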