
Xcode projects no longer build with both whisper.cpp and llama.cpp #1887

Open · RoryMB opened this issue Feb 22, 2024 · 8 comments

RoryMB commented Feb 22, 2024

As of these two commits:
3ffc83d
ggerganov/llama.cpp@df334a1

Xcode projects that depend on both whisper.cpp and llama.cpp fail to build with the following error:

duplicate symbol '_ggml_map_custom2_inplace_f32' in:
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/whisper.o
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/llama.o
duplicate symbol '_ggml_backend_buft_supports_backend' in:
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/whisper.o
    /Users/rmbutler/Library/Developer/Xcode/DerivedData/Test-enhfbqpszgsyebgummvnergeyfis/Build/Products/Debug-iphoneos/llama.o
...etc

ld: 533 duplicate symbols
clang: error: linker command failed with exit code 1 (use -v to see invocation)
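
For reference, the failing setup is simply an app target that links both packages. Expressed as an SPM manifest it looks roughly like the sketch below (the package/product names follow the upstream manifests; the app name, platforms, and branch pins are placeholders):

```swift
// swift-tools-version:5.7
import PackageDescription

let package = Package(
    name: "BothExample",                       // placeholder app/package name
    platforms: [.iOS(.v15), .macOS(.v12)],
    dependencies: [
        // Each package builds its own copy of the ggml sources,
        // which is where the hundreds of duplicate symbols come from.
        .package(url: "https://github.com/ggerganov/whisper.cpp", branch: "master"),
        .package(url: "https://github.com/ggerganov/llama.cpp", branch: "master"),
    ],
    targets: [
        .executableTarget(
            name: "BothExample",
            dependencies: [
                .product(name: "whisper", package: "whisper.cpp"),
                .product(name: "llama", package: "llama.cpp"),
            ]
        ),
    ]
)
```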

Based on the comments in the accompanying pull requests, I see there is good reason for these commits, so I wonder if there is an alternative solution?

Thanks

to3d commented Feb 25, 2024

@1-ashraful-islam curious if you have an idea on the easiest path, since you were working on this previously in #1701.

I was thinking you could fork llama.cpp and whisper.cpp and modify the Swift package dependencies and exclusions so that they both reference the same set of ggml sources, but is there an easier path, such as building the whisper and llama frameworks independently? I haven't wrapped my head around the SPM / Xcode ecosystem...
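
For what it's worth, the fork idea would roughly mean changing each fork's Package.swift so it pulls ggml in as a package dependency instead of compiling its own copy, something like the sketch below (hypothetical; the source/header paths and settings are placeholders, not the real repo layout):

```swift
// swift-tools-version:5.7
import PackageDescription

// Hypothetical Package.swift for a llama.cpp fork (the whisper.cpp fork
// would have the same shape): ggml comes in as a package dependency, so
// whisper and llama end up linking one shared copy of the ggml symbols.
let package = Package(
    name: "llama",
    platforms: [.macOS(.v12), .iOS(.v14)],
    products: [
        .library(name: "llama", targets: ["llama"]),
    ],
    dependencies: [
        .package(url: "https://github.com/ggerganov/ggml", branch: "master"),
    ],
    targets: [
        .target(
            name: "llama",
            dependencies: [.product(name: "ggml", package: "ggml")],
            path: ".",
            sources: ["llama.cpp"],            // ggml sources intentionally not listed
            publicHeadersPath: "spm-headers"   // placeholder header location
        ),
    ],
    cxxLanguageStandard: .cxx11
)
```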

1-ashraful-islam (Contributor) commented Mar 5, 2024

Sorry for the late reply; I don't know of a better way to resolve this issue. I banged my head against this problem before and got nowhere until I separated ggml out as a dependency in both whisper and llama. I would suggest doing the fork and reverting the mentioned commits until someone figures out a better approach.

1-ashraful-islam (Contributor) commented

I actually just found a better way!!
You can include the packages as frameworks (I got the idea from mlx-swift-examples).

Here's how you can do it:

  1. File > New > Target > Multiplatform > Framework
  2. Set the product name to llama for llama.cpp. Set the other settings appropriately.
  3. In your "Targets", select llama (the icon should be yellow for a framework). Then add the llama.cpp package dependency under General > Frameworks and Libraries.
  4. Remove the llama.cpp package dependency from your original target (it's now included through the framework).

Repeat the steps for whisper:
In step 2, set the product name to whisper for whisper.cpp.
In step 3, add whisper.cpp as the package dependency.
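
With both framework targets in place, the app code can import the two modules side by side. A minimal smoke-test sketch (the module names match the framework product names from step 2; whisper_print_system_info / llama_print_system_info come from the upstream C headers, so the exact imported Swift signatures depend on the revisions you pin):

```swift
import Foundation
import llama    // framework product named "llama" (step 2)
import whisper  // framework product named "whisper"

// Prints each backend's system info to confirm both C APIs link and load.
func printBackendInfo() {
    if let info = whisper_print_system_info() {
        print("whisper:", String(cString: info))
    }
    if let info = llama_print_system_info() {
        print("llama:", String(cString: info))
    }
}
```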

@ggerganov do you think this info would be useful to include somewhere?

1-ashraful-islam referenced this issue in ggerganov/llama.cpp Mar 6, 2024
* Revert "swift : update Package.swift to use ggml as dependency (#4691)"

This reverts commit ece9a45.

* spm : add ggml headers

RoryMB (Author) commented Mar 6, 2024

Wow, thanks! This solved the duplicate symbol errors for me. Being new to the whole Apple/Swift landscape, I don't think I would have figured this solution out any time soon.

ggerganov (Owner) commented

Hm, great! I haven't tried it, but since it seems to work for @RoryMB, this might be the way to do it. We can add a link to your comment in all relevant examples in llama.cpp and whisper.cpp.

1-ashraful-islam (Contributor) commented

One thing to note here: it seems like GGMLMetalClass is selected from either whisper.framework or llama.framework randomly. This is what I get when I run the application target:

objc[40428]: Class GGMLMetalClass is implemented in both /Users/.../whisper.framework/Versions/A/whisper (0x1020083c0) and /Users/.../llama.framework/Versions/A/llama (0x101e143c0). One of the two will be used. Which one is undefined.

So far this hasn't been an issue for either transcription or LLM use. If I run into any issues in the future, I will add notes here.
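
If anyone wants to check which copy won at runtime, a quick diagnostic (just a sketch, not part of the fix) is to ask the Objective-C runtime where the class was loaded from:

```swift
import Foundation

// Prints the path of the binary that provided GGMLMetalClass at runtime.
if let cls = NSClassFromString("GGMLMetalClass") {
    print(Bundle(for: cls).bundleURL.path)
}
```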

gavin1818 (Contributor) commented

@1-ashraful-islam:
Thank you for the instructions on importing whisper and llama into a project. While I was able to import them successfully, I encountered issues when trying to run both models simultaneously. whisper operates as expected, but llama does not produce any response. Did you experience a similar issue? Any idea why that could happen? Thank you.

kikaitachi commented

I have the same problem. A project compiled with CMake against only the llama.cpp dependency works perfectly, and so does a project with only the whisper.cpp dependency. But when compiled with both dependencies simultaneously, the LLM functionality breaks at runtime. Different models produce different errors; for example, loading Meta-Llama-3.1-8B-Instruct-IQ4_XS.gguf fails with:

llama_model_load: error loading model: invalid model: tensor '' is duplicated

running exactly the same code, just after adding whisper.cpp as a library. With another model I get:

terminate called after throwing an instance of 'std::out_of_range'
