
Remove #if arch(arm) check in Swift Package Manager #1561

Merged
1 commit merged into ggerganov:master on Dec 5, 2023

Conversation

finnvoor
Copy link
Contributor

I assume the `#if arch(arm) || arch(arm64)` check is meant to prevent older Intel Macs from using Metal; however, checking this here doesn't work as intended. The check is evaluated on the machine compiling the package, and it is very likely that the machine compiling the package is not the machine the compiled app will run on.

Current Behaviour:

  • iOS app compiled on an Apple Silicon Mac == uses Metal
  • iOS app compiled on an Intel Mac == won't use Metal
  • macOS universal app compiled on an Apple Silicon Mac == both Intel and Apple Silicon builds use Metal
  • macOS universal app compiled on an Intel Mac == neither Intel nor Apple Silicon builds use Metal

This is especially problematic for me because Xcode Cloud builds currently run on Intel machines, so Metal is not used in the built apps.
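The behaviour above comes from the package manifest evaluating the architecture check at build time. A minimal sketch of the pattern in question (the exact setting names and defines here are assumptions for illustration, not the actual `Package.swift`):

```swift
// Package.swift (sketch of the pre-PR pattern)
// swift-tools-version:5.5
import PackageDescription

// #if arch(...) is resolved on the machine *compiling* the package,
// not on the machine the compiled app will eventually run on.
#if arch(arm) || arch(arm64)
// Build host is ARM: enable the Metal code path (define name is illustrative).
let extraCSettings: [CSetting] = [.define("GGML_USE_METAL")]
#else
// Build host is Intel: Metal is silently compiled out,
// even for app targets that will run on Apple Silicon.
let extraCSettings: [CSetting] = []
#endif
```

Removing the `#if` and keeping only the unconditional settings is what makes the build independent of the compiling architecture.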

This PR enables Metal regardless of the compiling architecture. When running a universal app on an Intel Mac, there is still the issue that Metal will be enabled when it probably shouldn't be (maybe a runtime check should be added?). However, I think it is much more likely that you will want Metal enabled when using SPM: all iOS devices are Apple Silicon, and more and more Macs are as well.
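One possible shape for the runtime check mentioned above (a sketch, not part of this PR): `MTLCreateSystemDefaultDevice()` returns `nil` when no usable Metal device exists, so an app could fall back to the CPU path on Macs where Metal shouldn't be used.

```swift
import Metal

// Sketch of a runtime fallback: query for an actual Metal device
// instead of relying solely on a compile-time arch check.
// MTLCreateSystemDefaultDevice() returns nil on machines without
// a usable Metal GPU.
let useMetal = MTLCreateSystemDefaultDevice() != nil
```

This decides on the machine actually running the app, which is the distinction the compile-time `#if arch(...)` check cannot make.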

@bobqianic
Copy link
Collaborator

@sindresorhus

@sindresorhus
Copy link
Contributor

Should be done in https://github.com/ggerganov/llama.cpp/blob/master/Package.swift too.

// @kchro3 @jhen0409

@bobqianic bobqianic merged commit f0efd02 into ggerganov:master Dec 5, 2023
37 checks passed
@kchro3
Copy link

kchro3 commented Dec 5, 2023

Opened a PR for llama.cpp

landtanin pushed a commit to landtanin/whisper.cpp that referenced this pull request Dec 16, 2023
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Dec 20, 2023
* origin/master:
  bench.py : add different large models (ggerganov#1655)
  wchess : update README.md
  release : v1.5.2
  wchess : update readme
  wchess : whisper assisted chess (ggerganov#1595)
  sync : ggml (Metal fixes, new ops, tests) (ggerganov#1633)
  cmake : target windows 8 or above for prefetchVirtualMemory in llama-talk (ggerganov#1617)
  cmake : Fix bug in httplib.h for mingw (ggerganov#1615)
  metal : fix `ggml_metal_log` vargs (ggerganov#1606)
  whisper.objc : disable timestamps for real-time transcription
  whisper : more debug messages + fix fallback logic
  metal : fix soft_max kernel src1 argument (ggerganov#1602)
  sync : ggml (new ops, new backend, etc) (ggerganov#1602)
  server : pass max-len argument to the server (ggerganov#1574)
  ios : Remove `#if arch(arm)` check for using Metal (ggerganov#1561)
  ggml : Fix 32-bit compiler warning (ggerganov#1575)
  ggml : re-enable blas for src0 != F32 (ggerganov#1583)
iThalay pushed a commit to iThalay/whisper.cpp that referenced this pull request Sep 23, 2024