musa: refine compute capability #12493


Merged (2 commits) on Mar 22, 2025

Conversation

yeahdongcn
Contributor


This PR improves the handling of compute capabilities for MUSA devices with the following updates:

  1. Adjusted compute capability offset
    • GGML_CUDA_CC_OFFSET_MTHREADS is now positioned between the NVIDIA and AMD ranges.
  2. Boundary check update
    • Updated the boundary checks in NVIDIA compute-capability tests to use !GGML_CUDA_CC_IS_MTHREADS(cc), so MUSA devices no longer fall into NVIDIA-only code paths.
  3. Preserved feature-availability checks
    • Ensured that the NVIDIA and AMD feature-availability checks remain unchanged.

Testing Done

  • ./build/bin/test-backend-ops
  • ./build/bin/llama-cli -m ~/models/deepseek-r1_7b_q4_0.gguf -ngl 999
# ./build/bin/test-backend-ops
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 MUSA devices:
  Device 0: MTT S80, compute capability 2.1, VMM: yes
Testing 2 devices

Backend 1/2: MUSA0
  Device description: MTT S80
  Device memory: 16297 MB (16292 MB free)

  ABS(type=f16,ne_a=[128,2,2,2],v=0): OK
  ...
  4634/4634 tests passed
  Backend MUSA0: OK

Backend 2/2: CPU
  Skipping CPU backend
2/2 backends passed
OK


Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
@github-actions github-actions bot added Nvidia GPU Issues specific to Nvidia GPUs ggml changes relating to the ggml tensor library for machine learning labels Mar 21, 2025
@yeahdongcn
Contributor Author

This is the initial PR. @fishingfly and I will collaborate to evaluate the features on MTT S80 and MTT S4000.

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
@yeahdongcn
Contributor Author

I re-ran the tests, and all passed.

@yeahdongcn
Contributor Author

Hi @JohannesGaessler, do you know how to retrigger the CI without pushing or force-pushing? Thanks.

@JohannesGaessler
Collaborator

There's a button you can press as a collaborator.

@JohannesGaessler JohannesGaessler merged commit fac63a3 into ggml-org:master Mar 22, 2025
89 of 90 checks passed
Ivy233 pushed a commit to Ivy233/llama.cpp that referenced this pull request Mar 23, 2025

* musa: refine compute capability

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Address review comments

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
3 participants