
Bug: Vulkan backend fails to run basic test on Adreno 690 #9452

Closed
liangzelang opened this issue Sep 12, 2024 · 1 comment
Labels: bug-unconfirmed, critical severity (used to report critical severity bugs in llama.cpp, e.g. crashing, corruption, data loss), stale

Comments

@liangzelang

What happened?

I compiled llama.cpp with the Vulkan backend and pushed libggml.so/libllama.so to an Android device whose GPU is a Qualcomm Adreno 690.
However, executing the test fails with the error below:

./test-backend-ops perf -o GGML_ADD -b vulkan
ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: Adreno (TM) 690 (Qualcomm Technologies Inc. Adreno Vulkan Driver) | uma: 1 | fp16: 1 | warp size: 64
libc++abi: terminating due to uncaught exception of type vk::UnknownError: vk::Device::createComputePipeline: ErrorUnknown
Aborted
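For context, a typical Android cross-compile of the Vulkan backend roughly follows the steps below. This is a sketch only: the NDK path, API level, and option names are assumptions for this setup and may differ depending on the llama.cpp tag and toolchain in use.

```shell
# Sketch: cross-compiling llama.cpp with the Vulkan backend for Android (arm64).
# $ANDROID_NDK, the API level, and the CMake options are assumptions; adjust as needed.
cmake -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28 \
  -DGGML_VULKAN=ON
cmake --build build-android --config Release -j

# Push the test binary to the device and run it there:
adb push build-android/bin/test-backend-ops /data/local/tmp/
adb shell "cd /data/local/tmp && ./test-backend-ops perf -o GGML_ADD -b vulkan"
```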

Name and Version

./llama-cli --version
version: 3732 (8db003a)
built with Android (12285214, +pgo, +bolt, +lto, +mlgo, based on r522817b) clang version 18.0.2 (https://android.googlesource.com/toolchain/llvm-project d8003a456d14a3deb8054cdaa529ffbf02d9b262) for x86_64-unknown-linux-gnu

What operating system are you seeing the problem on?

No response

Relevant log output

./test-backend-ops perf -o GGML_ADD -b vulkan
ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: Adreno (TM) 690 (Qualcomm Technologies Inc. Adreno Vulkan Driver) | uma: 1 | fp16: 1 | warp size: 64
libc++abi: terminating due to uncaught exception of type vk::UnknownError: vk::Device::createComputePipeline: ErrorUnknown
Aborted
@liangzelang added the bug-unconfirmed and critical severity labels Sep 12, 2024
@github-actions bot added the stale label Oct 13, 2024
Contributor

This issue was closed because it has been inactive for 14 days since being marked as stale.
