Docker Images exit with exitcode 132 on AMD systems #940

Closed
LLukas22 opened this issue Apr 13, 2023 · 0 comments
The llama.cpp:light Docker image exits with exit code 132 when loading the model on both of my AMD-based systems, hinting at a missing CPU instruction. If I run the container on an Intel-based system I own, it works as expected.

Command used: docker run -v [MODELPATH]:/models ghcr.io/ggerganov/llama.cpp:light -m /models/ggjt-model.bin -p "Building a website can be done in 10 simple steps:" -n 512

Output:

2023-04-13 09:56:26 main: seed = 1681372586
2023-04-13 09:56:26 llama.cpp: loading model from /models/ggjt-model.bin
EXITED (132)
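For context, an exit status above 128 means the process was killed by a signal, where the signal number is the status minus 128. A quick sketch of decoding the status above (plain shell arithmetic, nothing llama.cpp-specific):

```shell
# Exit statuses above 128 encode "killed by signal (status - 128)".
status=132
sig=$((status - 128))   # 132 - 128 = 4
echo "signal $sig"      # signal 4 is SIGILL (illegal instruction)
kill -l "$sig"          # prints the signal name for confirmation
```

SIGILL here means the binary executed a CPU instruction the host does not support, which matches the "missing CPU instruction" suspicion.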

I'm using relatively new hardware, so AVX and AVX2 support shouldn't be a problem (Ryzen 7 3700X & Ryzen 7 5700U).
If I build the images locally, they run as expected, without the instruction-set error.
I also tried playing around with the QEMU settings in the Docker build process, as mentioned in abetlen/llama-cpp-python#70, but had no success.
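To rule out the host CPU as the cause, one can list the SIMD flags it advertises and compare them against what the image was compiled for. A sketch for Linux; the specific flag names (`avx`, `avx2`, `fma`, `f16c`) are an assumption about what llama.cpp's default build targets, not taken from this issue:

```shell
# Print the SIMD-related flags the host CPU advertises. If a flag the
# binary was compiled with (e.g. avx2, fma, f16c) is missing from this
# list, running it ends in SIGILL (exit code 132).
grep -m1 '^flags' /proc/cpuinfo \
  | tr ' ' '\n' \
  | grep -E -x 'avx|avx2|fma|f16c' \
  | sort -u
```

Both Ryzen CPUs named above do support AVX2, so if these flags all show up on the affected hosts, the problem is more likely in how the published image was built (e.g. under QEMU emulation, as the linked issue suggests) than in the hardware itself.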

Deadsg pushed a commit to Deadsg/llama.cpp that referenced this issue Dec 19, 2023
jeroen-mostert pushed a commit to jeroen-mostert/llama.cpp that referenced this issue Aug 30, 2024
LCPP: the default is set to 4, which is a bit too much in my opinion. Setting it to 2 saves VRAM (0.5–1%?), some compute, and some electricity, at the expense of some potential performance (prompt processing?) that I do not notice in usage. 2 is thus my own setting.