The llama.cpp:light Docker image exits with exit code 132 when loading the model on both of my AMD-based systems, hinting at a missing CPU instruction. If I run the container on an Intel-based system I own, it works as expected.
Command used: docker run -v [MODELPATH]:/models ghcr.io/ggerganov/llama.cpp:light -m /models/ggjt-model.bin -p "Building a website can be done in 10 simple steps:" -n 512
Output:
2023-04-13 09:56:26 main: seed = 1681372586
2023-04-13 09:56:26 llama.cpp: loading model from /models/ggjt-model.bin
EXITED (132)
I'm using relatively new hardware, so AVX and AVX2 support shouldn't be a problem (Ryzen 7 3700X & Ryzen 7 5700U).
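For what it's worth, exit code 132 is consistent with the missing-instruction theory: shells report 128 + N when a process dies from signal N, and 132 − 128 = 4 is SIGILL (illegal instruction). A quick sanity check of the exit-code arithmetic and of which AVX variants the host CPU actually advertises (Linux only):

```shell
# Exit code 132 = 128 + signal number; 4 is SIGILL (illegal instruction),
# which is what a binary compiled for unsupported instructions dies with.
echo $((132 - 128))   # prints 4

# List the AVX feature flags the host CPU reports:
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep '^avx' | sort -u
```

If the flags show up here but the prebuilt image still crashes, the binary inside the image was likely compiled for instructions beyond what the host supports (or for a different CPU baseline entirely).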
If I build the images locally, they run as expected, without the instruction-set error.
I also tried playing around with the QEMU settings in the Docker build process, as mentioned in abetlen/llama-cpp-python#70, but had no success.
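For anyone hitting the same thing, this is roughly the local-build workaround that worked for me. The Dockerfile path is an assumption based on the llama.cpp repo layout at the time (.devops/main.Dockerfile); adjust if it has moved:

```shell
# Build the image on the target machine so the binary is compiled
# for the host CPU's own instruction set:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
docker build -t local/llama.cpp:light -f .devops/main.Dockerfile .

# Then run it exactly like the prebuilt image:
docker run -v [MODELPATH]:/models local/llama.cpp:light \
  -m /models/ggjt-model.bin \
  -p "Building a website can be done in 10 simple steps:" -n 512
```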
The LCPP default is set to 4, which is a bit too much in my opinion.
Setting it to 2 saves some VRAM (0.5–1%?), some compute, and some electricity, at the expense of some potential performance (prompt processing?) that I don't notice in actual usage. So 2 is my own setting.