-
the argument is
-
CUDA: For Jetson users, if you have a Jetson Orin, you can try this: Official Support. If you are using an older model (Nano/TX2), some additional steps are needed before compiling. Using make: `make GGML_CUDA=1`. Using CMake: `cmake -B build -DGGML_CUDA=ON`.
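For reference, a minimal sketch of the two build paths mentioned above. The `CMAKE_CUDA_ARCHITECTURES` value is an assumption about the "additional operations" older boards need (53 targets the Nano's Maxwell GPU, 62 the TX2's Pascal GPU); it is not spelled out in the comment.

```bash
# Legacy Makefile build with CUDA enabled:
make GGML_CUDA=1

# CMake build; restricting the target architecture is an assumption for
# older Jetson boards (53 = Nano, 62 = TX2):
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=53
cmake --build build --config Release
```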
-
I can complete compilation with worse hardware, so it can't be a hardware issue.
-
My previous July build of llama.cpp compiled successfully. The system environment is Windows 10 + RTX 3080 (10 GB), and compilation completed normally there. This time I wanted to update to the latest version of llama.cpp, but the build fails with the following error:
llama.cpp-b4095\ggml\src\ggml-cuda\common.cuh(392): catastrophic error : out of memory
I did not change any compilation parameters; I only set GGML_CUDA=1. Does the latest version have such high hardware requirements when compiling? What hardware environment does the new version require? Also, can I reduce the video memory requirement at compile time by changing the configuration?
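Not an authoritative answer, but a sketch of a lower-memory build invocation, assuming the "out of memory" comes from the compiler on the host rather than from the GPU: targeting only the RTX 3080's architecture (compute capability 8.6) and limiting parallel jobs should reduce peak memory while the CUDA kernels compile.

```bash
# Sketch (assumption): lower compile-time memory by building a single GPU
# architecture (86 = RTX 3080) and using one parallel job.
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config Release -j 1
```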