--n-gpu-layers
1 parent 1a8c879 commit f3040be
README.md
```diff
@@ -279,7 +279,7 @@ In order to build llama.cpp you have three different options.
 On MacOS, Metal is enabled by default. Using Metal makes the computation run on the GPU.
 To disable the Metal build at compile time use the `LLAMA_NO_METAL=1` flag or the `LLAMA_METAL=OFF` cmake option.
 
-When built with Metal support, you can explicitly disable GPU inference with the `--gpu-layers|-ngl 0` command-line
+When built with Metal support, you can explicitly disable GPU inference with the `--n-gpu-layers|-ngl 0` command-line
 argument.
 
 ### MPI Build
```
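As a usage sketch of the README text this commit corrects: either disable Metal at build time, or keep the Metal build and offload zero layers with `--n-gpu-layers`/`-ngl 0`. The model path below is hypothetical.

```shell
# Option 1: disable the Metal build at compile time (per the README).
LLAMA_NO_METAL=1 make                                    # make build
cmake -B build -DLLAMA_METAL=OFF && cmake --build build  # cmake build

# Option 2: keep Metal support but force CPU-only inference by
# offloading 0 layers to the GPU (the flag this commit documents).
# models/7B/ggml-model-q4_0.gguf is a placeholder model path.
./main -m models/7B/ggml-model-q4_0.gguf -ngl 0 -p "Hello"
```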