Error: Missing field `nGpuLayers` #80
I don't think the MPT models work with llama.cpp at this point? ggml-org/llama.cpp#1333 I know there is ggml-js. Maybe there are others. I'm using Python and ctransformers to try out new ggml models. I have a boilerplate for it here: https://huggingface.co/spaces/matthoffner/ggml-ctransformers-fastapi
Can't speak to whether MPT models work, but to address that error message directly, this is the config I am using. Note the `nGpuLayers` field, which your config is missing. `nGpuLayers` can be set to 0 if you don't want to use cuBLAS or if you have not compiled with BLAS.
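The commenter's actual config didn't paste into the thread, so here is a minimal sketch of what a llama-node config including the field might look like. This is an assumption based on the library's documented first example, not the commenter's real config, and the model path and field names other than `nGpuLayers` are illustrative; verify them against the llama-node version you have installed.

```javascript
// Hypothetical config sketch (NOT the commenter's actual config).
// Field names besides nGpuLayers are assumptions taken from llama-node's
// documented first example; check them against your installed version.
const config = {
  modelPath: "./ggml-model-q4_0.bin", // illustrative path to a ggml model
  enableLogging: true,
  nCtx: 1024,
  seed: 0,
  f16Kv: false,
  logitsAll: false,
  vocabOnly: false,
  useMlock: false,
  embedding: false,
  useMmap: true,
  nGpuLayers: 0, // required field; 0 = CPU only, no cuBLAS offload
};
console.log(config.nGpuLayers); // 0
```

Omitting `nGpuLayers` from an object like this is exactly what produces the `Missing field \`nGpuLayers\`` error at load time.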
This is the first example in the documentation.
Sorry guys, I'm on vacation this week and haven't updated the example yet. `nGpuLayers` is for the CUDA build only; you will just need to pass 0 if you are not using CUDA.
Add documentation for the first example.
Thank you for your support, guys, it's working now.
Hello guys, I'm trying to run the mpt-7b model and I am getting this error. I appreciate any help; here are the details:
```
node_modules\llama-node\dist\llm\llama-cpp.cjs:82
      this.instance = yield import_llama_cpp.LLama.load(path, rest, enableLogging);
                                                         ^

Error: Missing field `nGpuLayers`
    at LLamaCpp.<anonymous> (<path>\node_modules\llama-node\dist\llm\llama-cpp.cjs:82:52)
    at Generator.next (<anonymous>)
    at <path>\node_modules\llama-node\dist\llm\llama-cpp.cjs:50:61
    at new Promise (<anonymous>)
    at __async (<path>\node_modules\llama-node\dist\llm\llama-cpp.cjs:34:10)
    at LLamaCpp.load (<path>\node_modules\llama-node\dist\llm\llama-cpp.cjs:80:12)
    at LLM.load (<path>\node_modules\llama-node\dist\index.cjs:52:21)
    at run (file:///<path>/index.mjs:27:17)
    at file:///<path>/index.mjs:42:1
    at ModuleJob.run (node:internal/modules/esm/module_job:193:25) {
  code: 'InvalidArg'
}

Node.js v19.5.0
```
Folder structure:

```
index.mjs
```
Thank you for your time.