talk-llama : fix n_gpu_layers usage again (#1442)
jhen0409 authored Nov 7, 2023
1 parent 0c91aef commit 75dc800
Showing 1 changed file with 1 addition and 1 deletion.
examples/talk-llama/talk-llama.cpp (1 addition, 1 deletion):

@@ -267,7 +267,7 @@ int main(int argc, char ** argv) {

     auto lmparams = llama_model_default_params();
     if (!params.use_gpu) {
-        lcparams.n_gpu_layers = 0;
+        lmparams.n_gpu_layers = 0;
     }

     struct llama_model * model_llama = llama_load_model_from_file(params.model_llama.c_str(), lmparams);
