This repository has been archived by the owner on Sep 12, 2024. It is now read-only.

fatal runtime error: Rust cannot catch foreign exceptions #109

Open
HolmesDomain opened this issue Jul 26, 2023 · 5 comments

Comments

@HolmesDomain

index.js

import { infer } from "./services/chatService.js";

infer("Who is the current President of the United States");

chatService.js

import { LLM } from "llama-node";
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";

const model = "../llama-2-7b-chat/llama-2-7b-chat.ggmlv3.q5_1.bin";
const llama = new LLM(LLamaCpp);

const config = {
    modelPath: model,
    enableLogging: false,
    nCtx: 1024,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
    useMmap: true,
    nGpuLayers: 0
};

export async function infer(prompt) {
    await llama.load(config);

    await llama.createCompletion({
        prompt,
        nThreads: 4,
        nTokPredict: 200,
        topK: 40,
        topP: 0.1,
        temp: 0.1,
        repeatPenalty: 0.1,
    }, (response) => {
        process.stdout.write(response.token);
    });
}

Apple M2 Pro
Node v20.4.0
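
For context: "fatal runtime error: Rust cannot catch foreign exceptions" is the abort message Rust prints when a non-Rust exception (here, a C++ exception thrown inside llama.cpp) unwinds into the Rust code of llama-node's native binding; the process dies before any JavaScript try/catch can see an error. A common trigger is a model path that does not resolve or a model file the bundled llama.cpp cannot parse. A minimal pre-flight check sketch (resolveModel is a hypothetical helper, not part of llama-node; the path is the one from the report):

import { existsSync } from "node:fs";
import { resolve, dirname } from "node:path";
import { fileURLToPath } from "node:url";

// Resolve the model path against this file instead of process.cwd(), and fail
// early with an ordinary JS error rather than letting llama.cpp throw a C++
// exception across the Rust boundary.
function resolveModel(relativePath) {
    const here = dirname(fileURLToPath(import.meta.url));
    const absolute = resolve(here, relativePath);
    if (!existsSync(absolute)) {
        throw new Error(`Model file not found: ${absolute}`);
    }
    return absolute;
}

const model = resolveModel("../llama-2-7b-chat/llama-2-7b-chat.ggmlv3.q5_1.bin");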

@agrigory777

I have the same issue.
Apple M1 Pro
Node v16.15.0

@agrigory777

agrigory777 commented Aug 26, 2023

After converting the models as per the instructions, it now throws a different error:

llama.cpp: loading model from /Users/agrigory/ml/llama.cpp/models/7B/ggml-model-f16.gguf
error loading model: unknown (magic, version) combination: 46554747, 00000001; is this really a GGML file?
llama_init_from_file: failed to load model
Waiting for the debugger to disconnect...
node:internal/process/promises:279
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[Error: Failed to initialize LLama context from file: ../models/7B/ggml-model-f16.gguf] {
  code: 'GenericFailure'
}

Process finished with exit code 1
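
The magic value in that message is telling: 0x46554747 is the ASCII string "GGUF" read as a little-endian 32-bit integer, so the converter produced a GGUF file, while the llama.cpp build bundled in this version of llama-node only understands the older GGML-family formats. A quick way to see what a model file actually contains (a minimal sketch; the path is a placeholder):

import { openSync, readSync, closeSync } from "node:fs";

// Read the first four bytes of a model file. GGUF files start with the ASCII
// bytes "GGUF"; older llama.cpp formats start with "ggml", "ggmf" or "ggjt".
function modelMagic(path) {
    const fd = openSync(path, "r");
    const buf = Buffer.alloc(4);
    readSync(fd, buf, 0, 4, 0);
    closeSync(fd);
    return buf.toString("ascii");
}

console.log(modelMagic("./models/7B/ggml-model-f16.gguf")); // "GGUF" here means the file is too new for a GGML-only loader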

@nigel-daniels

I just tried running the example with a recently generated llama model and got the same result. The llama.cpp-based model I am using was built following:

python3 convert.py --outfile models/7B/ggml-model-f16.bin --outtype f16 ../../llama2/llama/llama-2-7b --vocab-dir ../../llama2/llama/llama-2-7b
./quantize  ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0

Config and completion params are as above.
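
If that convert.py came from a llama.cpp checkout after the GGUF migration (late August 2023), the output is a GGUF file even though it is named .bin, which would produce the same "is this really a GGML file?" failure as above. For reference, wiring the quantized output into the earlier config would look roughly like this (a sketch; the path assumes the llama.cpp working directory, and enableLogging is turned on so llama.cpp's own load messages are visible, as in the log above):

const config = {
    // Output of the quantize step above; adjust to its real location.
    modelPath: "./models/7B/ggml-model-q4_0.bin",
    enableLogging: true,
    nCtx: 1024,
    seed: 0,
    f16Kv: false,
    logitsAll: false,
    vocabOnly: false,
    useMlock: false,
    embedding: false,
    useMmap: true,
    nGpuLayers: 0
};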

@cupofjoakim

I also have the issue described above, using the model llama-2-7b-chat.ggmlv3.q4_0 and the basic TypeScript abortable example found here: https://github.com/Atome-FE/llama-node/blob/main/example/ts/llama-cpp/abortable.ts

Apple M2 Pro
Node v20.8.0

@Morty135

Morty135 commented Dec 6, 2023

I got the same error on Windows. After some more research I tried reinstalling Rust (https://www.rust-lang.org/tools/install), and that fixed the error for me.

Node v18.15.0
