
LlamaCpp throwing "TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined." #7024

Closed
5 tasks done
PeterTucker opened this issue Oct 18, 2024 · 10 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@PeterTucker

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { LlamaCpp } from "@langchain/community/llms/llama_cpp";
import fs from "fs";

let llamaPath = "../project/data/llm-models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf";

const question = "Where do Llamas come from?";


if (fs.existsSync(llamaPath)) {
  console.log(`Model found at ${llamaPath}`);

  const model = new LlamaCpp({ modelPath: llamaPath });

  console.log(`You: ${question}`);
  const response = await model.invoke(question);
  console.log(`AI : ${response}`);
} else {
  console.error(`Model not found at ${llamaPath}`);
}

Error Message and Stack Trace (if applicable)

TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined.
    at new LlamaModel (file:///C:/Users/User/Project/langchain-test/node_modules/node-llama-cpp/dist/evaluator/LlamaModel/LlamaModel.js:42:144)
    at createLlamaModel (file:///C:/Users/User/Project/langchain-test/node_modules/@langchain/community/dist/utils/llama_cpp.js:13:12)
    at new LlamaCpp (file:///C:/Users/User/Project/langchain-test/node_modules/@langchain/community/dist/llms/llama_cpp.js:87:23)
    at file:///C:/Users/User/Project/langchain-test/src/server.js:15:17
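A note on reading this trace (an inference; the thread only confirms it in the final comment): node-llama-cpp v3 creates models through an async getLlama() factory, while v2 exposed a LlamaModel constructor you call directly. The destructuring failure at LlamaModel.js:42 is consistent with v2-style direct construction (which is what createLlamaModel in @langchain/community does, per the trace) running against v3 internals, where the factory normally supplies that internal handle. A sketch of the two API shapes, using node-llama-cpp's documented public API and a hypothetical model path:

import { getLlama } from "node-llama-cpp";

// v3: models come from an async factory. Internally the factory passes
// its runtime handle to LlamaModel's constructor, which is the _llama
// the error message complains about.
const llama = await getLlama();
const model = await llama.loadModel({ modelPath: "./some-model.gguf" });

// v2 style (what the wrapper in the trace still does): construct the
// model directly. Against v3, the internal argument is never supplied,
// so the constructor's destructure throws exactly this TypeError.
//
//   import { LlamaModel } from "node-llama-cpp";
//   const model = new LlamaModel({ modelPath: "./some-model.gguf" });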

Description

Trying to load a model and chat with it. Throws this error:

TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined.

System Info


langchain@0.3.2 | MIT | deps: 12 | versions: 297
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/

keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.3.2.tgz
.shasum: aec3e679d3d6c36f469448380affa475c92fbd86
.integrity: sha512-kd2kz1cS/PIVrLEDFlrZsAasQfPLbY1UqCZbRKa3/QcpB33/n6xPDvXSMfBuKhvNj0bjW6MXDR9HZTduXjJBgg==
.unpackedSize: 2.9 MB

dependencies:
@langchain/openai: >=0.1.0 <0.4.0        jsonpointer: ^5.0.1                      uuid: ^10.0.0
@langchain/textsplitters: >=0.0.0 <0.2.0 langsmith: ^0.1.56-rc.1                  yaml: ^2.2.1
js-tiktoken: ^1.0.12                     openapi-types: ^12.1.3                   zod-to-json-schema: ^3.22.3
js-yaml: ^4.1.0                          p-retry: 4                               zod: ^3.22.4

maintainers:
- nfcampos <nuno@boringbits.io>
- jacoblee93 <jacoblee93@gmail.com>
- andrewnguonly <andrewnguonly@gmail.com>
- davidduong <david@duong.cz>
- hwchase17 <hw.chase.17@gmail.com>
- basproul <braceasproul@gmail.com>

dist-tags:
latest: 0.3.2                              next: 0.3.2-rc.0                           tag-for-publishing-older-releases: 0.2.20

dosubot bot commented Oct 18, 2024

I found a similar open issue that might be related to your problem: Error encountered when using LlamaCpp class. The issue suggests that the problem could be related to the file format of the model being used. The example in the discussion uses a .bin file, while your code uses a .gguf file. It is recommended to try converting the model to a .bin file format, as the node-llama-cpp library might have compatibility issues with .gguf files [1].


@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Oct 18, 2024
@PeterTucker
Author

@dosu aren't you using node-llama-cpp under the hood? These same models work with that library.


dosubot bot commented Oct 18, 2024

Yes, LangChain.js uses the node-llama-cpp library under the hood, specifically in the LlamaCpp class. If the same models work with node-llama-cpp but not with LangChain.js, it might be due to differences in how the models are initialized or invoked within the LangChain.js framework. Ensure that all required parameters are correctly configured and that the model binary is compatible with the expected input and output formats in LangChain.js. Also, verify that you have the node-llama-cpp module installed with a minimum version of 2.0.0 and a locally built version of Llama2, as these are prerequisites for using the LlamaCpp class [1][2].


@PeterTucker
Author

@dosu how would I initialize a .gguf file as opposed to a .bin file?


dosubot bot commented Oct 19, 2024

The LangChain.js framework supports loading models from .gguf files using the LlamaCpp class, similar to .bin files. You can initialize a model with a .gguf file by specifying the modelPath parameter with the path to your .gguf file. Here's an example:

import { LlamaCpp } from "@langchain/community/llms/llama_cpp";

const llamaPath = "/Replace/with/path/to/your/model/gguf-llama2-q4_0.gguf";
const question = "Where do Llamas come from?";

const model = new LlamaCpp({ modelPath: llamaPath });

console.log(`You: ${question}`);
const response = await model.invoke(question);
console.log(`AI : ${response}`);

Ensure that the node-llama-cpp module is installed with a minimum version of 2.0.0, as it is required for using the LlamaCpp class with .gguf files [1][2].


@PeterTucker
Author

@dosu still throwing the error:

TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined.


dosubot bot commented Oct 19, 2024

The error you're encountering, "TypeError: Cannot destructure property '_llama' of 'undefined' as it is undefined," suggests that there might be an issue with how the _llama property is being initialized or accessed within the LlamaCpp class. This could be due to a few reasons:

  1. Model Path: Ensure that the modelPath is correctly set and points to a valid .gguf file. Double-check the path and file permissions.

  2. Environment Variable: Make sure the LLAMA_PATH environment variable is set correctly if you're using it to specify the model path.

  3. Dependencies: Verify that you have the node-llama-cpp module installed with a minimum version of 2.0.0, as this is required for the LlamaCpp class to function properly [1][2].

  4. Initialization: Check if there are any additional initialization steps or parameters required by the LlamaCpp class that might not be documented. Sometimes, optional parameters can affect the initialization process.

If these steps don't resolve the issue, it might be helpful to look into the specific implementation of the LlamaCpp class to see how the _llama property is being set and if there are any conditions that could lead to it being undefined.


@PeterTucker
Author

@dosu

  1. Path is fine.
  2. Not using LLAMA_PATH.
  3. Dependencies in package.json:
    "dependencies": {
        "@langchain/community": "^0.3.6",
        "@langchain/core": "^0.3.13",
        "node-llama-cpp": "^3.1.1"
    },
  4. I'm using a direct example from your documentation.


dosubot bot commented Oct 19, 2024

Given that you're using the correct dependencies and a direct example from the documentation, the issue might be related to how the LlamaCpp class interacts with the node-llama-cpp library. Here are a few additional steps you can try:

  1. Check for Updates: Ensure that all your packages are up to date. Sometimes, issues are resolved in newer versions.

  2. Verify Model Compatibility: Double-check that the .gguf file is compatible with the node-llama-cpp library. There might be specific requirements or limitations with certain model files.

  3. Debugging: Add some logging or debugging statements in your code to see if the model is being loaded correctly and if the _llama property is being set at any point (a minimal isolation sketch follows below this list).

  4. Alternative Initialization: Try initializing the model with minimal parameters to see if the issue persists. Sometimes, reducing complexity can help identify the root cause.

If these steps don't resolve the issue, it might be necessary to look deeper into the node-llama-cpp library's handling of .gguf files or consult any available documentation or community discussions specific to that library.
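
For step 3, here is a minimal isolation sketch. It is an assumption-laden example: it reuses the model path from the original report and node-llama-cpp v3's documented getLlama() API, since v3 is what the reporter has installed. If the direct load succeeds while the wrapper still throws, the model file is fine and the failure sits in the LangChain layer:

import { getLlama } from "node-llama-cpp";
import { LlamaCpp } from "@langchain/community/llms/llama_cpp";

const modelPath = "../project/data/llm-models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf";

// 1. Can node-llama-cpp itself load the file?
try {
  const llama = await getLlama();
  await llama.loadModel({ modelPath });
  console.log("direct node-llama-cpp load: OK");
} catch (err) {
  console.error("direct node-llama-cpp load failed:", err);
}

// 2. Does the LangChain wrapper fail at construction or at invoke?
try {
  const model = new LlamaCpp({ modelPath });
  console.log("wrapper OK:", await model.invoke("ping"));
} catch (err) {
  console.error("wrapper failed:", err);
}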


@PeterTucker
Author

PeterTucker commented Oct 19, 2024

Answer from a LangChain dev: "use version 2, not 3". The working package.json:

    "dependencies": {
        "node-llama-cpp": "^2"
    },
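
That pin lines up with the constructor-mismatch reading of the stack trace above: the 2.x line of node-llama-cpp still constructs models synchronously, which is what this version of the LlamaCpp wrapper expects, so after reinstalling (npm ls node-llama-cpp should now report 2.x) the original repro runs unchanged. For reference, a sketch of driving node-llama-cpp 2.x directly, using the class names from its v2 documentation:

import { LlamaModel, LlamaContext, LlamaChatSession } from "node-llama-cpp";

// Synchronous v2-style construction, the pattern the wrapper relies on.
const model = new LlamaModel({
  modelPath: "../project/data/llm-models/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
});
const context = new LlamaContext({ model });
const session = new LlamaChatSession({ context });

console.log(await session.prompt("Where do Llamas come from?"));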

