
Connect langchainjs to a local llm installed in Jan AI (llama) #7540

Open
getzelus opened this issue Jan 17, 2025 · 6 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@getzelus

Discussed in #7539

Originally posted by getzelus January 17, 2025
Hello guys,
I'm trying to use langchainjs with a local LLM (llama3.2-1b-instruct) that I installed in Jan AI and whose server I started.

I can make a simple call without langchainjs and it works.
But when I try with langchainjs, I get an error.
It's difficult to find the solution in the documentation, forums, or ChatGPT.

Here is the code that works without langchainjs:

import axios from 'axios';

const url = 'http://127.0.0.1:1337/v1/chat/completions';
const model = 'llama3.2-1b-instruct';

async function query(system, prompt) {
    try {
        const response = await axios.post(
            url,
            {
                model: model,
                messages: [
                    { role: 'system', content: system },
                    { role: 'user', content: prompt }
                ],
            },
            {
                headers: {
                    'Content-Type': 'application/json',
                },
            }
        );
        const res = response.data.choices[0].message.content;
        return res;
    } catch (error) {
        console.error('Error querying Cortex:', error.response ? error.response.data : error.message);
    }
}

async function start() {
    let res = await query('your an assistant', 'whats the capital of Japan ?')
    console.log(res)
}

start()

Here is the langchainjs code that doesn't work:

import { ChatOllama } from "@langchain/ollama";

const llm = new ChatOllama({
    model: "llama3.2-1b-instruct",
    baseUrl: "http://127.0.0.1:1337/v1/chat/completions",
    temperature: 0,
});

try {
    const res = await llm.invoke("Whats the capital of Japan");
    console.log(res);
} catch (error) {
    console.error("Error invoking LLM:", error);
}

I tried many options, like changing the URL, the name of the model, using the ChatOpenAI package, etc.

The error I got:

Getting text from response
Error invoking LLM: ResponseError: <html>
<head><title>404 Not Found</title></head>
<body bgcolor="white" text="black">
<center><h1>404 Not Found</h1></center>
<hr><center>drogon/1.9.7</center>
</body>
</html>

Thank you for your attention

System Info

Windows 11
Node 20.14
"@langchain/core": "^0.3.30",
"@langchain/ollama": "^0.1.4",
"@langchain/openai": "^0.3.17",
"axios": "^1.7.9",
"dotenv": "^16.4.7",
"langchain": "^0.3.11"


dosubot bot commented Jan 17, 2025

I found a similar discussion regarding the issue of connecting LangChainJS to a local LLM installed in Jan AI (llama) [1]. The user in that discussion also encountered a 404 error when attempting to use LangChainJS with a local LLM, despite being able to make successful API calls using Axios.

To address the 404 error when using the ChatOllama class in LangChainJS, consider the following:

  1. Base URL Configuration: Ensure that the baseUrl in your ChatOllama configuration does not include the specific endpoint path (/v1/chat/completions). The ChatOllama class constructs its own endpoint URL, so the baseUrl should typically be just the server address, like http://127.0.0.1:1337.

  2. Endpoint Path: The ChatOllama class uses a default endpoint path, which might not match the one your server expects. Verify the expected endpoint path for your LLM server and adjust the baseUrl accordingly, or modify the LangChainJS source if necessary to match your server's endpoint.

  3. Error Handling: The ChatOllama class logs a warning and attempts to use a legacy API version if a 404 error is encountered. Ensure your server is compatible with the expected API version or adjust the LangChainJS implementation to handle your server's API correctly [2].

  4. Headers and Options: Double-check any headers or options that might be required by your server but are missing in the LangChainJS request. You can specify custom headers in the ChatOllama configuration if needed [3].

By ensuring the baseUrl and endpoint paths are correctly configured and checking for any required headers, you should be able to resolve the 404 error.
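For illustration, a minimal sketch of the configuration Dosu is suggesting (baseUrl reduced to the server address, with no endpoint path). Note that, as the comments below explain, ChatOllama still appends Ollama's own /api/chat path to this base, so pointing it at Jan AI's server continues to return 404:

import { ChatOllama } from "@langchain/ollama";

// baseUrl is only the server address; ChatOllama builds the /api/chat path itself
const llm = new ChatOllama({
    model: "llama3.2-1b-instruct",
    baseUrl: "http://127.0.0.1:1337",
    temperature: 0,
});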

To continue talking to Dosu, mention @dosu.



@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jan 17, 2025
@getzelus
Author

@dosu

As mentioned, I tried other URLs and it doesn't work:

Error invoking LLM: ResponseError: Not Found
    at checkOk (file:///C:/dev/ai/myagent/node_modules/.pnpm/ollama@0.5.12/node_modules/ollama/dist/browser.mjs:70:9)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async post (file:///C:/dev/ai/myagent/node_modules/.pnpm/ollama@0.5.12/node_modules/ollama/dist/browser.mjs:116:3)
    at async Ollama.processStreamableRequest 
}


dosubot bot commented Jan 17, 2025

To resolve the "ResponseError: Not Found" error when using the ChatOllama class, ensure that your baseUrl is correctly formatted. The baseUrl should not include specific endpoint paths like /v1/chat/completions. Instead, it should be just the server address, such as http://127.0.0.1:1337. The ChatOllama class constructs its own endpoint URL, so including the endpoint path in the baseUrl can lead to errors.

Additionally, if you are running the service locally, make sure to use http://127.0.0.1: instead of http://localhost: due to known issues with Node 18 resolving "localhost" [1].

Verify that the server is running and accessible at the specified base URL, and ensure that the expected endpoint paths are correctly set up on your server.

To continue talking to Dosu, mention @dosu.



@ucev
Contributor

ucev commented Jan 18, 2025

Ollama's REST API for chat is http://localhost:11434/api/chat, while Jan AI's chat API is http://127.0.0.1:1337/v1/chat/completions. They are not the same. Setting baseUrl will not help: at runtime ChatOllama calls the chat function, which calls processStreamableRequest with chat as the endpoint, and processStreamableRequest joins the baseUrl and the endpoint to build the final URL. So the URL your config generates is http://127.0.0.1:1337/v1/chat/completions/api/chat, which returns a 404.

I think running the local LLM server with Ollama instead will help.
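For reference, a minimal sketch of that Ollama route, assuming Ollama is installed, the model has been pulled (for example with ollama pull llama3.2:1b), and the server is running on its default port 11434:

import { ChatOllama } from "@langchain/ollama";

// Ollama's default local server; ChatOllama appends /api/chat itself
const llm = new ChatOllama({
    model: "llama3.2:1b",
    baseUrl: "http://127.0.0.1:11434",
    temperature: 0,
});

const res = await llm.invoke("What's the capital of Japan?");
console.log(res.content);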

@getzelus
Author

getzelus commented Jan 18, 2025

@ucev Oh, I see. Thank you for your answer.
So there is no way to connect to a local LLM installed via GPT4All, LM Studio, or Jan AI?

Or I could create a wrapper that calls the LLM with the arguments and then chain afterwards...

@ucev
Contributor

ucev commented Jan 19, 2025

@getzelus Yeah, I think you can contribute your own LLM wrapper. Most chat models need an apiKey or token when initialized, which is not needed for local LLMs.
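Alternatively, since Jan AI exposes an OpenAI-compatible API under /v1 (as the working axios call above shows), ChatOpenAI can usually be pointed at it directly instead of writing a custom wrapper. A rough sketch, assuming @langchain/openai 0.3.x and that Jan AI's local server ignores the API key:

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
    model: "llama3.2-1b-instruct",
    temperature: 0,
    apiKey: "not-needed", // placeholder; assumed to be ignored by the local server
    configuration: {
        // base URL only, without /chat/completions; the client appends that path
        baseURL: "http://127.0.0.1:1337/v1",
    },
});

const res = await llm.invoke("What's the capital of Japan?");
console.log(res.content);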
