
Custom 'apiVersion' and 'baseUrl' parameters not being respected #5482

Closed
5 tasks done
caiorg opened this issue May 20, 2024 · 6 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments


caiorg commented May 20, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The following code from langchain-nextjs-template, updated to use Google's Gemini 1.5 Pro, does not work as intended when querying the model and waiting for a response.

FILE: langchain-nextjs-template/app/api/chat/route.ts

import { NextRequest, NextResponse } from "next/server";
import { Message as VercelChatMessage, StreamingTextResponse } from "ai";

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

export const runtime = "edge";

const formatMessage = (message: VercelChatMessage) => {
  return `${message.role}: ${message.content}`;
};

const TEMPLATE = `You are a pirate named Patchy. All responses must be extremely verbose and in pirate dialect.

Current conversation:
{chat_history}

User: {input}
AI:`;

/**
 * This handler initializes and calls a simple chain with a prompt,
 * chat model, and output parser. See the docs for more information:
 *
 * https://js.langchain.com/docs/guides/expression_language/cookbook#prompttemplate--llm--outputparser
 */
export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
    const messages = body.messages ?? [];
    const formattedPreviousMessages = messages.slice(0, -1).map(formatMessage);
    const currentMessageContent = messages[messages.length - 1].content;
    const prompt = PromptTemplate.fromTemplate(TEMPLATE);

    /**
     * You can also try e.g.:
     *
     * import { ChatAnthropic } from "langchain/chat_models/anthropic";
     * const model = new ChatAnthropic({});
     *
     * See a full list of supported models at:
     * https://js.langchain.com/docs/modules/model_io/models/
     */
    // const model = new ChatOpenAI({
    //   temperature: 0.8,
    //   modelName: "gpt-3.5-turbo-1106",
    // });

    const model = new ChatGoogleGenerativeAI({
      model: "gemini-1.5-pro-latest",
      baseUrl: "https://generativelanguage.googleapis.com",
      apiVersion: "v1beta",
      apiKey: "MY_API_KEY",
    });
    
    /**
     * Chat models stream message chunks rather than bytes, so this
     * output parser handles serialization and byte-encoding.
     */
    const outputParser = new HttpResponseOutputParser();

    /**
     * Can also initialize as:
     *
     * import { RunnableSequence } from "@langchain/core/runnables";
     * const chain = RunnableSequence.from([prompt, model, outputParser]);
     */
    const chain = prompt.pipe(model).pipe(outputParser);

    const stream = await chain.stream({
      chat_history: formattedPreviousMessages.join("\n"),
      input: currentMessageContent,
    });

    return new StreamingTextResponse(stream);
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

Error Message and Stack Trace (if applicable)

After timing out, I'm presented with a toast containing the following API error response:

{
    "error": "[GoogleGenerativeAI Error]: Error fetching from https://generativeai.googleapis.com/v1/models/gemini-1.5-pro-latest:streamGenerateContent?alt=sse: [404 Not Found] "
}

Description

I'm trying to use the apiVersion and baseUrl parameters to change the request URL, but they are not taken into account when bootstrapping the client. This behaviour can be verified with a simple console.log("this", this) before and after the this.client variable is initialized.
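To illustrate the mismatch, here is a minimal sketch comparing the endpoint URL the configuration above should produce against the one reported in the 404 error. The buildEndpoint helper and the URL shape are assumptions for illustration only, not the library's actual internals:

```typescript
// Hypothetical helper showing how a streaming endpoint URL could be
// assembled from the configured baseUrl, apiVersion, and model name.
const buildEndpoint = (baseUrl: string, apiVersion: string, model: string): string =>
  `${baseUrl}/${apiVersion}/models/${model}:streamGenerateContent?alt=sse`;

// What the configuration in the example code should produce:
const expected = buildEndpoint(
  "https://generativelanguage.googleapis.com",
  "v1beta",
  "gemini-1.5-pro-latest",
);

// What the 404 error actually reports (default host and default version):
const actual =
  "https://generativeai.googleapis.com/v1/models/gemini-1.5-pro-latest:streamGenerateContent?alt=sse";

console.log(expected === actual); // false: the custom baseUrl and apiVersion were ignored
```

Both the host and the API version in the error differ from the configured values, which is consistent with the constructor dropping both fields.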

As seen at lines 280 and 281 of the @langchain/google-genai lib (not sure if other libs have the same issue):

apiVersion: this.apiVersion,

and

It looks like it would need to follow the same approach as at line 242 of the same file:

this.apiKey = fields?.apiKey ?? getEnvironmentVariable("GOOGLE_API_KEY");
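A minimal sketch of that nullish-coalescing pattern applied to the other two fields. The resolveConfig function, the interface, and the default constants here are illustrative assumptions, not the library's actual code:

```typescript
// Illustrative defaults; not the library's actual constants.
const DEFAULT_BASE_URL = "https://generativelanguage.googleapis.com";
const DEFAULT_API_VERSION = "v1";

interface GoogleGenerativeAIFields {
  apiKey?: string;
  baseUrl?: string;
  apiVersion?: string;
}

// Resolve each field from the constructor arguments, falling back to a
// default, mirroring the `fields?.apiKey ?? ...` line quoted above.
function resolveConfig(fields?: GoogleGenerativeAIFields) {
  return {
    // The real code also falls back to getEnvironmentVariable("GOOGLE_API_KEY").
    apiKey: fields?.apiKey,
    baseUrl: fields?.baseUrl ?? DEFAULT_BASE_URL,
    apiVersion: fields?.apiVersion ?? DEFAULT_API_VERSION,
  };
}

const cfg = resolveConfig({ apiVersion: "v1beta" });
console.log(cfg.apiVersion); // "v1beta", the custom value is respected
console.log(cfg.baseUrl); // the default, since no custom baseUrl was passed
```

With this shape, any field the caller omits falls back to a default, while any field the caller passes survives into the resolved config.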

System Info

❯ pnpm info langchain

langchain@0.2.0 | MIT | deps: 16 | versions: 271
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/

keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.2.0.tgz
.shasum: 555d84538962720cd7223f6c3ca4bd060978ebf3
.integrity: sha512-8c7Dg9OIPk4lFIQGyfOytXbUGLLSsxs9MV53cLODspkOGzaUpwy5FGBie30SrOxIEFJo+FDaJgpDAFO3Xi4NMw==
.unpackedSize: 3.4 MB

dependencies:
@langchain/core: ~0.2.0
@langchain/openai: ~0.0.28
@langchain/textsplitters: ~0.0.0
binary-extensions: ^2.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langchainhub: ~0.0.8
langsmith: ~0.1.7
ml-distance: ^4.0.0
openapi-types: ^12.1.3
p-retry: 4
uuid: ^9.0.0
yaml: ^2.2.1
zod-to-json-schema: ^3.22.3
zod: ^3.22.4

maintainers:
- nfcampos <nuno@boringbits.io>
- jacoblee93 <jacoblee93@gmail.com>
- andrewnguonly <andrewnguonly@gmail.com>
- davidduong <david@duong.cz>
- hwchase17 <hw.chase.17@gmail.com>
- basproul <braceasproul@gmail.com>

dist-tags:
latest: 0.2.0  next: 0.2.0    

published 3 days ago by basproul <braceasproul@gmail.com>

BTW... I tried using the latest stable version too.

EDIT 1:
Forgot to mention that when testing I was using a real Google API key, but omitted it here, replacing it with MY_API_KEY.


dosubot bot commented May 20, 2024

Hello @caiorg! 🚀 I'm Dosu, a helpful bot here to assist you with bugs, answer your questions, and guide you on contributing while we wait for a human maintainer. I'm diving into your issue and will be back with a full answer shortly! 🤖

@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label May 20, 2024
@yharaskrik

Duplicate of: #5437
Fixed by: #5483


dosubot bot commented May 20, 2024

It appears there's a discrepancy between the baseUrl and apiVersion you've configured and what the library is actually using, leading to the error. This suggests a potential bug in the library or a misconfiguration in your setup. Here are steps to troubleshoot and potentially resolve the issue:

  1. Double-check the baseUrl and apiVersion: Ensure the values you're using are correct for the Google Generative AI API. Using incorrect or unsupported values could result in the API calls failing.

  2. Review your setup: Make sure your configuration in the langchain-nextjs-template matches the expected format, especially if you're providing a custom baseUrl.

  3. Inspect the library code: Look into the libs/langchain-google-genai/src/chat_models.ts file to see how apiVersion and baseUrl are utilized. This could provide insights into whether these parameters are correctly applied to API calls.

If these steps don't resolve the issue, consider debugging the library code directly to trace how the apiVersion and baseUrl are handled during runtime. This might involve adding console logs or using a debugger to pinpoint where the custom settings are being ignored or overwritten.

This approach should help identify the root cause of the issue and guide you towards a solution. If you need further assistance, feel free to ask.


About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.


caiorg commented May 20, 2024


I've already done all the steps mentioned prior to opening this issue.

I've managed to get it working FOR TESTING PURPOSES by altering the library code at the lines and file I mentioned earlier.


caiorg commented May 20, 2024

Duplicate of: #5437 Fixed by: #5483

I'm sorry, I've searched the issues and did not find #5437 before opening this issue.

Btw... I have 0.2.0 installed and don't see the fixes.
As far as I understand, the fixes are just waiting to be merged. Am I right?


bracesproul commented May 21, 2024

Fixed by #5490 (upgrade to v0.0.15)
