
Bad request 400: While using the LLM request through OpenAI #8

Open
ambilykk opened this issue Oct 21, 2024 · 0 comments
ambilykk commented Oct 21, 2024

While calling the LLM through the OpenAI client library, we encountered a Bad Request (400) error.

Code Used

const capiClient = new OpenAI({
  baseURL: "https://api.githubcopilot.com/",
  apiKey: tokenForUser,
  headers: {
    "Copilot-Integration-Id": "copilot-chat"
  },
});
console.log("capiclient request");
const response = await capiClient.chat.completions.create({
  stream: false,
  model: "gpt-4o",
  messages: [{
    role: "user",
    content: "What is GitHub Copilot"
  }]
});

Error Message

(screenshot of the 400 Bad Request response attached in the original issue)

Work-around

Once we replaced the client-library call with a direct fetch and hard-coded the Copilot-Integration-Id header, it started working.

const copilotResponse = await fetch(
    "https://api.githubcopilot.com/chat/completions",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${tokenForUser}`,
        "Copilot-Integration-Id": "vscode-chat",
      },
      body: JSON.stringify({
        messages: [{
          role: "user",
          content: "What is GitHub Copilot"}],
        max_tokens: 50,
        temperature: 0.5
      }),
    }
  );