Using JinaAIEmbedding Still Requires OPENAI_API_KEY in LlamaIndex 0.9.3 #1681

Open
sujeet-agrahari opened this issue Feb 26, 2025 · 2 comments
Labels
bug Something isn't working

Comments


sujeet-agrahari commented Feb 26, 2025

Describe the bug
When using JinaAIEmbedding as the embedding model in LlamaIndex 0.9.3, the script still fails with an error demanding OPENAI_API_KEY. Since OpenAI embeddings are not used, this key should not be required.

To Reproduce
Code to reproduce the behavior:

import {
  JinaAIEmbedding,
  VectorStoreIndex,
  Document,
  Settings
} from 'llamaindex';

// Configure JinaAIEmbedding
Settings.embedModel = new JinaAIEmbedding({
  apiKey: 'jina_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', // Dummy API Key
  model: 'jina-embeddings-v3',
});

async function testEmbedding() {
  try {
    const doc = new Document({ text: "This is a test document." });
    const index = await VectorStoreIndex.fromDocuments([doc]);
    console.log("Embedding created successfully.");
  } catch (error) {
    console.error("Error:", error);
  }
}

testEmbedding();

Error:

The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).

Expected behavior
The script should run without OPENAI_API_KEY, since it relies only on JinaAIEmbedding and no LLM is used.
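
For reference, here is a retrieval-only variant that I would expect to work without OPENAI_API_KEY, since it never touches an LLM. This is only a sketch of that expectation: asRetriever(), similarityTopK, and the retrieve({ query }) call shape are taken from the LlamaIndex.TS docs and may differ slightly between versions.

import {
  JinaAIEmbedding,
  VectorStoreIndex,
  Document,
  Settings
} from 'llamaindex';

// Only the embedding model is configured; no LLM and no OPENAI_API_KEY anywhere.
Settings.embedModel = new JinaAIEmbedding({
  apiKey: process.env.JINA_API_KEY,
  model: 'jina-embeddings-v3',
});

async function retrieveOnly() {
  const doc = new Document({ text: "This is a test document." });
  const index = await VectorStoreIndex.fromDocuments([doc]);

  // Retrieval only needs embeddings, so this path should not require an OpenAI key.
  const retriever = index.asRetriever({ similarityTopK: 2 });
  const nodes = await retriever.retrieve({ query: "test document" });
  console.log(`Retrieved ${nodes.length} nodes.`);
}

retrieveOnly().catch(console.error);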

Desktop (please complete the following information):

  • OS: macOS
  • JS Runtime / Framework / Bundler (select all applicable)
  • Node.js
  • Deno
  • Bun
  • Next.js
  • ESBuild
  • Rollup
  • Webpack
  • Turbopack
  • Vite
  • Waku
  • Edge Runtime
  • AWS Lambda
  • Cloudflare Worker
  • Others (please elaborate on this)
  • Version [e.g. 22]

Additional context
I am trying to use the package in a VSCode extension development environment.

@sujeet-agrahari added the bug label on Feb 26, 2025
@marcusschiesser
Collaborator

Before fixing this, we should move the jina embedding class to its own npm package

@sujeet-agrahari
Author

> Before fixing this, we should move the jina embedding class to its own npm package

+1.

Currently the llamaindex package size is huge.

Update on the issue:

This behavior occurs only when using indexStore.asQueryEngine(), but it does not happen with asRetriever().

If this is the expected behavior, we should enhance the documentation and close the issue.
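
If it does turn out to be expected, the docs could spell out that asQueryEngine() pulls in Settings.llm, which defaults to OpenAI and reads OPENAI_API_KEY when it is constructed, while asRetriever() only uses the embedding model. A rough sketch of what such a documented example might look like; I am assuming OpenAI is still re-exported from the main llamaindex package in 0.9.x (it may instead come from @llamaindex/openai depending on the version), and gpt-4o-mini is just a placeholder model name.

import {
  JinaAIEmbedding,
  OpenAI,
  VectorStoreIndex,
  Document,
  Settings
} from 'llamaindex';

Settings.embedModel = new JinaAIEmbedding({
  apiKey: process.env.JINA_API_KEY,
  model: 'jina-embeddings-v3',
});

// asQueryEngine() synthesizes answers with Settings.llm, which defaults to OpenAI
// and therefore needs OPENAI_API_KEY at construction time. Configuring the LLM
// explicitly (with a key, or a different provider) avoids the surprise env-var lookup.
Settings.llm = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o-mini',
});

async function queryWithLlm() {
  const doc = new Document({ text: "This is a test document." });
  const index = await VectorStoreIndex.fromDocuments([doc]);

  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({ query: "What is this document about?" });
  console.log(response.toString());
}

queryWithLlm().catch(console.error);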
