Conversation


@marcusschiesser marcusschiesser commented Jun 17, 2025

Summary by CodeRabbit

  • New Features
    • Added support for configuring the GPT-4.1 model with improved setup and selection options.
  • Refactor
    • Simplified and unified the model and embedding model selection process across all providers for a more consistent user experience.
    • Streamlined prompts to always ask for model and embedding selections, regardless of previous conditions.
    • Centralized model configuration logic for easier maintenance and improved reliability.
    • Enhanced template installation to include provider-specific settings for Python and TypeScript apps.
    • Renamed provider setup functions to initSettings for clearer initialization semantics.
    • Improved environment variable handling by consolidating model-related variables for specific templates.
  • Bug Fixes
    • Added error handling to prevent streaming failures caused by serialization issues in agent workflow events.
  • Chores
    • Removed unused parameters and types for cleaner configuration flows.
    • Updated dependencies for Gemini provider to use latest Google GenAI packages.
    • Removed deprecated end-to-end tests for reflex and streaming templates.
    • Adjusted CI and question flows to use fixed GPT-4.1 model configuration without conditional prompts.
    • Updated GitHub Actions workflows to remove streaming template from test matrices.


changeset-bot bot commented Jun 17, 2025

🦋 Changeset detected

Latest commit: 3ae84d2

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package

| Name | Type |
| --- | --- |
| create-llama | Patch |



coderabbitai bot commented Jun 17, 2025

"""

Walkthrough

This update refactors model configuration prompts and provider setup in the codebase. It removes conditional logic and parameters from provider question functions, making model and embedding model prompts unconditional. Model config creation is centralized with a new GPT-4.1 helper. Provider-specific template setup functions are renamed from setupProvider to initSettings. Type and argument changes are propagated throughout, with some template files updated or removed, and Python provider settings modules added.
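
For orientation, here is a minimal sketch of what the new helpers/models.ts helper might look like, pieced together from the review comments further down (the ModelConfig type in helpers/types.ts and the isConfigured closure over openAiKey). The exact default model and embedding values are assumptions, not taken from the diff:

import { ModelConfig } from "./types";

export function getGpt41ModelConfig(openAiKey?: string): ModelConfig {
  // Fixed GPT-4.1 configuration used by the CI and simple question flows.
  return {
    provider: "openai",
    apiKey: openAiKey,
    model: "gpt-4.1",
    // Embedding defaults are illustrative; see the dimension discussion further down.
    embeddingModel: "text-embedding-3-large",
    dimensions: 1024,
    isConfigured(): boolean {
      return !!openAiKey;
    },
  };
}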

Changes

| File(s) | Change Summary |
| --- | --- |
| helpers/models.ts | Added getGpt41ModelConfig helper for GPT-4.1 model configuration. |
| helpers/providers/anthropic.ts, helpers/providers/azure.ts, helpers/providers/gemini.ts, helpers/providers/groq.ts, helpers/providers/huggingface.ts, helpers/providers/llmhub.ts, helpers/providers/mistral.ts, helpers/providers/ollama.ts, helpers/providers/openai.ts | Simplified provider question functions: removed parameters and conditional logic; now always prompt for models and use env vars for API keys. |
| helpers/providers/index.ts | Refactored askModelConfig to remove askModels and openAiKey parameters; simplified provider selection. |
| helpers/typescript.ts | Updated installLlamaIndexServerTemplate to accept and use modelConfig for provider-specific template copying. |
| helpers/python.ts | Updated gemini provider dependencies to google-genai; updated installLlamaIndexServerTemplate to accept modelConfig and copy provider files. |
| questions/ci.ts | Replaced async askModelConfig call with synchronous getGpt41ModelConfig for CI model config. |
| questions/questions.ts | Removed openAiKey and askModels from askModelConfig call in askProQuestions. |
| questions/simple.ts | Centralized GPT-4.1 config creation; made modelConfig required; added conditional override prompt for model config. |
| templates/components/providers/typescript/anthropic/settings.ts, templates/components/providers/typescript/azure-openai/settings.ts, templates/components/providers/typescript/gemini/settings.ts, templates/components/providers/typescript/groq/settings.ts, templates/components/providers/typescript/mistral/settings.ts, templates/components/providers/typescript/ollama/settings.ts, templates/components/providers/typescript/openai/settings.ts | Renamed exported provider setup functions from setupProvider to initSettings. |
| templates/components/settings/typescript/settings.ts | Deleted provider-agnostic initSettings file. |
| templates/components/providers/python/anthropic/settings.py, templates/components/providers/python/azure-openai/settings.py, templates/components/providers/python/gemini/settings.py, templates/components/providers/python/groq/settings.py, templates/components/providers/python/huggingface/settings.py, templates/components/providers/python/ollama/settings.py, templates/components/providers/python/openai/settings.py, templates/components/providers/python/t-systems/settings.py | Added new Python modules for provider-specific settings initialization with environment variable validation. |
| templates/types/llamaindexserver/fastapi/generate.py | Replaced direct OpenAI LLM instantiation with use of Settings.llm initialized via init_settings(). |
| python/llama-index-server/llama_index/server/api/routers/chat.py | Added error handling for serialization errors in streaming agent workflow events to prevent failures. |
| helpers/env-variables.ts | Removed EMBEDDING_DIM from model env vars; adjusted env var inclusion logic to add all model env vars only for llamaindexserver template. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant ProviderQuestion
    participant ModelConfigHelper

    User->>CLI: Start setup
    CLI->>ProviderQuestion: Prompt for API key (if missing)
    CLI->>ProviderQuestion: Prompt for LLM model
    CLI->>ProviderQuestion: Prompt for embedding model
    ProviderQuestion-->>CLI: Return model config
    CLI->>ModelConfigHelper: Use getGpt41ModelConfig (for CI/simple flows)
    ModelConfigHelper-->>CLI: Return fixed GPT-4.1 config
    CLI-->>User: Complete setup with selected config

Poem

A rabbit hopped through fields of code,
Prompting models with a lighter load.
No more "if"s or tangled threads,
Just simple choices straight ahead.
Providers now all act the same—
"initSettings" is their new name!
🐇✨
"""

✨ Finishing Touches
  • 📝 Generate Docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share
🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🔭 Outside diff range comments (11)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)

4-16: Fail fast when MODEL / EMBEDDING_MODEL env vars are missing

Both fields currently fall back to the empty string, which Ollama rejects at runtime with a vague "model not found" error.
Guard early so mis-configuration is detected immediately:

 export function initSettings() {
   const config = {
     host: process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434",
   };
+
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "Required env vars MODEL and/or EMBEDDING_MODEL are not set for Ollama"
+    );
+  }
+
   Settings.llm = new Ollama({
     model: process.env.MODEL!,
     config,
   });
   Settings.embedModel = new OllamaEmbedding({
     model: process.env.EMBEDDING_MODEL!,
     config,
   });
 }

This avoids silent misconfigurations and aligns with the stricter checks added for other providers.

packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)

5-17: Guard against missing MODEL / EMBEDDING_MODEL before non-null assertions

process.env.MODEL! and embedModelMap[process.env.EMBEDDING_MODEL!] assume the vars are always present.
If they are undefined the app starts, then explodes with an obscure error from the SDK.

 export function initSettings() {
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "MODEL and EMBEDDING_MODEL must be set before initialising Groq provider"
+    );
+  }
   const embedModelMap: Record<string, string> = {
     "all-MiniLM-L6-v2": "Xenova/all-MiniLM-L6-v2",
     "all-mpnet-base-v2": "Xenova/all-mpnet-base-v2",
   };
@@
   Settings.embedModel = new HuggingFaceEmbedding({
     modelType: embedModelMap[process.env.EMBEDDING_MODEL!],
   });
 }

This keeps the failure surface small and messages clear.

packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)

8-18: Non-null assertions mask config errors

process.env.MODEL! and process.env.EMBEDDING_MODEL! are asserted non-null, yet nothing ensures they are.
Prefer explicit validation to prevent runtime surprises:

 export function initSettings() {
+  if (!process.env.MODEL || !process.env.EMBEDDING_MODEL) {
+    throw new Error(
+      "Anthropic provider requires MODEL and EMBEDDING_MODEL env vars"
+    );
+  }
   const embedModelMap: Record<string, string> = {

Also consider lifting embedModelMap to a shared util to avoid duplication across providers.
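
For reference, one possible shape for such a shared util (the file location and export names here are hypothetical; the two map entries are the ones duplicated in the Groq and Anthropic templates):

// e.g. templates/components/providers/typescript/shared/embedding.ts (hypothetical path)
export const HUGGINGFACE_EMBED_MODEL_MAP: Record<string, string> = {
  "all-MiniLM-L6-v2": "Xenova/all-MiniLM-L6-v2",
  "all-mpnet-base-v2": "Xenova/all-mpnet-base-v2",
};

// Resolve the HuggingFace model type for EMBEDDING_MODEL in one place, failing fast on typos.
export function resolveEmbedModelType(embeddingModel: string): string {
  const modelType = HUGGINGFACE_EMBED_MODEL_MAP[embeddingModel];
  if (!modelType) {
    throw new Error(`Unsupported embedding model: ${embeddingModel}`);
  }
  return modelType;
}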

packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)

9-16: Guard against missing MODEL / EMBEDDING_MODEL env vars.

process.env.MODEL (and the embedding counterpart) are blindly cast with as.
If the variable is undefined the SDK will throw later at runtime, yet the compiler remains silent.

-  model: process.env.MODEL as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS,
+  model: assertEnv("MODEL") as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS,

Consider a small helper:

function assertEnv(name: string): string {
  const v = process.env[name];
  if (!v) throw new Error(`Environment variable ${name} must be defined`);
  return v;
}
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)

9-16: Same env-var null-safety concern as Mistral settings.

Blind casts of process.env.MODEL / EMBEDDING_MODEL may explode later.
Reuse the same assertEnv helper (or similar) to fail fast and surface configuration errors early.

packages/create-llama/questions/ci.ts (1)

18-25: async is now redundant – drop it to simplify.

getCIQuestionResults no longer awaits anything; returning a plain object wrapped in a resolved Promise is superfluous.

-export async function getCIQuestionResults(
+export function getCIQuestionResults(

and adjust the return type accordingly (QuestionResults, not Promise<QuestionResults>).
Less cognitive load and slightly faster execution.

packages/create-llama/helpers/providers/ollama.ts (1)

60-84: process.exit(1) in helper breaks consumers.

ensureModel kills the entire Node process on failure.
If this helper is ever reused in a library or inside Jest tests, it will terminate the runner unexpectedly.

Bubble the error and let the caller decide:

-      console.log(red(...));
-      process.exit(1);
+      throw new Error(red(`Model ${modelName} missing. Run 'ollama pull ${modelName}'.`));
packages/create-llama/helpers/providers/gemini.ts (1)

35-47: Prompt for API key can expose secrets in shell history.

Typing the key in an echoed prompt prints it back in clear text.
Use type: "password" so the terminal masks input.

- type: "text",
+ type: "password",
packages/create-llama/helpers/providers/groq.ts (1)

91-104: API key can still be empty after the prompt

If the user simply hits Enter when asked for the key and no GROQ_API_KEY env var is set, we move on with an empty string.
getAvailableModelChoicesGroq(config.apiKey!) then throws, but the resulting stack trace is less user-friendly than an early validation.

 if (!config.apiKey) {
   const { key } = await prompts(
@@
   );
-  config.apiKey = key || process.env.GROQ_API_KEY;
+  config.apiKey = key || process.env.GROQ_API_KEY;
+
+  if (!config.apiKey?.trim()) {
+    console.log(
+      red(
+        "A Groq API key is required to fetch model choices. Aborting.",
+      ),
+    );
+    process.exit(1);
+  }
 }
packages/create-llama/helpers/providers/azure.ts (1)

54-64: isConfigured() always returns false – is that intentional?

For Azure the comment says the provider “can’t be fully configured”, but returning false irrespective of the presence of AZURE_OPENAI_KEY suppresses downstream checks that merely need the key (e.g., early CI validation).

-isConfigured(): boolean {
-  // the Azure model provider can't be fully configured as endpoint and deployment names have to be configured with env variables
-  return false;
-},
+isConfigured(): boolean {
+  return Boolean(config.apiKey ?? process.env.AZURE_OPENAI_KEY);
+},

If additional env variables are indeed mandatory, consider checking those explicitly so users get a precise error instead of a blanket “not configured”.

packages/create-llama/helpers/providers/openai.ts (1)

31-52: config.apiKey may be undefined in CI → getAvailableModelChoices() will throw

config.apiKey is only guaranteed to be populated when
a) the environment variable is set, or
b) the interactive prompt runs.

Inside CI (isCI === true) the prompt is skipped, so a missing OPENAI_API_KEY leads to an undefined key that is subsequently passed to getAvailableModelChoices(...) (line 58/70). The helper immediately throws:

if (!apiKey) {
  throw new Error("need OpenAI key to retrieve model choices");
}

→ Any CI job without the env-var will now fail even though interactive input is impossible.

+  // In CI we must *fail early* with a clear message *before* hitting the remote call.
+  if (!config.apiKey && isCI) {
+    throw new Error(
+      "OPENAI_API_KEY is not set in the CI environment – required for model discovery",
+    );
+  }
   if (!config.apiKey && !isCI) {

Alternatively, short-circuit the model/embedding prompts when the key is absent in CI.

🧹 Nitpick comments (14)
packages/create-llama/helpers/models.ts (1)

9-11: isConfigured should rely on the object’s apiKey, not the captured param

isConfigured closes over the openAiKey argument.
If the returned config object is later mutated (config.apiKey = …), isConfigured() will still look at the stale captured value and give the wrong answer.

-  isConfigured(): boolean {
-    return !!openAiKey;
-  },
+  isConfigured(): boolean {
+    return !!this.apiKey;
+  },

This keeps the checker truthful and avoids surprising behaviour.

packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)

4-16: Handle parseInt result to avoid passing NaN as dimensions

If EMBEDDING_DIM is set but not a valid integer, parseInt returns NaN, which propagates silently to the OpenAI SDK.

   Settings.embedModel = new OpenAIEmbedding({
     model: process.env.EMBEDDING_MODEL,
-    dimensions: process.env.EMBEDDING_DIM
-      ? parseInt(process.env.EMBEDDING_DIM)
-      : undefined,
+    dimensions: (() => {
+      if (!process.env.EMBEDDING_DIM) return undefined;
+      const dim = Number.parseInt(process.env.EMBEDDING_DIM, 10);
+      if (Number.isNaN(dim)) {
+        throw new Error("EMBEDDING_DIM must be an integer");
+      }
+      return dim;
+    })(),
   });

Explicit validation prevents hard-to-trace SDK errors.

packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)

9-16: Return type is implicit – add it for clarity.

A tiny nit: initSettings has no return value; declaring (): void makes the intent explicit and avoids accidental future misuse.

packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)

9-16: Add explicit void return type for initSettings.

packages/create-llama/questions/ci.ts (1)

1-1: The import statement pulls the whole helpers file just for one function.

If getGpt41ModelConfig is the lone export, okay; if not, use a named-import path such as "../helpers/models/getGpt41ModelConfig" to keep bundle size down in ESM tree-shaking scenarios.
Not critical, but worth tracking.

packages/create-llama/helpers/providers/ollama.ts (1)

20-28: config declared with const but mutated later – prefer let or freeze.

While mutating properties of a const object is legal, it sends mixed signals.
Either:

  1. Declare with let and mutate, or
  2. Keep const and build a new object per step ({ ...config, model }), as in the sketch below.

Consistency aids maintainability.
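
A minimal sketch of option 2, assuming the Ollama question flow above; selectedModel and selectedEmbeddingModel stand in for the prompt answers and their values are placeholders:

const selectedModel = "llama3.1"; // placeholder for the prompt answer
const selectedEmbeddingModel = "nomic-embed-text"; // placeholder for the prompt answer

const baseConfig = {
  host: process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434",
} as const;

// Derive a new object per step instead of mutating the original const binding.
const withModel = { ...baseConfig, model: selectedModel };
const finalConfig = { ...withModel, embeddingModel: selectedEmbeddingModel };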

packages/create-llama/helpers/providers/gemini.ts (1)

18-33: config mutability / isConfigured closure caveat.

isConfigured closes over config; later mutations (model / embeddingModel) are fine, but apiKey may be updated after the method is read by callers, yielding stale truthiness.

Assign isConfigured after all mutations or compute lazily:

isConfigured() {
  return !!this.apiKey || !!process.env.GOOGLE_API_KEY;
}
packages/create-llama/helpers/providers/huggingface.ts (1)

34-44: Skip the prompt when there is only one available LLM model

Because MODELS currently holds a single hard-coded entry, the user is forced through an unnecessary prompt. Eliminating the prompt when MODELS.length === 1 keeps the simple-mode flow truly “simple”.

-const { model } = await prompts(
-  {
-    type: "select",
-    name: "model",
-    message: "Which Hugging Face model would you like to use?",
-    choices: MODELS.map(toChoice),
-    initial: 0,
-  },
-  questionHandlers,
-);
-config.model = model;
+if (MODELS.length === 1) {
+  config.model = MODELS[0];
+} else {
+  const { model } = await prompts(
+    {
+      type: "select",
+      name: "model",
+      message: "Which Hugging Face model would you like to use?",
+      choices: MODELS.map(toChoice),
+      initial: 0,
+    },
+    questionHandlers,
+  );
+  config.model = model;
+}
packages/create-llama/helpers/providers/groq.ts (1)

118-133: Duplicate logic across providers – consider extracting a shared embedding-model prompt

The embedding-model prompt block is identical in at least HuggingFace, Anthropic, Azure, Groq, …
A tiny helper such as promptForEmbeddingModel(EMBEDDING_MODELS) would remove ~10 repeated lines per provider and make future changes (e.g., adding a “custom” option) one-shot.

packages/create-llama/helpers/providers/index.ts (1)

50-76: Replace long switch with a provider-function map

The growing switch is starting to look unmaintainable; every new provider touches this file. A mapping keeps the logic declarative and avoids forgotten breaks.

-  let modelConfig: ModelConfigParams;
-  switch (modelProvider) {
-    case "ollama":
-      modelConfig = await askOllamaQuestions();
-      break;
-    case "groq":
-      modelConfig = await askGroqQuestions();
-      break;
-    ...
-    default:
-      modelConfig = await askOpenAIQuestions();
-  }
+  const providerToFn: Record<string, () => Promise<ModelConfigParams>> = {
+    openai: askOpenAIQuestions,
+    groq: askGroqQuestions,
+    ollama: askOllamaQuestions,
+    anthropic: askAnthropicQuestions,
+    gemini: askGeminiQuestions,
+    mistral: askMistralQuestions,
+    "azure-openai": askAzureQuestions,
+    "t-systems": askLLMHubQuestions,
+    huggingface: askHuggingfaceQuestions,
+  };
+
+  const fn = providerToFn[modelProvider] ?? askOpenAIQuestions;
+  const modelConfig = await fn();
packages/create-llama/helpers/providers/anthropic.ts (2)

51-62: Whitespace key → invalid key

prompts returns whatever the user typed, so pressing only space(s) yields a non-empty, whitespace-only string.
isConfigured() would then wrongly regard " " as a valid API key. Trim before assignment.

-config.apiKey = key || process.env.ANTHROPIC_API_KEY;
+const trimmed = key?.trim();
+config.apiKey = trimmed ? trimmed : process.env.ANTHROPIC_API_KEY;

64-91: Shared code duplication – extract common helper

Same observation as in Groq: embedding-model prompt and dimension lookup are duplicated across providers. A helper such as

export async function promptForEmbedding<T extends Record<string, { dimensions:number }>>(
  models: T,
  message = "Which embedding model would you like to use?",
) {
  const { embeddingModel } = await prompts(
    {
      type: "select",
      name: "embeddingModel",
      message,
      choices: Object.keys(models).map(toChoice),
      initial: 0,
    },
    questionHandlers,
  );
  return { name: embeddingModel, dimensions: models[embeddingModel].dimensions };
}

would shrink each provider implementation to three lines.
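
A hedged usage sketch inside an async provider question function; EMBEDDING_MODELS and config mirror the per-provider objects discussed above, with illustrative values:

const EMBEDDING_MODELS: Record<string, { dimensions: number }> = {
  "text-embedding-3-small": { dimensions: 1536 },
  "text-embedding-3-large": { dimensions: 1024 },
};

const config: { embeddingModel?: string; dimensions?: number } = {};

// The three lines each provider implementation would shrink to:
const { name, dimensions } = await promptForEmbedding(EMBEDDING_MODELS);
config.embeddingModel = name;
config.dimensions = dimensions;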

packages/create-llama/helpers/typescript.ts (1)

39-48: Provider settings are copied twice – consider DRYing the logic

installLlamaIndexServerTemplate() now copies
components/providers/typescript/<provider>/** into src/app (here), while installLegacyTSTemplate() performs an almost identical copy into <engine> (lines 262-266). If both flows are exercised for the same project structure this creates duplicate files and maintenance overhead.

Suggestion: extract a shared helper, or decide on a single destination (engine vs src/app) based on template type to avoid redundant copies.
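
A hedged sketch of such a shared helper: the destination is passed in by the caller (src/app vs <engine>), and the copy signature is assumed to accept a glob, a destination, and a cwd option; verify against helpers/copy.ts before reusing.

import path from "path";

import { copy } from "./copy";

export async function copyProviderSettings(
  templatesDir: string,
  provider: string,
  destination: string,
): Promise<void> {
  // Copy the provider-specific TypeScript settings into whichever destination the caller chose.
  await copy("**", destination, {
    cwd: path.join(
      templatesDir,
      "components",
      "providers",
      "typescript",
      provider,
    ),
  });
}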

packages/create-llama/helpers/providers/mistral.ts (1)

34-45: Minor: redundant prompt execution guard

config.apiKey is initialised from process.env.MISTRAL_API_KEY.
Because of that, if (!config.apiKey) already prevents the prompt when the env-var is set. The secondary check inside the prompt message (“leave blank to use … env variable”) is therefore never reached.

No functional problem – just noting the redundant branch for future cleanup.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a221bc6 and 9d7778d.

📒 Files selected for processing (23)
  • packages/create-llama/helpers/models.ts (1 hunks)
  • packages/create-llama/helpers/providers/anthropic.ts (2 hunks)
  • packages/create-llama/helpers/providers/azure.ts (3 hunks)
  • packages/create-llama/helpers/providers/gemini.ts (2 hunks)
  • packages/create-llama/helpers/providers/groq.ts (2 hunks)
  • packages/create-llama/helpers/providers/huggingface.ts (2 hunks)
  • packages/create-llama/helpers/providers/index.ts (2 hunks)
  • packages/create-llama/helpers/providers/llmhub.ts (3 hunks)
  • packages/create-llama/helpers/providers/mistral.ts (2 hunks)
  • packages/create-llama/helpers/providers/ollama.ts (2 hunks)
  • packages/create-llama/helpers/providers/openai.ts (4 hunks)
  • packages/create-llama/helpers/typescript.ts (3 hunks)
  • packages/create-llama/questions/ci.ts (2 hunks)
  • packages/create-llama/questions/questions.ts (0 hunks)
  • packages/create-llama/questions/simple.ts (3 hunks)
  • packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1 hunks)
  • packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1 hunks)
  • packages/create-llama/templates/components/settings/typescript/settings.ts (0 hunks)
💤 Files with no reviewable changes (2)
  • packages/create-llama/questions/questions.ts
  • packages/create-llama/templates/components/settings/typescript/settings.ts
🧰 Additional context used
🧬 Code Graph Analysis (16)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (6)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)
  • initSettings (4-49)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)
  • initSettings (4-16)
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
  • initSettings (8-19)
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
  • initSettings (4-17)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (6)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)
  • initSettings (4-49)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)
  • initSettings (5-18)
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
  • initSettings (8-19)
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
  • initSettings (4-17)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (6)
packages/create-llama/templates/components/providers/typescript/groq/settings.ts (1)
  • initSettings (5-18)
packages/create-llama/templates/components/providers/typescript/ollama/settings.ts (1)
  • initSettings (4-16)
packages/create-llama/templates/components/providers/typescript/anthropic/settings.ts (1)
  • initSettings (8-19)
packages/create-llama/templates/components/providers/typescript/gemini/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/mistral/settings.ts (1)
  • initSettings (9-16)
packages/create-llama/templates/components/providers/typescript/openai/settings.ts (1)
  • initSettings (4-17)
packages/create-llama/questions/ci.ts (1)
packages/create-llama/helpers/models.ts (1)
  • getGpt41ModelConfig (3-12)
packages/create-llama/helpers/models.ts (1)
packages/create-llama/helpers/types.ts (1)
  • ModelConfig (14-21)
packages/create-llama/helpers/providers/ollama.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/huggingface.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/mistral.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/azure.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/groq.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/openai.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/providers/llmhub.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/questions/simple.ts (3)
packages/create-llama/helpers/models.ts (1)
  • getGpt41ModelConfig (3-12)
packages/create-llama/helpers/types.ts (1)
  • ModelConfig (14-21)
packages/create-llama/helpers/providers/index.ts (1)
  • askModelConfig (20-81)
packages/create-llama/helpers/providers/gemini.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
packages/create-llama/helpers/typescript.ts (2)
packages/create-llama/helpers/types.ts (1)
  • InstallTemplateArgs (96-116)
packages/create-llama/helpers/copy.ts (1)
  • copy (13-49)
packages/create-llama/helpers/providers/anthropic.ts (1)
packages/create-llama/helpers/providers/index.ts (1)
  • ModelConfigParams (18-18)
⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: lint
🔇 Additional comments (3)
packages/create-llama/templates/components/providers/typescript/azure-openai/settings.ts (1)

4-4: Consistent naming: renaming setupProvider to initSettings
This change aligns with other provider templates (e.g., Gemini, Anthropic) and standardizes the initialization entry point.

packages/create-llama/helpers/providers/llmhub.ts (1)

152-155: Embedding-dimension mapping differs from OpenAI helper – double-check correctness

getDimensions() returns 768 only for "text-embedding-004" and 1536 for everything else, whereas the OpenAI counterpart maps "text-embedding-3-large" → 1024. If LLMHub forwards requests to the same OpenAI models, this discrepancy will silently produce the wrong dimension count (e.g. 1536 instead of 1024 for text-embedding-3-large).

Please confirm the dimensionality for each LLMHub embedding model and align the helper functions for consistency.
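
If the two helpers are meant to agree, a hedged sketch of a single lookup they could share; the 768 and 1024 values mirror the mappings cited above, and the 1536 fallback matches the current default:

const EMBEDDING_DIMENSIONS: Record<string, number> = {
  "text-embedding-004": 768,
  "text-embedding-3-large": 1024,
};

function getDimensions(modelName: string): number {
  // Fall back to 1536 for models without an explicit entry (e.g. text-embedding-3-small).
  return EMBEDDING_DIMENSIONS[modelName] ?? 1536;
}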

packages/create-llama/questions/simple.ts (1)

185-190: modelConfig override path is clear – good job

The fallback to GPT-4.1 and the optional interactive override are neatly separated; the code is easy to follow.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (4)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)

196-201: Graceful fallback on serialization failure could leak silent data loss

Silently skipping unknown events may hide important information from the client and make debugging harder. Consider at least emitting a lightweight notice (e.g., "event_type": "<unserialisable>") so consumers know something was omitted.

packages/create-llama/templates/types/llamaindexserver/fastapi/generate.py (1)

62-65: LLM selection hard-coded to global Settings.llm

Relying on a globally initialised LLM makes the function non-reusable for alternative providers in the same process. Accept an optional llm parameter defaulting to Settings.llm to keep the helper flexible.

packages/create-llama/templates/components/providers/python/openai/settings.py (1)

8-14: Minor: warn instead of raising when API key missing during local tooling

Hard-failing on missing OPENAI_API_KEY blocks even CLI utilities that don’t hit the network. Consider logging a clear warning and leaving Settings.llm unset instead of raising, so downstream code can decide how to react.

packages/create-llama/templates/components/providers/python/t-systems/settings.py (1)

40-50: Remove deprecated EMBEDDING_DIM plumbing

The TypeScript helper no longer emits EMBEDDING_DIM, yet this module still
expects it. Falling back to the (internal) DEFAULT_EMBEDDING_DIM constant
works but the env var path is now dead code – consider deleting it to avoid
confusion.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9d7778d and d7805cb.

📒 Files selected for processing (13)
  • packages/create-llama/helpers/env-variables.ts (1 hunks)
  • packages/create-llama/helpers/providers/gemini.ts (3 hunks)
  • packages/create-llama/helpers/python.ts (4 hunks)
  • packages/create-llama/templates/components/providers/python/anthropic/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/azure-openai/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/gemini/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/groq/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/huggingface/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/ollama/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/openai/settings.py (1 hunks)
  • packages/create-llama/templates/components/providers/python/t-systems/settings.py (1 hunks)
  • packages/create-llama/templates/types/llamaindexserver/fastapi/generate.py (2 hunks)
  • python/llama-index-server/llama_index/server/api/routers/chat.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/create-llama/templates/components/providers/python/huggingface/settings.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/create-llama/helpers/providers/gemini.ts
🧰 Additional context used
🪛 Pylint (3.3.7)
packages/create-llama/templates/components/providers/python/t-systems/settings.py

[refactor] 14-14: Too few public methods (0/2)

(R0903)

⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: lint
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
🔇 Additional comments (4)
packages/create-llama/helpers/env-variables.ts (1)

595-602: Dropping model env-vars for non-llamaindexserver templates is a silent breaking change

MODEL_PROVIDER, MODEL, EMBEDDING_MODEL, … are now written only when
template === "llamaindexserver".
All other templates lost access to these vars, yet the runtime Python helpers you copy into those projects still expect them – leading to KeyError/NoneType bugs at startup.

Either:

  1. keep getModelEnvs(opts.modelConfig) in both branches, or
  2. make every non-server template read its config elsewhere.

Please double-check all call-sites of os.getenv("MODEL"), MODEL_PROVIDER, etc. before merging.

packages/create-llama/helpers/python.ts (3)

212-218: Confirm the new Google GenAI package names & version ranges exist on PyPI

The “gemini” branch now pulls in llama-index-llms-google-genai / llama-index-embeddings-google-genai (>=0.2.0,<0.3.0).
Please double-check that:

  1. Those exact package names exist on PyPI (spelling frequently changes while the feature is alpha).
  2. Version 0.2.x is actually published – otherwise poetry install / uv will fail for users on day one.

If the packages or versions are not yet live, loosen the upper bound or bump to the currently released minor.


574-578: Signature extended with modelConfig – ✔️ internal callers updated

The extra parameter is correctly typed and the sole in-file caller (line 691 ff) is already updated, so no compile break inside this package.
Nice incremental change.


690-696: Only this call site passes modelConfig; validate other helpers

installLlamaIndexServerTemplate now mandates modelConfig, and this call is updated.
Search the codebase for any additional invocations (e.g., tests or future helpers) to avoid silent runtime failures.

#!/bin/bash
rg -n "installLlamaIndexServerTemplate" --type ts

If the search returns only this location, all good.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
.github/workflows/e2e.yml (1)

108-108: Simplify TS E2E run step now that streaming is removed
The conditional branch for "streaming" is dead code—matrix only contains llamaindexserver. Replace the if/else with a single command.

Proposed diff:

-      - name: Run Playwright tests for TypeScript
-        run: |
-          if [ "${{ matrix.template-types }}" == "streaming" ]; then
-            pnpm run e2e:ts:streaming
-          else
-            pnpm run e2e:ts:server
-          fi
+      - name: Run Playwright tests for TypeScript
+        run: pnpm run e2e:ts:server
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4dd67a2 and 3ae84d2.

📒 Files selected for processing (3)
  • .github/workflows/e2e.yml (2 hunks)
  • packages/create-llama/e2e/shared/reflex_template.spec.ts (0 hunks)
  • packages/create-llama/e2e/shared/streaming_template.spec.ts (0 hunks)
💤 Files with no reviewable changes (2)
  • packages/create-llama/e2e/shared/reflex_template.spec.ts
  • packages/create-llama/e2e/shared/streaming_template.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (30)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: lint
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
🔇 Additional comments (1)
.github/workflows/e2e.yml (1)

26-26: Limit Python E2E matrix to llamaindexserver
Removing "streaming" aligns with the deleted streaming/reflex suites and focuses CI on the llamaindexserver template.

@marcusschiesser marcusschiesser deleted the ms/readd-ask-models branch June 19, 2025 02:59