
Conversation


@are-ces are-ces commented Nov 25, 2025

Description

The OpenAI model used for e2e tests can now be defined as a GH variable instead of being hardcoded.

This allows us to quickly switch between models.
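
As a rough illustration, the variable can be read at the workflow level roughly like this (a minimal sketch; the surrounding keys and placement in .github/workflows/e2e_tests.yaml are illustrative, only the vars reference itself is part of this change):

```yaml
# Illustrative excerpt, not the exact diff.
env:
  # Repository-level GitHub variable, configured under
  # Settings -> Secrets and variables -> Actions -> Variables.
  E2E_OPENAI_MODEL: ${{ vars.E2E_OPENAI_MODEL }}
```

The value is then forwarded to the llama-stack container so the run config can resolve it via ${env.E2E_OPENAI_MODEL}.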

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: (e.g., Claude, CodeRabbit, Ollama, etc., N/A if not used)
  • Generated by: (e.g., tool name and version; N/A if not used)

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Chores
    • Enhanced end-to-end testing infrastructure by parameterizing model configuration through environment variables
    • Updated CI/CD workflows and deployment configurations for improved flexibility and maintainability



coderabbitai bot commented Nov 25, 2025

Walkthrough

Added environment variable E2E_OPENAI_MODEL across CI workflows and test configurations to parameterize OpenAI model selection. Replaced hard-coded "gpt-4-turbo" model references with environment variable substitution and removed OPENAI_API_KEY injection from the workflow.

Changes

  • CI/Workflow Configuration (.github/workflows/e2e_tests.yaml): Added the E2E_OPENAI_MODEL environment variable; removed the sed step that injected OPENAI_API_KEY into the container environment.
  • Container Configuration (docker-compose.yaml): Added E2E_OPENAI_MODEL to the llama-stack service environment.
  • E2E Test Configuration (tests/e2e/configs/run-ci.yaml): Parameterized the OpenAI model references, replacing the hard-coded "gpt-4-turbo" with ${env.E2E_OPENAI_MODEL} in the shield provider and models sections.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify consistency of environment variable name (E2E_OPENAI_MODEL) across all files
  • Confirm variable substitution syntax ${env.E2E_OPENAI_MODEL} is correctly applied in test config
  • Validate removal of OPENAI_API_KEY injection doesn't break existing workflows

Suggested reviewers

  • radofuchs
  • tisnik

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: Passed. The pull request title accurately describes the main change: parametrizing the OpenAI model for e2e tests through environment variables instead of hardcoding values.
  • Docstring Coverage: Passed. No functions were found in the changed files to evaluate, so the docstring coverage check was skipped.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 087655c and 010281b.

📒 Files selected for processing (3)
  • .github/workflows/e2e_tests.yaml (2 hunks)
  • docker-compose.yaml (1 hunks)
  • tests/e2e/configs/run-ci.yaml (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: e2e_tests (azure)
🔇 Additional comments (5)
docker-compose.yaml (1)

15-15: LGTM. Environment variable addition follows the established pattern for service configuration.
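
For context, a minimal sketch of that pattern, assuming the usual compose environment mapping (the surrounding service definition is illustrative, only the variable name comes from this PR):

```yaml
# docker-compose.yaml, illustrative excerpt
services:
  llama-stack:
    environment:
      # Forward the host/CI value into the container so the run config
      # can resolve ${env.E2E_OPENAI_MODEL} at startup.
      E2E_OPENAI_MODEL: ${E2E_OPENAI_MODEL}
```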

tests/e2e/configs/run-ci.yaml (2)

150-153: Models configuration update looks good.

Both model_id and provider_model_id correctly reference the parameterized ${env.E2E_OPENAI_MODEL} variable, enabling flexible model selection. Verify that the CI environment always populates E2E_OPENAI_MODEL before container startup, so the config never resolves to a null or empty model reference; a sketch of the resulting models block follows.
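
A sketch of what the parameterized block plausibly looks like (provider_id and model_type are assumptions, not the actual file contents):

```yaml
# tests/e2e/configs/run-ci.yaml, illustrative excerpt
models:
  - model_id: ${env.E2E_OPENAI_MODEL}
    provider_id: openai            # assumed provider name
    provider_model_id: ${env.E2E_OPENAI_MODEL}
    model_type: llm
```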


134-137: The review comment is incorrect.

The provider_shield_id field is intentionally designed to reference the model identifier used for safety checks, not a shield identifier. This is evident from:

  1. Explicit documentation in run.yaml: provider_shield_id: "gpt-3.5-turbo" # Model to use for safety checks
  2. Consistent pattern across all configs: All configurations (run-ci.yaml, run-rhelai.yaml, run-rhaiis.yaml, run-azure.yaml) use model identifiers for provider_shield_id
  3. Test validation confirms this: The test in tests/e2e/features/steps/info.py (line 109) validates that provider_resource_id equals the expected model identifier
  4. Intentional environment variable usage: E2E_OPENAI_MODEL is correctly used as both the inference model and the safety check model

The configuration follows the established design pattern and is correct as-is; no changes are required. For context, a sketch of the shield entry in question follows.

Likely an incorrect or invalid review comment.
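
A sketch of the shield entry being discussed; shield_id and provider_id are hypothetical, only the use of the model variable in provider_shield_id is taken from this PR:

```yaml
# tests/e2e/configs/run-ci.yaml, illustrative excerpt
shields:
  - shield_id: safety            # hypothetical identifier
    provider_id: llama-guard     # hypothetical provider
    # Model used for safety checks; intentionally the same model as inference.
    provider_shield_id: ${env.E2E_OPENAI_MODEL}
```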

.github/workflows/e2e_tests.yaml (2)

14-15: Verify GitHub variable E2E_OPENAI_MODEL is configured.

The workflow references ${{ vars.E2E_OPENAI_MODEL }} on line 15, using the correct vars.* syntax for GitHub variables (not secrets). However, if this variable is not configured in your GitHub repository settings, it will resolve to an empty/null value, causing test failures without clear errors.

Confirm that the GitHub variable E2E_OPENAI_MODEL has been defined in repository → Settings → Variables with an appropriate default model value (e.g., gpt-4-turbo). Consider adding a validation step or default fallback to prevent silent failures if the variable is unset.
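
One possible shape for such a guard, sketched under the assumption that the variable is exported at the workflow env level (step name and placement are illustrative):

```yaml
# Illustrative validation step for .github/workflows/e2e_tests.yaml
- name: Fail fast if E2E_OPENAI_MODEL is unset
  run: |
    if [ -z "${E2E_OPENAI_MODEL}" ]; then
      echo "::error::E2E_OPENAI_MODEL is not set; define it under repository Settings -> Variables."
      exit 1
    fi
```

Alternatively, a fallback such as ${{ vars.E2E_OPENAI_MODEL || 'gpt-4-turbo' }} at the env declaration would keep the tests running with a default model when the variable is missing.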


140-144: Inconsistency between AI summary and actual code changes.

The AI-generated summary claims that the sed step for injecting OPENAI_API_KEY was removed, but the provided code snippet shows only db_path transformations (lines 141–142). The workflow still defines OPENAI_API_KEY in the environment on line 14.

Please clarify: was a sed command that previously injected OPENAI_API_KEY into the container environment actually removed in this PR? If so, verify that OPENAI_API_KEY still reaches the container correctly (e.g., via docker-compose or an alternative mechanism); a sketch of such a pass-through follows below.

Also, line 143 has trailing whitespace that should be cleaned up.
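
If the expectation is that the key now flows through docker-compose rather than the removed sed step, the pass-through would typically look something like this (a sketch under that assumption, not the actual file contents):

```yaml
# docker-compose.yaml, illustrative excerpt
services:
  llama-stack:
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}  # forwarded from the CI job environment
```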



@tisnik tisnik left a comment


LGTM

@tisnik tisnik merged commit f14db5d into lightspeed-core:main Nov 25, 2025
21 of 25 checks passed