
Conversation

@mdelapenya
Member

What does this PR do?

Use the Run function in ollama

Why is it important?

Migrate modules to the new API, improving consistency and leveraging the latest testcontainers functionality.

Related issues

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@mdelapenya mdelapenya requested a review from a team as a code owner October 7, 2025 15:08
@netlify

netlify bot commented Oct 7, 2025

Deploy Preview for testcontainers-go ready!

Name Link
🔨 Latest commit d5942dd
🔍 Latest deploy log https://app.netlify.com/projects/testcontainers-go/deploys/68e52cd70762e50008d67573
😎 Deploy Preview https://deploy-preview-3420--testcontainers-go.netlify.app

@coderabbitai

coderabbitai bot commented Oct 7, 2025

Summary by CodeRabbit

  • New Features

    • Option-based configuration for running Ollama, including support for local execution with customizable settings.
    • Conditional GPU enablement when appropriate (skipped for local runs).
  • Bug Fixes

    • More reliable startup by improving wait strategy handling.
    • Clearer, standardized error messages when run fails.
  • Refactor

    • Streamlined initialization flow to build runs from modular options, improving flexibility and consistency across local and containerized executions.

Walkthrough

Refactors the ollama module to use option-driven container startup. Updates the local process run signature to accept the image and customizer options, builds and validates the request from those options, applies env and wait strategy post-validation, and adjusts wait-strategy combination. The module's Run now detects local process usage, conditionally adds the GPU option, and invokes either the local process run or testcontainers.Run.
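
For orientation, a minimal caller-side sketch of the refactored entrypoint (the image tag is illustrative; ollama.Run, WithUseLocal and ConnectionString are the module's public API):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/testcontainers/testcontainers-go/modules/ollama"
)

func main() {
	ctx := context.Background()

	// Containerized path: Run assembles exposed ports, the wait strategy and,
	// for non-local runs, the GPU option, then delegates to testcontainers.Run.
	ctr, err := ollama.Run(ctx, "ollama/ollama:0.5.7")
	if err != nil {
		log.Fatalf("run ollama: %s", err)
	}
	defer func() { _ = ctr.Terminate(ctx) }()

	url, err := ctr.ConnectionString(ctx)
	if err != nil {
		log.Fatalf("connection string: %s", err)
	}
	fmt.Println(url)

	// Local path: the localProcess option is detected in opts, so Run routes
	// to the local process instead of starting a container (no GPU option).
	local, err := ollama.Run(ctx, "ollama/ollama:0.5.7", ollama.WithUseLocal("OLLAMA_DEBUG=true"))
	if err != nil {
		log.Fatalf("run ollama locally: %s", err)
	}
	defer func() { _ = local.Terminate(ctx) }()
}
```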

Changes

Cohort / File(s) — Summary

  • Local process API and flow (modules/ollama/local.go)
    Changes localProcess.run signature to run(ctx, img, opts); constructs GenericContainerRequest from img and opts, validates request, applies WaitingFor and Env to process state, and refines wait strategy composition for MultiStrategy.
  • Module Run refactor to options (modules/ollama/ollama.go)
    Replaces inline request construction with moduleOpts using ContainerCustomizers; detects localProcess in opts and routes to local.run; conditionally appends GPU option only for non-local runs; standardizes error messages; wraps non-local testcontainers.Run result in OllamaContainer.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant Ollama as Ollama.Run
  participant Local as localProcess
  participant TCRun as testcontainers.Run

  User->>Ollama: Run(ctx, img, opts...)
  Ollama->>Ollama: Build moduleOpts (ports, wait)
  Ollama->>Ollama: Detect localProcess in opts?
  alt local detected
    Ollama->>Local: run(ctx, img, moduleOpts)
    note right of Local: Build GenericContainerRequest from options<br/>Validate request<br/>Apply Env & WaitingFor<br/>Start process
    Local-->>Ollama: OllamaContainer or error
  else no local
    Ollama->>TCRun: Run(ctx, img, moduleOpts...)
    TCRun-->>Ollama: Container or error
    Ollama->>Ollama: Wrap as OllamaContainer
  end
  Ollama-->>User: OllamaContainer or error
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested labels

chore

Poem

I nudge the knobs, options in a row,
Hop-hop! pick local or the clouds to go.
Waits align, envs neatly spun,
GPU if needed—else I’m done.
Carrot-shaped flags wave at run—
Thump-thump, the container’s spun! 🥕🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title Check — ✅ Passed: The title “chore(ollama): use Run function” clearly summarizes the primary change of migrating the ollama module to the new Run function API and is concise and specific enough for team members to understand the core update at a glance.
  • Description Check — ✅ Passed: The description identifies that the PR switches the ollama module to use the Run function, explains the importance of migrating to the new API for consistency and leveraging updated functionality, and references the related issue, which directly aligns with the changes in the diff.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a1e747c and d5942dd.

📒 Files selected for processing (2)
  • modules/ollama/local.go (2 hunks)
  • modules/ollama/ollama.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
modules/ollama/local.go (5)
options.go (1)
  • ContainerCustomizer (22-24)
modules/ollama/ollama.go (1)
  • OllamaContainer (21-23)
generic.go (1)
  • GenericContainerRequest (21-27)
container.go (1)
  • ContainerRequest (131-171)
wait/all.go (1)
  • MultiStrategy (16-23)
modules/ollama/ollama.go (3)
options.go (3)
  • ContainerCustomizer (22-24)
  • WithExposedPorts (454-459)
  • WithWaitStrategy (366-368)
wait/host_port.go (1)
  • ForListeningPort (67-69)
modules/neo4j/neo4j.go (1)
  • Run (39-77)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test (1.25.x, modules/ollama) / test: modules/ollama/1.25.x
  • GitHub Check: test (1.24.x, modules/ollama) / test: modules/ollama/1.24.x
  • GitHub Check: Analyze (go)
🔇 Additional comments (15)
modules/ollama/ollama.go (11)

77-80: LGTM! Clean modular options initialization.

The base module options are well-structured with exposed ports and wait strategy, following the pattern from other modules like neo4j.
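
A hedged sketch of such a base option set, using the customizers listed in the code graph above (the helper name is a placeholder; the port 11434/tcp and the 60-second startup timeout match the reviewed wait strategy):

```go
package ollama

import (
	"time"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// baseModuleOpts sketches the base option set reviewed here: expose the Ollama
// API port and wait for it to be listening before considering the run ready.
func baseModuleOpts() []testcontainers.ContainerCustomizer {
	return []testcontainers.ContainerCustomizer{
		testcontainers.WithExposedPorts("11434/tcp"),
		testcontainers.WithWaitStrategy(
			wait.ForListeningPort("11434/tcp").WithStartupTimeout(60 * time.Second),
		),
	}
}
```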


82-89: LGTM! Efficient local process detection.

The loop correctly identifies a localProcess in the options and breaks early. This allows conditional logic downstream based on whether local execution is requested.
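
A sketch of that scan (the helper name is hypothetical; localProcess is the module's unexported local-run option type, so this would live in package ollama):

```go
// detectLocal returns the first localProcess customizer found among the user
// options, or nil, mirroring the early-exit loop described above.
func detectLocal(opts []testcontainers.ContainerCustomizer) *localProcess {
	for _, opt := range opts {
		if l, ok := opt.(*localProcess); ok {
			return l
		}
	}
	return nil
}
```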


91-96: LGTM! Correct conditional GPU handling.

GPU is only appended when NOT using a local process, which is the appropriate behavior since local processes don't require GPU device requests. The final merge of all options into moduleOpts ensures all customizers are applied.


98-101: LGTM! Clear execution path for local process.

When a local process is detected, the code correctly delegates to local.run() with the assembled module options, avoiding the container runtime entirely.


103-113: LGTM! Consistent error handling and container wrapping.

The non-local path uses testcontainers.Run() with proper nil-checks and error wrapping that matches the pattern used in other modules. The "run ollama" error prefix improves debugging.


77-80: LGTM! Clean migration to option-driven API.

The refactoring successfully adopts the new option-driven approach with moduleOpts, using WithExposedPorts and WithWaitStrategy. The wait strategy using ForListeningPort with a 60-second timeout is appropriate for the Ollama service startup. This pattern aligns well with other modules like neo4j.


82-89: LGTM! Efficient local process detection.

The loop correctly identifies a local process from the options using type assertion and breaks early once found. The comment on line 82 clearly explains the intent.


91-96: LGTM! Proper conditional GPU handling.

The conditional GPU addition correctly ensures that GPU resources are only requested for containerized execution (not local process mode). The logic flow properly isolates GPU handling from the local process path since:

  • GPU is added to opts only when local == nil (line 93)
  • Local execution path (lines 99-101) only runs when local != nil
  • This ensures GPU is only included in moduleOpts for testcontainers.Run() calls
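
As a fragment-style sketch of that assembly (variable and helper names, including withGpu, are stand-ins, not the PR's identifiers):

```go
// Request GPU resources only for the containerized path, then fold the
// user-supplied options into the module options.
if local == nil {
	opts = append(opts, withGpu())
}
moduleOpts = append(moduleOpts, opts...)
```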

98-101: LGTM! Clean separation of execution paths.

The conditional execution correctly branches between local process mode and containerized mode. The local process path receives moduleOpts which includes all the necessary customizers (exposed ports, wait strategy, and user options) without GPU.


103-107: LGTM! Successful migration to testcontainers.Run.

The migration to testcontainers.Run() is implemented correctly, passing the consolidated moduleOpts slice. The defensive nil check before wrapping in OllamaContainer ensures proper error handling even when the container creation partially succeeds.
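
A hedged fragment of what that path looks like (identifiers are illustrative; the "run ollama" prefix follows the review notes):

```go
// Containerized path: delegate to testcontainers.Run, wrap the container
// even on partial failure so callers can still clean it up, and standardize
// the error prefix.
ctr, err := testcontainers.Run(ctx, img, moduleOpts...)
var c *OllamaContainer
if ctr != nil {
	c = &OllamaContainer{Container: ctr}
}
if err != nil {
	return c, fmt.Errorf("run ollama: %w", err)
}
return c, nil
```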


110-110: LGTM! Improved error messaging.

The standardized error prefix "run ollama" improves error clarity and debugging experience compared to generic error messages.

modules/ollama/local.go (4)

94-106: LGTM! Clean request construction from options.

The new signature correctly builds a GenericContainerRequest from the image and applies each ContainerCustomizer in sequence with proper error handling. This aligns with the modular options pattern.
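
A fragment-style sketch of that construction (names are illustrative, not the diff's):

```go
// Build the request from the image, then let each customizer mutate it,
// failing fast on the first error.
req := testcontainers.GenericContainerRequest{
	ContainerRequest: testcontainers.ContainerRequest{
		Image: img,
	},
	Started: true,
}
for _, opt := range opts {
	if err := opt.Customize(&req); err != nil {
		return nil, fmt.Errorf("customize request: %w", err)
	}
}
```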


108-120: LGTM! Proper request validation and state transfer.

After validating the request, the code correctly applies WaitingFor and Env to the local process state, resetting and rebuilding the environment variables while capturing the log file name.
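
A rough, heavily hypothetical fragment of that state transfer (all field names and the log-file env var are assumptions; only the overall flow follows this review):

```go
// After validation, copy the wait strategy and environment from the request
// onto the local process state, capturing the log file name along the way.
if err := req.Validate(); err != nil {
	return nil, fmt.Errorf("validate request: %w", err)
}

l.waitFor = req.WaitingFor
l.env = l.env[:0] // reset, then rebuild from the validated request
for k, v := range req.Env {
	if k == localLogFileEnv { // hypothetical name of the log-file env var
		l.logName = v
	}
	l.env = append(l.env, k+"="+v)
}
```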


122-132: LGTM! Consistent error handling and container construction.

The start sequence and error handling follow the same pattern as the original implementation, with proper wrapping of errors using "start ollama" prefix.


730-734: Verify strategy replacement in MultiStrategy
Current code at modules/ollama/local.go replaces any non-empty MultiStrategy with just logStrategy, dropping user-provided strategies. Other modules (e.g., Pulsar) merge defaults and customs via append. Confirm if this reset is intentional or if we should preserve and append to existing strategies.
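
For comparison, a hypothetical append-style merge in the spirit of that suggestion (helper name and shape are assumptions, not the PR's code):

```go
package ollama

import (
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// appendLogWait keeps any strategies already present on the request and adds
// logStrategy, rather than replacing an existing MultiStrategy outright.
func appendLogWait(req *testcontainers.GenericContainerRequest, logStrategy wait.Strategy) {
	switch w := req.WaitingFor.(type) {
	case nil:
		req.WaitingFor = logStrategy
	case *wait.MultiStrategy:
		w.Strategies = append(w.Strategies, logStrategy)
	default:
		req.WaitingFor = wait.ForAll(req.WaitingFor, logStrategy)
	}
}
```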



```go
// modules/ollama/local.go — the reviewed hunk
if w, ok := req.WaitingFor.(*wait.MultiStrategy); ok && len(w.Strategies) == 0 {
	req.WaitingFor = logStrategy
} else {
	req.WaitingFor = wait.ForAll(req.WaitingFor, logStrategy)
}
```
Member Author

@mdelapenya mdelapenya Oct 7, 2025


This fix is for the case where the wait.MultiStrategy's internal slice of strategies gets cleared but the strategy itself is not set to nil. We probably need to fix it in the Walk strategy

cc/ @stevenh

Contributor


This looks like a bug - are you sure it's not supposed to be len(w.Strategies) > 0? Otherwise any existing strategies are lost...

@mdelapenya mdelapenya self-assigned this Oct 7, 2025
@mdelapenya mdelapenya added the chore Changes that do not impact the existing functionality label Oct 7, 2025
@mdelapenya mdelapenya merged commit 3697410 into testcontainers:main Oct 7, 2025
17 checks passed
@mdelapenya mdelapenya deleted the use-run-claude-ollama branch October 7, 2025 15:17

Labels

chore Changes that do not impact the existing functionality
