
Add server connection flags to docker model commands#35

Draft
Copilot wants to merge 4 commits into main from copilot/add-options-to-docker-model

Conversation


Copilot AI commented Oct 16, 2025

This PR adds flexible server connection options to the Docker model CLI commands, enabling users to connect to different model-runner instances and OpenAI-compatible endpoints.

Changes

Pull and Push Commands

Added --host and --port flags to docker model pull and docker model push commands for connecting to model-runner instances at custom addresses:

# Pull from a remote model-runner instance
docker model pull --host 192.168.1.100 --port 8080 ai/gemma3

# Push to a custom port
docker model push --port 9000 myorg/mymodel

Run and List Commands

Added comprehensive connection options to docker model run and docker model list commands:

Model-Runner Endpoints:

  • --host and --port - Connect to model-runner at custom addresses

OpenAI-Compatible Endpoints:

# Use llama.cpp server directly
docker model run --llamacpp ai/gemma3 "Tell me a joke"

# Connect to Ollama
docker model list --ollama

# Use OpenRouter (requires OPENAI_API_KEY)
export OPENAI_API_KEY=your-api-key
docker model run --openrouter gpt-4 "What is AI?"

# Use a custom endpoint
docker model run --url http://my-server.com:8080/v1 my-model "Hello"

Implementation Details

  • Smart path routing: The implementation automatically adjusts API paths for external OpenAI endpoints, making /chat/completions and /models calls directly instead of routing through model-runner's proxy
  • Conflict detection: Proper error handling prevents invalid flag combinations (e.g., using --dmr and --llamacpp together)
  • API key support: The OPENAI_API_KEY environment variable is supported for OpenAI-compatible endpoints (required for OpenRouter, optional for others)
  • Backward compatible: All existing functionality remains unchanged; new flags are optional
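The resolution and conflict-detection behavior described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the names `resolveServerURL` and `connFlags` are assumptions, though the default URLs and the `OPENAI_API_KEY` requirement come from the flag descriptions in the original prompt below.

```go
// Hypothetical sketch of server URL resolution and flag-conflict detection.
// resolveServerURL and connFlags are illustrative names, not the PR's identifiers.
package main

import (
	"errors"
	"fmt"
	"os"
)

// Default base URLs, as listed in the original prompt.
const (
	dmrDefault        = "http://127.0.0.1:12434/engines/llama.cpp/v1"
	llamacppDefault   = "http://127.0.0.1:8080/v1"
	ollamaDefault     = "http://127.0.0.1:11434/v1"
	openrouterDefault = "https://openrouter.ai/api/v1"
)

// connFlags mirrors the backend-selection flags on run/list.
type connFlags struct {
	dmr, llamacpp, ollama, openrouter bool
	url                               string
}

// resolveServerURL picks a base URL from the flags, rejecting conflicting
// combinations and enforcing the OpenRouter API-key requirement.
func resolveServerURL(f connFlags) (string, error) {
	selected := []string{}
	url := dmrDefault // model-runner is the default backend
	if f.dmr {
		selected = append(selected, "--dmr")
	}
	if f.llamacpp {
		selected = append(selected, "--llamacpp")
		url = llamacppDefault
	}
	if f.ollama {
		selected = append(selected, "--ollama")
		url = ollamaDefault
	}
	if f.openrouter {
		selected = append(selected, "--openrouter")
		url = openrouterDefault
	}
	if f.url != "" {
		selected = append(selected, "--url")
		url = f.url
	}
	if len(selected) > 1 {
		return "", fmt.Errorf("conflicting flags: %v", selected)
	}
	if f.openrouter && os.Getenv("OPENAI_API_KEY") == "" {
		return "", errors.New("--openrouter requires OPENAI_API_KEY to be set")
	}
	return url, nil
}

func main() {
	u, _ := resolveServerURL(connFlags{llamacpp: true})
	fmt.Println(u) // http://127.0.0.1:8080/v1

	_, err := resolveServerURL(connFlags{dmr: true, llamacpp: true})
	fmt.Println(err) // conflicting flags: [--dmr --llamacpp]
}
```

With no backend flag set, the sketch falls back to the model-runner default, which matches the backward-compatibility point above.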

Testing

  • Added comprehensive test suite with 14 test scenarios for server URL resolution logic
  • Added flag validation tests for all modified commands
  • Verified backend validation and API key handling
  • All existing tests continue to pass (27 tests total)

Files Changed

  • cmd/cli/commands/pull.go, push.go, run.go, list.go - Added connection flags
  • cmd/cli/commands/backend.go - Added server URL resolution logic
  • cmd/cli/commands/utils.go - Added context override helpers
  • cmd/cli/desktop/context.go - Added external endpoint support
  • cmd/cli/desktop/desktop.go - Updated API call routing for external endpoints
  • cmd/cli/commands/backend_test.go (new) - Comprehensive tests
  • cmd/cli/commands/run_test.go - Additional flag tests
Original prompt

Add the following options to "docker model pull" and "docker model push":

--host HOST Host address to bind Docker Model Runner (default "127.0.0.1")
--port PORT Docker container port for Docker Model Runner (default: 12434)

And add the following options to "docker model run" and "docker model list":

--host HOST Host address to bind Docker Model Runner (default "127.0.0.1")
--port PORT Docker container port for Docker Model Runner (default: 12434)
--url URL Base URL for the model API
--dmr Use docker model runner (default: http://127.0.0.1:12434/engines/llama.cpp/v1)
--llamacpp Use llama.cpp server (default: http://127.0.0.1:8080/v1)
--ollama Use ollama server (default: http://127.0.0.1:11434/v1)
--openrouter Use openrouter server (default: https://openrouter.ai/api/v1)

"--host" and "--port" should be used to connect to model-runner endpoints; all the other options should be used to connect to OpenAI endpoints. It should also be possible to set API keys for OpenAI endpoints via the OPENAI_API_KEY environment variable.



Copilot AI and others added 2 commits October 16, 2025 10:39
Co-authored-by: ericcurtin <1694275+ericcurtin@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add options for docker model commands" to "Add server connection flags to docker model commands" on October 16, 2025
Copilot AI requested a review from ericcurtin October 16, 2025 10:49
Signed-off-by: Eric Curtin <eric.curtin@docker.com>
ericcurtin force-pushed the copilot/add-options-to-docker-model branch from 57ef7d2 to 161b444 on October 16, 2025 11:55