NOMYO Router

NOMYO Router is a transparent proxy for Ollama with model-deployment-aware routing.

[Screenshot: NOMYO Router 0.2.2 dashboard]

It sits between your frontend application and your Ollama backends and is transparent to both the frontend and the backend.

[Architecture diagram]

Installation

Copy or clone the repository and edit config.yaml, adding your Ollama backend servers and the max_concurrent_connections setting per endpoint. This value should match your OLLAMA_NUM_PARALLEL setting.

# config.yaml
endpoints:
  - http://ollama0:11434
  - http://ollama1:11434
  - http://ollama2:11434
  - https://api.openai.com/v1

# Maximum concurrent connections *per endpoint-model pair*
max_concurrent_connections: 2

# API keys for remote endpoints
# Set an environment variable such as OPENAI_KEY
# The keys must match the entries in the endpoints block exactly
api_keys:
  "http://ollama0:11434": "ollama"
  "http://ollama1:11434": "ollama"
  "http://ollama2:11434": "ollama"
  "https://api.openai.com/v1": "${OPENAI_KEY}"

Run NOMYO Router in a dedicated virtual environment. Create the environment, activate it, and install the requirements:

python3 -m venv .venv/router
source .venv/router/bin/activate
pip3 install -r requirements.txt

Then export your API key in the shell:

export OPENAI_KEY=YOUR_SECRET_API_KEY

Finally, start the router with uvicorn:

uvicorn router:app --host 127.0.0.1 --port 12434
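
Once the router is running, point your frontend at http://127.0.0.1:12434 instead of an Ollama backend. As a quick sanity check, you can also send a chat request straight to the router; the sketch below assumes the requests package is installed and that a model named "llama3.2" is pulled on at least one configured backend.

# Minimal sanity check against the router (not part of the repository).
# Assumes: pip install requests, the router running on 127.0.0.1:12434,
# and a model called "llama3.2" available on at least one backend.
import requests

response = requests.post(
    "http://127.0.0.1:12434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Hello from behind the router!"}],
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["message"]["content"])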

Routing

NOMYO Router accepts any Ollama request from your frontend application on the configured port and checks the available backends for that specific request. When the request is an embed(dings), chat, or generate call, it is forwarded to a single Ollama server, answered there, and sent back to the router, which forwards the response to the frontend.

If another request for the same model configuration arrives, NOMYO Router knows which model is running on which Ollama server and routes the request to a server where that model is already deployed.

If that server has already reached the configured maximum of concurrent connections, NOMYO Router routes the request to another Ollama server that serves the requested model and currently has the fewest connections, for fastest completion.

This way the Ollama backend servers are utilized more efficiently than with a simple weighted, round-robin, or least-connection approach.
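
The selection rule described above can be sketched roughly as follows. The data structures and values are made up for illustration and are not the router's actual internals:

# Rough sketch of the deployment-aware selection described above.
# `deployments` and `connections` are illustrative stand-ins, not the
# router's real internal state.

MAX_CONCURRENT = 2  # max_concurrent_connections from config.yaml

# Which endpoints currently have which models loaded.
deployments = {
    "http://ollama0:11434": {"llama3.2"},
    "http://ollama1:11434": {"llama3.2", "nomic-embed-text"},
    "http://ollama2:11434": set(),
}

# Active connections per (endpoint, model) pair.
connections = {
    ("http://ollama0:11434", "llama3.2"): 2,
    ("http://ollama1:11434", "llama3.2"): 0,
}


def pick_endpoint(model: str) -> str:
    """Prefer endpoints where the model is already deployed and below the
    connection limit; otherwise fall back to the least-loaded endpoint."""
    candidates = [ep for ep, models in deployments.items() if model in models]
    free = [ep for ep in candidates
            if connections.get((ep, model), 0) < MAX_CONCURRENT]
    pool = free or candidates or list(deployments)
    return min(pool, key=lambda ep: connections.get((ep, model), 0))


print(pick_endpoint("llama3.2"))  # -> http://ollama1:11434 (deployed, fewest connections)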

[Routing diagram]

NOMYO Router also supports OpenAI-API-compatible /v1 backend servers.
