70 changes: 70 additions & 0 deletions README.md
@@ -89,6 +89,36 @@ Wait for the model to download, then visit [http://localhost:3000](http://localhost:3000)
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```

#### Using an External Ollama Instance

If you already have Ollama running on your host machine (outside Docker), set `OLLAMA_URL` to use `host.docker.internal` instead of `localhost`:

```bash
# Docker Desktop (macOS/Windows)
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

# Linux (add extra_hosts or use host IP)
docker compose -f docker-compose.prod.yml up -d # Then set OLLAMA_URL to your host's IP
```

**Why?** When running inside Docker, `localhost` refers to the container itself, not your host machine. `host.docker.internal` is a special DNS name that resolves to the host.

On Linux, you can either:
- Use your host machine's actual IP address (e.g., `http://192.168.1.100:11434`)
- Add `extra_hosts: ["host.docker.internal:host-gateway"]` to the `simstudio` service in your compose file, as shown below
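
For the second option, a minimal sketch of the override is shown below (assuming the service is named `simstudio`, as in `docker-compose.prod.yml`):

```yaml
services:
  simstudio:
    extra_hosts:
      # Map host.docker.internal to the host gateway so the container can reach the host
      - "host.docker.internal:host-gateway"
```

With this in place, `OLLAMA_URL=http://host.docker.internal:11434` resolves to the host on Linux as well.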

#### Using vLLM

Sim also supports [vLLM](https://docs.vllm.ai/) for self-hosted models with an OpenAI-compatible API:

```bash
# Set these environment variables
VLLM_BASE_URL=http://your-vllm-server:8000
VLLM_API_KEY=your_optional_api_key # Only if your vLLM instance requires auth
```

When running with Docker, use `host.docker.internal` if vLLM is on your host machine (same as Ollama above).
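
For example, if Sim runs in Docker and vLLM listens on port 8000 on the host (adjust the host, port, and any `/v1` path suffix to match your deployment):

```bash
# vLLM on the Docker host, reached via host.docker.internal
VLLM_BASE_URL=http://host.docker.internal:8000 docker compose -f docker-compose.prod.yml up -d
```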

### Self-hosted: Dev Containers

1. Open VS Code with the [Remote - Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
@@ -190,6 +220,46 @@ Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the `COPILOT_API_KEY` environment variable in your self-hosted `apps/sim/.env` file to that value

## Environment Variables

Key environment variables for self-hosted deployments (see `apps/sim/.env.example` for the full list):

| Variable | Required | Description |
|----------|----------|-------------|
| `DATABASE_URL` | Yes | PostgreSQL connection string with pgvector |
| `BETTER_AUTH_SECRET` | Yes | Auth secret (`openssl rand -hex 32`) |
| `BETTER_AUTH_URL` | Yes | Your app URL (e.g., `http://localhost:3000`) |
| `NEXT_PUBLIC_APP_URL` | Yes | Public app URL (same as above) |
| `ENCRYPTION_KEY` | Yes | Encryption key (`openssl rand -hex 32`) |
| `OLLAMA_URL` | No | Ollama server URL (default: `http://localhost:11434`) |
| `VLLM_BASE_URL` | No | vLLM server URL for self-hosted models |
| `COPILOT_API_KEY` | No | API key from sim.ai for Copilot features |
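
As a starting point, a minimal `.env` for a local Docker run might look like the sketch below (illustrative values; generate your own secrets):

```bash
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
BETTER_AUTH_SECRET=<openssl rand -hex 32>
BETTER_AUTH_URL=http://localhost:3000
NEXT_PUBLIC_APP_URL=http://localhost:3000
ENCRYPTION_KEY=<openssl rand -hex 32>
```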

## Troubleshooting

### Ollama models not showing in dropdown (Docker)

If you're running Ollama on your host machine and Sim in Docker, change `OLLAMA_URL` from `localhost` to `host.docker.internal`:

```bash
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
```

See [Using an External Ollama Instance](#using-an-external-ollama-instance) for details.

### Database connection issues

Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.
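
To enable the extension manually in the Docker database, something like the following should work (this assumes the default `db` service, `postgres` user, and `simstudio` database from `docker-compose.prod.yml`):

```bash
# Create the pgvector extension if it is missing
docker compose -f docker-compose.prod.yml exec db psql -U postgres -d simstudio -c "CREATE EXTENSION IF NOT EXISTS vector;"
```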

### Port conflicts

If ports 3000, 3002, or 5432 are in use, configure alternatives:

```bash
# Custom ports
NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d
```

## Tech Stack

- **Framework**: [Next.js](https://nextjs.org/) (App Router)
3 changes: 2 additions & 1 deletion apps/docs/content/docs/en/meta.json
@@ -13,7 +13,8 @@
"variables",
"execution",
"permissions",
"sdks"
"sdks",
"self-hosting"
],
"defaultOpen": false
}
150 changes: 150 additions & 0 deletions apps/docs/content/docs/en/self-hosting/docker.mdx
@@ -0,0 +1,150 @@
---
title: Docker
description: Deploy Sim Studio with Docker Compose
---

import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Callout } from 'fumadocs-ui/components/callout'

## Quick Start

```bash
# Clone and start
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
```

Open [http://localhost:3000](http://localhost:3000)

## Production Setup

### 1. Configure Environment

```bash
# Generate secrets
cat > .env << EOF
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
BETTER_AUTH_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
INTERNAL_API_SECRET=$(openssl rand -hex 32)
NEXT_PUBLIC_APP_URL=https://sim.yourdomain.com
BETTER_AUTH_URL=https://sim.yourdomain.com
NEXT_PUBLIC_SOCKET_URL=https://sim.yourdomain.com
EOF
```

### 2. Start Services

```bash
docker compose -f docker-compose.prod.yml up -d
```

### 3. Set Up SSL

<Tabs items={['Caddy (Recommended)', 'Nginx + Certbot']}>
<Tab value="Caddy (Recommended)">
Caddy automatically handles SSL certificates.

```bash
# Install Caddy
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
```

Create `/etc/caddy/Caddyfile`:
```
sim.yourdomain.com {
reverse_proxy localhost:3000

handle /socket.io/* {
reverse_proxy localhost:3002
}
}
```

```bash
sudo systemctl restart caddy
```
</Tab>
<Tab value="Nginx + Certbot">
```bash
# Install Nginx and Certbot
sudo apt install nginx certbot python3-certbot-nginx -y
```

Create `/etc/nginx/sites-available/sim`:
```
server {
    listen 80;
    server_name sim.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /socket.io/ {
        proxy_pass http://127.0.0.1:3002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

```bash
# Enable the site and obtain a certificate
sudo ln -s /etc/nginx/sites-available/sim /etc/nginx/sites-enabled/
sudo certbot --nginx -d sim.yourdomain.com
```
</Tab>
</Tabs>

## Ollama

```bash
# With GPU
docker compose -f docker-compose.ollama.yml --profile gpu --profile setup up -d

# CPU only
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
```

Pull additional models:
```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.2
```
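
To see which models are already available inside the container:

```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama list
```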

### External Ollama

If Ollama runs on your host machine (not in Docker):

```bash
# macOS/Windows
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

# Linux - use your host IP
OLLAMA_URL=http://192.168.1.100:11434 docker compose -f docker-compose.prod.yml up -d
```

<Callout type="warning">
Inside Docker, `localhost` refers to the container, not your host. Use `host.docker.internal` or your host's IP.
</Callout>

## Commands

```bash
# View logs
docker compose -f docker-compose.prod.yml logs -f simstudio

# Stop
docker compose -f docker-compose.prod.yml down

# Update
docker compose -f docker-compose.prod.yml pull && docker compose -f docker-compose.prod.yml up -d

# Backup database
docker compose -f docker-compose.prod.yml exec db pg_dump -U postgres simstudio > backup.sql
```
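
To restore such a backup, the dump can be piped back into `psql` (a sketch, assuming the same service and database names as above):

```bash
# Restore a pg_dump backup into the running db container
docker compose -f docker-compose.prod.yml exec -T db psql -U postgres simstudio < backup.sql
```
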
87 changes: 87 additions & 0 deletions apps/docs/content/docs/en/self-hosting/environment-variables.mdx
@@ -0,0 +1,87 @@
---
title: Environment Variables
description: Configuration reference for Sim Studio
---

import { Callout } from 'fumadocs-ui/components/callout'

## Required

| Variable | Description |
|----------|-------------|
| `DATABASE_URL` | PostgreSQL connection string |
| `BETTER_AUTH_SECRET` | Auth secret (generate with `openssl rand -hex 32`) |
| `BETTER_AUTH_URL` | Your app URL |
| `ENCRYPTION_KEY` | Encryption key (generate with `openssl rand -hex 32`) |
| `INTERNAL_API_SECRET` | Internal API secret (generate with `openssl rand -hex 32`) |
| `NEXT_PUBLIC_APP_URL` | Public app URL |
| `NEXT_PUBLIC_SOCKET_URL` | WebSocket URL (default: `http://localhost:3002`) |

## AI Providers

| Variable | Provider |
|----------|----------|
| `OPENAI_API_KEY` | OpenAI |
| `ANTHROPIC_API_KEY_1` | Anthropic Claude |
| `GEMINI_API_KEY_1` | Google Gemini |
| `MISTRAL_API_KEY` | Mistral |
| `OLLAMA_URL` | Ollama (default: `http://localhost:11434`) |

<Callout type="info">
For load balancing, add multiple keys with `_1`, `_2`, `_3` suffixes (e.g., `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`). Works with OpenAI, Anthropic, and Gemini.
</Callout>
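
For example, two OpenAI keys could sit side by side in `.env` (placeholder values):

```bash
OPENAI_API_KEY_1=sk-...
OPENAI_API_KEY_2=sk-...
```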

<Callout type="info">
In Docker, use `OLLAMA_URL=http://host.docker.internal:11434` for host-machine Ollama.
</Callout>

### Azure OpenAI

| Variable | Description |
|----------|-------------|
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key |
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI endpoint URL |
| `AZURE_OPENAI_API_VERSION` | API version (e.g., `2024-02-15-preview`) |

### vLLM (Self-Hosted)

| Variable | Description |
|----------|-------------|
| `VLLM_BASE_URL` | vLLM server URL (e.g., `http://localhost:8000/v1`) |
| `VLLM_API_KEY` | Optional bearer token for vLLM |

## OAuth Providers

| Variable | Description |
|----------|-------------|
| `GOOGLE_CLIENT_ID` | Google OAuth client ID |
| `GOOGLE_CLIENT_SECRET` | Google OAuth client secret |
| `GITHUB_CLIENT_ID` | GitHub OAuth client ID |
| `GITHUB_CLIENT_SECRET` | GitHub OAuth client secret |

## Optional

| Variable | Description |
|----------|-------------|
| `API_ENCRYPTION_KEY` | Encrypts stored API keys (generate with `openssl rand -hex 32`) |
| `COPILOT_API_KEY` | API key from sim.ai for Copilot features |
| `ADMIN_API_KEY` | Admin API key for GitOps operations |
| `RESEND_API_KEY` | Email service for notifications |
| `ALLOWED_LOGIN_DOMAINS` | Restrict signups to domains (comma-separated) |
| `ALLOWED_LOGIN_EMAILS` | Restrict signups to specific emails (comma-separated) |
| `DISABLE_REGISTRATION` | Set to `true` to disable new user signups |
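
For instance, to limit signups to two company domains (illustrative values):

```bash
ALLOWED_LOGIN_DOMAINS=example.com,example.org
```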

## Example .env

```bash
DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
BETTER_AUTH_SECRET=<openssl rand -hex 32>
BETTER_AUTH_URL=https://sim.yourdomain.com
ENCRYPTION_KEY=<openssl rand -hex 32>
INTERNAL_API_SECRET=<openssl rand -hex 32>
NEXT_PUBLIC_APP_URL=https://sim.yourdomain.com
NEXT_PUBLIC_SOCKET_URL=https://sim.yourdomain.com
OPENAI_API_KEY=sk-...
```

See `apps/sim/.env.example` for all options.
50 changes: 50 additions & 0 deletions apps/docs/content/docs/en/self-hosting/index.mdx
@@ -0,0 +1,50 @@
---
title: Self-Hosting
description: Deploy Sim Studio on your own infrastructure
---

import { Card, Cards } from 'fumadocs-ui/components/card'
import { Callout } from 'fumadocs-ui/components/callout'

Deploy Sim Studio on your own infrastructure with Docker or Kubernetes.

## Requirements

| Resource | Minimum | Recommended |
|----------|---------|-------------|
| CPU | 2 cores | 4+ cores |
| RAM | 12 GB | 16+ GB |
| Storage | 20 GB SSD | 50+ GB SSD |
| Docker | 20.10+ | Latest |

## Quick Start

```bash
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
```

Open [http://localhost:3000](http://localhost:3000)

## Deployment Options

<Cards>
<Card title="Docker" href="/self-hosting/docker">
Deploy with Docker Compose on any server
</Card>
<Card title="Kubernetes" href="/self-hosting/kubernetes">
Deploy with Helm on Kubernetes clusters
</Card>
<Card title="Cloud Platforms" href="/self-hosting/platforms">
Railway, DigitalOcean, AWS, Azure, GCP guides
</Card>
</Cards>

## Architecture

| Component | Port | Description |
|-----------|------|-------------|
| simstudio | 3000 | Main application |
| realtime | 3002 | WebSocket server |
| db | 5432 | PostgreSQL with pgvector |
| migrations | - | Database migrations (runs once) |
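
Once the stack is running, the services and their published ports can be checked with:

```bash
docker compose -f docker-compose.prod.yml ps
```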