Merged
63 changes: 63 additions & 0 deletions DEVELOPMENT.md
@@ -0,0 +1,63 @@
# AutoCoder Development Roadmap

This roadmap breaks work into clear phases so you can pick the next most valuable items quickly.

## Phase 0 — Baseline (ship ASAP)
- **PR discipline:** Enforce branch protection requiring “PR Check” (already configured in workflows; ensure GitHub rule is on).
- **Secrets hygiene:** Move all deploy secrets into repo/environment secrets; prohibit `.env` commits via pre-commit hook.
- **Smoke tests:** Keep `/health` and `/readiness` endpoints green; add UI smoke (landing page loads) to CI.

## Phase 1 — Reliability & Observability
- **Structured logging:** Add JSON logging for FastAPI (uvicorn access + app logs) with request IDs; forward to stdout for Docker/Traefik.
- **Error reporting:** Wire Sentry (or OpenTelemetry + OTLP) for backend exceptions and front-end errors.
- **Metrics:** Expose `/metrics` (Prometheus) for FastAPI; Traefik also has a metrics option that can be enabled once a scraper is available.
- **Tracing:** Add OTEL middleware to FastAPI; propagate trace IDs through to Claude/Gemini calls when possible.
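The structured-logging bullet can be sketched with the standard library alone: a JSON formatter plus a contextvar-carried request ID that FastAPI middleware would set per request. The field names here are illustrative, not an existing convention in this repo.

```python
import json
import logging
import uuid
from contextvars import ContextVar

# Request ID propagated per task; middleware would set this once per request.
request_id: ContextVar[str] = ContextVar("request_id", default="-")

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line on stdout."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "request_id": request_id.get(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("autocoder")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

request_id.set(str(uuid.uuid4()))
logger.info("health check passed")  # one JSON line, ready for Docker log drivers
```

One JSON object per line keeps Docker/Traefik log shipping trivial; a real setup would also attach the formatter to uvicorn's access logger.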

## Phase 2 — Platform & DevX
- **Local dev parity:** Add `docker-compose.dev.yml` with hot-reload for FastAPI + Vite UI; document one-command setup.
- **Makefile/taskfile:** Common commands (`make dev`, `make test`, `make lint`, `make format`, `make seed`).
- **Pre-commit:** Ruff, mypy, black (if adopted), eslint/prettier for `ui/`.
- **Typed APIs:** Add mypy strict mode to `server/` and type `schemas.py` fully (Pydantic v2 ConfigDict).

## Phase 3 — Product & Agent Quality
- **Model selection UI:** Let users choose assistant provider (Claude/Gemini) in settings; display active provider badge in chat.
- **Tooling guardrails:** For Gemini (chat-only), show a “no tools” notice in the UI and fall back to Claude when tools are needed.
- **Conversation persistence:** Add pagination/search over assistant history; export conversation to file.
- **Feature board:** Surface feature stats/graph from MCP in the UI (read-only dashboard).
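The fallback bullet above implies a small routing rule: use Gemini only for plain chat, and route to Claude whenever tools are required or Gemini is unconfigured. A minimal sketch; the function and return values are hypothetical, not an existing API in this repo:

```python
def select_provider(needs_tools: bool, gemini_configured: bool) -> str:
    """Route a request to a provider; Gemini is chat-only in this design."""
    if needs_tools or not gemini_configured:
        return "claude"
    return "gemini"

# Tool-using agent runs always land on Claude; plain chat may use Gemini.
assert select_provider(needs_tools=True, gemini_configured=True) == "claude"
assert select_provider(needs_tools=False, gemini_configured=True) == "gemini"
```

Keeping the rule in one pure function makes the provider badge in the UI and the backend routing trivially consistent.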

## Phase 4 — Security & Compliance
- **AuthN/AuthZ:** Add optional login (JWT/OIDC) gate for UI/API; role for “admin” vs “viewer” at least.
- **Rate limiting:** Enable per-IP rate limits at Traefik and per-token limits in FastAPI.
- **Audit trails:** Log agent actions and feature state changes with user identity.
- **Headers/HTTPS:** HSTS via Traefik, content-security-policy header from FastAPI.
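The per-token rate-limiting bullet could start as an in-process token bucket; a sketch under the assumption of a single worker (a real deployment behind several workers would likely keep the bucket in Redis):

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    rate: float      # tokens refilled per second
    capacity: float  # burst size
    tokens: float = field(init=False)
    updated: float = field(init=False)

    def __post_init__(self) -> None:
        self.tokens = self.capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. 5 requests/second sustained, bursts of 2
bucket = TokenBucket(rate=5.0, capacity=2.0)
```

Traefik's built-in rate-limit middleware covers the per-IP layer; this sketch is only for the per-token layer inside FastAPI.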

## Phase 5 — Performance & Scale
- **Caching:** CDN/Traefik static cache for UI assets; server-side cache for model list/status endpoints.
- **Worker separation:** Optionally split agent runner from API via separate services and queues (e.g., Redis/RQ or Celery).
- **Background jobs:** Move long-running tasks to scheduler/worker with backoff and retries.
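The backoff-and-retries bullet amounts to a small wrapper around the job body. A sketch with the sleep function injected so the delays are observable in tests; names are illustrative:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry(fn: Callable[[], T], *, attempts: int = 3, base_delay: float = 0.5,
          sleep: Callable[[float], None] = time.sleep) -> T:
    """Call fn, retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise AssertionError("unreachable")
```

A worker framework (RQ/Celery, per the bullet above) provides this natively; the sketch is the minimal stand-in until one is adopted.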

## Phase 6 — Testing & Quality Gates
- **Backend tests:** Add pytest suite for key routers (`/api/setup/status`, assistant chat happy-path with mock Claude/Gemini).
- **Frontend tests:** Add Vitest + React Testing Library smoke tests for core pages (dashboard loads, settings save).
- **E2E:** Playwright happy-path (login optional, start agent, view logs).
- **Coverage:** Fail CI if coverage drops below threshold (start at 60–70%).
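The backend-tests bullet is easiest to start where the logic is pure. For example, the credential check in `setup_status` can be factored over an injected env dict and asserted directly; the helper name here is hypothetical (the real endpoint reads `os.environ`):

```python
def credentials_available(env: dict, has_claude_config: bool) -> bool:
    """Mirror of the setup_status credential check, over an injected env dict."""
    glm = bool(env.get("ANTHROPIC_BASE_URL") and env.get("ANTHROPIC_AUTH_TOKEN"))
    gemini = bool(env.get("GEMINI_API_KEY"))
    return has_claude_config or glm or gemini

# pytest-style assertions
assert credentials_available({"GEMINI_API_KEY": "k"}, has_claude_config=False)
assert not credentials_available({"ANTHROPIC_BASE_URL": "u"}, has_claude_config=False)
```

Factoring like this keeps the router thin and lets the mocked Claude/Gemini happy-path tests focus on transport, not env plumbing.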

## Phase 7 — Deployment & Ops
- **Blue/green deploy:** Tag images `:sha` + `:latest` (CI already does this) and use Traefik service labels to toggle between deployments.
- **Backups:** Snapshot `~/.autocoder` data volume; document restore.
- **Runbooks:** Add `RUNBOOK.md` for common ops (restart, rotate keys, renew certs, roll back).

## Phase 8 — Documentation & Onboarding
- **Getting started:** Short path for “run locally in 5 minutes” (scripted).
- **Config matrix:** Document required/optional env vars (Claude, Gemini, DuckDNS, Traefik, TLS).
- **Architecture:** One-page diagram: UI ↔ FastAPI ↔ Agent subprocess ↔ Claude/Gemini; MCP servers; Traefik front.

## Stretch Ideas
- **Telemetry-driven tuning:** Auto-select model/provider based on latency/cost SLA.
- **Cost controls:** Show per-run token/cost estimates; configurable budgets.
- **Offline/edge mode:** Ollama provider toggle with cached models.

## How to use this roadmap
- Pick the next phase that unblocks your current goal (reliability → platform → product).
- Keep PRs small and scoped to one bullet.
- Update this document when a bullet ships or is reprioritized.
7 changes: 7 additions & 0 deletions README.md
@@ -35,6 +35,13 @@ You need one of the following:
- **Claude Pro/Max Subscription** - Use `claude login` to authenticate (recommended)
- **Anthropic API Key** - Pay-per-use from https://console.anthropic.com/

### Optional: Gemini API (assistant chat only)
- `GEMINI_API_KEY` (required)
- `GEMINI_MODEL` (optional, default `gemini-1.5-flash`)
- `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta/openai`)

> **Review comment (severity: high):** The default GEMINI_BASE_URL is incorrect. According to Google's documentation for the OpenAI compatibility layer, the base URL should be `https://generativelanguage.googleapis.com/v1beta`. The `/openai` suffix is not part of the official endpoint and will cause connection errors with the default configuration.
>
> Suggested change:
> - `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta/openai`)
> + `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta`)

Notes: Gemini is used for assistant chat when configured; coding agents still run on Claude/Anthropic (tools are not available in Gemini mode).

---

## Quick Start
40 changes: 40 additions & 0 deletions docker-compose.traefik.yml
@@ -0,0 +1,40 @@
version: "3.9"

services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
      - --certificatesresolvers.le.acme.email=${LETSENCRYPT_EMAIL}
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
    networks:
      - traefik-proxy

  autocoder:
    networks:
      - traefik-proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.autocoder.rule=Host(`${DOMAIN}`)
      - traefik.http.routers.autocoder.entrypoints=websecure
      - traefik.http.routers.autocoder.tls.certresolver=le
      - traefik.http.services.autocoder.loadbalancer.server.port=${APP_PORT:-8888}
      - traefik.http.routers.autocoder-web.rule=Host(`${DOMAIN}`)
      - traefik.http.routers.autocoder-web.entrypoints=web
      - traefik.http.routers.autocoder-web.middlewares=redirect-to-https
      - traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https

networks:
  traefik-proxy:
    external: true
1 change: 1 addition & 0 deletions requirements.txt
@@ -10,6 +10,7 @@ aiofiles>=24.0.0
apscheduler>=3.10.0,<4.0.0
pywinpty>=2.0.0; sys_platform == "win32"
pyyaml>=6.0.0
openai>=1.52.0

# Dev dependencies
ruff>=0.8.0
133 changes: 133 additions & 0 deletions scripts/deploy.sh
@@ -0,0 +1,133 @@
#!/usr/bin/env bash

# One-click Docker deploy for AutoCoder on a VPS with DuckDNS + Traefik + Let's Encrypt.
# Prompts for domain, DuckDNS token, email, repo, branch, and target install path.

set -euo pipefail

if [[ $EUID -ne 0 ]]; then
  echo "Please run as root (sudo)." >&2
  exit 1
fi

prompt_required() {
  local var_name="$1" prompt_msg="$2"
  local value
  while true; do
    read -r -p "$prompt_msg: " value
    if [[ -n "$value" ]]; then
      printf -v "$var_name" '%s' "$value"
      export "$var_name"
      return
    fi
    echo "Value cannot be empty."
  done
}

echo "=== AutoCoder VPS Deploy (Docker + Traefik + DuckDNS + Let's Encrypt) ==="

prompt_required DOMAIN "Enter your DuckDNS domain (e.g., myapp.duckdns.org)"
prompt_required DUCKDNS_TOKEN "Enter your DuckDNS token"
prompt_required LETSENCRYPT_EMAIL "Enter email for Let's Encrypt notifications"

read -r -p "Git repo URL [https://github.com/heidi-dang/autocoder.git]: " REPO_URL
REPO_URL=${REPO_URL:-https://github.com/heidi-dang/autocoder.git}

read -r -p "Git branch to deploy [main]: " DEPLOY_BRANCH
DEPLOY_BRANCH=${DEPLOY_BRANCH:-main}

read -r -p "Install path [/opt/autocoder]: " APP_DIR
APP_DIR=${APP_DIR:-/opt/autocoder}

read -r -p "App internal port (container) [8888]: " APP_PORT
APP_PORT=${APP_PORT:-8888}

echo
echo "Domain: $DOMAIN"
echo "Repo: $REPO_URL"
echo "Branch: $DEPLOY_BRANCH"
echo "Path: $APP_DIR"
echo
read -r -p "Proceed? [y/N]: " CONFIRM
if [[ "${CONFIRM,,}" != "y" ]]; then
  echo "Aborted."
  exit 1
fi

ensure_packages() {
  echo "Installing Docker & prerequisites..."
  apt-get update -y
  apt-get install -y ca-certificates curl git gnupg
  install -m 0755 -d /etc/apt/keyrings
  if [[ ! -f /etc/apt/keyrings/docker.gpg ]]; then
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    chmod a+r /etc/apt/keyrings/docker.gpg
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" > /etc/apt/sources.list.d/docker.list
    apt-get update -y
  fi
  apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  systemctl enable --now docker
}

configure_duckdns() {
  echo "Configuring DuckDNS..."
  local cron_file="/etc/cron.d/duckdns"
  cat > "$cron_file" <<EOF
*/5 * * * * root curl -fsS "https://www.duckdns.org/update?domains=$DOMAIN&token=$DUCKDNS_TOKEN&ip=" >/var/log/duckdns.log 2>&1
EOF
  chmod 644 "$cron_file"
  # Run once immediately
  curl -fsS "https://www.duckdns.org/update?domains=$DOMAIN&token=$DUCKDNS_TOKEN&ip=" >/var/log/duckdns.log 2>&1 || true
}

clone_repo() {
  if [[ -d "$APP_DIR/.git" ]]; then
    echo "Repo already exists, pulling latest..."
    git -C "$APP_DIR" fetch --all
    git -C "$APP_DIR" checkout "$DEPLOY_BRANCH"
    git -C "$APP_DIR" pull --ff-only origin "$DEPLOY_BRANCH"
  else
    echo "Cloning repository..."
    mkdir -p "$APP_DIR"
    git clone --branch "$DEPLOY_BRANCH" "$REPO_URL" "$APP_DIR"
  fi
}

write_env() {
  echo "Writing deploy env (.env.deploy)..."
  cat > "$APP_DIR/.env.deploy" <<EOF
DOMAIN=$DOMAIN
LETSENCRYPT_EMAIL=$LETSENCRYPT_EMAIL
APP_PORT=$APP_PORT
EOF
  echo "DuckDNS token stored in /etc/cron.d/duckdns (not in repo)."
}

prepare_ssl_storage() {
  mkdir -p "$APP_DIR/letsencrypt"
  touch "$APP_DIR/letsencrypt/acme.json"
  chmod 600 "$APP_DIR/letsencrypt/acme.json"
}

run_compose() {
  echo "Bringing up stack with Traefik reverse proxy and TLS..."
  cd "$APP_DIR"
  docker network inspect traefik-proxy >/dev/null 2>&1 || docker network create traefik-proxy
  docker compose --env-file .env.deploy -f docker-compose.yml -f docker-compose.traefik.yml pull || true
  docker compose --env-file .env.deploy -f docker-compose.yml -f docker-compose.traefik.yml up -d --build
}

ensure_packages
configure_duckdns
clone_repo
write_env
prepare_ssl_storage
run_compose

echo
echo "Deployment complete."
echo "Check: http://$DOMAIN (will redirect to https after cert is issued)."
echo "Logs: docker compose -f docker-compose.yml -f docker-compose.traefik.yml logs -f"
echo "To update: rerun this script; it will git pull and restart."
80 changes: 80 additions & 0 deletions server/gemini_client.py
@@ -0,0 +1,80 @@
"""
Lightweight Gemini API client (OpenAI-compatible endpoint).

Uses Google's OpenAI-compatible Gemini endpoint:
https://generativelanguage.googleapis.com/v1beta/openai

Environment variables:
- GEMINI_API_KEY (required)
- GEMINI_MODEL (optional, default: gemini-1.5-flash)
- GEMINI_BASE_URL (optional, default: official OpenAI-compatible endpoint)
"""

import os
from typing import AsyncGenerator, Iterable, Optional

from openai import AsyncOpenAI

# Default OpenAI-compatible base URL for Gemini
DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"

> **Review comment (severity: high):** The DEFAULT_GEMINI_BASE_URL is incorrect. The correct base URL for the Gemini API's OpenAI compatibility layer is `https://generativelanguage.googleapis.com/v1beta`. Using the current URL will result in failed API requests.
>
> Suggested change:
> - DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"
> + DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta"

DEFAULT_GEMINI_MODEL = os.getenv("GEMINI_MODEL", "gemini-1.5-flash")


def is_gemini_configured() -> bool:
    """Return True if a Gemini API key is available."""
    return bool(os.getenv("GEMINI_API_KEY"))


def _build_client() -> AsyncOpenAI:
    api_key = os.getenv("GEMINI_API_KEY")
    if not api_key:
        raise RuntimeError("GEMINI_API_KEY is not set")

    base_url = os.getenv("GEMINI_BASE_URL", DEFAULT_GEMINI_BASE_URL)
    return AsyncOpenAI(api_key=api_key, base_url=base_url)


async def stream_chat(
    user_message: str,
    *,
    system_prompt: Optional[str] = None,
    model: Optional[str] = None,
    extra_messages: Optional[Iterable[dict]] = None,
) -> AsyncGenerator[str, None]:
    """
    Stream a chat completion from Gemini.

    Args:
        user_message: Primary user input
        system_prompt: Optional system prompt to prepend
        model: Optional model name; defaults to GEMINI_MODEL env or fallback constant
        extra_messages: Optional prior messages (list of {"role","content"})
    Yields:
        Text chunks as they arrive.
    """
    client = _build_client()
    messages = []

    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})

    if extra_messages:
        messages.extend(extra_messages)

    messages.append({"role": "user", "content": user_message})

    completion = await client.chat.completions.create(
        model=model or DEFAULT_GEMINI_MODEL,
        messages=messages,
        stream=True,
    )

    async for chunk in completion:
        for choice in chunk.choices:
            delta = choice.delta
            if delta and delta.content:
                # delta.content is a list of content parts
                for part in delta.content:
                    text = getattr(part, "text", None) or part.get("text") if isinstance(part, dict) else None
                    if text:
                        yield text
> **Review comment on lines +75 to +80 (severity: high):** The logic for processing the streaming response is incorrect. For a streaming chat completion with the openai library, `delta.content` is a string containing the next chunk of text, not a list of parts. The current implementation iterates over the characters of this string but fails to extract any text, which will result in no output from the stream. This should be simplified to directly yield the content:
>
>     if delta and delta.content:
>         # The content from the streaming delta is a string, not a list of parts.
>         yield delta.content

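With the suggested fix applied, the stream is consumed as plain string chunks. The sketch below stands in for `stream_chat` with a stubbed generator, so no API key, network, or `openai` install is needed to see the shape of the loop:

```python
import asyncio
from typing import AsyncGenerator

async def fake_stream_chat(user_message: str) -> AsyncGenerator[str, None]:
    # Stub for stream_chat: each delta.content is a plain string chunk.
    for chunk in ["Hello", ", ", "world"]:
        yield chunk

async def collect() -> str:
    """Accumulate streamed chunks exactly as a chat UI would."""
    parts = []
    async for text in fake_stream_chat("hi"):
        parts.append(text)
    return "".join(parts)

print(asyncio.run(collect()))  # Hello, world
```

A caller in the FastAPI layer would forward each chunk to the client (e.g. over SSE or a websocket) instead of joining them.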
7 changes: 6 additions & 1 deletion server/main.py
@@ -204,7 +204,11 @@ async def setup_status():

    # If GLM mode is configured via .env, we have alternative credentials
    glm_configured = bool(os.getenv("ANTHROPIC_BASE_URL") and os.getenv("ANTHROPIC_AUTH_TOKEN"))
    credentials = has_claude_config or glm_configured

    # Gemini configuration (OpenAI-compatible Gemini API)
    gemini_configured = bool(os.getenv("GEMINI_API_KEY"))

    credentials = has_claude_config or glm_configured or gemini_configured

    # Check for Node.js and npm
    node = shutil.which("node") is not None
@@ -215,6 +219,7 @@
        credentials=credentials,
        node=node,
        npm=npm,
        gemini=gemini_configured,
    )
1 change: 1 addition & 0 deletions server/schemas.py
@@ -227,6 +227,7 @@ class SetupStatus(BaseModel):
    credentials: bool
    node: bool
    npm: bool
    gemini: bool = False


# ============================================================================