
Installation

TenzinGayche edited this page Dec 15, 2025 · 1 revision


This guide covers setting up the LangGraph Translation API for development, testing, and production deployment.


πŸ“‹ Prerequisites

Required

  • Python 3.10+ (3.11 or 3.12 recommended)
  • pip or uv package manager
  • At least one LLM provider API key:
    • Anthropic (Claude)
    • Google (Gemini)
    • OpenAI (GPT)

Optional

  • Docker for containerized deployment
  • LangSmith account for tracing/debugging

πŸš€ Quick Start

1. Clone the Repository

git clone https://github.com/OpenPecha/langraph-api.git
cd langraph-api

2. Create Virtual Environment

# Using venv
python -m venv venv
source venv/bin/activate  # Linux/macOS
# or
.\venv\Scripts\activate   # Windows

# Using uv (faster)
uv venv
source .venv/bin/activate

3. Install Dependencies

pip install -r requirements.txt

# Or with uv
uv pip install -r requirements.txt

4. Configure Environment

Create a .env file in the project root:

# .env

# === LLM Provider API Keys ===
# At least one is required

# Anthropic (Claude models)
ANTHROPIC_API_KEY=sk-ant-api03-...

# Google (Gemini models)
GEMINI_API_KEY=AIzaSy...

# OpenAI (GPT models)
OPENAI_API_KEY=sk-...

# === Optional: Dharmamitra Integration ===
DHARMAMITRA_TOKEN=your-token
DHARMAMITRA_PASSWORD=your-password

# === Optional: LangSmith Tracing ===
LANGSMITH_API_KEY=ls-...
LANGSMITH_PROJECT=Translation
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com

# === Server Configuration ===
API_HOST=0.0.0.0
API_PORT=8001
DEFAULT_MODEL=claude-sonnet-4-20250514
MAX_BATCH_SIZE=50
DEFAULT_BATCH_SIZE=5

5. Start the Server

# Using uvicorn directly
uvicorn src.translation_api.api:app --reload --port 8001

# Or using the main entry point
python main.py

# Or using the start script
python start_server.py

6. Verify Installation

# Quick health check
curl http://localhost:8001/health

Expected response:

{
  "status": "healthy",
  "version": "1.0.0",
  "available_models": {
    "claude-sonnet-4-20250514": {...}
  }
}
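
The same check can be scripted. A sketch that validates a payload of the shape shown above (the assert_healthy helper and field names are taken from the example response, not from a documented client API):

```python
import json

def assert_healthy(payload: dict) -> list[str]:
    """Validate a /health response and return the available model names."""
    if payload.get("status") != "healthy":
        raise RuntimeError(f"API unhealthy: {payload.get('status')!r}")
    return sorted(payload.get("available_models", {}))

# Sample payload matching the example response above.
sample = json.loads(
    '{"status": "healthy", "version": "1.0.0", '
    '"available_models": {"claude-sonnet-4-20250514": {}}}'
)
print(assert_healthy(sample))  # prints the list of model names
```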

πŸ“¦ Dependencies

Core Dependencies

| Package | Purpose |
| --- | --- |
| fastapi | Web framework |
| uvicorn | ASGI server |
| pydantic | Data validation |
| pydantic-settings | Environment configuration |
| langgraph | Workflow orchestration |
| langchain-core | LLM abstractions |
| langchain-anthropic | Claude integration |
| langchain-openai | OpenAI integration |
| langchain-google-genai | Gemini integration |
| sse-starlette | Server-Sent Events |
| httpx | Async HTTP client |

Full requirements.txt

fastapi>=0.100.0
uvicorn>=0.23.0
pydantic>=2.0.0
pydantic-settings>=2.0.0
langgraph>=0.0.30
langchain-core>=0.1.0
langchain-anthropic>=0.1.0
langchain-openai>=0.1.0
langchain-google-genai>=0.0.10
sse-starlette>=1.6.0
httpx>=0.24.0
python-dotenv>=1.0.0

πŸ”§ Configuration Options

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| ANTHROPIC_API_KEY | - | Anthropic API key for Claude models |
| GEMINI_API_KEY | - | Google API key for Gemini models |
| OPENAI_API_KEY | - | OpenAI API key for GPT models |
| DHARMAMITRA_TOKEN | - | Dharmamitra API token |
| DHARMAMITRA_PASSWORD | - | Dharmamitra proxy password |
| API_HOST | 0.0.0.0 | Server bind address |
| API_PORT | 8000 | Server port |
| DEFAULT_MODEL | claude | Default translation model |
| MAX_BATCH_SIZE | 50 | Maximum texts per batch |
| DEFAULT_BATCH_SIZE | 5 | Default texts per batch |
| LANGSMITH_API_KEY | - | LangSmith API key |
| LANGSMITH_PROJECT | Translation | LangSmith project name |
| LANGSMITH_TRACING | true | Enable LangSmith tracing |

Settings Class

# src/translation_api/config.py

class Settings(BaseSettings):
    anthropic_api_key: Optional[str] = None
    openai_api_key: Optional[str] = None
    gemini_api_key: Optional[str] = None
    dharmamitra_password: Optional[str] = None
    dharmamitra_token: Optional[str] = None
    
    langsmith_api_key: Optional[str] = None
    langsmith_project: str = "Translation"
    langsmith_tracing: bool = True
    
    api_host: str = "0.0.0.0"
    api_port: int = 8000
    default_model: str = "claude"
    max_batch_size: int = 50
    default_batch_size: int = 5
    
    class Config:
        env_file = ".env"
        case_sensitive = False

🐳 Docker Deployment

Using Docker

# Dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8001

CMD ["uvicorn", "src.translation_api.api:app", "--host", "0.0.0.0", "--port", "8001"]

# Build
docker build -t langraph-api .

# Run
docker run -d \
  --name langraph-api \
  -p 8001:8001 \
  -e ANTHROPIC_API_KEY=your-key \
  -e GEMINI_API_KEY=your-key \
  langraph-api

Using Docker Compose

# docker-compose.yml
version: '3.8'

services:
  api:
    build: .
    ports:
      - "8001:8001"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped

# Start
docker-compose up -d

# View logs
docker-compose logs -f

# Stop
docker-compose down

☁️ Cloud Deployment

Render.com

  1. Create a new Web Service
  2. Connect your GitHub repository
  3. Configure:
    • Build Command: pip install -r requirements.txt
    • Start Command: uvicorn src.translation_api.api:app --host 0.0.0.0 --port $PORT
  4. Add environment variables in the dashboard

Railway.app

  1. Create new project from GitHub
  2. Add environment variables
  3. Railway auto-detects Python and deploys

Fly.io

# Install flyctl
curl -L https://fly.io/install.sh | sh

# Login
flyctl auth login

# Launch
flyctl launch

# Set secrets
flyctl secrets set ANTHROPIC_API_KEY=your-key
flyctl secrets set GEMINI_API_KEY=your-key

# Deploy
flyctl deploy

πŸ§ͺ Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/test_api.py

# Run with verbose output
pytest -v

Test Configuration

# pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
asyncio_mode = auto
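
With asyncio_mode = auto (a pytest-asyncio setting), async test functions are collected and awaited without an explicit @pytest.mark.asyncio marker. A hypothetical test module sketch (translate_stub stands in for a real API call; it is not part of this repository):

```python
# tests/test_example.py (illustrative)
import asyncio

async def translate_stub(text: str) -> str:
    # Stand-in for a real async translation call.
    await asyncio.sleep(0)
    return text.upper()

async def test_translate_stub():
    # With asyncio_mode = auto, pytest awaits this coroutine automatically.
    assert await translate_stub("bod skad") == "BOD SKAD"
```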

πŸ” Troubleshooting

Common Issues

"ANTHROPIC_API_KEY is required for Claude models"

Solution: Ensure a .env file exists in the project root and contains a valid API key:

echo "ANTHROPIC_API_KEY=sk-ant-api03-..." >> .env

Note that >> appends; a single > would overwrite an existing .env.

"Model 'xxx' is not available"

Solution: Check which models are available based on your API keys:

curl http://localhost:8001/models

Port already in use

Solution: Use a different port:

uvicorn src.translation_api.api:app --port 8002

Module not found errors

Solution: Ensure you're in the correct directory and virtual environment:

cd langraph-api
source venv/bin/activate
pip install -r requirements.txt

Slow startup

Solution: The first request can be slow while model clients initialize; this is expected.

Debug Mode

Enable debug logging:

uvicorn src.translation_api.api:app --reload --log-level debug

LangSmith Tracing

For detailed debugging, enable LangSmith:

  1. Create account at https://smith.langchain.com
  2. Get API key from settings
  3. Add to .env:
    LANGSMITH_API_KEY=ls-...
    LANGSMITH_PROJECT=Translation
    LANGSMITH_TRACING=true
    
  4. View traces at https://smith.langchain.com

πŸ“Š Monitoring

Health Endpoint

# Basic health check
curl http://localhost:8001/health

# Check available models
curl http://localhost:8001/models

Logging

Application logs are written to stdout. In production, configure your deployment platform to capture logs.

# Custom logging configuration
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

πŸ”’ Security Considerations

API Keys

  • Never commit .env files to version control
  • Use environment variables in production
  • Rotate keys regularly

CORS Configuration

Default configuration allows all origins. For production:

# src/translation_api/api.py
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://yourdomain.com"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

Rate Limiting

Consider adding rate limiting for production:

from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)

# Register the limiter and its 429 handler on the existing app
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/translate")
@limiter.limit("10/minute")  # 10 requests per minute per client IP
async def translate_texts(request: Request, ...):
    ...
