# Installation
This guide covers setting up the LangGraph Translation API for development, testing, and production deployment.
## Prerequisites

- Python 3.10+ (3.11 or 3.12 recommended)
- pip or uv package manager
- At least one LLM provider API key:
  - Anthropic (Claude)
  - Google (Gemini)
  - OpenAI (GPT)
- Docker (optional, for containerized deployment)
- LangSmith account (optional, for tracing/debugging)
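The Python floor above can be checked from the interpreter itself; a minimal sketch (`meets_requirement` is an illustrative helper, not part of the project):

```python
import sys

def meets_requirement(version=sys.version_info, minimum=(3, 10)):
    """Return True if the interpreter satisfies the minimum (major, minor) version."""
    return tuple(version[:2]) >= minimum

if __name__ == "__main__":
    if not meets_requirement():
        sys.exit(f"Python 3.10+ required, found {sys.version.split()[0]}")
    print("Python version OK")
```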
## Quick Start

### 1. Clone the Repository

```bash
git clone https://github.com/OpenPecha/langraph-api.git
cd langraph-api
```

### 2. Create a Virtual Environment

```bash
# Using venv
python -m venv venv
source venv/bin/activate   # Linux/macOS
# or
.\venv\Scripts\activate    # Windows

# Using uv (faster)
uv venv
source .venv/bin/activate
```

### 3. Install Dependencies

```bash
pip install -r requirements.txt

# Or with uv
uv pip install -r requirements.txt
```

### 4. Configure Environment Variables

Create a `.env` file in the project root:
```bash
# .env

# === LLM Provider API Keys ===
# At least one is required

# Anthropic (Claude models)
ANTHROPIC_API_KEY=sk-ant-api03-...

# Google (Gemini models)
GEMINI_API_KEY=AIzaSy...

# OpenAI (GPT models)
OPENAI_API_KEY=sk-...

# === Optional: Dharmamitra Integration ===
DHARMAMITRA_TOKEN=your-token
DHARMAMITRA_PASSWORD=your-password

# === Optional: LangSmith Tracing ===
LANGSMITH_API_KEY=ls-...
LANGSMITH_PROJECT=Translation
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com

# === Server Configuration ===
API_HOST=0.0.0.0
API_PORT=8001
DEFAULT_MODEL=claude-sonnet-4-20250514
MAX_BATCH_SIZE=50
DEFAULT_BATCH_SIZE=5
```

### 5. Run the Server

```bash
# Using uvicorn directly
uvicorn src.translation_api.api:app --reload --port 8001

# Or using the main entry point
python main.py

# Or using the start script
python start_server.py
```

The server is now available at:

- Web UI: http://localhost:8001/
- Swagger Docs: http://localhost:8001/docs
- Health Check: http://localhost:8001/health
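Before sending translation requests, it can help to confirm which providers are actually configured. A small stdlib sketch (the variable names match the `.env` keys above; `configured_providers` is an illustrative helper, not part of the API):

```python
import os

# Maps provider display names to the .env variables documented above.
PROVIDER_VARS = {
    "Anthropic": "ANTHROPIC_API_KEY",
    "Google": "GEMINI_API_KEY",
    "OpenAI": "OPENAI_API_KEY",
}

def configured_providers(env=os.environ):
    """Return the providers whose API key variable is set and non-empty."""
    return [name for name, var in PROVIDER_VARS.items() if env.get(var)]

if __name__ == "__main__":
    providers = configured_providers()
    if not providers:
        raise SystemExit("No LLM provider keys found; set at least one in .env")
    print("Configured providers:", ", ".join(providers))
```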
### 6. Verify the Installation

```bash
# Quick health check
curl http://localhost:8001/health
```

Expected response:

```json
{
  "status": "healthy",
  "version": "1.0.0",
  "available_models": {
    "claude-sonnet-4-20250514": {...}
  }
}
```

## Core Dependencies

| Package | Purpose |
|---|---|
| `fastapi` | Web framework |
| `uvicorn` | ASGI server |
| `pydantic` | Data validation |
| `pydantic-settings` | Environment configuration |
| `langgraph` | Workflow orchestration |
| `langchain-core` | LLM abstractions |
| `langchain-anthropic` | Claude integration |
| `langchain-openai` | OpenAI integration |
| `langchain-google-genai` | Gemini integration |
| `sse-starlette` | Server-Sent Events |
| `httpx` | Async HTTP client |
Version constraints from `requirements.txt`:

```
fastapi>=0.100.0
uvicorn>=0.23.0
pydantic>=2.0.0
pydantic-settings>=2.0.0
langgraph>=0.0.30
langchain-core>=0.1.0
langchain-anthropic>=0.1.0
langchain-openai>=0.1.0
langchain-google-genai>=0.0.10
sse-starlette>=1.6.0
httpx>=0.24.0
python-dotenv>=1.0.0
```
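Whether a pinned dependency is actually present can be checked with the standard library's `importlib.metadata`; a minimal sketch (`installed_version` is an illustrative helper):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    for pkg in ("fastapi", "uvicorn", "langgraph"):
        print(pkg, installed_version(pkg) or "NOT INSTALLED")
```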
## Environment Variable Reference

| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | - | Anthropic API key for Claude models |
| `GEMINI_API_KEY` | - | Google API key for Gemini models |
| `OPENAI_API_KEY` | - | OpenAI API key for GPT models |
| `DHARMAMITRA_TOKEN` | - | Dharmamitra API token |
| `DHARMAMITRA_PASSWORD` | - | Dharmamitra proxy password |
| `API_HOST` | `0.0.0.0` | Server bind address |
| `API_PORT` | `8000` | Server port |
| `DEFAULT_MODEL` | `claude` | Default translation model |
| `MAX_BATCH_SIZE` | `50` | Maximum texts per batch |
| `DEFAULT_BATCH_SIZE` | `5` | Default texts per batch |
| `LANGSMITH_API_KEY` | - | LangSmith API key |
| `LANGSMITH_PROJECT` | `Translation` | LangSmith project name |
| `LANGSMITH_TRACING` | `true` | Enable LangSmith tracing |
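pydantic-settings gives process environment variables precedence over values read from `.env`. The effect can be illustrated with a simplified stdlib sketch (this is not the library's actual parser; quoting and other edge cases are ignored):

```python
import os

def parse_env_file(text):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def effective_settings(env_file_text, environ=os.environ):
    """Process environment overrides anything loaded from the .env file."""
    settings = parse_env_file(env_file_text)
    for key in settings:
        if key in environ:
            settings[key] = environ[key]
    return settings
```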
These variables are loaded by the `Settings` class:

```python
# src/translation_api/config.py
from typing import Optional

from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    anthropic_api_key: Optional[str] = None
    openai_api_key: Optional[str] = None
    gemini_api_key: Optional[str] = None
    dharmamitra_password: Optional[str] = None
    dharmamitra_token: Optional[str] = None
    langsmith_api_key: Optional[str] = None
    langsmith_project: str = "Translation"
    langsmith_tracing: bool = True
    api_host: str = "0.0.0.0"
    api_port: int = 8000
    default_model: str = "claude"
    max_batch_size: int = 50
    default_batch_size: int = 5

    class Config:
        env_file = ".env"
        case_sensitive = False
```

## Docker Deployment

### Dockerfile

```dockerfile
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8001
CMD ["uvicorn", "src.translation_api.api:app", "--host", "0.0.0.0", "--port", "8001"]
```

### Build and Run

```bash
# Build
docker build -t langraph-api .
# Run
docker run -d \
--name langraph-api \
-p 8001:8001 \
-e ANTHROPIC_API_KEY=your-key \
-e GEMINI_API_KEY=your-key \
  langraph-api
```

### docker-compose

```yaml
# docker-compose.yml
version: '3.8'

services:
  api:
    build: .
    ports:
      - "8001:8001"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped
```

```bash
# Start
docker-compose up -d
# View logs
docker-compose logs -f
# Stop
docker-compose down
```

## Cloud Deployment

### Render

- Create a new Web Service
- Connect your GitHub repository
- Configure:
  - Build Command: `pip install -r requirements.txt`
  - Start Command: `uvicorn src.translation_api.api:app --host 0.0.0.0 --port $PORT`
- Add environment variables in the dashboard
### Railway

- Create new project from GitHub
- Add environment variables
- Railway auto-detects Python and deploys
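Railway's auto-detection usually suffices; if you need to pin the start command explicitly, a `Procfile` in the repository root can do so (hypothetical, mirroring the start command used for Render above):

```
web: uvicorn src.translation_api.api:app --host 0.0.0.0 --port $PORT
```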
### Fly.io

```bash
# Install flyctl
curl -L https://fly.io/install.sh | sh
# Login
flyctl auth login
# Launch
flyctl launch
# Set secrets
flyctl secrets set ANTHROPIC_API_KEY=your-key
flyctl secrets set GEMINI_API_KEY=your-key
# Deploy
flyctl deploy
```

## Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test file
pytest tests/test_api.py

# Run with verbose output
pytest -v
```

### pytest Configuration

```ini
# pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
asyncio_mode = auto
```

## Troubleshooting

### Missing API key

Solution: Ensure your `.env` file exists and contains a valid API key:

```bash
echo "ANTHROPIC_API_KEY=sk-ant-api03-..." > .env
```

### Model not available

Solution: Check which models are available based on your API keys:

```bash
curl http://localhost:8001/models
```

### Port already in use

Solution: Use a different port:

```bash
uvicorn src.translation_api.api:app --port 8002
```

### Import errors

Solution: Ensure you're in the correct directory and virtual environment:

```bash
cd langraph-api
source venv/bin/activate
pip install -r requirements.txt
```

### Slow first request

Solution: The first request may be slow due to model initialization. This is normal.
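When the default port keeps colliding, the OS can hand out a free one; a small stdlib helper (illustrative, not part of the project):

```python
import socket

def find_free_port(host="127.0.0.1"):
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind((host, 0))
        return sock.getsockname()[1]

if __name__ == "__main__":
    # e.g. uvicorn src.translation_api.api:app --port <printed port>
    print(find_free_port())
```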
### Debug Logging

Enable debug logging:

```bash
uvicorn src.translation_api.api:app --reload --log-level debug
```

For detailed debugging, enable LangSmith:
- Create account at https://smith.langchain.com
- Get API key from settings
- Add to `.env`:

  ```bash
  LANGSMITH_API_KEY=ls-...
  LANGSMITH_PROJECT=Translation
  LANGSMITH_TRACING=true
  ```

- View traces at https://smith.langchain.com
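For automated monitoring, the `/health` response can also be checked programmatically; a stdlib-only sketch (assumes the server is reachable at localhost:8001; `parse_health` is an illustrative helper):

```python
import json
from urllib.request import urlopen

def parse_health(raw):
    """Extract status and available model names from a /health JSON body."""
    payload = json.loads(raw)
    return payload.get("status"), sorted(payload.get("available_models", {}))

if __name__ == "__main__":
    with urlopen("http://localhost:8001/health", timeout=5) as resp:
        status, models = parse_health(resp.read())
    print("status:", status)
    print("models:", ", ".join(models))
```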
## Monitoring

### Health Checks

```bash
# Basic health check
curl http://localhost:8001/health

# Check available models
curl http://localhost:8001/models
```

### Logging

Application logs are written to stdout. In production, configure your deployment platform to capture logs.
```python
# Custom logging configuration
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
```

## Security

### API Keys

- Never commit `.env` files to version control
- Use environment variables in production
- Rotate keys regularly
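If configuration is ever logged at startup, keys should be masked rather than printed in full; a small illustrative helper (not part of the project):

```python
def mask_secret(secret, show=4):
    """Show only the first and last few characters of a secret."""
    if not secret:
        return "(unset)"
    if len(secret) <= show * 2:
        return "*" * len(secret)
    return f"{secret[:show]}…{secret[-show:]}"

if __name__ == "__main__":
    print(mask_secret("sk-ant-api03-abcdefgh"))  # sk-a…efgh
```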
### CORS

The default configuration allows all origins. For production, restrict allowed origins:

```python
# src/translation_api/api.py
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://yourdomain.com"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

### Rate Limiting

Consider adding rate limiting for production:

```python
from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
@app.post("/translate")
@limiter.limit("10/minute")
async def translate_texts(request: Request, ...):
    ...
```

## Next Steps

- Architecture - System design
- API Reference - Endpoint documentation
- Usage Guide - Examples and tutorials
- Model Router - Model configuration