diff --git a/development/backend/api/index.mdx b/development/backend/api/index.mdx index 193046f..932dceb 100644 --- a/development/backend/api/index.mdx +++ b/development/backend/api/index.mdx @@ -600,9 +600,9 @@ const REQUEST_SCHEMA = { minimum: 1, description: 'How many items (must be positive)' }, - type: { - type: 'string', - enum: ['mysql', 'sqlite'], + type: { + type: 'string', + enum: ['postgresql', 'mysql'], description: 'Database engine type' } }, @@ -649,7 +649,7 @@ const ERROR_RESPONSE_SCHEMA = { interface RequestBody { name: string; count: number; - type: 'mysql' | 'sqlite'; + type: 'postgresql' | 'mysql'; } interface SuccessResponse { @@ -824,7 +824,7 @@ if (request.body.name.length < 3) { } // BAD: Manual enum validation (redundant) -if (request.body.type !== 'mysql' && request.body.type !== 'sqlite') { +if (request.body.type !== 'postgresql' && request.body.type !== 'mysql') { return reply.status(400).send({ error: 'Invalid database type' }); } ``` diff --git a/development/backend/auth.mdx b/development/backend/auth.mdx index d1ffe17..3a6fbc9 100644 --- a/development/backend/auth.mdx +++ b/development/backend/auth.mdx @@ -13,7 +13,7 @@ The backend authentication system is built on several key components: - **[Lucia v3](https://lucia-auth.com/)** - Core session management and authentication library - **[Argon2](https://github.com/napi-rs/node-rs)** - Industry-standard password hashing - **[Arctic](https://arctic.js.org/)** - OAuth 2.0 client library for provider integration -- **Database-backed sessions** - SQLite/Turso storage for session persistence +- **Database-backed sessions** - PostgreSQL storage for session persistence - **Dual authentication** - Support for both cookie sessions and OAuth2 Bearer tokens ## Authentication Flow Types diff --git a/development/backend/database/index.mdx b/development/backend/database/index.mdx index eb6dce0..f50f6a0 100644 --- a/development/backend/database/index.mdx +++ b/development/backend/database/index.mdx @@ 
-1,44 +1,39 @@ --- title: Database Management -description: Multi-database support with SQLite and Turso using environment-based configuration and Drizzle ORM for DeployStack Backend development. +description: PostgreSQL database management with Drizzle ORM for DeployStack Backend development. sidebarTitle: Overview --- ## Overview -DeployStack supports multiple database types through an environment-based configuration system using Drizzle ORM. The system provides excellent performance, type safety, and a modern, developer-friendly experience with support for: +DeployStack uses PostgreSQL as its database backend, providing enterprise-grade reliability, ACID compliance, and excellent performance. The system leverages Drizzle ORM for type-safe database operations with a modern, developer-friendly experience. -- **SQLite** - Local file-based database (default for development) -- **Turso** - Distributed SQLite database with global replication - -All databases use the same SQLite syntax and schema, ensuring consistency across different deployment environments. +PostgreSQL provides: +- **ACID Compliance** - Full transactional support with rollback capabilities +- **Connection Pooling** - Efficient connection management via node-postgres +- **Native Type System** - Boolean, timestamp with timezone, JSONB, arrays, and more +- **Horizontal Scaling** - Read replicas and partitioning for production deployments ## Database Setup and Configuration -The backend uses an environment-based configuration system where database credentials are provided via environment variables, and the database type is selected through the setup API. +The backend uses an environment-based configuration system where database credentials are provided via environment variables, and the database is initialized through the setup API. > **Setup Instructions**: For step-by-step setup instructions, see the [Database Setup Guide](/self-hosted/database-setup). 
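Before calling the setup API, it can help to verify that every required connection variable is actually set. The following sketch is hypothetical (not part of the DeployStack codebase); it only assumes the variable names documented in the Environment Variables section:

```typescript
// Hypothetical pre-flight check for the PostgreSQL connection variables.
// POSTGRES_SSL is optional (it defaults to false), so it is not required here.
const REQUIRED_PG_VARS = [
  'POSTGRES_HOST',
  'POSTGRES_PORT',
  'POSTGRES_DATABASE',
  'POSTGRES_USER',
  'POSTGRES_PASSWORD',
] as const;

function missingPgVars(env: Record<string, string | undefined>): string[] {
  // Report every required variable that is absent or empty
  return REQUIRED_PG_VARS.filter((name) => !env[name]);
}
```

Running `missingPgVars(process.env)` before setup turns a failed database round-trip into an immediate, descriptive error.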
-> **Database-Specific Guides**: For detailed technical information about specific databases, see: -> - [SQLite Development Guide](/development/backend/database/sqlite) -> - [Turso Development Guide](/development/backend/database/turso) +> **PostgreSQL Technical Guide**: For detailed technical information, see the [PostgreSQL Development Guide](/development/backend/database/postgresql). ### Environment Variables -Configure your chosen database type by setting the appropriate environment variables: +Configure PostgreSQL by setting these environment variables: -#### SQLite Configuration ```bash -# Optional - defaults to persistent_data/database/deploystack.db -# Path is relative to services/backend/ directory -SQLITE_DB_PATH=persistent_data/database/deploystack.db -``` - -#### Turso Configuration -```bash -TURSO_DATABASE_URL=libsql://your-database-url -TURSO_AUTH_TOKEN=your_auth_token +POSTGRES_HOST=localhost +POSTGRES_PORT=5432 +POSTGRES_DATABASE=deploystack +POSTGRES_USER=your_user +POSTGRES_PASSWORD=your_password +POSTGRES_SSL=false # Set to 'true' for SSL connections ``` ### Database Status @@ -53,7 +48,7 @@ Check the current status of the database configuration and initialization: { "configured": true, "initialized": true, - "dialect": "sqlite" + "dialect": "postgresql" } ``` @@ -68,19 +63,11 @@ The initial database setup is performed through the frontend setup wizard at `/s **Note for Developers**: While you can call the API endpoint directly for testing, end-users should always use the frontend setup wizard for proper initialization. -#### Setup Examples - -**SQLite Setup:** -```json -{ - "type": "sqlite" -} -``` +#### Setup Request -**Turso Setup:** ```json { - "type": "turso" + "type": "postgresql" } ``` @@ -93,7 +80,7 @@ The setup endpoint returns a JSON response indicating success and restart requir { "message": "Database setup successful. 
All services have been initialized and are ready to use.", "restart_required": false, - "database_type": "sqlite" + "database_type": "postgresql" } ``` @@ -102,13 +89,13 @@ The setup endpoint returns a JSON response indicating success and restart requir { "message": "Database setup successful, but some services may require a server restart to function properly.", "restart_required": true, - "database_type": "sqlite" + "database_type": "postgresql" } ``` ### Database Selection File -The chosen database type is stored in: +The database configuration is stored in: - `services/backend/persistent_data/db.selection.json` (relative to the backend service directory) This file is automatically created and managed by the setup API when users complete the frontend setup wizard at `https:///setup`. Manual editing is not recommended. @@ -116,7 +103,7 @@ This file is automatically created and managed by the setup API when users compl Example content: ```json { - "type": "sqlite", + "type": "postgresql", "selectedAt": "2025-01-02T18:22:15.000Z", "version": "1.0" } @@ -129,27 +116,34 @@ Example content: ### Key Components - **Drizzle ORM**: Type-safe ORM with native driver support -- **Native Drivers**: - - `better-sqlite3` for SQLite - - `@libsql/client` for Turso -- **Unified Schema**: Single schema definition works across all database types +- **node-postgres (pg)**: Native PostgreSQL driver with connection pooling - **Environment Configuration**: Database credentials via environment variables +- **Automatic Migrations**: Migrations applied on server startup -### Database Drivers +### Database Driver -The system uses native Drizzle drivers for optimal performance: +The system uses the native PostgreSQL driver for optimal performance: ```typescript -// SQLite -import { drizzle } from 'drizzle-orm/better-sqlite3'; - -// Turso -import { drizzle } from 'drizzle-orm/libsql'; +import { drizzle } from 'drizzle-orm/node-postgres'; +import { Pool } from 'pg'; + +// PostgreSQL connection 
pool +const pool = new Pool({ + host: config.host, + port: config.port, + database: config.database, + user: config.user, + password: config.password, + ssl: config.ssl ? { rejectUnauthorized: false } : false +}); + +const db = drizzle(pool, { schema }); ``` ### Database Connection Patterns -When accessing the database in route handlers, always use `getDb()` to obtain the database connection dynamically: +When accessing the database in route handlers, always use `getDb()` to obtain the database connection: ```typescript import { getDb } from '../../../db'; @@ -166,130 +160,78 @@ export default async function yourRoute(server: FastifyInstance) { - `server.db` may be `null` during certain initialization states - `getDb()` always returns the active database connection - This ensures consistent behavior across all endpoints -- Other working endpoints already follow this pattern **Avoid:** Direct usage of `server.db` as it can cause "Cannot read properties of null" errors. -## Database Driver Compatibility - -⚠️ **Critical for Multi-Database Applications**: Understanding driver differences prevents hours of debugging! 
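The reason a getter beats reading `server.db` directly can be shown with a small standalone sketch. This is an illustration of the failure mode only, not the actual `getDb()` implementation:

```typescript
// Illustrative only: a getter fails fast with a clear error, while reading a
// possibly-null property directly fails later with a confusing TypeError
// ("Cannot read properties of null") deep inside a query.
interface DbHolder<T> {
  db: T | null;
}

function makeGetDb<T>(holder: DbHolder<T>): () => T {
  return () => {
    if (holder.db === null) {
      throw new Error('Database not initialized yet');
    }
    return holder.db; // always the active connection once initialized
  };
}
```

Callers that go through the getter get one well-defined error during initialization states, instead of null dereferences scattered across route handlers.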
+## Database Operations -### The Problem: Different Result Property Names +### Working with Query Results -When performing database operations (INSERT, UPDATE, DELETE), different database drivers return different property names in their result objects: +PostgreSQL operations (INSERT, UPDATE, DELETE) return a result object with `rowCount` indicating the number of affected rows: ```typescript -// SQLite (better-sqlite3) result object -{ - changes: 1, // ← Number of affected rows - lastInsertRowid: "abc123" -} - -// Turso (libSQL) result object +// PostgreSQL result object { - rowsAffected: 1, // ← Number of affected rows - lastInsertRowid: "abc123" + rowCount: 1, // Number of affected rows + rows: [], // Returned rows from SELECT queries + command: 'DELETE', // SQL command type + oid: 0, + fields: [] } ``` -**Key Difference:** -- **SQLite**: Uses `result.changes` to indicate affected rows -- **Turso**: Uses `result.rowsAffected` to indicate affected rows - -### Real-World Impact - -This difference caused a production bug where DELETE operations appeared to fail in Turso but worked in SQLite development: +### Standard Patterns +**Delete Operations**: ```typescript -// ❌ WRONG: Only works with SQLite -const deleted = result.changes > 0; - -// ✅ CORRECT: Works with both SQLite and Turso -const deleted = (result.changes || result.rowsAffected || 0) > 0; -``` - -**Symptoms of this bug:** -- ✅ Works perfectly in development (SQLite) -- ❌ Fails mysteriously in production (Turso) -- 🔄 Data actually gets modified, but application thinks it failed -- 🐛 Users see error messages even though operation succeeded - -### DeployStack's Built-in Solution - -DeployStack services automatically handle this compatibility issue. 
For example, in `McpInstallationService.deleteInstallation()`: +export class McpInstallationService { + async deleteInstallation(id: string): Promise<boolean> { + const result = await this.db + .delete(mcpServerInstallations) + .where(eq(mcpServerInstallations.id, id)); -```typescript -// DeployStack handles both drivers automatically -const deleted = (result.changes || result.rowsAffected || 0) > 0; + return (result.rowCount || 0) > 0; + } +} ``` -### Writing Compatible Database Code - -When writing custom database operations, always use the cross-compatible pattern: - +**Update Operations**: ```typescript -// ✅ CORRECT: Multi-driver compatible -export class MyService { - async deleteRecord(id: string): Promise<boolean> { +export class TeamService { + async updateTeamName(id: string, name: string): Promise<boolean> { const result = await this.db - .delete(myTable) - .where(eq(myTable.id, id)); + .update(teams) + .set({ name, updated_at: new Date() }) + .where(eq(teams.id, id)); - // Handle both SQLite and Turso drivers - return (result.changes || result.rowsAffected || 0) > 0; + return (result.rowCount || 0) > 0; } +} +``` - async updateRecord(id: string, data: any): Promise<boolean> { +**Counting Affected Rows**: +```typescript +export class TokenService { + async revokeExpiredTokens(): Promise<number> { const result = await this.db - .update(myTable) - .set(data) - .where(eq(myTable.id, id)); + .delete(oauthAccessTokens) + .where(lt(oauthAccessTokens.expires_at, Date.now())); - // Same pattern for updates - return (result.changes || result.rowsAffected || 0) > 0; + return result.rowCount || 0; } } ``` -### Testing Across Database Types - -To catch these issues during development: - -1. **Test with both databases**: Run your code against both SQLite and Turso -2. **Use integration tests**: Write tests that verify actual database operations -3.
**Check result objects**: Log result objects during development to see the structure - -```typescript -// Debug logging to see result structure -const result = await db.delete(table).where(condition); -console.log('Delete result:', result); // Inspect the actual properties -``` - -### Why This Happens - -This difference exists because: - -- **SQLite/better-sqlite3**: Uses the native SQLite C API which returns `changes` -- **Turso/libSQL**: Uses the HTTP/WebSocket protocol which standardizes on `rowsAffected` - -Both represent the same concept (number of affected rows) but with different property names. - -### Prevention Checklist - -When writing database operations: - -- [ ] Use `(result.changes || result.rowsAffected || 0)` pattern -- [ ] Test with both SQLite and Turso if possible -- [ ] Look for existing DeployStack service patterns to follow -- [ ] Never assume specific property names exist -- [ ] Add debug logging when troubleshooting database operations - -> **💡 Pro Tip**: This pattern also future-proofs your code for additional database types that DeployStack might support later. +## Database Structure +DeployStack uses PostgreSQL-native types and features: -## Database Structure +### Schema Files -The database schema is defined in `src/db/schema.sqlite.ts`. This is the **single source of truth** for all database schema definitions and works across all supported database types. +**`src/db/schema.ts`** - PostgreSQL schema definition +- Native PostgreSQL types (`boolean`, `timestamp with timezone`, `jsonb`) +- Proper foreign key relationships and constraints +- Migration directory: `drizzle/migrations/` The schema contains: 1. Core application tables (users, teams, MCP configurations, etc.) @@ -297,25 +239,22 @@ The schema contains: 3. Plugin table definitions (populated dynamically) 4. Proper foreign key relationships and constraints -**Important**: Only `schema.sqlite.ts` should be edited for schema changes. All databases use SQLite syntax. 
- ## Making Schema Changes Follow these steps to add or modify database tables: -1. **Modify Schema Definition** +1. **Modify Schema Definitions** Edit `src/db/schema.sqlite.ts` to add or modify tables: + Edit `src/db/schema-tables/[table-group].ts`: ```typescript - // Example: Adding a new projects table - export const projects = sqliteTable('projects', { + // Example: src/db/schema-tables/projects.ts + import { pgTable, text, timestamp } from 'drizzle-orm/pg-core'; + + export const projects = pgTable('projects', { id: text('id').primaryKey(), name: text('name').notNull(), - description: text('description'), - userId: text('user_id').references(() => authUser.id), - createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()), - updatedAt: integer('updated_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()), + createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(), }); ``` @@ -327,11 +266,11 @@ Follow these steps to add or modify database tables: npm run db:generate ``` - This creates SQL migration files in `drizzle/migrations_sqlite/` that work across all database types. + This generates SQL migration files in `drizzle/migrations/`. 3. **Review Migrations** - Examine the generated SQL files in `drizzle/migrations_sqlite/` to ensure they match your intended changes. + Examine the generated SQL files in `drizzle/migrations/` to ensure they match your intended changes. 4.
**Apply Migrations** @@ -348,14 +287,13 @@ Follow these steps to add or modify database tables: ```typescript // Example: Using the new table in a route app.get('/api/projects', async (request, reply) => { - const projects = await request.db.select().from(schema.projects).all(); + const projects = await request.db.select().from(schema.projects); return projects; }); ``` ## Migration Management -- **Unified Migrations**: Single `migrations_sqlite` folder works for all database types - **Automatic Tracking**: Migrations tracked in `__drizzle_migrations` table - **Incremental Application**: Only new migrations are applied - **Transaction Safety**: Migrations applied in transactions for consistency @@ -363,15 +301,9 @@ Follow these steps to add or modify database tables: **Important**: Migrations cannot run until the database exists. The initial setup (via frontend wizard at `/setup`) must be completed first to create the database, then migrations will apply on subsequent server startups. -### Migration Compatibility - -All databases use SQLite syntax, ensuring migration compatibility: -- **SQLite**: Direct execution -- **Turso**: libSQL protocol with SQLite syntax - ## Global Settings Integration -During database setup, DeployStack automatically initializes global settings that configure the application. 
This process is database-aware and handles database-specific limitations: +During database setup, DeployStack automatically initializes global settings that configure the application: ### Automatic Initialization @@ -379,13 +311,7 @@ The global settings system: - **Loads setting definitions** from all modules in `src/global-settings/` - **Creates setting groups** for organizing configuration options - **Initializes default values** for all settings with proper encryption -- **Handles database limitations** through automatic batching - -### Database-Specific Handling - -**SQLite**: Settings are created in large batches for optimal performance - -**Turso**: Uses efficient batch operations with libSQL protocol +- **Uses efficient batch operations** with PostgreSQL connection pooling > **Global Settings Documentation**: For detailed information about global settings, see the [Global Settings Guide](/development/backend/global-settings). @@ -401,105 +327,98 @@ Key plugin database features: ## Development Workflow -1. **Environment Setup**: Configure environment variables for your chosen database +1. **Environment Setup**: Configure PostgreSQL environment variables 2. **Initial Setup**: Complete the frontend setup wizard at `/setup` (for first-time setup) - This creates `persistent_data/db.selection.json` - - Initializes the database based on your selection + - Initializes the PostgreSQL database - For development, you can also directly call `POST /api/db/setup` -3. **Schema Changes**: Modify `src/db/schema.sqlite.ts` +3. **Schema Changes**: Modify `src/db/schema-tables/` directory 4. **Generate Migrations**: Run `npm run db:generate` 5. **Apply Changes**: Restart server or run `npm run db:up` 6. 
**Update Code**: Use the modified schema in your application -**Backup Strategy**: Always backup the entire `services/backend/persistent_data/` directory as it contains: -- The SQLite database file (if using SQLite) -- The database selection configuration -- Any other persistent application data +## PostgreSQL-Specific Features -## Database-Specific Considerations +### Connection Pooling +- Efficient connection management via `node-postgres` +- Configurable pool size and timeout settings +- Automatic connection recycling -### SQLite -- **File Location**: `services/backend/persistent_data/database/deploystack.db` (full path from project root) -- **Performance**: Excellent for development and small to medium deployments -- **Backup**: Simple file-based backup - backup the entire `persistent_data/` directory -- **Selection File**: Database type stored in `persistent_data/db.selection.json` +### Native Types +- Boolean columns with native `boolean` type +- Timestamps with timezone support +- JSONB for efficient JSON storage +- Arrays and custom types -### Turso -- **Global Replication**: Multi-region database replication -- **Edge Performance**: Low-latency access worldwide -- **libSQL Protocol**: Enhanced SQLite with additional features -- **Scaling**: Automatic scaling based on usage +### Advanced Features +- Multi-version concurrency control (MVCC) +- Point-in-time recovery and continuous archiving +- Full-text search capabilities +- Horizontal scaling with read replicas -## Best Practices +## Inspecting the Database -### Schema Design -- Use meaningful column names and consistent naming conventions -- Add appropriate indexes for frequently queried columns -- Include proper foreign key constraints for relational data -- Always use migrations for schema changes +```bash +# Using psql CLI +psql -h localhost -U your_user -d deploystack -### Environment Management -- Keep database credentials in environment variables -- Use different databases for different environments 
(dev/staging/prod) -- Never commit database credentials to version control +# Common psql commands +\dt # List all tables +\d tablename # Describe table structure +\q # Quit -### Migration Safety -- Always review generated migrations before applying -- Test migrations in development before production -- Keep migrations small and focused -- Never manually edit migration files +# Using pgAdmin (GUI) +# Download from: https://www.pgadmin.org/ +``` -## Inspecting Databases +## Environment Configuration Examples -### SQLite +### Development ```bash -# Using SQLite CLI (from project root) -sqlite3 services/backend/persistent_data/database/deploystack.db - -# Or from backend directory -cd services/backend -sqlite3 persistent_data/database/deploystack.db - -# Using DB Browser for SQLite (GUI) -# Download from: https://sqlitebrowser.org/ +POSTGRES_HOST=localhost +POSTGRES_PORT=5432 +POSTGRES_DATABASE=deploystack +POSTGRES_USER=postgres +POSTGRES_PASSWORD=development_password +POSTGRES_SSL=false ``` -### Turso +### Production with SSL ```bash -# Using Turso CLI -turso db shell your-database - -# Using libSQL shell -# Available at: https://github.com/libsql/libsql +POSTGRES_HOST=production-host.example.com +POSTGRES_PORT=5432 +POSTGRES_DATABASE=deploystack +POSTGRES_USER=deploystack_user +POSTGRES_PASSWORD=secure_production_password +POSTGRES_SSL=true ``` -## Troubleshooting - -### Setup Issues -- **Configuration Error**: Verify environment variables are set correctly -- **Network Issues**: Check connectivity for Turso -- **Permissions**: Ensure API tokens have proper permissions - -### Migration Issues -- **Migration Conflicts**: Check for duplicate or conflicting migrations -- **Schema Drift**: Ensure all environments use the same migrations -- **Rollback**: Manually revert problematic migrations if needed - -### Performance Issues -- **SQLite**: Check file system performance and disk space -- **Turso**: Monitor regional performance and connection latency - -### Plugin 
Issues -- **Missing Tables**: Ensure plugins are loaded before database initialization -- **Schema Conflicts**: Check for table name conflicts between plugins -- **Initialization Errors**: Review plugin database extension implementations - -## Future Database Support - -The environment-based architecture makes it easy to add support for additional databases: - -- **PostgreSQL**: Planned for future release -- **MySQL**: Possible future addition -- **Other SQLite-compatible databases**: Can be added with minimal changes - -The unified schema approach ensures that adding new database types requires minimal changes to existing application code. +### Docker Compose +```yaml +services: + postgres: + image: postgres:16-alpine + environment: + POSTGRES_DB: deploystack + POSTGRES_USER: deploystack + POSTGRES_PASSWORD: your_secure_password + ports: + - "5432:5432" + volumes: + - postgres_data:/var/lib/postgresql/data + + backend: + build: ./services/backend + environment: + POSTGRES_HOST: postgres + POSTGRES_PORT: 5432 + POSTGRES_DATABASE: deploystack + POSTGRES_USER: deploystack + POSTGRES_PASSWORD: your_secure_password + POSTGRES_SSL: false + depends_on: + - postgres + +volumes: + postgres_data: +``` diff --git a/development/backend/database/postgresql.mdx b/development/backend/database/postgresql.mdx new file mode 100644 index 0000000..8c76f2a --- /dev/null +++ b/development/backend/database/postgresql.mdx @@ -0,0 +1,562 @@ +--- +title: PostgreSQL Development Guide +description: Technical implementation details and development patterns for PostgreSQL integration in DeployStack Backend. +sidebarTitle: PostgreSQL Development +--- + +## Overview + +DeployStack uses PostgreSQL as its database backend, providing enterprise-grade reliability with ACID compliance, advanced features, and horizontal scalability through read replicas and partitioning. + +> **Setup Instructions**: For initial PostgreSQL configuration, see the [Database Setup Guide](/self-hosted/database-setup). 
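The result-handling patterns in this guide all reduce to one convention: check `rowCount`, which node-postgres types as `number | null`. A minimal standalone helper (hypothetical, not from the codebase) makes that normalization explicit:

```typescript
// node-postgres reports affected rows as rowCount: number | null,
// so normalize with ?? before comparing or returning a count.
interface PgWriteResult {
  rowCount: number | null;
}

function didAffectRows(result: PgWriteResult): boolean {
  return (result.rowCount ?? 0) > 0;
}

function affectedRows(result: PgWriteResult): number {
  return result.rowCount ?? 0;
}
```

Inlining `(result.rowCount || 0) > 0` at each call site, as the service examples below do, is equivalent; a shared helper just keeps the null handling in one place.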
+ +## Technical Architecture + +### Enterprise PostgreSQL Features + +**ACID Compliance**: +- Full transactional support with rollback capabilities +- Multi-version concurrency control (MVCC) +- Point-in-time recovery and continuous archiving + +**Connection Pooling**: +- Efficient connection management via `node-postgres` +- Configurable pool size and timeout settings +- Automatic connection recycling + +**Native Type System**: +- Boolean, timestamp with timezone, JSONB, arrays +- Custom types and enums +- Full-text search capabilities + +### Drizzle ORM Integration + +DeployStack uses the `node-postgres` driver for optimal PostgreSQL performance: + +```typescript +import { drizzle } from 'drizzle-orm/node-postgres'; +import { Pool } from 'pg'; + +// PostgreSQL connection pool +const pool = new Pool({ + host: config.host, + port: config.port, + database: config.database, + user: config.user, + password: config.password, + ssl: config.ssl ? { rejectUnauthorized: false } : false +}); + +const db = drizzle(pool, { schema }); +``` + +## Working with Query Results + +PostgreSQL operations return result objects with specific properties: + +```typescript +// PostgreSQL result object structure +{ + rowCount: 1, // Number of affected rows + rows: [], // Returned rows from SELECT queries + command: 'DELETE', // SQL command type + oid: 0, + fields: [] +} +``` + +### Standard Patterns + +**Delete Operations**: +```typescript +export class McpInstallationService { + async deleteInstallation(id: string): Promise<boolean> { + const result = await this.db + .delete(mcpServerInstallations) + .where(eq(mcpServerInstallations.id, id)); + + return (result.rowCount || 0) > 0; + } +} +``` + +**Update Operations**: +```typescript +export class TeamService { + async updateTeamName(id: string, name: string): Promise<boolean> { + const result = await this.db + .update(teams) + .set({ name, updated_at: new Date() }) + .where(eq(teams.id, id)); + + return (result.rowCount || 0) > 0; + } +} +``` + +**Counting
Affected Rows**: +```typescript +export class TokenService { + async revokeExpiredTokens(): Promise<number> { + const result = await this.db + .delete(oauthAccessTokens) + .where(lt(oauthAccessTokens.expires_at, Date.now())); + + return result.rowCount || 0; + } +} +``` + +## Schema Architecture + +### Schema Structure + +DeployStack uses a modular schema structure with PostgreSQL-native types: + +**File Structure**: +``` +services/backend/src/db/ + ├── schema.ts # Main schema export + ├── schema-tables/ # Individual table definitions + │ ├── auth.ts # Authentication tables + │ ├── teams.ts # Team and membership tables + │ ├── mcp.ts # MCP server configurations + │ ├── satellites.ts # Satellite management + │ └── ... + └── migrations/ # PostgreSQL migrations +``` + +### PostgreSQL Type System + +DeployStack leverages PostgreSQL's native type system: + +**Data Types**: + +| Type | PostgreSQL Implementation | Example | +|------|--------------------------|---------| +| Boolean | `boolean('col')` | `email_verified: boolean('email_verified')` | +| Timestamp | `timestamp('col', { withTimezone: true })` | `created_at: timestamp('created_at', { withTimezone: true })` | +| Default Now | `.defaultNow()` | `created_at: timestamp('created_at').defaultNow()` | +| Text/String | `text('col')` | `name: text('name')` | +| JSONB | `jsonb('col')` | `metadata: jsonb('metadata')` | +| Table Builder | `pgTable('name', { ... })` | `export const users = pgTable('users', { ... 
})` | + +**Example Table Definition**: + +```typescript +import { pgTable, text, boolean, timestamp } from 'drizzle-orm/pg-core'; + +export const authUser = pgTable('authUser', { + id: text('id').primaryKey(), + email: text('email').notNull().unique(), + email_verified: boolean('email_verified').notNull().default(false), + created_at: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(), + updated_at: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(), +}); +``` + +### Adding New Tables + +When adding new tables, follow this pattern: + +1. **Create table definition** in `src/db/schema-tables/[group].ts` +2. **Generate migration** using `npm run db:generate` +3. **Review and apply** migration + +**Example: Adding a "notifications" table**: + +```typescript +// File: src/db/schema-tables/notifications.ts +import { pgTable, text, boolean, timestamp } from 'drizzle-orm/pg-core'; +import { authUser } from './auth'; + +export const notifications = pgTable('notifications', { + id: text('id').primaryKey(), + user_id: text('user_id').notNull().references(() => authUser.id), + title: text('title').notNull(), + message: text('message').notNull(), + read: boolean('read').notNull().default(false), + created_at: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(), +}); +``` + +## Migration System + +### Migration Directory + +PostgreSQL migrations are stored in: + +``` +drizzle/ + └── migrations/ # PostgreSQL migration files + ├── 0000_create_users.sql + ├── 0001_create_teams.sql + └── meta/ # Migration metadata +``` + +### Migration SQL Structure + +PostgreSQL migrations use standard PostgreSQL SQL syntax: + +```sql +CREATE TABLE "authUser" ( + "id" TEXT PRIMARY KEY, + "email" TEXT NOT NULL UNIQUE, + "email_verified" BOOLEAN DEFAULT false NOT NULL, + "created_at" TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL, + "updated_at" TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL +); + +CREATE INDEX "idx_authUser_email" ON "authUser" ("email");
+``` + +### Generating Migrations + +```bash +# Generate migration from schema changes +npm run db:generate + +# This creates files in drizzle/migrations/ +``` + +The migration generator analyzes your schema changes and creates appropriate SQL migration files. + +### Applying Migrations + +Migrations are automatically applied on server startup: + +```typescript +// Automatic migration application +const migrationsPath = path.join(process.cwd(), 'drizzle', 'migrations'); +await migrate(db, { migrationsFolder: migrationsPath }); +``` + +You can also apply migrations manually: + +```bash +npm run db:up +``` + +## Environment Configuration + +### Required Environment Variables + +```bash +# PostgreSQL Connection Settings +POSTGRES_HOST=localhost # Database host +POSTGRES_PORT=5432 # Database port (default: 5432) +POSTGRES_DATABASE=deploystack # Database name +POSTGRES_USER=your_user # Database user +POSTGRES_PASSWORD=your_password # Database password +POSTGRES_SSL=false # Enable SSL (true/false) +``` + +### SSL Configuration + +For production deployments with SSL: + +```bash +POSTGRES_SSL=true +``` + +This enables SSL with `rejectUnauthorized: false` for self-signed certificates. For production, configure proper SSL certificates. + +### Docker Compose Example + +```yaml +services: + postgres: + image: postgres:16-alpine + environment: + POSTGRES_DB: deploystack + POSTGRES_USER: deploystack + POSTGRES_PASSWORD: your_secure_password + ports: + - "5432:5432" + volumes: + - postgres_data:/var/lib/postgresql/data + + backend: + build: ./services/backend + environment: + POSTGRES_HOST: postgres + POSTGRES_PORT: 5432 + POSTGRES_DATABASE: deploystack + POSTGRES_USER: deploystack + POSTGRES_PASSWORD: your_secure_password + POSTGRES_SSL: false + depends_on: + - postgres + +volumes: + postgres_data: +``` + +## Development Workflow + +### Local Development Setup + +1. 
**Install PostgreSQL**:
+   ```bash
+   # macOS (Homebrew)
+   brew install postgresql@16
+   brew services start postgresql@16
+
+   # Ubuntu/Debian
+   sudo apt-get install postgresql-16
+
+   # Docker
+   docker run -d --name postgres \
+     -e POSTGRES_PASSWORD=password \
+     -p 5432:5432 \
+     postgres:16-alpine
+   ```
+
+2. **Create Database**:
+   ```bash
+   # Create database
+   createdb -U postgres deploystack
+
+   # Or using psql
+   psql -U postgres
+   CREATE DATABASE deploystack;
+   \q
+   ```
+
+3. **Configure Environment**:
+   ```bash
+   # services/backend/.env
+   POSTGRES_HOST=localhost
+   POSTGRES_PORT=5432
+   POSTGRES_DATABASE=deploystack
+   POSTGRES_USER=postgres
+   POSTGRES_PASSWORD=password
+   POSTGRES_SSL=false
+   ```
+
+4. **Setup Database**:
+   ```bash
+   # Via API
+   curl -X POST http://localhost:3000/api/db/setup \
+     -H "Content-Type: application/json" \
+     -d '{"type": "postgresql"}'
+
+   # Or use the frontend wizard at http://localhost:5173/setup
+   ```
+
+### Testing with PostgreSQL
+
+```typescript
+// Test configuration for PostgreSQL
+const testDbConfig = {
+  host: process.env.TEST_POSTGRES_HOST || 'localhost',
+  port: parseInt(process.env.TEST_POSTGRES_PORT || '5432', 10),
+  database: `test_${Date.now()}`,
+  user: process.env.TEST_POSTGRES_USER || 'postgres',
+  password: process.env.TEST_POSTGRES_PASSWORD || 'password',
+};
+
+// Create test database
+await createTestDatabase(testDbConfig);
+
+// Run tests
+await runTests();
+
+// Cleanup
+await dropTestDatabase(testDbConfig);
+```
+
+## Advanced PostgreSQL Features
+
+### JSONB Support
+
+PostgreSQL's JSONB type provides efficient JSON storage with indexing:
+
+```typescript
+export const mcpServers = pgTable('mcpServerTemplates', {
+  id: text('id').primaryKey(),
+  name: text('name').notNull(),
+  tags: jsonb('tags').$type<string[]>(),
+  metadata: jsonb('metadata').$type<Record<string, unknown>>(),
+});
+
+// Query with JSONB operators
+const servers = await db
+  .select()
+  .from(mcpServers)
+  .where(sql`${mcpServers.tags} @> '["typescript"]'`);
+```
+
+### Full-Text Search
+ +PostgreSQL provides powerful full-text search capabilities: + +```typescript +// Add tsvector column for full-text search +export const mcpServers = pgTable('mcpServerTemplates', { + id: text('id').primaryKey(), + name: text('name').notNull(), + description: text('description'), + search_vector: sql`tsvector GENERATED ALWAYS AS (to_tsvector('english', coalesce(name, '') || ' ' || coalesce(description, ''))) STORED`, +}); + +// Create GIN index for fast searches +await db.execute(sql` + CREATE INDEX idx_mcpServers_search + ON mcpServerTemplates + USING GIN(search_vector) +`); + +// Perform full-text search +const results = await db + .select() + .from(mcpServers) + .where(sql`${mcpServers.search_vector} @@ to_tsquery('english', 'database & server')`); +``` + +### Advanced Indexing + +```typescript +// Partial indexes +await db.execute(sql` + CREATE INDEX idx_active_satellites + ON satellites (status) + WHERE status = 'active' +`); + +// Composite indexes +await db.execute(sql` + CREATE INDEX idx_team_installations + ON mcpTeamInstallations (team_id, server_id) +`); + +// Expression indexes +await db.execute(sql` + CREATE INDEX idx_lowercase_email + ON authUser (LOWER(email)) +`); +``` + +### Connection Pool Tuning + +```typescript +const pool = new Pool({ + host: config.host, + port: config.port, + database: config.database, + user: config.user, + password: config.password, + ssl: config.ssl, + + // Pool configuration + max: 20, // Maximum pool size + idleTimeoutMillis: 30000, // Idle connection timeout + connectionTimeoutMillis: 2000, // Connection timeout +}); +``` + +## Query Optimization + +### Using Explain + +```typescript +// Analyze query performance +const result = await db.execute(sql` + EXPLAIN ANALYZE + SELECT * FROM mcpServerTemplates + WHERE status = 'active' + ORDER BY created_at DESC +`); +``` + +### Query Builder Performance + +```typescript +// Efficient query with proper indexing +const installations = await db + .select({ + installation: 
mcpTeamInstallations, + server: mcpServers, + team: teams + }) + .from(mcpTeamInstallations) + .leftJoin(mcpServers, eq(mcpTeamInstallations.server_id, mcpServers.id)) + .leftJoin(teams, eq(mcpTeamInstallations.team_id, teams.id)) + .where(eq(mcpTeamInstallations.team_id, teamId)) + .orderBy(desc(mcpTeamInstallations.created_at)); +``` + +## Backup and Recovery + +### Backup Strategies + +```bash +# Full database backup +pg_dump -h localhost -U deploystack deploystack > backup.sql + +# Compressed backup +pg_dump -h localhost -U deploystack deploystack | gzip > backup.sql.gz + +# Custom format (supports parallel restore) +pg_dump -h localhost -U deploystack -Fc deploystack > backup.dump + +# Backup with Docker +docker exec postgres pg_dump -U deploystack deploystack > backup.sql +``` + +### Restore Database + +```bash +# Restore from SQL dump +psql -h localhost -U deploystack deploystack < backup.sql + +# Restore from custom format +pg_restore -h localhost -U deploystack -d deploystack backup.dump + +# Restore with Docker +docker exec -i postgres psql -U deploystack deploystack < backup.sql +``` + +## Monitoring and Maintenance + +### Database Statistics + +```sql +-- Check table sizes +SELECT + schemaname, + tablename, + pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size +FROM pg_tables +WHERE schemaname = 'public' +ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC; + +-- Check index usage +SELECT + schemaname, + tablename, + indexname, + idx_scan as index_scans, + pg_size_pretty(pg_relation_size(indexrelid)) as index_size +FROM pg_stat_user_indexes +ORDER BY idx_scan DESC; +``` + +### Vacuum and Analyze + +```sql +-- Analyze tables for query optimization +ANALYZE; + +-- Vacuum to reclaim storage +VACUUM; + +-- Full vacuum (locks tables) +VACUUM FULL; + +-- Analyze specific table +ANALYZE mcpServerTemplates; +``` + +--- + +For more information about database management in DeployStack, see the [Database Management 
Guide](/development/backend/database). diff --git a/development/backend/database/sqlite.mdx b/development/backend/database/sqlite.mdx deleted file mode 100644 index db75a03..0000000 --- a/development/backend/database/sqlite.mdx +++ /dev/null @@ -1,446 +0,0 @@ ---- -title: SQLite Database Development Guide -description: Technical implementation details and best practices for SQLite integration in DeployStack Backend development. -sidebarTitle: SQLite Database ---- - - -## Overview - -SQLite is the default database for DeployStack development and small to medium deployments. It provides excellent performance, zero configuration, and a simple file-based architecture that makes it ideal for development, testing, and single-server deployments. - -> **Setup Instructions**: For initial SQLite configuration, see the [Database Setup Guide](/self-hosted/database-setup#sqlite). - -## Technical Architecture - -### File-Based Database - -SQLite stores the entire database in a single file, making it extremely portable and easy to manage: - -- **Database File**: `persistent_data/database/deploystack.db` -- **Zero Configuration**: No server setup or network configuration required -- **ACID Compliance**: Full transaction support with rollback capabilities -- **Cross-Platform**: Works identically across all operating systems - -### Direct Driver Integration - -DeployStack uses the `better-sqlite3` driver for optimal SQLite performance: - -```typescript -import { drizzle } from 'drizzle-orm/better-sqlite3'; -import Database from 'better-sqlite3'; - -// Direct file-based connection -const sqlite = new Database(dbPath); -const db = drizzle(sqlite, { schema }); -``` - -## Performance Characteristics - -### Advantages - -**Fast Local Operations**: -- No network latency for database operations -- Direct file system access for maximum speed -- Excellent read performance for concurrent operations - -**Simple Deployment**: -- Single file contains entire database -- No separate database 
server required -- Easy backup and restore operations - -**Development Friendly**: -- Instant startup with no configuration -- Easy to reset and recreate for testing -- Perfect for local development workflows - -### Limitations - -**Single Server Only**: -- Cannot be shared across multiple application instances -- No built-in replication or clustering -- Limited to single-server deployments - -**Concurrent Write Limitations**: -- Single writer at a time (multiple readers supported) -- Write operations are serialized -- May become a bottleneck under heavy write loads - -## Development Workflow - -### Local Development Setup - -SQLite is the recommended database for local development: - -```bash -# SQLite requires no additional setup -DB_TYPE=sqlite - -# Optional: Custom database path -SQLITE_DB_PATH=persistent_data/database/my-custom.db -``` - -### Database File Management - -**Default Location**: `services/backend/persistent_data/database/deploystack.db` - -**Directory Structure**: -``` -services/backend/ -├── persistent_data/ -│ ├── database/ -│ │ └── deploystack.db # Main database file -│ └── db.selection.json # Database type selection -``` - -### Testing with SQLite - -SQLite is excellent for testing due to its simplicity: - -```typescript -// Test setup - create temporary database -const testDb = new Database(':memory:'); // In-memory for speed -// or -const testDb = new Database('test.db'); // File-based for persistence - -// Run migrations -await migrate(drizzle(testDb), { migrationsFolder: './migrations' }); - -// Run tests -// ... 
- -// Cleanup -testDb.close(); -``` - -## Global Settings Integration - -### Batch Operations - -SQLite excels at batch operations and can handle large global settings initialization efficiently: - -- **Large Batches**: Can insert all 17+ global settings in a single transaction -- **No Parameter Limits**: Unlike D1, SQLite has no practical parameter limits -- **Transaction Safety**: All settings created atomically - -### Performance Benefits - -```typescript -// SQLite can handle large batch operations efficiently -await db.transaction(async (tx) => { - // Insert all settings in a single transaction - await tx.insert(globalSettings).values(allSettingsData); - await tx.insert(globalSettingGroups).values(allGroupsData); -}); -``` - -## Database Inspection and Debugging - -### SQLite CLI - -The SQLite command-line interface is the primary tool for database inspection: - -```bash -# Open database -sqlite3 services/backend/persistent_data/database/deploystack.db - -# Common commands -.tables # List all tables -.schema tablename # Show table schema -.headers on # Show column headers -.mode column # Format output in columns - -# Query examples -SELECT * FROM globalSettings LIMIT 10; -SELECT COUNT(*) FROM users; -.quit # Exit -``` - -### GUI Tools - -**DB Browser for SQLite** (Recommended): -- Download: [https://sqlitebrowser.org/](https://sqlitebrowser.org/) -- Visual table browsing and editing -- Query execution with syntax highlighting -- Schema visualization - -**Other Options**: -- **SQLiteStudio**: Cross-platform SQLite manager -- **DBeaver**: Universal database tool with SQLite support -- **VS Code Extensions**: SQLite Viewer, SQLite3 Editor - -### Programmatic Inspection - -```typescript -// Get database info -const info = db.prepare("PRAGMA database_list").all(); -const tables = db.prepare("SELECT name FROM sqlite_master WHERE type='table'").all(); - -// Check table structure -const schema = db.prepare("PRAGMA table_info(globalSettings)").all(); - -// Performance 
analysis -const stats = db.prepare("PRAGMA compile_options").all(); -``` - -## Backup and Recovery - -### File-Based Backup - -SQLite's file-based nature makes backup extremely simple: - -```bash -# Simple file copy (when database is not in use) -cp persistent_data/database/deploystack.db backup/deploystack-$(date +%Y%m%d).db - -# Using SQLite backup command (safe during operation) -sqlite3 persistent_data/database/deploystack.db ".backup backup/deploystack-$(date +%Y%m%d).db" -``` - -### Automated Backup Script - -```bash -#!/bin/bash -# backup-sqlite.sh - -DB_PATH="persistent_data/database/deploystack.db" -BACKUP_DIR="backup" -DATE=$(date +%Y%m%d_%H%M%S) - -mkdir -p $BACKUP_DIR - -# Create backup -sqlite3 $DB_PATH ".backup $BACKUP_DIR/deploystack-$DATE.db" - -# Keep only last 7 days of backups -find $BACKUP_DIR -name "deploystack-*.db" -mtime +7 -delete - -echo "Backup created: $BACKUP_DIR/deploystack-$DATE.db" -``` - -### Recovery - -```bash -# Restore from backup -cp backup/deploystack-20250103.db persistent_data/database/deploystack.db - -# Or using SQLite restore -sqlite3 persistent_data/database/deploystack.db ".restore backup/deploystack-20250103.db" -``` - -## Performance Optimization - -### Indexing Strategy - -SQLite benefits greatly from proper indexing: - -```sql --- Example indexes for common queries -CREATE INDEX idx_users_email ON users(email); -CREATE INDEX idx_global_settings_key ON globalSettings(key); -CREATE INDEX idx_sessions_user_id ON sessions(user_id); -CREATE INDEX idx_teams_created_at ON teams(created_at); -``` - -### PRAGMA Settings - -Optimize SQLite performance with PRAGMA settings: - -```typescript -// Performance optimizations -db.pragma('journal_mode = WAL'); // Write-Ahead Logging -db.pragma('synchronous = NORMAL'); // Balanced safety/performance -db.pragma('cache_size = 1000000'); // 1GB cache -db.pragma('temp_store = MEMORY'); // Use memory for temp tables -``` - -### Connection Pooling - -While SQLite doesn't need traditional 
connection pooling, you can optimize connection usage: - -```typescript -// Reuse single connection -const sqlite = new Database(dbPath, { - readonly: false, - fileMustExist: false, - timeout: 5000, - verbose: process.env.NODE_ENV === 'development' ? (message) => { - server.log.debug({ operation: 'sqlite_query' }, message); - } : undefined -}); - -// Enable WAL mode for better concurrency -sqlite.pragma('journal_mode = WAL'); -``` - -## Migration Considerations - -### SQLite-Specific Features - -SQLite has some unique characteristics for migrations: - -```sql --- SQLite doesn't support all ALTER TABLE operations --- Instead of ALTER COLUMN, you need to recreate the table - --- Example: Adding a column (supported) -ALTER TABLE users ADD COLUMN phone TEXT; - --- Example: Changing column type (not supported directly) --- Requires table recreation: -CREATE TABLE users_new ( - id TEXT PRIMARY KEY, - email TEXT NOT NULL, - name TEXT NOT NULL, - age INTEGER -- Changed from TEXT to INTEGER -); - -INSERT INTO users_new SELECT id, email, name, CAST(age AS INTEGER) FROM users; -DROP TABLE users; -ALTER TABLE users_new RENAME TO users; -``` - -### Migration Best Practices - -1. **Test Migrations**: Always test on a copy of production data -2. **Backup Before Migration**: Create backup before applying migrations -3. **Use Transactions**: Wrap migrations in transactions for rollback capability -4. 
**Check Constraints**: Verify foreign key constraints after table recreation - -## Troubleshooting - -### Common Issues - -**"Database is locked"** -- **Cause**: Another process has the database open -- **Solution**: Ensure only one application instance accesses the database -- **Prevention**: Use WAL mode for better concurrency - -**"No such table" errors** -- **Cause**: Migrations haven't been applied -- **Solution**: Run `npm run db:up` or restart the server -- **Check**: Verify migration files exist in `drizzle/migrations_sqlite/` - -**Poor performance** -- **Cause**: Missing indexes or suboptimal queries -- **Solution**: Add appropriate indexes and optimize queries -- **Analysis**: Use `EXPLAIN QUERY PLAN` to analyze query performance - -**File corruption** -- **Cause**: Unexpected shutdown or disk issues -- **Solution**: Restore from backup -- **Prevention**: Use WAL mode and regular backups - -### Debugging Queries - -```typescript -// Enable query logging -const db = drizzle(sqlite, { - schema, - logger: { - logQuery: (query, params) => { - server.log.debug({ operation: 'sqlite_query', query, params }, 'Executing query'); - } - } -}); - -// Analyze query performance -const explain = db.prepare('EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?').all('test@example.com'); -server.log.debug({ operation: 'sqlite_explain', explain }, 'Query execution plan'); -``` - -## Production Considerations - -### When to Use SQLite in Production - -**Good For**: -- Single-server applications -- Read-heavy workloads -- Small to medium datasets (< 1TB) -- Applications with predictable load patterns -- Embedded applications - -**Consider Alternatives When**: -- Multiple application servers needed -- High concurrent write requirements -- Need for real-time replication -- Distributed deployment requirements - -### Production Optimizations - -```typescript -// Production SQLite configuration -const sqlite = new Database(dbPath, { - readonly: false, - fileMustExist: true, - 
timeout: 10000 -}); - -// Production PRAGMA settings -sqlite.pragma('journal_mode = WAL'); -sqlite.pragma('synchronous = NORMAL'); -sqlite.pragma('cache_size = 2000000'); // 2GB cache -sqlite.pragma('mmap_size = 268435456'); // 256MB memory-mapped I/O -sqlite.pragma('optimize'); // Optimize database -``` - -### Monitoring - -```typescript -// Monitor database size and performance -const stats = { - fileSize: fs.statSync(dbPath).size, - pageCount: db.prepare('PRAGMA page_count').get(), - pageSize: db.prepare('PRAGMA page_size').get(), - walSize: fs.existsSync(dbPath + '-wal') ? fs.statSync(dbPath + '-wal').size : 0 -}; - -server.log.info({ operation: 'sqlite_monitoring', stats }, 'Database statistics'); -``` - -## Integration with DeployStack Features - -### Global Settings - -SQLite provides optimal performance for global settings: -- **Fast initialization**: All settings created in single transaction -- **No batching needed**: No parameter limits to worry about -- **Immediate consistency**: All changes immediately visible - -### Plugin System - -Plugins work seamlessly with SQLite: -- **Table creation**: Plugin tables created through standard migrations -- **Data operations**: Full SQL feature support -- **Performance**: Excellent performance for plugin data operations - -### Migration System - -SQLite migration advantages: -- **Fast execution**: Local file operations are very fast -- **Transaction safety**: Full rollback support for failed migrations -- **Simple debugging**: Easy to inspect database state during development - -## Future Considerations - -### Scaling Beyond SQLite - -When you outgrow SQLite, DeployStack makes migration easy: - -1. **Export Data**: Use SQLite's `.dump` command -2. **Transform Schema**: Convert to target database format -3. **Update Configuration**: Change database type in setup -4. 
**Import Data**: Load data into new database - -### Hybrid Approaches - -Consider hybrid approaches for scaling: -- **Read Replicas**: Use D1 or Turso for global read access -- **Caching Layer**: Add Redis for frequently accessed data -- **Microservices**: Split into multiple services with separate databases - ---- - -For general database concepts and cross-database functionality, see the [Database Development Guide](/development/backend/database). - -For initial setup and configuration, see the [Database Setup Guide](/self-hosted/database-setup). diff --git a/development/backend/database/turso.mdx b/development/backend/database/turso.mdx deleted file mode 100644 index 753189b..0000000 --- a/development/backend/database/turso.mdx +++ /dev/null @@ -1,480 +0,0 @@ ---- -title: Turso Database Development -description: Complete guide to using Turso distributed SQLite database with DeployStack Backend, including setup, configuration, and best practices. -sidebarTitle: Turso Database ---- - - -## Overview - -Turso is a distributed SQLite database service that provides global replication and edge performance. It's built on libSQL, an open-source fork of SQLite that adds additional features while maintaining full SQLite compatibility. - -DeployStack integrates with Turso using the official `@libsql/client` driver through Drizzle ORM, providing excellent performance and developer experience. - -## Key Features - -- **Global Replication**: Automatic multi-region database replication -- **Edge Performance**: Low-latency access from anywhere in the world -- **SQLite Compatibility**: Full compatibility with SQLite syntax and features -- **Scalability**: Automatic scaling based on usage patterns -- **libSQL Protocol**: Enhanced SQLite with additional networking capabilities - - - **Migration Compatibility**: Turso requires SQL statements to be executed individually rather than in batches. 
DeployStack automatically handles this requirement by intelligently splitting migration files into individual statements during execution. You don't need to modify your migrations or write them differently - we handle the complexity for you. - - -## Setup and Configuration - -### Prerequisites - -1. **Turso Account**: Sign up at [turso.tech](https://turso.tech) -2. **Turso CLI**: Install the Turso CLI tool -3. **Database Creation**: Create a Turso database instance - -### Installing Turso CLI - -```bash -# macOS (Homebrew) -brew install tursodatabase/tap/turso - -# Linux/macOS (curl) -curl -sSfL https://get.tur.so/install.sh | bash - -# Windows (PowerShell) -powershell -c "irm get.tur.so/install.ps1 | iex" -``` - -### Creating a Turso Database - -```bash -# Login to Turso -turso auth login - -# Create a new database -turso db create deploystack-dev - -# Get the database URL -turso db show deploystack-dev --url - -# Create an authentication token -turso db tokens create deploystack-dev -``` - -### Environment Configuration - -Add the following environment variables to your `.env` file: - -```bash -# Turso Configuration -TURSO_DATABASE_URL=libsql://your-database-name-your-org.turso.io -TURSO_AUTH_TOKEN=your_auth_token_here -``` - -**Important Notes:** -- The database URL should start with `libsql://` -- Keep your auth token secure and never commit it to version control -- Use different databases for different environments (dev/staging/prod) - -## Database Setup in DeployStack - -### Using the Setup API - -Once your environment variables are configured, use the DeployStack setup API: - -```bash -# Setup Turso database -curl -X POST http://localhost:3000/api/db/setup \ - -H "Content-Type: application/json" \ - -d '{"type": "turso"}' -``` - -### Verification - -Check that the database is properly configured: - -```bash -# Check database status -curl http://localhost:3000/api/db/status -``` - -Expected response: -```json -{ - "configured": true, - "initialized": true, - 
"dialect": "turso" -} -``` - -## Development Workflow - -### Schema Development - -Turso uses the same SQLite schema as other database types. All schema changes are made in `src/db/schema.sqlite.ts`: - -```typescript -// Example: Adding a new table -export const projects = sqliteTable('projects', { - id: text('id').primaryKey(), - name: text('name').notNull(), - description: text('description'), - userId: text('user_id').references(() => authUser.id), - createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()), - updatedAt: integer('updated_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()), -}); -``` - -### Migration Generation - -Generate migrations using the standard Drizzle commands: - -```bash -# Generate migration files -npm run db:generate - -# Apply migrations (automatic on server start) -npm run db:up -``` - -**Note**: While migrations use standard SQLite syntax with multiple statements and breakpoint markers, DeployStack automatically processes these for Turso compatibility. Each CREATE TABLE, CREATE INDEX, and other SQL statements are executed individually behind the scenes, ensuring smooth deployment without any manual intervention. - -### Database Operations - -All standard Drizzle operations work with Turso: - -```typescript -// Example: Querying data -const users = await db.select().from(schema.authUser).all(); - -// Example: Inserting data -await db.insert(schema.authUser).values({ - id: 'user_123', - username: 'john_doe', - email: 'john@example.com', - // ... 
other fields -}); - -// Example: Complex queries with joins -const usersWithTeams = await db - .select() - .from(schema.authUser) - .leftJoin(schema.teamMembers, eq(schema.authUser.id, schema.teamMembers.userId)) - .where(eq(schema.authUser.active, true)); -``` - -## Performance Considerations - -### Connection Management - -Turso connections are managed automatically by the libSQL client: - -- **Connection Pooling**: Automatic connection pooling for optimal performance -- **Keep-Alive**: Connections are kept alive to reduce latency -- **Automatic Reconnection**: Handles network interruptions gracefully - -### Query Optimization - -- **Prepared Statements**: Use prepared statements for repeated queries -- **Batch Operations**: Group multiple operations when possible (note: migrations are automatically split for compatibility) -- **Indexing**: Add appropriate indexes for frequently queried columns -- **Migration Performance**: Initial migrations execute individually for compatibility, but runtime queries maintain full performance - -```typescript -// Example: Batch operations -await db.batch([ - db.insert(schema.authUser).values(user1), - db.insert(schema.authUser).values(user2), - db.insert(schema.authUser).values(user3), -]); -``` - -### Regional Performance - -- **Edge Locations**: Turso automatically routes queries to the nearest edge location -- **Read Replicas**: Read operations are served from local replicas -- **Write Consistency**: Writes are replicated globally with eventual consistency - -## Best Practices - -### Environment Management - -```bash -# Development -TURSO_DATABASE_URL=libsql://deploystack-dev-your-org.turso.io -TURSO_AUTH_TOKEN=dev_token_here - -# Staging -TURSO_DATABASE_URL=libsql://deploystack-staging-your-org.turso.io -TURSO_AUTH_TOKEN=staging_token_here - -# Production -TURSO_DATABASE_URL=libsql://deploystack-prod-your-org.turso.io -TURSO_AUTH_TOKEN=prod_token_here -``` - -### Security - -- **Token Rotation**: Regularly rotate 
authentication tokens -- **Environment Isolation**: Use separate databases for each environment -- **Access Control**: Use Turso's built-in access control features -- **Encryption**: Data is encrypted in transit and at rest - -### Monitoring - -```bash -# Monitor database usage -turso db show deploystack-prod - -# View recent activity -turso db shell deploystack-prod --command ".stats" - -# Check replication status -turso db locations deploystack-prod -``` - -## Debugging and Troubleshooting - -### Common Issues - -**Connection Errors** -``` -Error: Failed to connect to Turso database -``` -- Verify `TURSO_DATABASE_URL` is correct and starts with `libsql://` -- Check that `TURSO_AUTH_TOKEN` is valid and not expired -- Ensure network connectivity to Turso servers - -**Authentication Errors** -``` -Error: Authentication failed -``` -- Regenerate the auth token: `turso db tokens create your-database` -- Verify the token has proper permissions -- Check that the token matches the database - -**Migration Errors** -``` -Error: SQL_MANY_STATEMENTS: SQL string contains more than one statement -``` -- **Status**: This error is automatically handled by DeployStack as of version 1.0+ -- **Cause**: Turso's libSQL client requires individual statement execution -- **Solution**: DeployStack automatically splits and executes statements individually -- If you encounter this error, ensure you're running the latest version of DeployStack - -``` -Error: Migration failed to apply -``` -- Check migration SQL syntax is valid SQLite -- Verify no conflicting schema changes -- Review migration order and dependencies -- Ensure your Turso database has sufficient resources - -### Debug Logging - -Enable detailed logging to troubleshoot issues: - -```bash -# Enable debug logging -LOG_LEVEL=debug npm run dev -``` - -Look for Turso-specific log entries: -``` -[INFO] Creating Turso connection -[INFO] LibSQL client created -[INFO] Turso database instance created successfully -``` - -### Database 
Inspection - -```bash -# Connect to database shell -turso db shell your-database - -# Run SQL queries -turso db shell your-database --command "SELECT * FROM authUser LIMIT 5" - -# Export database -turso db dump your-database --output backup.sql -``` - -### Performance Monitoring - -```bash -# Check database statistics -turso db show your-database - -# Monitor query performance -turso db shell your-database --command "EXPLAIN QUERY PLAN SELECT * FROM authUser" -``` - -## Technical Implementation Details - -### How DeployStack Handles Turso Migrations - -Unlike SQLite which can execute multiple SQL statements in a single call, Turso's libSQL client requires each statement to be executed individually. DeployStack solves this transparently: - -1. **Automatic Statement Splitting**: Migration files are intelligently parsed to separate: - - Statements divided by `--> statement-breakpoint` markers - - Multiple statements on the same line (like consecutive CREATE INDEX commands) - - Complex migrations with mixed DDL operations - -2. **Sequential Execution**: Each statement is executed in order with proper error handling -3. **Transaction Safety**: Failed statements properly roll back to maintain database consistency -4. **Performance Impact**: Migration execution is slightly slower than SQLite (milliseconds per statement), but this only affects initial setup and schema changes, not runtime performance - -### Migration File Compatibility - -Your existing Drizzle migration files work without modification: - -```sql --- This standard Drizzle migration works perfectly -CREATE TABLE users (...); ---> statement-breakpoint -CREATE INDEX idx_users_email ON users(email);--> statement-breakpoint -CREATE UNIQUE INDEX idx_users_username ON users(username); -``` - -DeployStack automatically handles the parsing and execution, so you write migrations exactly as you would for SQLite. 
-
-## Advanced Features
-
-### Multi-Region Setup
-
-```bash
-# Create database with specific regions
-turso db create deploystack-global --location lax,fra,nrt
-
-# Check current locations
-turso db locations deploystack-global
-
-# Add more locations
-turso db locations add deploystack-global syd
-```
-
-### Database Branching
-
-```bash
-# Create a branch for development
-turso db create deploystack-feature --from-db deploystack-main
-
-# Switch between branches
-turso db shell deploystack-feature
-```
-
-### Backup and Restore
-
-```bash
-# Create backup
-turso db dump deploystack-prod --output backup-$(date +%Y%m%d).sql
-
-# Restore from backup
-turso db shell deploystack-dev < backup-20250104.sql
-```
-
-## Integration with DeployStack Features
-
-### Global Settings
-
-Turso works seamlessly with DeployStack's global settings system:
-
-- **Batch Operations**: Efficient batch creation of settings
-- **Encryption**: Settings are encrypted before storage
-- **Performance**: Optimized for Turso's distributed architecture
-
-### Plugin System
-
-Plugins can extend the database schema with Turso:
-
-```typescript
-// Example plugin with Turso-optimized tables
-class MyPlugin implements Plugin {
-  databaseExtension: DatabaseExtension = {
-    tableDefinitions: {
-      'my_table': {
-        id: (builder) => builder('id').primaryKey(),
-        data: (builder) => builder('data').notNull(),
-        // Optimized for Turso's replication
-        region: (builder) => builder('region'),
-        created_at: (builder) => builder('created_at')
-      }
-    }
-  };
-}
-```
-
-### Authentication
-
-Lucia authentication works perfectly with Turso:
-
-- **Session Management**: Distributed session storage
-- **User Data**: Global user data replication
-- **Performance**: Fast authentication checks worldwide
-
-## Migration from Other Databases
-
-### From SQLite
-
-Since Turso is SQLite-compatible, migration is straightforward:
-
-1. **Export SQLite data**: `sqlite3 database.db .dump > export.sql`
-2. **Import to Turso**: `turso db shell your-database < export.sql`
-3. **Update environment variables**: Switch to Turso configuration
-4. **Test application**: Verify all functionality works
-
-### From D1 (if previously used)
-
-1. **Export D1 data**: Use Wrangler to export data
-2. **Convert to SQLite format**: Ensure compatibility
-3. **Import to Turso**: Load data into Turso database
-4. **Update configuration**: Switch database type to Turso
-
-## Known Limitations and Solutions
-
-### Statement Execution
-
-**Limitation**: Turso cannot execute multiple SQL statements in a single database call.
-
-**DeployStack Solution**: Automatic statement splitting and sequential execution. This is completely transparent - you never need to think about it.
-
-### Migration Speed
-
-**Limitation**: Migrations apply slightly slower than with local SQLite due to individual statement execution and network latency.
-
-**DeployStack Solution**: Migrations are a one-time operation during setup or updates. Runtime performance is unaffected. For large migrations, DeployStack provides progress logging to track execution.
-
-## Cost Optimization
-
-### Usage Monitoring
-
-```bash
-# Check current usage
-turso db show your-database
-
-# Monitor over time
-turso org show
-```
-
-### Optimization Strategies
-
-- **Query Efficiency**: Optimize queries to reduce database load
-- **Connection Reuse**: Leverage connection pooling
-- **Regional Placement**: Choose regions close to your users
-- **Data Archiving**: Archive old data to reduce storage costs
-
-## Support and Resources
-
-- **Turso Documentation**: [docs.turso.tech](https://docs.turso.tech)
-- **libSQL Documentation**: [github.com/libsql/libsql](https://github.com/libsql/libsql)
-- **Community Discord**: [discord.gg/turso](https://discord.gg/turso)
-- **GitHub Issues**: [github.com/tursodatabase/turso-cli](https://github.com/tursodatabase/turso-cli)
-
-## Next Steps
-
-1. **Set up your Turso database** following the configuration steps above
-2. **Configure environment variables** in your `.env` file
-3. **Run the database setup** using the DeployStack API
-4. **Start developing** with global SQLite performance
-5. **Monitor and optimize** your database usage
-
-For more information about database management in DeployStack, see the [Database Management Guide](/development/backend/database).
diff --git a/development/backend/environment-variables.mdx b/development/backend/environment-variables.mdx
index e519391..ea43345 100644
--- a/development/backend/environment-variables.mdx
+++ b/development/backend/environment-variables.mdx
@@ -164,7 +164,7 @@ services:
       - DEPLOYSTACK_ENCRYPTION_SECRET=your-encryption-secret
 
     volumes:
-      - ./data:/app/data # For SQLite database persistence
+      - ./data:/app/data # For persistent data storage
 ```
 
 ### Dockerfile Environment Handling
@@ -241,15 +241,16 @@ export const createServer = async () => {
 
 ### Database Configuration Example
 
 ```typescript
-// src/db/setup.ts
-const setupDatabase = () => {
+// src/db/config.ts
+const getDatabaseUrl = () => {
   const isTestEnv = process.env.NODE_ENV === 'test'
-  const sqliteDbFileName = isTestEnv ? 'deploystack.test.db' : 'deploystack.db'
-
-  const dbPath = path.join(process.cwd(), 'data', sqliteDbFileName)
-
-  return new Database(dbPath)
+  const dbName = isTestEnv ? 'deploystack_test' : 'deploystack'
+
+  return process.env.DATABASE_URL || `postgresql://localhost:5432/${dbName}`
 }
+
+const connectionString = getDatabaseUrl()
+const pool = new Pool({ connectionString })
 ```
 
 ### Plugin System Configuration
diff --git a/development/backend/index.mdx b/development/backend/index.mdx
index 0d1bd50..efa4175 100644
--- a/development/backend/index.mdx
+++ b/development/backend/index.mdx
@@ -13,7 +13,7 @@ The DeployStack backend is a modern, high-performance Node.js application built
 - **Framework**: Fastify for high-performance HTTP server
 - **Language**: TypeScript for type safety
-- **Database**: SQLite (default) or PostgreSQL with Drizzle ORM
+- **Database**: PostgreSQL with Drizzle ORM
 - **Validation**: JSON Schema for request/response validation and OpenAPI generation
 - **Plugin System**: Extensible architecture with security isolation
 - **Authentication**: Dual authentication system - cookie-based sessions for frontend and OAuth 2.1 for satellite access
@@ -39,12 +39,12 @@ The development server starts at `http://localhost:3000` with API documentation
 
     Learn how to generate OpenAPI specifications, use Swagger UI, and implement JSON Schema validation for automatic API documentation.
 
-
-    SQLite and PostgreSQL setup, schema management, migrations, and Drizzle ORM best practices.
+    PostgreSQL setup, schema management, migrations, and Drizzle ORM best practices.
{
   const rawConn = server.rawDbConnection;
   if (rawConn) {
     const status = getDbStatus();
-    if (status.dialect === 'sqlite' && 'close' in rawConn) {
-      (rawConn as SqliteDriver.Database).close();
-      server.log.info('SQLite connection closed.');
+    if (status.dialect === 'postgresql') {
+      await (rawConn as Pool).end();
+      server.log.info('PostgreSQL connection pool closed.');
     }
   }
 });
@@ -408,7 +408,7 @@ GROUP BY b.id;
 
 ## Database Schema
 
-For the complete database schema, see [schema.sqlite.ts](https://github.com/deploystackio/deploystack/blob/main/services/backend/src/db/schema.sqlite.ts) in the backend directory.
+For the complete database schema, see [schema.ts](https://github.com/deploystackio/deploystack/blob/main/services/backend/src/db/schema.ts) in the backend directory.
 
 ### Jobs Table
 
@@ -472,7 +472,7 @@ CREATE TABLE queue_job_batches (
 
 ### Why Database-Backed?
 
-No additional infrastructure required (Redis, message queues). Uses existing SQLite/Turso database, and jobs persist across server restarts.
+No additional infrastructure required (Redis, message queues). Uses existing PostgreSQL database, and jobs persist across server restarts.
 
 ### Why Sequential Processing?
 
@@ -525,6 +525,6 @@ Generate complex reports from large datasets without blocking API requests.
 
 ## Summary
 
-The background job queue system provides a simple, reliable way to process long-running tasks in DeployStack. Built on familiar SQLite/Turso infrastructure, it requires no additional services while providing persistence, retry logic, and rate limiting. Workers follow a straightforward pattern making them easy to implement and test.
+The background job queue system provides a simple, reliable way to process long-running tasks in DeployStack. Built on PostgreSQL infrastructure, it requires no additional services while providing persistence, retry logic, and rate limiting. Workers follow a straightforward pattern making them easy to implement and test.
 For routine operations, the system handles thousands of jobs efficiently. For specialized needs requiring higher throughput or distributed processing, the architecture supports clear migration paths to more advanced solutions.
diff --git a/development/backend/metrics.mdx b/development/backend/metrics.mdx
index 8349a38..1b231ec 100644
--- a/development/backend/metrics.mdx
+++ b/development/backend/metrics.mdx
@@ -37,7 +37,7 @@ Cleanup Worker (cron job + background worker)
 
 **MCP Client Activity Metrics** serves as the complete reference implementation. All files are in place and can be used as templates for new metric types.
 
 **Key Files**:
-- Database: `src/db/schema.sqlite.ts` (table: `mcpClientActivityMetrics`)
+- Database: `src/db/schema.ts` (table: `mcpClientActivityMetrics`)
 - Base Service: `src/services/metrics/TimeSeriesMetricsService.ts`
 - Metric Service: `src/services/metrics/McpClientActivityMetricsService.ts`
 - Event Handler: `src/events/satellite/mcp-client-activity.ts`
@@ -83,7 +83,7 @@ Permissions: Users view their own, admins view all
 
 ### Step 2: Create Database Table
 
-Create your metrics table in `src/db/schema.sqlite.ts` following the `mcpClientActivityMetrics` table pattern.
+Create your metrics table in `src/db/schema.ts` following the `mcpClientActivityMetrics` table pattern.
 **Critical Requirements**:
 - Use `bucket_timestamp` (integer, Unix seconds)
@@ -115,9 +115,9 @@ Create a service that extends the base `TimeSeriesMetricsService`:
 
 ```typescript
 import { eq, gte, lte, and, sql } from 'drizzle-orm';
-import type { LibSQLDatabase } from 'drizzle-orm/libsql';
+import type { PostgresJsDatabase } from 'drizzle-orm/postgres-js';
 import { TimeSeriesMetricsService } from './TimeSeriesMetricsService';
-import { serverInstallMetrics } from '../../db/schema.sqlite';
+import { serverInstallMetrics } from '../../db/schema';
 import type {
   QueryParams,
   BucketData,
@@ -131,9 +131,9 @@ interface ServerInstallBucket extends BucketData {
 }
 
 export class ServerInstallMetricsService extends TimeSeriesMetricsService {
-  private db: LibSQLDatabase;
+  private db: PostgresJsDatabase;
 
-  constructor(db: LibSQLDatabase) {
+  constructor(db: PostgresJsDatabase) {
     super();
     this.db = db;
   }
@@ -407,16 +407,6 @@ const results = await this.db
   .orderBy(metrics.bucket_timestamp);
 ```
 
-### Database Driver Compatibility
-
-Handle both SQLite (`changes`) and Turso (`rowsAffected`):
-
-```typescript
-const deletedCount = (result.changes || result.rowsAffected || 0);
-```
-
-For more details, see [Database Driver Compatibility](/development/backend/database/#database-driver-compatibility).
-
 
 ## Common Pitfalls
 
@@ -447,15 +437,6 @@ const results = await db.select({
   .groupBy(metrics.bucket_timestamp);
 ```
 
-### ❌ Not handling both database drivers
-```typescript
-// WRONG - Only works with SQLite
-const deletedCount = result.changes;
-
-// CORRECT - Works with both SQLite and Turso
-const deletedCount = (result.changes || result.rowsAffected || 0);
-```
-
 ## Related Documentation
 
 - [Database Management](/development/backend/database/) - Schema design, migrations, Drizzle ORM
diff --git a/development/backend/oauth-providers.mdx b/development/backend/oauth-providers.mdx
index 1cf7ac3..d536eb9 100644
--- a/development/backend/oauth-providers.mdx
+++ b/development/backend/oauth-providers.mdx
@@ -38,7 +38,7 @@ services/backend/src/
 │   ├── github-oauth.ts       # GitHub settings
 │   └── [provider]-oauth.ts   # New provider settings
 ├── db/
-│   └── schema.sqlite.ts      # User table with provider IDs
+│   └── schema.ts             # User table with provider IDs
 └── lib/
     └── lucia.ts              # Session management
 ```
@@ -135,7 +135,7 @@ Key considerations:
 Add provider ID field to `authUser` table:
 
 ```typescript
-// In src/db/schema.sqlite.ts
+// In src/db/schema.ts
 // Add field like:
 // google_id: text('google_id').unique()
 // microsoft_id: text('microsoft_id').unique()
diff --git a/development/backend/oauth2-server.mdx b/development/backend/oauth2-server.mdx
index 14b177a..742317b 100644
--- a/development/backend/oauth2-server.mdx
+++ b/development/backend/oauth2-server.mdx
@@ -133,7 +133,7 @@ Implements RFC 7591 Dynamic Client Registration:
 
 #### Database Storage
 
 - **Table**: `dynamic_oauth_clients`
-- **Schema**: See `services/backend/src/db/schema.sqlite.ts`
+- **Schema**: See `services/backend/src/db/schema.ts`
 - **Fields**: client_id, client_name, redirect_uris, grant_types, response_types, scope, token_endpoint_auth_method, client_id_issued_at, expires_at
 - **Persistence**: Survives server restarts and supports multiple instances
 
@@ -178,7 +178,7 @@ Handles token lifecycle:
 
 ### Database Schema
 
 #### Dynamic OAuth Clients Table
 
-- **File**: `services/backend/src/db/schema.sqlite.ts`
+- **File**: `services/backend/src/db/schema.ts`
 - **Table**: `dynamic_oauth_clients`
 - **Migration**: `0006_keen_firestar.sql`
 - **Purpose**: Persistent storage for dynamically registered MCP clients
diff --git a/development/backend/plugins.mdx b/development/backend/plugins.mdx
index 9ca5db5..e1dbdf4 100644
--- a/development/backend/plugins.mdx
+++ b/development/backend/plugins.mdx
@@ -103,24 +103,24 @@ Add basic plugin information:
 
 ### 3. Define Database Schema (Optional)
 
-If your plugin requires database tables, create a `schema.ts` file:
+If your plugin requires database tables, create a `schema.ts` file using PostgreSQL table definitions:
 
 ```typescript
-import { sqliteTable, text, integer, sql } from 'drizzle-orm/sqlite-core';
+import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';
 
 // Define your plugin's tables
-export const myCustomEntities = sqliteTable('my_custom_entities', {
+export const myCustomEntities = pgTable('my_custom_entities', {
   id: text('id').primaryKey(),
   name: text('name').notNull(),
   data: text('data'),
-  createdAt: integer('created_at', { mode: 'timestamp' }).notNull().default(sql`(strftime('%s', 'now'))`),
+  created_at: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
 });
 
 // You can define multiple tables if needed
-export const myCustomRelations = sqliteTable('my_custom_relations', {
+export const myCustomRelations = pgTable('my_custom_relations', {
   id: text('id').primaryKey(),
-  entityId: text('entity_id').notNull().references(() => myCustomEntities.id),
-  relationType: text('relation_type').notNull(),
+  entity_id: text('entity_id').notNull().references(() => myCustomEntities.id),
+  relation_type: text('relation_type').notNull(),
 });
 ```
@@ -131,30 +131,17 @@ Create a `routes.ts` file for your API routes:
 
 ```typescript
 import { type PluginRouteManager } from '../../plugin-system/route-manager';
 import { type AnyDatabase, getSchema } from '../../db';
-import { type BetterSQLite3Database } from 'drizzle-orm/better-sqlite3';
-import { type NodePgDatabase } from 'drizzle-orm/node-postgres';
-import { type SQLiteTable } from 'drizzle-orm/sqlite-core';
-import { type PgTable } from 'drizzle-orm/pg-core';
 import { eq } from 'drizzle-orm';
 
-// Helper type guard for database type checking
-function isSQLiteDB(db: AnyDatabase): db is BetterSQLite3Database {
-  return typeof (db as BetterSQLite3Database).get === 'function' &&
-         typeof (db as BetterSQLite3Database).all === 'function' &&
-         typeof (db as BetterSQLite3Database).run === 'function';
-}
-
 /**
  * Register all routes for your custom plugin
- * 
+ *
  * All routes registered here will be automatically namespaced under:
  * /api/plugin/my-custom-plugin/
  */
 export async function registerRoutes(routeManager: PluginRouteManager, db: AnyDatabase | null): Promise<void> {
-  // Note: In actual plugin development, you should receive a logger instance
-  // For this example, we'll show the pattern you should follow
-  const logger = routeManager.getLogger(); // Assuming this method exists
-
+  const logger = routeManager.getLogger();
+
   if (!db) {
     logger?.warn(`Database not available, skipping routes.`);
     return;
@@ -172,36 +159,21 @@ export async function registerRoutes(routeManager: PluginRouteManager, db: AnyDa
 
   // Register GET /entities route
   // This becomes: GET /api/plugin/my-custom-plugin/entities
   routeManager.get('/entities', async () => {
-    if (isSQLiteDB(db)) {
-      const entities = await db.select().from(table as SQLiteTable).all();
-      return { entities };
-    } else {
-      const entities = await (db as NodePgDatabase).select().from(table as PgTable);
-      return { entities };
-    }
+    const entities = await db.select().from(table);
+    return { entities };
   });
 
   // Register GET /entities/:id route
   // This becomes: GET /api/plugin/my-custom-plugin/entities/:id
   routeManager.get('/entities/:id', async (request, reply) => {
     const { id } = request.params as { id: string };
-    let entity;
-
-    if (isSQLiteDB(db)) {
-      const typedTable = table as SQLiteTable & { id: any };
-      entity = await db
-        .select()
-        .from(typedTable)
-        .where(eq(typedTable.id, id))
-        .get();
-    } else {
-      const typedTable = table as PgTable & { id: any };
-      const rows = await (db as NodePgDatabase)
-        .select()
-        .from(typedTable)
-        .where(eq(typedTable.id, id));
-      entity = rows[0] ?? null;
-    }
+
+    const rows = await db
+      .select()
+      .from(table)
+      .where(eq(table.id, id));
+
+    const entity = rows[0] ?? null;
 
     if (!entity) {
       return reply.status(404).send({ error: 'Entity not found' });
@@ -225,11 +197,7 @@ export async function registerRoutes(routeManager: PluginRouteManager, db: AnyDa
       data: body.data || null,
     };
 
-    if (isSQLiteDB(db)) {
-      await db.insert(table as SQLiteTable).values(entityData).run();
-    } else {
-      await (db as NodePgDatabase).insert(table as PgTable).values(entityData);
-    }
+    await db.insert(table).values(entityData);
 
     return { id, ...body };
   });
@@ -243,32 +211,21 @@
 Create an `index.ts` file that implements the Plugin interface:
 
 ```typescript
-import { 
-  type Plugin, 
+import {
+  type Plugin,
   type DatabaseExtension,
   type PluginRouteManager
 } from '../../plugin-system/types';
 import { type AnyDatabase, getSchema } from '../../db';
-import { type BetterSQLite3Database } from 'drizzle-orm/better-sqlite3';
-import { type NodePgDatabase } from 'drizzle-orm/node-postgres';
-import { type SQLiteTable } from 'drizzle-orm/sqlite-core';
-import { type PgTable } from 'drizzle-orm/pg-core';
 import { sql } from 'drizzle-orm';
 
-// Helper type guard for database type checking
-function isSQLiteDB(db: AnyDatabase): db is BetterSQLite3Database {
-  return typeof (db as BetterSQLite3Database).get === 'function' &&
-         typeof (db as BetterSQLite3Database).all === 'function' &&
-         typeof (db as BetterSQLite3Database).run === 'function';
-}
-
 // Table definitions for this plugin
 const myCustomPluginTableDefinitions = {
   'my_custom_entities': {
     id: (b: any) => b('id').primaryKey(),
     name: (b: any) => b('name').notNull(),
     data: (b: any) => b('data'),
-    createdAt: (b: any) => b('created_at', { mode: 'timestamp' }).notNull().defaultNow(),
+    created_at: (b: any) => b('created_at', { mode: 'timestamp' }).notNull().defaultNow(),
   }
 };
@@ -281,14 +238,13 @@ class MyCustomPlugin implements Plugin {
     description: 'Adds custom functionality to DeployStack',
     author: 'Your Name',
   };
-  
+
   // Database extension (optional - remove if not needed)
   databaseExtension: DatabaseExtension = {
     tableDefinitions: myCustomPluginTableDefinitions,
-    
+
     // Optional initialization function for seeding data
     onDatabaseInit: async (db: AnyDatabase, logger?: FastifyBaseLogger) => {
-      // Note: In actual implementation, logger should be passed from PluginManager
       logger?.info(`Initializing database...`);
 
       const currentSchema = getSchema();
@@ -300,19 +256,11 @@ class MyCustomPlugin implements Plugin {
         return;
       }
 
-      let currentCount = 0;
-      if (isSQLiteDB(db)) {
-        const result = await db
-          .select({ count: sql`count(*)` })
-          .from(table as SQLiteTable)
-          .get();
-        currentCount = result?.count ?? 0;
-      } else {
-        const rows = await (db as NodePgDatabase)
-          .select({ count: sql`count(*)` })
-          .from(table as PgTable);
-        currentCount = rows[0]?.count ?? 0;
-      }
+      // Check if we need to seed initial data
+      const rows = await db
+        .select({ count: sql`count(*)` })
+        .from(table);
+      const currentCount = rows[0]?.count ?? 0;
 
       if (currentCount === 0) {
         logger?.info(`Seeding initial data...`);
@@ -322,19 +270,14 @@ class MyCustomPlugin implements Plugin {
           data: JSON.stringify({ initialized: true }),
         };
 
-        if (isSQLiteDB(db)) {
-          await db.insert(table as SQLiteTable).values(dataToSeed).run();
-        } else {
-          await (db as NodePgDatabase).insert(table as PgTable).values(dataToSeed);
-        }
+        await db.insert(table).values(dataToSeed);
 
         logger?.info(`Seeded initial data`);
       }
     },
   };
-  
+
   // Plugin initialization (non-route initialization only)
   async initialize(db: AnyDatabase | null, logger?: FastifyBaseLogger) {
-    // Note: In actual implementation, logger should be passed from PluginManager
     logger?.info(`Initializing...`);
     // Non-route initialization only - routes are registered via registerRoutes method
     logger?.info(`Initialized successfully`);
@@ -345,10 +288,9 @@ class MyCustomPlugin implements Plugin {
     const { registerRoutes } = await import('./routes');
     await registerRoutes(routeManager, db);
   }
-  
+
   // Optional shutdown method for cleanup
   async shutdown(logger?: FastifyBaseLogger) {
-    // Note: In actual implementation, logger should be passed from PluginManager
     logger?.info(`Shutting down...`);
     // Perform any cleanup needed
   }
@@ -395,8 +337,11 @@ const myPluginTableDefinitions = {
 
 **Important Notes:**
 - Use `created_at` (snake_case) for database column names, not `createdAt` (camelCase)
-- Timestamp columns with `{ mode: 'timestamp' }` automatically get `DEFAULT (strftime('%s', 'now'))`
-- Column types are auto-detected: `id`/`count` → INTEGER, `*_at`/`*date` → INTEGER (timestamp), others → TEXT
+- Timestamp columns with `{ mode: 'timestamp' }` automatically get `TIMESTAMP WITH TIME ZONE DEFAULT NOW()`
+- Column types are auto-detected and converted for PostgreSQL:
+  - `id`/`count` → INTEGER
+  - `*_at`/`*date` → TIMESTAMP WITH TIME ZONE
+  - Others → TEXT
 - Tables are prefixed with your plugin ID: `my-plugin_my_entities`
 
 ### API Routes
@@ -513,13 +458,13 @@ To test your plugin:
 
 Your plugin can access configuration provided by the plugin manager:
 
 ```typescript
-async initialize(app: FastifyInstance, db: BetterSQLite3Database) {
+async initialize(app: FastifyInstance, db: AnyDatabase) {
   // Access plugin-specific configuration
   const config = app.pluginManager.getPluginConfig(this.meta.id);
-  
+
   // Use configuration values
   const apiKey = config?.apiKey as string;
-  
+
   // Initialize with configuration
 }
 ```
@@ -613,8 +558,8 @@ Plugins can contribute their own global settings to the DeployStack system. Thes
 
 ```typescript
 // In your plugin's index.ts
-import { 
-  type Plugin, 
+import {
+  type Plugin,
   type GlobalSettingsExtension,
   // ... other imports
 } from '../../plugin-system/types';
@@ -682,9 +627,8 @@ class MyAwesomePlugin implements Plugin {
   // ... rest of your plugin implementation (databaseExtension, initialize, etc.)
 
   async initialize(app: FastifyInstance, db: AnyDatabase | null, logger?: FastifyBaseLogger) {
-    // Note: In actual implementation, logger should be passed from PluginManager
     logger?.info(`Initializing...`);
-    
+
     // You can try to access your plugin's settings here if needed during init,
     // using GlobalSettingsService.get('myAwesomePlugin.features.enableSuperFeature')
     // Note: Ensure GlobalSettingsService is available or handle potential errors.
diff --git a/development/backend/satellite/communication.mdx b/development/backend/satellite/communication.mdx
index ad28cb6..909a34c 100644
--- a/development/backend/satellite/communication.mdx
+++ b/development/backend/satellite/communication.mdx
@@ -252,7 +252,7 @@ Configuration respects team boundaries and isolation:
 
 ### Core Table Structure
 
-The satellite system integrates with existing DeployStack schema through 5 specialized tables. For detailed schema definitions, see [`services/backend/src/db/schema.sqlite.ts`](https://github.com/deploystackio/deploystack/blob/main/services/backend/src/db/schema.sqlite.ts).
+The satellite system integrates with existing DeployStack schema through 5 specialized tables. For detailed schema definitions, see [`services/backend/src/db/schema.ts`](https://github.com/deploystackio/deploystack/blob/main/services/backend/src/db/schema.ts).
 
 **Satellite Registry** (`satellites`):
 - Central registration of all satellites
@@ -412,7 +412,7 @@ server.get('/satellites/:satelliteId/commands', {
 
 The satellite system extends the existing database schema with 5 specialized tables:
 
-**Schema Location**: `services/backend/src/db/schema.sqlite.ts`
+**Schema Location**: `services/backend/src/db/schema.ts`
 
 **Table Relationships**:
 - `satellites` table links to existing `teams` and `authUser` tables
@@ -479,11 +479,11 @@ npm run dev # Starts on http://localhost:3001
 
 **Database Inspection**:
 ```bash
 # View registered satellites
-sqlite3 services/backend/persistent_data/database/deploystack.db
+psql deploystack
 > SELECT id, name, satellite_type, status FROM satellites;
 
 # View MCP server installations
-> SELECT installation_name, team_id FROM mcpServerInstallations;
+> SELECT installation_name, team_id FROM "mcpServerInstallations";
 ```
 
 ## API Documentation
diff --git a/development/backend/satellite/events.mdx b/development/backend/satellite/events.mdx
index ec35223..bed0292 100644
--- a/development/backend/satellite/events.mdx
+++ b/development/backend/satellite/events.mdx
@@ -79,7 +79,7 @@ export interface EventHandler {
   handle: (
     satelliteId: string,
     eventData: Record<string, unknown>,
-    db: LibSQLDatabase,
+    db: PostgresJsDatabase,
     eventTimestamp: Date
   ) => Promise<void>;
 }
@@ -208,8 +208,8 @@ Inserts record into `satelliteUsageLogs` for analytics and audit trails.
 Create a new file in `services/backend/src/events/satellite/`:
 
 ```typescript
-import type { LibSQLDatabase } from 'drizzle-orm/libsql';
-import { yourTable } from '../../db/schema.sqlite';
+import type { PostgresJsDatabase } from 'drizzle-orm/postgres-js';
+import { yourTable } from '../../db/schema';
 import { eq } from 'drizzle-orm';
 
 export const EVENT_TYPE = 'your.event.type';
@@ -240,7 +240,7 @@ interface YourEventData {
 
 export async function handle(
   satelliteId: string,
   eventData: Record<string, unknown>,
-  db: LibSQLDatabase,
+  db: PostgresJsDatabase,
   eventTimestamp: Date
 ): Promise<void> {
   const data = eventData as unknown as YourEventData;
@@ -320,17 +320,6 @@ Each event is processed in a separate database transaction:
 - Maintains data consistency per event
 - Isolated error handling prevents cascade failures
 
-### Database Driver Compatibility
-
-When updating records, use the driver-compatible pattern:
-
-```typescript
-const result = await db.update(table).set(data).where(condition);
-
-// Handle both SQLite (changes) and Turso (rowsAffected)
-const updated = (result.changes || result.rowsAffected || 0) > 0;
-```
-
 ## Performance Considerations
 
 ### Batch Processing Efficiency
@@ -524,7 +513,7 @@ LIMIT 10;
 
 **DO**:
 - Use parameterized queries via Drizzle ORM
-- Handle both SQLite and Turso driver differences
+- Use PostgreSQL-specific features when needed
 - Include timestamps for all state changes
 - Use transactions for multi-step operations
 - Index frequently queried fields
diff --git a/development/backend/test.mdx b/development/backend/test.mdx
index 53f9ad9..f40cdb0 100644
--- a/development/backend/test.mdx
+++ b/development/backend/test.mdx
@@ -85,8 +85,8 @@ The test suite uses a sophisticated database isolation strategy to ensure comple
 
 ### Timestamp-Based Isolation
 
-Each test run creates a unique SQLite database file with a millisecond timestamp:
-- Example: `deploystack-1704369600000.db`
+Each test run creates a unique PostgreSQL database with a millisecond timestamp:
+- Example: `deploystack-1704369600000`
 - This ensures complete isolation between parallel test runs
 - No conflicts when multiple developers run tests simultaneously
 - Automatic cleanup through directory removal
@@ -173,10 +173,10 @@ When adding new E2E tests:
 
 - **Purpose**: Verifies the initial database setup functionality.
 - **Key Checks**:
   - Ensures the test database directory does not exist before setup.
-  - Calls `POST /api/db/setup` with `{"type": "sqlite"}`.
-  - Verifies the API response indicates successful setup initiation and includes `database_type: "sqlite"`.
-  - Checks that the SQLite database file is created in the test database directory (`persistent_data/database-test/deploystack-{timestamp}.db`).
-  - Calls `GET /api/db/status` and verifies the response shows `configured: true`, `initialized: true`, and `dialect: "sqlite"`.
+  - Calls `POST /api/db/setup` with `{"type": "postgresql"}`.
+  - Verifies the API response indicates successful setup initiation and includes `database_type: "postgresql"`.
+  - Checks that the PostgreSQL test database is created.
+  - Calls `GET /api/db/status` and verifies the response shows `configured: true`, `initialized: true`, and `dialect: "postgresql"`.
   - Validates global settings initialization without errors.
   - Confirms all migrations are applied successfully.
   - Tests proper error handling for duplicate setup attempts.
diff --git a/development/backend/user-preferences-system.mdx b/development/backend/user-preferences-system.mdx
index 7e23158..cc1a172 100644
--- a/development/backend/user-preferences-system.mdx
+++ b/development/backend/user-preferences-system.mdx
@@ -268,7 +268,7 @@ If you need to rename or remove preferences:
 
 - [API Security](/development/backend/api/security) - Security patterns and authorization
 - [Role Management](/development/backend/roles) - Permission system details
-- [Database Schema](https://github.com/deploystackio/deploystack/blob/main/services/backend/src/db/schema.sqlite.ts) - Complete database schema reference
+- [Database Schema](https://github.com/deploystackio/deploystack/blob/main/services/backend/src/db/schema.ts) - Complete database schema reference
 
 ## Key Benefits
 
diff --git a/development/index.mdx b/development/index.mdx
index 386303a..a67af86 100644
--- a/development/index.mdx
+++ b/development/index.mdx
@@ -123,7 +123,7 @@ deploystack/
 
 ### Backend Stack
 - **Fastify** for high-performance cloud control plane
 - **TypeScript** with full type safety
-- **Drizzle ORM** supporting SQLite and PostgreSQL
+- **Drizzle ORM** with PostgreSQL
 - **Plugin System** with isolated routes (`/api/plugin/<plugin-id>/`)
 - **Role-Based Access Control** with session management
 
diff --git a/development/satellite/backend-communication.mdx b/development/satellite/backend-communication.mdx
index 2735103..90577ed 100644
--- a/development/satellite/backend-communication.mdx
+++ b/development/satellite/backend-communication.mdx
@@ -278,7 +278,7 @@ The Backend maintains satellite state in five tables:
 - `satelliteUsageLogs` - Usage analytics and audit
 - `satelliteHeartbeats` - Health monitoring data
 
-See `services/backend/src/db/schema.sqlite.ts` for complete schema definitions.
+See `services/backend/src/db/schema.ts` for complete schema definitions.
 ## Security Implementation
 
diff --git a/development/satellite/registration.mdx b/development/satellite/registration.mdx
index 7331a11..ea48907 100644
--- a/development/satellite/registration.mdx
+++ b/development/satellite/registration.mdx
@@ -253,7 +253,7 @@ The Backend maintains satellite state across five database tables:
 
 - **satelliteUsageLogs**: Usage analytics and audit trails
 - **satelliteHeartbeats**: Health monitoring and status updates
 
-See `services/backend/src/db/schema.sqlite.ts` for complete schema definitions.
+See `services/backend/src/db/schema.ts` for complete schema definitions.
 
 ### Registration Database Operations
 
diff --git a/docs.json b/docs.json
index 0114c5f..af0ad0b 100644
--- a/docs.json
+++ b/docs.json
@@ -32,7 +32,8 @@
         "/general/mcp-catalog",
         "/general/mcp-installation",
         "/general/mcp-categories",
-        "/general/mcp-admin-schema-workflow"
+        "/general/mcp-admin-schema-workflow",
+        "/general/mcp-oauth"
       ]
     },
     {
@@ -142,8 +143,7 @@
       "group": "Database",
       "pages": [
         "/development/backend/database/index",
-        "/development/backend/database/sqlite",
-        "/development/backend/database/turso"
+        "/development/backend/database/postgresql"
       ]
     },
     {
diff --git a/general/local-setup.mdx b/general/local-setup.mdx
index 7c53130..5906e32 100644
--- a/general/local-setup.mdx
+++ b/general/local-setup.mdx
@@ -18,7 +18,7 @@ This guide is for contributors and developers who want to run DeployStack locall
 # - Git: Version control system
 # - Node.js v18+: JavaScript runtime (v18 or higher required)
 # - npm v8+: Package manager (comes with Node.js)
-# - Docker: For running databases (optional but recommended)
+# - Docker: For running PostgreSQL database
 
 # Verify Installation
 git --version
@@ -110,6 +110,14 @@ DEPLOYSTACK_ENCRYPTION_SECRET=your-32-character-secret-here
 
 # Frontend URL (for CORS and redirects)
 DEPLOYSTACK_FRONTEND_URL=http://localhost:5173
 
+# PostgreSQL Configuration (matches postgres:local defaults)
+POSTGRES_HOST=localhost
+POSTGRES_PORT=5432
+POSTGRES_DATABASE=deploystack
+POSTGRES_USER=deploystack
+POSTGRES_PASSWORD=deploystack
+POSTGRES_SSL=false
+
 # Development settings
 NODE_ENV=development
 PORT=3000
@@ -155,34 +163,38 @@ node -e "console.log(require('crypto').randomBytes(16).toString('hex'))"
 ```
 
-## Step 4: Set Up Database (Optional)
-
-DeployStack uses SQLite by default for development, but you can optionally set up PostgreSQL:
+## Step 4: Start PostgreSQL Database
 
-
-```text SQLite (Default)
-No additional setup required. DeployStack will create a SQLite database automatically in services/backend/persistent_data/.
+DeployStack uses PostgreSQL as its database backend. For local development, we provide a convenient script to start PostgreSQL in Docker:
 
-The database file will be created on first run:
-services/backend/persistent_data/database/deploystack.db
+```bash
+# Start PostgreSQL 18 in Docker
+npm run postgres:local
 ```
 
-```bash PostgreSQL (Optional)
-# If you prefer PostgreSQL for development:
+This command will:
+- Pull PostgreSQL 18 Docker image
+- Create a local PostgreSQL container with default credentials:
+  - **Host**: localhost
+  - **Port**: 5432
+  - **Database**: deploystack
+  - **User**: deploystack
+  - **Password**: deploystack
+- Create a persistent volume for database data
+
+
+  The PostgreSQL container will persist data between restarts. To reset the database, remove the `postgres_data` volume: `docker volume rm postgres_data`
+
+
+### Verify PostgreSQL is Running
 
-# Start PostgreSQL with Docker
-docker run -d \
-  --name deploystack-postgres \
-  -e POSTGRES_DB=deploystack \
-  -e POSTGRES_USER=deploystack \
-  -e POSTGRES_PASSWORD=deploystack \
-  -p 5432:5432 \
-  postgres:16
+```bash
+# Check if PostgreSQL container is running
+docker ps | grep postgres-local
 
-# Update your services/backend/.env:
-DATABASE_URL=postgresql://deploystack:deploystack@localhost:5432/deploystack
+# Test connection
+psql -h localhost -U deploystack -d deploystack -c "SELECT version();"
 ```
-
 
 ## Step 5: Running the Development Servers
 
@@ -244,13 +256,16 @@ Once both services are running:
 curl http://localhost:5173  # Frontend dev server
 ```
 
-
+
 Open [http://localhost:5173](http://localhost:5173) in your browser. You should see the DeployStack interface.
-
-
-  Follow the on-screen setup wizard to create your first admin user and configure basic settings.
+ + + Follow the on-screen setup wizard to: + - Configure PostgreSQL database connection + - Create your first admin user + - Set up basic platform settings @@ -268,6 +283,9 @@ Both services support hot reloading: From the project root: ```bash +# Database +npm run postgres:local # Start PostgreSQL in Docker + # Development npm run dev:frontend # Start frontend dev server npm run dev:backend # Start backend dev server @@ -281,6 +299,9 @@ npm run lint:frontend # Lint frontend code npm run lint:backend # Lint backend code npm run lint:md # Lint markdown files +# Database Migrations +npm run db:generate # Generate new migrations + # Testing npm run test:backend:unit # Run backend unit tests npm run test:backend:e2e # Run backend e2e tests @@ -304,7 +325,7 @@ deploystack/ │ ├── backend/ # Fastify backend API │ │ ├── src/ # Source code │ │ ├── tests/ # Test files -│ │ ├── persistent_data/ # SQLite database and uploads +│ │ ├── persistent_data/ # Database and application data │ │ ├── package.json # Backend dependencies │ │ └── tsconfig.json # TypeScript configuration │ └── shared/ # Shared utilities and types @@ -323,6 +344,7 @@ deploystack/ # Check what's using the port lsof -i :3000 # Backend port lsof -i :5173 # Frontend port +lsof -i :5432 # PostgreSQL port # Kill process using the port kill -9 <PID> @@ -355,17 +377,24 @@ Run your terminal as Administrator or ensure you have write permissions to the p ``` -#### Database Connection Issues +#### PostgreSQL Connection Issues ```bash -# Check if database directory exists -ls -la services/backend/persistent_data/ +# Check if PostgreSQL container is running +docker ps | grep postgres-local -# Create directory if missing -mkdir -p services/backend/persistent_data/database +# View PostgreSQL logs +docker logs postgres-local -# Check database file permissions -ls -la services/backend/persistent_data/database/ +# Restart PostgreSQL +docker stop postgres-local +npm run postgres:local + +# Reset PostgreSQL (removes all data) +docker
stop postgres-local +docker rm postgres-local +docker volume rm postgres_data +npm run postgres:local ``` #### Environment Variable Issues @@ -377,6 +406,9 @@ ls -la services/frontend/.env # Check if encryption secret is set grep DEPLOYSTACK_ENCRYPTION_SECRET services/backend/.env + +# Check PostgreSQL configuration +grep POSTGRES services/backend/.env ``` ### Getting Help diff --git a/general/mcp-catalog.mdx b/general/mcp-catalog.mdx index 281746a..f94402f 100644 --- a/general/mcp-catalog.mdx +++ b/general/mcp-catalog.mdx @@ -142,6 +142,7 @@ Each server in the catalog includes comprehensive metadata: - **Tags**: Searchable keywords and labels - **Status**: Active, deprecated, or maintenance mode - **Sync Status**: Whether server is synced from official registry +- **OAuth Requirement**: Whether server requires OAuth authorization (see [OAuth-Enabled MCP Servers](/mcp-oauth)) #### Technical Specifications - **Language**: Programming language (TypeScript, Python, etc.) diff --git a/general/mcp-configuration.mdx b/general/mcp-configuration.mdx index 72b0962..78703fa 100644 --- a/general/mcp-configuration.mdx +++ b/general/mcp-configuration.mdx @@ -14,7 +14,9 @@ The system separates configuration into three distinct layers: 2. **Team Level** - Shared team configurations with lock/unlock controls 3. **User Level** - Personal configurations within team-defined boundaries -This architecture enables teams to share common settings like API keys while allowing individual members to customize personal settings like local file paths while maintaining team security and standards. +This architecture enables teams to share common settings like API keys while allowing individual members to use their own private credentials or customize personal settings like local file paths - all within the same team installation, maintaining both team collaboration and individual privacy. 
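The three-tier layering described above can be sketched as a small resolution function. This is a minimal illustration only; the function name, key names, and `ConfigLayer` shape are assumptions, not DeployStack's actual implementation:

```typescript
// Minimal sketch of three-tier resolution. All names here are
// illustrative assumptions, not DeployStack's real API.
type ConfigLayer = Record<string, string>;

function resolveConfig(
  globalTemplate: ConfigLayer, // template values locked by global admins
  team: ConfigLayer,           // shared team settings
  user: ConfigLayer,           // personal values for unlocked keys
  unlockedKeys: Set<string>,   // keys the team admin unlocked for users
): ConfigLayer {
  // Team values layer over the global template...
  const resolved: ConfigLayer = { ...globalTemplate, ...team };
  // ...and user values apply only where the team admin unlocked the key.
  for (const [key, value] of Object.entries(user)) {
    if (unlockedKeys.has(key)) resolved[key] = value;
  }
  return resolved;
}

// Example: the user's private API key wins only because it is unlocked;
// the locked QUOTA keeps the team value.
const out = resolveConfig(
  { COMMAND: "npx" },
  { API_KEY: "team-key", QUOTA: "1000" },
  { API_KEY: "personal-key", QUOTA: "9999" },
  new Set(["API_KEY"]),
);
```

The point of the sketch is the precedence order: template values are always present, team values are the shared baseline, and user values apply only where explicitly unlocked.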
+ +**Note on OAuth-Enabled MCP Servers:** Some MCP servers require OAuth authorization, which happens at the user level—separate from this three-tier configuration system. Each user must authorize individually with their own account. For details, see [OAuth-Enabled MCP Servers](/mcp-oauth). ## How It Works @@ -42,9 +44,9 @@ This architecture enables teams to share common settings like API keys while all ┌─────────────────────────────────────────────────────────────────────────────────┐ │ TIER 3: USER (Individual) │ │ ┌─────────────────────────────────────────────────────────────────────────────┐ │ -│ │ 🔓 Personal Settings: Local paths, preferences │ │ -│ │ 🔗 Automatic Inheritance: Use team credentials seamlessly │ │ -│ │ 🛡️ Team Security: Access controlled through team OAuth tokens │ │ +│ │ 🔓 Personal Settings: Private credentials, local paths, preferences │ │ +│ │ 🔗 Automatic Inheritance: Use team credentials OR your own private ones │ │ +│ │ 🛡️ Privacy: Your credentials are NOT shared with other team members │ │ │ └─────────────────────────────────────────────────────────────────────────────┘ │ └─────────────────────────────────────────────────────────────────────────────────┘ │ @@ -67,17 +69,20 @@ The heart of the system is sophisticated lock/unlock controls with precise categ - **Validation Rules** - Set data types, constraints, and security requirements for configurable elements - **Precise Schema Definition** - Create detailed schemas that control the exact configuration experience -**Team Administrator Controls:** +**Team Administrator Controls (`team_admin` role only):** +- **Install MCP Servers** - Only `team_admin` can install MCP servers to teams (not `team_user`) - **Configure Team Settings** - Set shared credentials and parameters within schema boundaries - **Control User Access** - Lock/unlock elements for team members based on organizational needs - **Manage Team Credentials** - Securely handle team-wide secrets with appropriate visibility controls - 
**Work Within Schema Boundaries** - Configure only elements designated as "Team Configurable" by global admins +**Important:** Users with `team_user` role cannot install MCP servers, view team credentials, or modify team settings. They can only configure their own personal user-level settings. + **User Access:** +- **Private Credentials** - Configure personal API keys and secrets that are NOT shared with other team members - **Personal Customization** - Modify only unlocked elements within boundaries set by global admin categorization -- **Secure Experience** - No access to locked configuration, team secrets, or template elements +- **Credential Privacy** - Your user-level credentials remain private and isolated from other team members - **Focused Interface** - See only configuration elements designated as personally configurable -- **Team Integration** - Access through OAuth team authentication ## User Journey Workflows @@ -88,16 +93,20 @@ Each tier has its own focused workflow: Key workflow: Repository → Claude Desktop Config → **Configuration Schema Categorization** → Basic Info → Catalog Entry -### For Team Administrators +### For Team Administrators (`team_admin` role) **[Team Installation](/mcp-team-installation)** - Learn how to install MCP servers from the catalog, configure shared team settings, and control user access. Key workflow: Browse Catalog → Configure Team Settings → Set Lock Controls → Deploy Installation -### For Individual Users +**Note:** Only users with `team_admin` role can perform team installations. Users with `team_user` role skip this step and go directly to user configuration. + +### For Individual Users (both `team_admin` and `team_user` roles) **[User Configuration](/mcp-user-configuration)** - Learn how to configure personal MCP settings and customize your workflow. 
Key workflow: Access Team Installation → Configure Personal Settings → Save Configuration +**Note:** Both `team_admin` and `team_user` roles configure personal settings the same way. The difference is that only `team_admin` can install the MCP server to the team in the first place. + ## Official Registry Configuration Mapping When MCP servers are synced from the official MCP Registry, their environment variables are automatically mapped to the appropriate tier based on their properties: @@ -201,17 +210,21 @@ This automatic mapping enables synced servers from the official registry to work **Flexibility:** Support for variable-length configurations and individual customization -**Collaboration:** Teams coordinate through shared settings while maintaining individual customization +**Collaboration:** Teams coordinate through shared settings while maintaining individual customization and credential privacy **Governance:** Clear boundaries and audit trails for organizational compliance, with precise control over configuration inheritance ## Common Use Cases -**Development Teams:** Share Git tokens and project settings while allowing personal directory configurations +**Development Teams:** Share org-wide Git tokens and project settings while team members can use their personal GitHub tokens for individual rate limits + +**Data Science Teams:** Share production database credentials at team level while data scientists use personal API keys for external services + +**Support Teams:** Share customer service API keys at team level while support agents use personal OAuth tokens for individual accountability -**Data Science Teams:** Share database credentials and data lake access while supporting individual analysis workflows +**Rate Limit Management:** Team shares basic API access while individual users configure personal premium API keys for higher rate limits -**Support Teams:** Share customer service API keys while allowing personal workspace customization +**Multi-Account 
Access:** Team accesses shared resources while users maintain separate credentials for personal accounts or dev environments ## Official Registry Transport Types diff --git a/general/mcp-oauth.mdx b/general/mcp-oauth.mdx new file mode 100644 index 0000000..3c2ee9e --- /dev/null +++ b/general/mcp-oauth.mdx @@ -0,0 +1,252 @@ +--- +title: OAuth-Enabled MCP Servers +description: Learn how to install and authorize OAuth-enabled MCP servers with per-user authentication and automatic token management. +sidebarTitle: MCP OAuth Servers +--- + +Some MCP servers require OAuth authorization to access external services like Box, Google Drive, or Slack. OAuth-enabled servers work differently from standard MCP servers because each user must authorize individually with their own account. + +## Overview + +**OAuth-enabled MCP servers** connect to third-party services that require user consent. When your team installs an OAuth server, each team member must go through their own authorization flow. + +**Key Difference:** +- **Standard MCP servers**: Team admin configures shared credentials +- **OAuth MCP servers**: Each user authorizes with their personal account + +**Why User-Level Authorization?** +- Alice's Google Drive ≠ Bob's Google Drive +- Each user accesses their own data, not shared team data +- OAuth tokens are private and encrypted per user +- Actions are traceable to individual users for accountability + +**This is NOT:** +- GitHub OAuth (used for logging into DeployStack) +- Satellite OAuth (used for client connections) +- Team-level configuration (args/env/headers from three-tier system) + +## How OAuth MCP Servers Work + +### Team Installation (Team Admin Only) + +When a `team_admin` installs an OAuth-enabled MCP server: + +1. **Browse the catalog** and find an OAuth server (Box, Google Drive, etc.) +2. **Click "Install & Authorize"** to create the team installation +3. **The team admin is redirected** to authorize with their own account +4. 
**Installation appears in team** with "Connected" status for team admin + +**Important:** Only the team admin sees "Connected" at this point. Other team members will see "Auth Required" until they authorize. + +### Individual Authorization (All Users) + +When Bob (team_user) or any other team member views the installations: + +1. **Bob sees the installation** with "⚠ Auth Required" status +2. **Bob clicks "Reconnect"** to start his own authorization +3. **Browser popup opens** to the OAuth provider (Box, Google, etc.) +4. **Bob consents** to DeployStack accessing his account +5. **Bob is redirected back** and sees "✓ Connected" status + +**Result:** Bob's tokens are stored separately from Alice's tokens. Each user's MCP operations use their own credentials. + +## Authorization Flow + +### Step 1: User Initiates Authorization + +When you click "Authorize" or "Reconnect": +- DeployStack starts OAuth discovery from the MCP server +- A unique authorization URL is generated for you +- PKCE security parameters are created (prevents token theft) + +### Step 2: Consent Screen + +A browser window opens showing: +- The service you're authorizing (Box, Google, etc.) 
+- What permissions DeployStack is requesting +- Your personal account to authorize with + +**You choose:** Allow or Deny + +### Step 3: Token Exchange + +After you click "Allow": +- The OAuth provider redirects you back to DeployStack +- DeployStack exchanges the authorization code for tokens +- Tokens are encrypted and stored with your user ID +- You see "✓ Connected" status + +### Step 4: Automatic Token Management + +Once authorized: +- Background job monitors token expiration +- Tokens are automatically refreshed before they expire +- You stay connected without re-authorizing +- If refresh fails, you'll see "Auth Required" again + +## Multi-User Team Scenarios + +### Scenario: Three-Person Team with Box MCP + +**Team:** Acme Corp +- Alice (team_admin) +- Bob (team_user) +- Carol (team_user) + +**Timeline:** + +1. **Alice installs Box MCP Server** + - Creates installation "Team Box Files" + - Alice authorizes with her Box account + - Alice sees: ✓ Connected + +2. **Bob logs in** + - Sees installation "Team Box Files" + - Status: ⚠ Auth Required + - Clicks "Reconnect" + - Authorizes with his Box account + - Bob sees: ✓ Connected + +3. **Carol logs in** + - Sees installation "Team Box Files" + - Status: ⚠ Auth Required + - Must authorize with her Box account + - Carol sees: ✓ Connected + +**Data Access:** +- Alice's requests → Alice's Box files +- Bob's requests → Bob's Box files +- Carol's requests → Carol's Box files +- No cross-user data access + +### Scenario: User Switches Teams + +**Context:** Alice is in both "Engineering Team" and "Marketing Team" + +**What Happens:** + +1. **Engineering Team** installs Google Drive MCP + - Alice authorizes with her Google account + - Alice's tokens stored for Engineering Team + +2. 
**Marketing Team** also has Google Drive MCP + - Alice must authorize again for Marketing Team + - Separate tokens stored for Marketing Team context + - Same Google account, different team = separate authorization + +**Why?** Tokens are stored per user + team + installation combination for security isolation. + +## Token Storage and Security + +### Encryption at Rest + +Your OAuth tokens are encrypted in the database using AES-256-GCM encryption: +- Access tokens encrypted before storage +- Refresh tokens encrypted before storage +- Only decrypted when Satellite needs them at runtime + +For complete security details, see [Security and Privacy](/security). + +### Privacy Guarantees + +**Your tokens are private:** +- Team admins cannot see your OAuth tokens +- Other team members cannot see your OAuth tokens +- Tokens are filtered by your user ID on every query + +**Your tokens are isolated:** +- Each user has separate token records +- Revoking your access doesn't affect other users +- Your authorization status is independent + +### Runtime Token Injection + +When you use an OAuth MCP server: + +1. **Satellite receives your request** (with your user ID) +2. **Satellite asks backend** for your tokens (filtered by user_id) +3. **Backend decrypts** your tokens and returns them +4. **Satellite injects tokens** into the MCP server process +5. **MCP server uses your credentials** to access the service + +This happens automatically and transparently. 
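The per-user filtering step above can be sketched as follows. The record shape and field names are assumptions for illustration; the real schema lives in the backend:

```typescript
// Illustrative only: field names are assumptions, not the real schema.
interface OAuthTokenRecord {
  userId: string;
  teamId: string;
  installationId: string;
  accessToken: string; // stored encrypted at rest; decrypted at runtime
}

// Tokens are scoped to user + team + installation, so Alice's request
// can never resolve to Bob's tokens.
function findUserToken(
  records: OAuthTokenRecord[],
  userId: string,
  teamId: string,
  installationId: string,
): OAuthTokenRecord | undefined {
  return records.find(
    (r) =>
      r.userId === userId &&
      r.teamId === teamId &&
      r.installationId === installationId,
  );
}
```

A user who has not authorized simply has no matching record, which is what surfaces as the "Auth Required" status.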
+ +## Token Refresh Process + +### Automatic Refresh + +A background job runs every few minutes: +- Checks for tokens expiring soon (< 10 minutes remaining) +- Uses the refresh_token to get new access_token +- Updates stored tokens automatically +- No user action required + +### When Refresh Fails + +If token refresh fails (revoked access, expired refresh token): +- Your installation status changes to "⚠ Auth Required" +- You see a prompt to re-authorize +- Click "Reconnect" to start a new authorization flow +- New tokens are stored after authorization + +## Identifying OAuth-Enabled Servers + +### In the MCP Catalog + +OAuth-enabled servers are marked with: +- **"Requires OAuth" badge** in the server card +- **OAuth indicator** in the server details +- **Authorization notice** in the installation instructions + +### During Installation + +When installing an OAuth server: +- Button text: "Install & Authorize" (not just "Install") +- You'll be redirected to authorization immediately after installation +- Installation won't be fully functional until authorized + +## Common Questions + +### Q: Can team admins authorize on behalf of users? + +**No.** Each user must authorize with their own account. Team admins cannot authorize for other users because OAuth tokens are tied to individual user accounts. + +### Q: What if I don't want to authorize? + +You won't be able to use that MCP server. The installation will show "Auth Required" and remain unusable until you authorize. + +### Q: Can I revoke access later? + +Yes. You can revoke DeployStack's access directly from the OAuth provider (Box, Google, etc.). DeployStack will show "Auth Required" status after revocation, and you can re-authorize anytime. + +### Q: Do I need to re-authorize if I leave and rejoin the team? + +Yes. When you leave a team, your tokens for that team are deleted. If you rejoin, you must authorize again. + +### Q: What happens if tokens expire while I'm using the server? 
+ +The Satellite will detect expired tokens and return an error. You'll see "Auth Required" status and need to re-authorize. + +## Comparison with Other OAuth Types + +| Feature | GitHub OAuth | Satellite OAuth | MCP Server OAuth | +|---------|-------------|-----------------|------------------| +| **Purpose** | Login to DeployStack | Connect clients to Satellite | Access third-party services | +| **Who authorizes** | Individual users | Team (via credentials) | Individual users | +| **Frequency** | Once per login session | Per Satellite connection | Per MCP server | +| **Storage** | Session cookies | Client configuration | Database (per-user) | +| **Visibility** | User only | Team visible | User-private | +| **Refresh** | Session-based | Client handles | Automatic background | +| **Revocation** | Logout | Delete credentials | Revoke at provider | + +## Related Documentation + +For complete understanding of OAuth MCP servers in context: + +- [MCP Configuration System](/mcp-configuration) - Three-tier architecture (OAuth is separate) +- [MCP Team Installation](/mcp-team-installation) - How team admins install OAuth servers +- [MCP User Configuration](/mcp-user-configuration) - User settings and authorization +- [MCP Catalog](/mcp-catalog) - Discovering OAuth-enabled servers +- [Security and Privacy](/security) - Token encryption and storage security + +OAuth-enabled MCP servers provide secure, per-user access to external services while maintaining privacy and security through encrypted token storage and automatic token management. diff --git a/general/mcp-team-installation.mdx b/general/mcp-team-installation.mdx index cc57be7..e5df1ba 100644 --- a/general/mcp-team-installation.mdx +++ b/general/mcp-team-installation.mdx @@ -5,16 +5,25 @@ sidebarTitle: MCP Team Installation --- -Team administrators install MCP servers from the catalog and configure shared team settings that all team members inherit. You control what users can customize through lock/unlock settings. 
+Team administrators install MCP servers from the catalog and configure shared team settings. You control whether team members must use shared credentials or can configure their own private credentials through lock/unlock settings. ## Overview -As a team administrator, you: +**Important:** This guide is for users with the **`team_admin`** role. Only team administrators can install and configure MCP servers for their teams. Users with the **`team_user`** role can only configure their personal settings after a team admin has installed the server. -- **Install MCP servers** from the catalog into your team workspace -- **Configure shared settings** like API keys and common parameters -- **Control user access** through lock/unlock settings -- **Manage team credentials** securely for all team members +As a team administrator (`team_admin`), you: + +- **Install MCP servers** from the catalog into your team workspace (only `team_admin` can do this) +- **Configure shared settings** like team-wide API keys and common parameters +- **Control credential policy** - force shared credentials OR allow users to use private ones +- **Manage team credentials** securely that can be shared across team members +- **Enable credential privacy** - unlock settings so users can configure private credentials + +**What `team_user` members cannot do:** +- Cannot install MCP servers to the team +- Cannot view or modify team-level credentials set by team admins +- Cannot change lock/unlock settings +- Can only configure their own personal user-level settings (if unlocked) For an overview of the three-tier system, see [MCP Configuration System](/mcp-configuration). For details on how global administrators create the schemas that define your configuration options, see [Admin Schema Workflow](/mcp-admin-schema-workflow). 
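The role split above reduces to a simple permission check. This is a hedged sketch: the helper and action names are hypothetical, only the `team_admin` / `team_user` role names come from the documentation:

```typescript
// Sketch of the role gate described above. The helper and action names
// are illustrative; the role names come from the docs.
type TeamRole = "team_admin" | "team_user";

type TeamAction =
  | "install_mcp_server"
  | "manage_team_credentials"
  | "change_lock_settings"
  | "edit_personal_settings";

function isAllowed(role: TeamRole, action: TeamAction): boolean {
  // Everyone may edit their own user-level settings (where unlocked);
  // everything else is reserved for team administrators.
  if (action === "edit_personal_settings") return true;
  return role === "team_admin";
}
```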
@@ -32,6 +41,8 @@ Each installation gets a meaningful name like "DevOps Team Filesystem" or "Custo **Server Sources**: When browsing the catalog, you'll see servers from multiple sources - official registry servers (automatically synced and marked with badges) and manually created custom integrations. Both types work identically with DeployStack's three-tier configuration system. +**OAuth-Enabled MCP Servers:** Some MCP servers require OAuth authorization (Box, Google Drive, Slack, etc.). When you install an OAuth server, you'll authorize with your own account first, but each team member must then authorize individually with their own account. For complete details, see [OAuth-Enabled MCP Servers](/mcp-oauth). + The configuration options available to you are determined by how the global administrator categorized elements during schema creation. You can only configure elements that were designated as "Team Configurable" in the original schema definition. ## Lock/Unlock Controls @@ -48,57 +59,101 @@ The lock/unlock system gives you granular control over what team members can mod - **🔒 Locked** - Users cannot modify this setting (team-controlled) - **🔓 Unlocked** - Users can customize this setting (user-controlled) -**When to Lock:** -- **Security** - API keys and sensitive credentials -- **Standardization** - Settings that should be consistent across team -- **Compliance** - Organizational policy requirements +**When to Lock (Force Team Credentials):** +- **Shared Resources** - When team should use same account for billing/tracking +- **Standardization** - Settings that must be consistent across team +- **Compliance** - Organizational policy requires shared credentials +- **Cost Control** - Prevent users from using personal premium accounts -**When to Unlock:** -- **Personal Workflow** - Individual customization needs -- **User Preferences** - Personal productivity settings +**When to Unlock (Allow Private Credentials):** +- **Rate Limits** - Let users configure 
personal API keys for higher limits +- **Individual Accountability** - Track actions per user through personal tokens +- **Personal Accounts** - Users have their own service subscriptions +- **Development vs Production** - Users can test with personal dev credentials ## Team Configuration Example -**Team Web Search Server:** +**Scenario A: Locked Credentials (Forced Shared)** ``` -Installation Name: "Team Web Search" +Installation Name: "Team Web Search - Shared Account" Template Configuration (Set by Global Admin, Cannot Change): ├─ Command: "npx" (🔒 Locked Forever) ├─ Package: "@brightdata/mcp-server-web-search" (🔒 Locked Forever) -├─ System Flag: "-y" (🔒 Locked Forever) +└─ System Flag: "-y" (🔒 Locked Forever) Team Configuration (You Control): -├─ API_KEY: "••••• (encrypted secret)" (🔒 Locked) -├─ SEARCH_QUOTA: "1000 queries/day" (🔒 Locked) -├─ CONTENT_FILTERS: "enabled" (🔒 Locked) +├─ SHARED_API_KEY: "team_key_abc123" (🔒 Locked - users MUST use this) +├─ SEARCH_QUOTA: "1000 queries/day" (🔒 Locked - same for everyone) +└─ CONTENT_FILTERS: "enabled" (🔒 Locked) -User Controls (You Decide Lock/Unlock): +User Controls: ├─ Default Search Engine: 🔓 Unlocked (users choose preference) ├─ Results Per Page: 🔓 Unlocked (individual preference) -├─ Cache Settings: 🔓 Unlocked (performance tuning) +└─ Cache Settings: 🔓 Unlocked (performance tuning) +``` + +**Result:** All team members use the same API key. Good for cost control and shared billing. 
+ +--- + +**Scenario B: Unlocked Credentials (Private Allowed)** + +``` +Installation Name: "Team Web Search - Personal Keys" + +Template Configuration (Set by Global Admin, Cannot Change): +├─ Command: "npx" (🔒 Locked Forever) +├─ Package: "@brightdata/mcp-server-web-search" (🔒 Locked Forever) +└─ System Flag: "-y" (🔒 Locked Forever) + +Team Configuration (You Control): +├─ SHARED_API_KEY: "team_key_abc123" (🔓 Unlocked - users CAN override) +├─ SEARCH_QUOTA: "1000 queries/day" (🔓 Unlocked - users can set their own) +└─ CONTENT_FILTERS: "enabled" (🔒 Locked - enforce for all) + +User Controls: +├─ PERSONAL_API_KEY: 🔓 Unlocked (users can use their own key) +├─ Default Search Engine: 🔓 Unlocked +├─ Results Per Page: 🔓 Unlocked +└─ Cache Settings: 🔓 Unlocked ``` -**Result:** Team members automatically inherit template configuration and team API credentials with quota limits, but can customize their search preferences and performance settings within the boundaries you set. +**Result:** Users can choose to use team credentials OR configure their own private API keys. Good for rate limits and individual accountability. ## Credential Management +**Team Credential Types:** + +1. **Shared Team Credentials (Locked)** + - All team members use the same credentials + - Encrypted in database, visible as `*****` to users + - Good for: Shared accounts, cost control, unified billing + +2. 
**Default Team Credentials (Unlocked)** + - Team provides default credentials as fallback + - Users can override with their own private credentials + - Good for: Optional personal upgrades, rate limit flexibility + **Security Features:** - All team credentials are encrypted in the database -- Team administrators can configure and modify credentials -- Team members can use credentials but may not see actual values +- **Only `team_admin` can view and modify team credentials** - `team_user` members cannot see or change them +- Team credentials appear as `*****` to all users (including `team_admin` in some contexts) +- Users can ONLY see their own private credentials, not other users' credentials -**Credential Visibility:** -- **Secret Fields** - Users see `*****` and use them automatically (for API keys, tokens) -- **Visible Fields** - Users can see actual values (for service URLs, non-sensitive settings) +**Credential Privacy:** +- **Locked Credentials** - Users must use team credentials (cannot see actual values) +- **Unlocked Credentials** - Users can configure private credentials (not shared with team) +- **User Isolation** - Each user's private credentials are encrypted separately +- **Role-Based Access** - Only `team_admin` can manage team-level credentials; `team_user` cannot access them For complete details on how secret fields are encrypted and protected, see [Security and Privacy](/security). 
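The two credential types above reduce to a small resolution rule. This is a sketch only; the `TeamCredential` shape and function name are assumptions:

```typescript
// Sketch of the locked/unlocked credential choice described above.
// The types and function are illustrative assumptions.
interface TeamCredential {
  locked: boolean;   // locked: members must use the team value
  teamValue: string; // encrypted at rest; shown as ***** to users
}

function resolveCredential(
  cred: TeamCredential,
  userPrivateValue?: string, // the member's own private credential, if set
): string {
  // An unlocked credential acts as a default the user may override;
  // a locked credential is always enforced.
  if (!cred.locked && userPrivateValue !== undefined) {
    return userPrivateValue;
  }
  return cred.teamValue;
}
```

Locked credentials are therefore enforced regardless of user input, while unlocked credentials behave as a team-provided fallback.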
**Updates:** -- Update credentials without affecting user configurations -- Changes automatically apply to all team members -- No downtime during credential updates +- Update shared team credentials without affecting user configurations +- Changes automatically apply to all team members using shared credentials +- Users with private credentials are unaffected by team credential changes ## Security and Isolation @@ -107,20 +162,26 @@ For complete details on how secret fields are encrypted and protected, see [Secu - No cross-team access to configurations or credentials - Only team administrators can modify team installations +**User Privacy Within Team:** +- Users can configure private credentials not shared with other team members +- Each user's private credentials are encrypted separately in the database +- Team administrators cannot see individual users' private credentials +- Users cannot see other team members' private credentials + **Configuration Inheritance:** - Team settings automatically flow to all team members -- Users build on top of team configuration +- Users can inherit team credentials OR configure their own private ones - Clean separation between shared and personal settings ## What Team Members Experience Based on your lock/unlock decisions and the schema boundaries set by global administrators, team members: -- Only see configuration elements they can modify -- Use team credentials automatically without seeing sensitive values -- Can customize unlocked elements for their workflow -- Get consistent behavior across all team members -- Benefit from satellite-managed remote execution +- **See only configurable elements** - Configuration options you unlocked for them +- **Choose credential source** - Use team credentials OR their own private ones (if unlocked) +- **Maintain privacy** - Their private credentials are never visible to you or other team members +- **Get consistent baseline** - Inherit locked team settings while customizing unlocked ones +- 
**Benefit from satellite execution** - Remote MCP server execution with proper credential isolation For details on the user experience, see [MCP User Configuration](/mcp-user-configuration). diff --git a/general/mcp-user-configuration.mdx b/general/mcp-user-configuration.mdx index 63e32ea..9865da1 100644 --- a/general/mcp-user-configuration.mdx +++ b/general/mcp-user-configuration.mdx @@ -5,16 +5,17 @@ sidebarTitle: MCP User Configuration --- -Individual users customize personal MCP settings within boundaries set by their team administrators. You configure only the settings made available to you, focusing on personal productivity while automatically inheriting secure team credentials and standards. +Individual users configure personal MCP settings within boundaries set by their team administrators. You can use your own private credentials that are NOT shared with other team members, or inherit team credentials automatically. Your personal configuration remains private while working within the same team installation. ## Overview As a user, you personalize your MCP server experience within team-defined boundaries: +- **Private Credentials** - Use your own API keys and secrets that remain private from other team members - **Personal Settings** that adapt to your individual workflow -- **Automatic Team Integration** with shared credentials and team standards +- **Flexible Credential Choice** - Use team credentials OR configure your own private ones - **Simplified Interface** showing only settings you can modify -- **Secure Experience** without credential management burden +- **Credential Privacy** - Your personal credentials are never visible to other team members The user tier builds on team configurations, which build on global schemas. For an overview of the complete system, see [MCP Configuration System](/mcp-configuration). @@ -23,18 +24,21 @@ The user tier builds on team configurations, which build on global schemas. 
Your configuration options are precisely determined by how global administrators categorized elements during schema creation and your team administrator's lock/unlock decisions:

**🔓 You Can Configure:**
+- **Private Credentials** - Your personal API keys and secrets (NOT shared with team members)
- **Unlocked Elements** - Settings your team admin made available for personal customization
- **User-Specific Elements** - Settings designed for individual workflow (like local file paths)

**🔒 You Cannot See or Modify:**
- **Locked Team Settings** - Shared configuration controlled by team administrators
-- **Hidden Credentials** - API keys and secrets managed securely by your team
+- **Other Users' Credentials** - You cannot see other team members' personal credentials
- **Template Elements** - System-level parameters locked by global administrators

-**🔗 You Automatically Inherit:**
-- **Team Credentials** - API keys and authentication tokens
-- **Team Standards** - Shared settings and organizational preferences
-- **Template Configuration** - System-level parameters locked by global administrators
+**🔗 You Can Choose To:**
+- **Use Team Credentials** - Automatically inherit shared API keys and authentication tokens
+- **Use Private Credentials** - Configure your own personal credentials that override team settings
+- **Mix Both** - Use team credentials for some services and personal credentials for others
+
+**Note on OAuth-Enabled MCP Servers:** If your team uses OAuth servers (Box, Google Drive, etc.), you must authorize individually with your own account even after your team admin installs the server. For complete details, see [OAuth-Enabled MCP Servers](/mcp-oauth).

For details on how global administrators define these boundaries and team administrators control access, see [Admin Schema Workflow](/mcp-admin-schema-workflow) and [Team Installation](/mcp-team-installation).
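The override rule — a private credential, when configured, wins over the team value, resolved per credential — can be sketched in a few lines of shell. This is an illustrative sketch only: the variable names are hypothetical, not DeployStack's real environment keys, and the sample values are the ones used in the merge example on this page.

```shell
# Hypothetical sketch of credential precedence: a user's private key,
# when set and non-empty, overrides the shared team key. Variable
# names here are illustrative, not DeployStack's actual env keys.
TEAM_API_KEY="team_shared_key_abc123"
USER_API_KEY="your_private_key_xyz789"   # leave empty to inherit team credentials

# ${VAR:-fallback} falls back to the team key when the private one is unset or empty
EFFECTIVE_API_KEY="${USER_API_KEY:-$TEAM_API_KEY}"
echo "$EFFECTIVE_API_KEY"   # prints the private key in this example
```

Clearing `USER_API_KEY` makes the same line resolve to the team key, which is what the "Mix Both" option amounts to: each credential is resolved independently.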
@@ -45,29 +49,37 @@ When you configure an MCP server, you see a clean interface focused only on your ``` Personal Configuration: "Team Web Search" +YOUR PRIVATE CREDENTIALS (Not shared with team) + +API Configuration: +├─ Use Personal API Key: ✓ +├─ Your API Key: ••••• (encrypted, private) +└─ Personal Rate Limit: 5000 queries/day + YOUR PERSONAL SETTINGS Search Preferences: ├─ Default Search Engine: Google ▼ -├─ Results Per Page: 10 ▼ +├─ Results Per Page: 10 ▼ └─ Safe Search: Moderate ▼ Cache Settings: ├─ Enable Result Caching: ✓ └─ Cache Duration: 1 hour ▼ -TEAM-MANAGED SETTINGS (You inherit these automatically) +TEAM SETTINGS (You can choose to inherit or override) -✓ Team API credentials: ••••• (encrypted, see Security) -✓ Shared search quotas: 1000 queries/day -✓ Team content filters: Enabled +□ Use team API credentials: ••••• (shared with team) +✓ Use team search quotas: 1000 queries/day +✓ Use team content filters: Enabled [Save Configuration] [Test Configuration] ``` **Key Interface Features:** -- **Only Personal Options** - You see only settings you can modify -- **Clear Inheritance** - Understanding of what you get from your team +- **Private Credential Section** - Configure personal API keys separate from team +- **Clear Credential Choice** - Choose between personal or team credentials +- **Privacy Indicators** - Clear marking of what's private vs. 
shared - **Validation** - Immediate feedback on configuration validity ## Personal Configuration Types @@ -76,6 +88,11 @@ TEAM-MANAGED SETTINGS (You inherit these automatically) Most commonly, you'll configure: +**Private Credentials:** +- Personal API keys (not shared with team members) +- Individual authentication tokens +- Private service URLs and endpoints + **API Preferences:** - Search result limits and pagination - Content filtering and safety settings @@ -83,10 +100,16 @@ Most commonly, you'll configure: ### User Environment Variables +**Private Credentials and Secrets:** +- Personal API keys (`PERSONAL_API_KEY`, `GITHUB_TOKEN`) +- Individual OAuth tokens (`OAUTH_ACCESS_TOKEN`) +- Private database credentials (`DB_PASSWORD`) + **Personal Preferences:** - Search engine preferences and result formatting - Cache settings and performance tuning - Interface customization options +- Local file paths (`/Users/yourname/workspace`) ## Configuration Process @@ -110,25 +133,35 @@ Template (System): ├─ Package: "@brightdata/mcp-server-web-search" └─ System flags: "-y" -+ Team (Shared): -├─ Team API Key: "••••• (encrypted secret, hidden from you)" ++ Team (Shared - Available to All Team Members): +├─ Team API Key: "team_shared_key_abc123" ├─ Search quota: "1000 queries/day" └─ Content filters: "enabled" -+ Your Personal Settings: ++ Your Personal Settings (Private - Not Shared): +├─ Personal API Key: "your_private_key_xyz789" (OVERRIDES team key) +├─ Personal Rate Limit: "5000 queries/day" (OVERRIDES team quota) ├─ Default search engine: "google" ├─ Results per page: 10 └─ Cache duration: "1 hour" -= Final Runtime Configuration: += Final Runtime Configuration FOR YOU: Command: npx -y @brightdata/mcp-server-web-search Environment: { - "TEAM_API_KEY": "decrypted-for-runtime-only", - "SEARCH_QUOTA": "1000", - "CONTENT_FILTERS": "enabled", - "DEFAULT_ENGINE": "google", - "RESULTS_PER_PAGE": "10", - "CACHE_DURATION": "3600" + "API_KEY": "your_private_key_xyz789", ← YOUR 
private key used + "SEARCH_QUOTA": "5000", ← YOUR personal limit used + "CONTENT_FILTERS": "enabled", ← Team setting inherited + "DEFAULT_ENGINE": "google", ← Your preference + "RESULTS_PER_PAGE": "10", ← Your preference + "CACHE_DURATION": "3600" ← Your preference +} + += Other Team Members See DIFFERENT Configuration: +Environment: { + "API_KEY": "team_shared_key_abc123", ← Team key (you're using yours) + "SEARCH_QUOTA": "1000", ← Team limit (you're using yours) + "CONTENT_FILTERS": "enabled", ← Same team setting + ... (their own preferences) } ``` diff --git a/self-hosted/database-setup.mdx b/self-hosted/database-setup.mdx index 1bd74b4..5198c32 100644 --- a/self-hosted/database-setup.mdx +++ b/self-hosted/database-setup.mdx @@ -1,6 +1,6 @@ --- title: Database Setup for Self-Hosting -description: Step-by-step guide to configure your database when self-hosting DeployStack - designed for non-technical users. +description: Step-by-step guide to configure PostgreSQL for your self-hosted DeployStack instance. Sidebar: Database Setup Icon: Database --- @@ -8,161 +8,361 @@ Icon: Database ## Overview -When you first start your self-hosted DeployStack instance, you'll need to choose and configure a database. This guide will walk you through the process step-by-step. +DeployStack uses PostgreSQL as its database backend, providing enterprise-grade reliability with ACID compliance, connection pooling, and advanced features for production deployments. -**Important**: This setup only needs to be done once when you first install DeployStack. +**Important**: PostgreSQL must be running and accessible before starting your DeployStack instance. 
## What You'll Need -- Your DeployStack instance running (backend and frontend) -- Access to your server's environment variables (if choosing cloud databases) -- About 5-10 minutes to complete the setup +- PostgreSQL 13+ installed and running (or included in Docker Compose) +- Database connection details (host, port, username, password) +- About 5-10 minutes to complete the configuration -## Step 1: Access the Setup Page +## Deployment Options -1. **Start your DeployStack instance** following your installation guide -2. **Open your web browser** and navigate to your DeployStack URL -3. **You'll be automatically redirected** to the setup page at `/setup` +### Option 1: Docker Compose (Recommended) -If you see a message like "Database setup required" or are redirected to a setup page, you're in the right place! +If you're using our Docker Compose setup, PostgreSQL is included and automatically configured. No manual database setup required! -## Step 2: Choose Your Database +```bash +# PostgreSQL is automatically included +docker-compose up -d +``` -You'll see two database options. Here's what each one means: +The Docker Compose setup includes: +- PostgreSQL 18 Alpine +- Automatic health checks +- Persistent data volume +- Pre-configured connection details -### Option 1: SQLite (Recommended for Most Users) -- **Best for**: Small to medium teams, development, testing -- **Pros**: - - No additional setup required - - Works immediately - - No external dependencies - - Perfect for getting started -- **Cons**: - - Single server only (no clustering) - - Limited to one database file +### Option 2: External PostgreSQL Server -**Choose this if**: You're just getting started, have a small team, or want the simplest setup. 
+For production deployments with existing PostgreSQL infrastructure: -### Option 2: Turso (For Advanced Users) -- **Best for**: Advanced users needing distributed databases -- **Pros**: - - Multi-region replication - - Advanced SQLite features - - Good performance -- **Cons**: - - Requires Turso account - - More complex setup +## Step 1: Prepare PostgreSQL Database -**Choose this if**: You need advanced database features or multi-region deployment. +### Create Database and User -## Step 3: Configure Your Chosen Database +Connect to your PostgreSQL server and create a dedicated database and user: -### If You Chose SQLite (Easiest) +```sql +-- Connect to PostgreSQL as admin +psql -U postgres -1. **Select "SQLite"** from the options -2. **Click "Setup Database"** -3. **Wait for confirmation** (usually takes 10-30 seconds) -4. **Done!** You'll be redirected to the main application +-- Create database +CREATE DATABASE deploystack; -No additional configuration needed - SQLite works out of the box! +-- Create user with password +CREATE USER deploystack_user WITH ENCRYPTED PASSWORD 'your_secure_password_here'; -### If You Chose Turso +-- Grant privileges +GRANT ALL PRIVILEGES ON DATABASE deploystack TO deploystack_user; -Before you can use Turso, you need to set up environment variables: +-- Grant schema privileges (PostgreSQL 15+) +\c deploystack +GRANT ALL ON SCHEMA public TO deploystack_user; +GRANT ALL ON ALL TABLES IN SCHEMA public TO deploystack_user; +GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO deploystack_user; +ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO deploystack_user; +ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO deploystack_user; -#### Prerequisites -1. **Create a Turso account** at [turso.tech](https://turso.tech) -2. 
**Install Turso CLI** and create a database: - ```bash - turso db create deploystack-db +-- Exit +\q +``` + +### Verify Connection + +Test the database connection: + +```bash +# Test connection +psql -h localhost -U deploystack_user -d deploystack -c "SELECT version();" + +# You should see PostgreSQL version information +``` + +## Step 2: Configure Environment Variables + +Set PostgreSQL connection details in your environment: + +### For Docker Deployments + +Add to your `.env` file: + +```bash +# PostgreSQL Configuration +POSTGRES_HOST=your-postgres-host # e.g., localhost or postgres.example.com +POSTGRES_PORT=5432 # Default PostgreSQL port +POSTGRES_DATABASE=deploystack # Database name +POSTGRES_USER=deploystack_user # Database user +POSTGRES_PASSWORD=your_secure_password_here +POSTGRES_SSL=false # Set to 'true' for SSL connections +``` + +### For Local Development + +Edit `services/backend/.env`: + +```bash +# PostgreSQL Configuration +POSTGRES_HOST=localhost +POSTGRES_PORT=5432 +POSTGRES_DATABASE=deploystack +POSTGRES_USER=deploystack +POSTGRES_PASSWORD=deploystack +POSTGRES_SSL=false +``` + +## Step 3: Start DeployStack + +Once PostgreSQL is configured, start your DeployStack instance: + +### Docker Compose + +```bash +docker-compose up -d +``` + +### Individual Containers + +```bash +# Start backend with PostgreSQL configuration +docker run -d \ + --name deploystack-backend \ + -p 3000:3000 \ + -e POSTGRES_HOST=your-postgres-host \ + -e POSTGRES_PORT=5432 \ + -e POSTGRES_DATABASE=deploystack \ + -e POSTGRES_USER=deploystack_user \ + -e POSTGRES_PASSWORD=your_secure_password_here \ + -e POSTGRES_SSL=false \ + -e DEPLOYSTACK_ENCRYPTION_SECRET=your-secret-here \ + -v deploystack_backend_persistent:/app/persistent_data \ + deploystack/backend:latest +``` + +## Step 4: Complete Setup Wizard + +1. **Access DeployStack**: Navigate to your frontend URL (e.g., `http://localhost:8080`) +2. **Automatic Redirect**: You'll be redirected to `/setup` +3. 
**Database Initialization**: The wizard will: + - Test PostgreSQL connection + - Apply database migrations + - Create necessary tables + - Initialize system data +4. **Create Admin Account**: Set up your administrator account +5. **Configuration**: Complete basic platform settings + +## SSL/TLS Connection + +For secure connections to PostgreSQL: + +### Enable SSL in PostgreSQL + +1. **Configure PostgreSQL** (`postgresql.conf`): + ```conf + ssl = on + ssl_cert_file = '/path/to/server.crt' + ssl_key_file = '/path/to/server.key' + ssl_ca_file = '/path/to/root.crt' ``` -3. **Get your database URL and auth token**: + +2. **Set Environment Variable**: ```bash - turso db show deploystack-db - turso db tokens create deploystack-db + POSTGRES_SSL=true ``` -#### Server Configuration -Add these environment variables to your server: +3. **Restart PostgreSQL** and DeployStack backend + +## Production Considerations + +### Connection Pooling + +DeployStack uses `node-postgres` with connection pooling: + +- Default max connections: 20 +- Idle timeout: 30 seconds +- Connection timeout: 2 seconds + +### Database Maintenance + +```bash +# Vacuum database (reclaim storage) +psql -U deploystack_user -d deploystack -c "VACUUM ANALYZE;" + +# Check database size +psql -U deploystack_user -d deploystack -c "SELECT pg_size_pretty(pg_database_size('deploystack'));" + +# View active connections +psql -U deploystack_user -d deploystack -c "SELECT count(*) FROM pg_stat_activity WHERE datname = 'deploystack';" +``` + +### Backup Strategy ```bash -TURSO_DATABASE_URL=libsql://your-database-url -TURSO_AUTH_TOKEN=your_auth_token_here +# Create backup +pg_dump -h localhost -U deploystack_user deploystack > backup.sql + +# Compressed backup +pg_dump -h localhost -U deploystack_user deploystack | gzip > backup.sql.gz + +# Custom format (supports parallel restore) +pg_dump -h localhost -U deploystack_user -Fc deploystack > backup.dump + +# Restore from backup +psql -h localhost -U deploystack_user 
deploystack < backup.sql + +# Restore from custom format +pg_restore -h localhost -U deploystack_user -d deploystack backup.dump ``` -#### Complete Setup -1. **Restart your DeployStack instance** after setting the environment variables -2. **Go back to the setup page** (`/setup`) -3. **Select "Turso"** -4. **Click "Setup Database"** -5. **Wait for confirmation** +### Performance Tuning -## Step 4: Verify Setup +Edit PostgreSQL configuration (`postgresql.conf`): -After successful setup, you should: +```conf +# Memory settings +shared_buffers = 256MB # 25% of RAM +effective_cache_size = 1GB # 50-75% of RAM +maintenance_work_mem = 64MB +work_mem = 16MB -1. **See a success message** confirming database initialization -2. **Be redirected to the main application** -3. **Be able to create your first user account** +# Connections +max_connections = 100 -If you see any errors, check the troubleshooting section below. +# Write-ahead log +wal_buffers = 16MB +checkpoint_completion_target = 0.9 + +# Query planner +random_page_cost = 1.1 # For SSD storage +effective_io_concurrency = 200 # For SSD storage +``` ## Troubleshooting -### "Database setup has already been performed" -- This means your database is already configured -- You can proceed to use the application normally -- If you need to change databases, contact your system administrator +### "Connection refused" or "Cannot connect" + +**Solutions**: +1. **Check PostgreSQL is running**: + ```bash + # For system service + sudo systemctl status postgresql + + # For Docker + docker ps | grep postgres + ``` + +2. **Check PostgreSQL is listening**: + ```bash + netstat -an | grep 5432 + ``` + +3. **Check PostgreSQL configuration** (`postgresql.conf`): + ```conf + listen_addresses = '*' # Or specific IP + ``` + +4. **Check firewall rules**: + ```bash + # Allow PostgreSQL port + sudo ufw allow 5432 + ``` + +### "Authentication failed" + +**Solutions**: +1. **Verify credentials**: Double-check username and password +2. 
**Check pg_hba.conf**: + ```conf + # Allow password authentication + host all all 0.0.0.0/0 md5 + ``` +3. **Reload PostgreSQL** after config changes: + ```bash + sudo systemctl reload postgresql + ``` + +### "Database does not exist" + +**Solutions**: +1. **Create database** as shown in Step 1 +2. **Check database name** matches environment variable +3. **Verify user has access**: + ```sql + \l -- List all databases + ``` -### "Configuration incomplete" or "Missing environment variables" -- **For Turso**: Check that both Turso environment variables are set correctly -- **Restart your server** after setting environment variables +### "Permission denied" -### "Failed to connect" or "Network error" -- **Check your internet connection** -- **For Turso**: Verify your database URL and auth token are correct -- **Check server logs** for more detailed error messages +**Solutions**: +1. **Grant proper privileges** as shown in Step 1 +2. **Check user permissions**: + ```sql + \du -- List user permissions + ``` + +### Migration Errors -### Setup page keeps loading -- **Check that your backend server is running** -- **Verify the backend is accessible** from your browser -- **Check browser console** for any JavaScript errors +**Solutions**: +1. **Check PostgreSQL version**: DeployStack requires PostgreSQL 13+ +2. **Verify user privileges**: User needs CREATE, ALTER, DROP permissions +3. **Check logs**: Review backend logs for detailed error messages +4. **Manual migration reset** (development only): + ```sql + -- Connect to database + psql -U deploystack_user -d deploystack -## Changing Databases Later + -- Drop all tables + DROP SCHEMA public CASCADE; + CREATE SCHEMA public; + GRANT ALL ON SCHEMA public TO deploystack_user; -**Important**: Once you've set up a database, changing to a different type requires: + -- Restart backend to re-apply migrations + ``` -1. **Backing up your data** (if you have important information) -2. **Stopping your DeployStack instance** -3. 
**Removing the database selection file** (`persistent_data/db.selection.json`) -4. **Updating environment variables** for the new database type -5. **Restarting and going through setup again** +## Monitoring -**Note**: This will reset your application data, so make sure to backup anything important first. +### Check Database Health -## Getting Help +```sql +-- Check active connections +SELECT count(*) FROM pg_stat_activity WHERE datname = 'deploystack'; -If you're having trouble with database setup: +-- Check table sizes +SELECT schemaname, tablename, pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) +FROM pg_tables +WHERE schemaname = 'public' +ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC; -1. **Check the server logs** for detailed error messages -2. **Verify environment variables** are set correctly -3. **Ensure your server has internet access** (for cloud databases) -4. **Contact support** with your error messages and setup details +-- Check index usage +SELECT schemaname, tablename, indexname, idx_scan as scans +FROM pg_stat_user_indexes +ORDER BY idx_scan DESC; + +-- Check slow queries +SELECT pid, now() - query_start as duration, query +FROM pg_stat_activity +WHERE state = 'active' AND now() - query_start > interval '1 second' +ORDER BY duration DESC; +``` ## Security Notes -- **Keep your API tokens secure** - never share them publicly -- **Use environment variables** - don't put credentials directly in code -- **Regularly rotate API tokens** for cloud databases -- **Backup your SQLite database file** if using SQLite +- **Use strong passwords** for database users +- **Enable SSL/TLS** for production deployments +- **Restrict network access** using pg_hba.conf +- **Regular backups** are essential for data protection +- **Rotate passwords** periodically +- **Monitor access logs** for suspicious activity ## Next Steps After successful database setup: -1. **Create your administrator account** -2. 
**Configure your application settings** -3. **Set up user authentication** (email, GitHub, etc.) -4. **Invite your team members** +1. **Complete Setup Wizard** - Create your admin account +2. **Configure Global Settings** - Set up email, authentication, etc. +3. **Deploy Satellites** - Set up MCP server management infrastructure +4. **Create Teams** - Invite team members and set up workspaces Your DeployStack instance is now ready to use! diff --git a/self-hosted/setup.mdx b/self-hosted/setup.mdx index 5a75cf6..2bfc1f3 100644 --- a/self-hosted/setup.mdx +++ b/self-hosted/setup.mdx @@ -25,12 +25,13 @@ Configure your self-hosted DeployStack instance with essential settings to custo If this is a fresh installation, first visit `https:///setup` to complete the database initialization wizard. This creates: **For Docker deployments:** - - Database configuration stored in the Docker volume `deploystack_backend_persistent` + - PostgreSQL database configuration stored in the Docker volume `deploystack_backend_persistent` + - PostgreSQL data stored in `deploystack_postgres_data` volume - Access the setup wizard at `http://localhost:8080/setup` (or your configured frontend URL) - + **For local development:** - - `services/backend/persistent_data/db.selection.json` (database type configuration) - - `services/backend/persistent_data/database/deploystack.db` (if using SQLite) + - PostgreSQL connection configured via environment variables in `services/backend/.env` + - `services/backend/persistent_data/db.selection.json` (database initialization status) @@ -185,8 +186,8 @@ Follow this recommended setup workflow for new DeployStack instances: - Navigate to `https:///setup` (Docker: `http://localhost:8080/setup` by default) - - Complete the database setup wizard (SQLite or Turso) - - This initializes the database and saves configuration + - Complete the database setup wizard (PostgreSQL) + - This initializes the PostgreSQL database and applies migrations - Create your admin account 
- Log in to the platform @@ -371,9 +372,9 @@ docker run --rm -v deploystack_backend_persistent:/data \ tar xzf /backup/deploystack-backup-20250108.tar.gz -C / # The volume contains: -# - database/deploystack.db - SQLite database (if using SQLite) -# - db.selection.json - Database type configuration +# - db.selection.json - Database initialization status # - Any other persistent application data +# Note: PostgreSQL data is stored separately in deploystack_postgres_data volume ``` ```bash Local Development @@ -389,9 +390,9 @@ tar czf deploystack-backup-$(date +%Y%m%d).tar.gz \ tar xzf deploystack-backup-20250108.tar.gz # The directory contains: -# - database/deploystack.db - SQLite database (if using SQLite) -# - db.selection.json - Database type configuration +# - db.selection.json - Database initialization status # - Any other persistent application data +# Note: PostgreSQL runs separately via Docker (npm run postgres:local) ```
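With PostgreSQL data living outside the `persistent_data` directory, a complete local-development backup now has two parts. The sketch below combines them under this guide's example assumptions (localhost server, `deploystack_user`, default checkout paths); the guards let it skip whichever piece is absent on a given machine.

```shell
# Sketch: two-part backup for local development — the persistent_data
# directory (application files) plus a pg_dump of the database. Paths
# and credentials are this guide's examples, not fixed values.
STAMP=$(date +%Y%m%d)
mkdir -p "deploystack-backup-$STAMP"

# 1) Application files (skipped if the checkout is not present)
if [ -d services/backend/persistent_data ]; then
  tar czf "deploystack-backup-$STAMP/persistent_data.tar.gz" \
    services/backend/persistent_data
fi

# 2) PostgreSQL data in custom format for pg_restore
#    (skipped if pg_dump is not installed on this machine)
if command -v pg_dump >/dev/null 2>&1; then
  pg_dump -h localhost -U deploystack_user -Fc deploystack \
    > "deploystack-backup-$STAMP/db.dump" || echo "warning: pg_dump failed" >&2
fi

echo "deploystack-backup-$STAMP"
```

Restoring reverses the two steps: extract the tarball back into place, then `pg_restore -d deploystack` the dump as shown in the backup strategy section above.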