Caution
Disclaimer -- Experimental / Early Stage: This research demonstrator project references third-party models, tools, pricing, and docs that evolve quickly. Treat outputs as recommendations and verify against official docs and your own benchmarks before production use.
A Model Context Protocol (MCP) server offering advanced tools and templates for hierarchical prompting, code hygiene, visualization, memory optimization, and agile planning.
- Installation
- Documentation
- Demos
- Features
- VS Code Integration
- Agent-Relative Calls
- Configuration
- Development
- Contributing
- Changelog
- License
# NPX (recommended)
npx mcp-ai-agent-guidelines
# NPM global
npm install -g mcp-ai-agent-guidelines
# From source
git clone https://github.com/Anselmoo/mcp-ai-agent-guidelines.git
cd mcp-ai-agent-guidelines
npm ci && npm run build && npm start

npm run build # TypeScript build
npm run start # Build and start server
npm run test:all # Unit + integration + demos + MCP smoke
npm run test:coverage:unit # Unit test coverage (c8) -> coverage/ + summary
npm run quality # Type-check + Biome checks
npm run links:check # Check links in main markdown files
npm run links:check:all # Check links in all markdown files (slow)

The project includes automated link checking via GitHub Actions. To check links locally before committing:
# Quick check (README, CONTRIBUTING, DISCLAIMER)
npm run links:check
# Comprehensive check (all markdown files)
npm run links:check:all
# Or use npx directly
npx markdown-link-check --config .mlc_config.json README.md

Configuration is in .mlc_config.json. Ignored patterns and retries are configured there.
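For orientation, a minimal .mlc_config.json might look like the sketch below; the specific patterns and values are illustrative assumptions, so treat the committed file as the source of truth:

{
  "ignorePatterns": [{ "pattern": "^http://localhost" }],
  "retryOn429": true,
  "timeout": "20s"
}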
Complete Documentation Index - Full guide to all tools and features
- AI Interaction Tips - Learn to ask targeted questions for better results
- Prompting Hierarchy - Understanding prompt levels and evaluation
- Agent-Relative Call Patterns - Invoking tools in workflows
- Flow-Based Prompting - Multi-step prompt workflows
- Agent-to-Agent (A2A) Orchestration - Tool-to-tool chaining with context propagation
- A2A Practical Examples - Real-world A2A workflow patterns
- Mermaid Diagram Generation - Create flowcharts, sequences, ER diagrams
- Code Quality Analysis - Hygiene scoring and best practices
- Sprint Planning - Dependency-aware timeline calculation
- Bridge Connectors - Integration patterns for external systems
- Serena Integration - Semantic analysis strategies
- Complete Reference - Credits, research papers, and citations
See docs/README.md for the complete documentation index.
- AI Interaction Tips - Learn to ask targeted questions for better results
- Prompting Hierarchy - Understanding prompt levels and evaluation
- Agent-Relative Call Patterns - Invoking tools in workflows
- Flow-Based Prompting - Advanced chaining strategies
- Mermaid Diagrams - Visual diagram generation
- Contributing Guidelines - How to contribute
- Clean Code Initiative - Quality standards (100/100 scoring)
- Technical Improvements - Refactoring and enhancements
- Error Handling - Best practices
- Bridge Connectors - Integration patterns
- Export Formats Guide - LaTeX, CSV, JSON export options and chat integration
- Model Management Guide - Managing AI model definitions in YAML
See the complete documentation for the full list of guides organized by topic.
Explore real-world examples showing the tools in action. All demos are auto-generated and kept in sync with the codebase.
Complete Demo Index - Full list of all demos with descriptions
Code Analysis & Quality:
- Code Hygiene Report - Pattern detection and best practices
- Guidelines Validation - AI agent development standards
- Clean Code Scoring - Comprehensive quality metrics (0-100)
Prompt Engineering:
- Hierarchical Prompt - Structured refactoring plan
- Domain-Neutral Prompt - Generic template
- Security Hardening Prompt - OWASP-focused analysis
- Flow-Based Prompting - Multi-step workflows
Visualization & Planning:
- Architecture Diagram - Mermaid system diagrams
- Sprint Planning - Dependency-aware timeline
- Model Compatibility - AI model selection
Advanced Features:
- Memory Context Optimization - Token efficiency
- Strategy Frameworks - SWOT, BCG, Porter's Five Forces
- Gap Analysis - Current vs. desired state
npm run build
node demos/demo-tools.js # Generate sample tool outputs

Demos are automatically regenerated when tool code changes via GitHub Actions.
27 professional tools for AI-powered development workflows. Each tool is rated by complexity:
Complexity Ratings:
- ★ Simple - Single input, immediate output (5-10 min to master)
- ★★ Moderate - Multiple parameters, straightforward usage (15-30 min)
- ★★★ Advanced - Complex inputs, requires understanding of domain (1-2 hours)
- ★★★★ Expert - Multi-phase workflows, deep domain knowledge (half day)
- ★★★★★ Master - Enterprise-scale, comprehensive orchestration (1-2 days)
Complete Tools Reference - Detailed documentation with examples
Build structured, effective prompts for various use cases.
| Tool | Purpose | Complexity | Learn More |
|---|---|---|---|
| `hierarchical-prompt-builder` | Multi-level specificity prompts (context → goal → requirements) | ★★ | Guide |
| `code-analysis-prompt-builder` | Code review prompts (security, performance, maintainability) | ★★ | Guide |
| `architecture-design-prompt-builder` | Architecture design with scale-appropriate guidance | ★★★ | Guide |
| `digital-enterprise-architect-prompt-builder` | Enterprise architecture with mentor perspectives & research | ★★★★ | Guide |
| `debugging-assistant-prompt-builder` | Systematic debugging prompts with structured analysis | ★★ | Guide |
| `l9-distinguished-engineer-prompt-builder` | L9 (Distinguished Engineer) high-level technical design | ★★★★★ | Guide |
| `documentation-generator-prompt-builder` | Technical docs tailored to audience (API, user guide, spec) | ★★ | Guide |
| `domain-neutral-prompt-builder` | Generic templates with objectives and workflows | ★★★ | Guide |
| `security-hardening-prompt-builder` | Security analysis with OWASP/compliance focus | ★★★ | Guide |
Analyze and improve code quality with automated insights.
| Tool | Purpose | Complexity | Learn More |
|---|---|---|---|
| `clean-code-scorer` | Comprehensive 0-100 quality score with metric breakdown | ★★★ | Guide |
| `code-hygiene-analyzer` | Detect outdated patterns, unused dependencies, code smells | ★★ | Guide |
| `dependency-auditor` | Audit package.json for security, deprecation, ESM compatibility | ★ | Guide |
| `iterative-coverage-enhancer` | Analyze coverage gaps, generate test suggestions, adapt thresholds | ★★★ | Guide |
| `semantic-code-analyzer` | Identify symbols, structure, dependencies, patterns (LSP-based) | ★★ | Guide |
| `guidelines-validator` | Validate practices against AI agent development guidelines | ★ | Guide |
| `mermaid-diagram-generator` | Generate visual diagrams (flowchart, sequence, ER, class, etc.) | ★★ | Guide |
Business strategy analysis and agile project planning.
| Tool | Purpose | Complexity | Learn More |
|---|---|---|---|
| `strategy-frameworks-builder` | SWOT, BSC, VRIO, Porter's Five Forces, market analysis | ★★★ | Guide |
| `gap-frameworks-analyzers` | Capability, technology, maturity, skills gap analysis | ★★★ | Guide |
| `sprint-timeline-calculator` | Dependency-aware sprint planning with bin-packing optimization | ★★ | Guide |
| `model-compatibility-checker` | Recommend best AI models for task requirements and budget | ★ | Guide |
| `project-onboarding` | Comprehensive project structure analysis and documentation generation | ★★ | Guide |
Multi-phase design orchestration with constraint enforcement.
| Tool | Purpose | Complexity | Learn More |
|---|---|---|---|
| `design-assistant` | Constraint-driven design sessions with artifact generation (ADRs, specs, roadmaps) | ★★★★ | Guide |
Supporting tools for workflow optimization.
| Tool | Purpose | Complexity | Learn More |
|---|---|---|---|
| `memory-context-optimizer` | Optimize prompt caching and context window usage | ★★ | Guide |
| `mode-switcher` | Switch between agent operation modes (planning, debugging, refactoring) | ★ | Guide |
| `prompting-hierarchy-evaluator` | Evaluate prompts with numeric scoring (clarity, specificity, completeness) | ★★ | Guide |
| `hierarchy-level-selector` | Select optimal prompting level for task complexity | ★ | Guide |
| `spark-prompt-builder` | Build UI/UX product prompts with structured inputs (colors, typography, components) | ★★★ | Guide |
Pro Tip: Start with ★ tools to learn the basics, then progress to ★★★+ tools for advanced workflows.
Use the buttons below to add this MCP server to VS Code (User Settings → mcp.servers):
Manual settings (User Settings JSON):
{
"mcp": {
"servers": {
"ai-agent-guidelines": {
"command": "npx",
"args": ["-y", "mcp-ai-agent-guidelines"]
}
}
}
}

Using Docker:
{
"mcp": {
"servers": {
"ai-agent-guidelines": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"ghcr.io/anselmoo/mcp-ai-agent-guidelines:latest"
]
}
}
}
}

After adding the server, open your chat client (e.g., Cline in VS Code). The tools appear under the server name. You can:
- Run a tool directly by name:
  - hierarchical-prompt-builder – Provide context, goal, and optional requirements.
  - clean-code-scorer – Calculate comprehensive Clean Code score (0-100) with coverage metrics.
  - code-hygiene-analyzer – Paste code or point to a file and set language.
  - mermaid-diagram-generator – Describe the system and select a diagram type.
- Ask in natural language and pick the suggested tool.
Example prompts:
- "Use hierarchical-prompt-builder to create a refactor plan for src/index.ts with outputFormat markdown."
- "Use clean-code-scorer to analyze my project with current coverage metrics and get a quality score."
- "Analyze this Python file with code-hygiene-analyzer; highlight security issues."
- "Generate a Mermaid sequence diagram showing: User sends request to API, API queries Database, Database returns data, API responds to User."
- "Create an ER diagram for: Customer has Orders, Order contains LineItems, Product referenced in LineItems."
- "Build a user journey map for our checkout flow using mermaid-diagram-generator."
Tip: Most clients can pass file content automatically when you select a file and invoke a tool.
GitHub Chat (VS Code): In the chat, type your request and pick a tool suggestion, or explicitly reference a tool by name (e.g., "Use mermaid-diagram-generator to draw a flowchart for our pipeline").
This MCP server fully supports agent-relative calls, the MCP standard pattern for enabling AI agents to discover and invoke tools contextually. Following the GitHub MCP documentation, agents can use natural language patterns to orchestrate complex multi-tool workflows.
Agent-relative calls are natural language patterns like:
Use the [tool-name] MCP to [action] with [parameters/context]

Single Tool Invocation:
Use the hierarchical-prompt-builder MCP to create a code review prompt for our authentication module focusing on security best practices and OAuth2 implementation.

Multi-Tool Workflow:
1. Use the clean-code-scorer MCP to establish baseline quality metrics
2. Use the code-hygiene-analyzer MCP to identify specific technical debt
3. Use the security-hardening-prompt-builder MCP to create a remediation plan
4. Use the sprint-timeline-calculator MCP to estimate implementation timeline

Integration with Other MCP Servers:
# Accessibility Compliance Workflow
Use the Figma MCP to analyze design specifications for WCAG 2.1 AA compliance.
Use the security-hardening-prompt-builder MCP from AI Agent Guidelines to create accessibility security audit prompts.
Use the GitHub MCP to categorize open accessibility issues.
Use the iterative-coverage-enhancer MCP from AI Agent Guidelines to plan accessibility test coverage.
Use the Playwright MCP to create and run automated accessibility tests.

For complete documentation with 20+ detailed examples, workflow patterns, and best practices, see:
Agent-Relative Call Patterns Guide
This guide covers:
- Core prompt patterns (single tool, chains, parallel, conditional)
- Tool categories with complete usage examples
- Multi-MCP server integration workflows
- Best practices for agent-driven development
- Performance optimization techniques
- Troubleshooting common issues
Access agent-relative call guidance via MCP resources:
Use the resource guidelines://agent-relative-calls to get comprehensive patterns and examples

Or access programmatically:
// MCP ReadResource request
{
uri: "guidelines://agent-relative-calls";
}

Prompt Chaining Builder – Multi-step prompts with output passing
Usage: prompt-chaining-builder
| Parameter | Required | Description |
|---|---|---|
| `chainName` | Yes | Name of the prompt chain |
| `steps` | Yes | Array of chain steps with prompts |
| `description` | No | Description of chain purpose |
| `context` | No | Global context for the chain |
| `globalVariables` | No | Variables accessible to all steps |
| `executionStrategy` | No | sequential/parallel-where-possible |
Build sophisticated multi-step prompt workflows where each step can depend on outputs from previous steps. Supports error handling strategies (skip/retry/abort) and automatic Mermaid visualization.
Example:
{
chainName: "Security Analysis Pipeline",
steps: [
{
name: "Scan",
prompt: "Scan for vulnerabilities",
outputKey: "vulns"
},
{
name: "Assess",
prompt: "Assess severity of {{vulns}}",
dependencies: ["vulns"],
errorHandling: "retry"
}
]
}

Prompt Flow Builder – Declarative flows with branching/loops
Usage: prompt-flow-builder
| Parameter | Required | Description |
|---|---|---|
| `flowName` | Yes | Name of the prompt flow |
| `nodes` | Yes | Flow nodes (prompt/condition/loop/parallel/merge/transform) |
| `edges` | No | Connections between nodes with conditions |
| `entryPoint` | No | Starting node ID |
| `variables` | No | Flow-level variables |
| `outputFormat` | No | markdown/mermaid/both |
Create complex adaptive prompt flows with conditional branching, loops, parallel execution, and merge points. Automatically generates Mermaid flowcharts and execution guides.
Example:
{
flowName: "Adaptive Code Review",
nodes: [
{ id: "analyze", type: "prompt", name: "Analyze" },
{ id: "check", type: "condition", name: "Complex?",
config: { expression: "complexity > 10" } },
{ id: "deep", type: "prompt", name: "Deep Review" },
{ id: "quick", type: "prompt", name: "Quick Check" }
],
edges: [
{ from: "analyze", to: "check" },
{ from: "check", to: "deep", condition: "true" },
{ from: "check", to: "quick", condition: "false" }
]
}

Semantic Code Analyzer – Symbol-based code understanding
Usage: semantic-code-analyzer
| Parameter | Required | Description |
|---|---|---|
| `codeContent` | Yes | Code content to analyze |
| `language` | No | Programming language (auto-detected) |
| `analysisType` | No | symbols/structure/dependencies/patterns/all |
Performs semantic analysis to identify symbols, dependencies, patterns, and structure. Inspired by Serena's language server approach.
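A minimal invocation sketch (the code snippet and chosen analysisType are illustrative):

{
  "codeContent": "export function add(a: number, b: number): number { return a + b; }",
  "language": "typescript",
  "analysisType": "all"
}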
Project Onboarding – Comprehensive project familiarization
Usage: project-onboarding
| Parameter | Required | Description |
|---|---|---|
| `projectPath` | Yes | Path to project directory |
| `projectName` | No | Name of the project |
| `projectType` | No | library/application/service/tool/other |
| `analysisDepth` | No | quick/standard/deep |
| `includeMemories` | No | Generate project memories (default: true) |
Analyzes project structure, detects technologies, and generates memories for context retention. Based on Serena's onboarding system.
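A minimal invocation sketch (the path and name are placeholders, not defaults):

{
  "projectPath": "./my-service",
  "projectName": "my-service",
  "projectType": "service",
  "analysisDepth": "standard",
  "includeMemories": true
}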
Mode Switcher – Flexible agent operation modes
Usage: mode-switcher
| Parameter | Required | Description |
|---|---|---|
| `targetMode` | Yes | Mode to switch to (planning/editing/analysis/etc.) |
| `currentMode` | No | Current active mode |
| `context` | No | Operating context (desktop-app/ide-assistant/etc.) |
| `reason` | No | Reason for mode switch |
Switches between operation modes with optimized tool sets and prompting strategies. Modes include: planning, editing, analysis, interactive, one-shot, debugging, refactoring, documentation.
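A minimal invocation sketch (the values are illustrative):

{
  "targetMode": "debugging",
  "currentMode": "planning",
  "context": "ide-assistant",
  "reason": "Investigating a failing integration test"
}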
Hierarchical Prompt Builder – Build structured prompts with clear hierarchies
Usage: hierarchical-prompt-builder
| Parameter | Required | Description |
|---|---|---|
| `context` | Yes | The broad context or domain |
| `goal` | Yes | The specific goal or objective |
| `requirements` | No | Detailed requirements and constraints |
| `outputFormat` | No | Desired output format |
| `audience` | No | Target audience or expertise level |
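A minimal invocation sketch; the values are illustrative, and passing requirements as an array is an assumption rather than a documented contract:

{
  "context": "TypeScript MCP server maintained by a small team",
  "goal": "Refactor src/index.ts into smaller modules",
  "requirements": ["Preserve the public tool API", "Keep strict type checking green"],
  "outputFormat": "markdown",
  "audience": "senior TypeScript developers"
}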
Code Hygiene Analyzer – Analyze codebase for outdated patterns and hygiene issues
Usage: code-hygiene-analyzer
| Parameter | Required | Description |
|---|---|---|
| `codeContent` | Yes | Code content to analyze |
| `language` | Yes | Programming language |
| `framework` | No | Framework or technology stack |
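A minimal invocation sketch (the code fragment and framework are illustrative):

{
  "codeContent": "var total = 0; for (var i = 0; i < items.length; i++) { total += items[i].price; }",
  "language": "javascript",
  "framework": "express"
}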
Security Hardening Prompt Builder – Build specialized security analysis and vulnerability assessment prompts
Usage: security-hardening-prompt-builder
| Parameter | Required | Description |
|---|---|---|
| `codeContext` | Yes | Code context or description to analyze for security |
| `securityFocus` | No | Security analysis focus (vulnerability-analysis, security-hardening, compliance-check, threat-modeling, penetration-testing) |
| `securityRequirements` | No | Specific security requirements to check |
| `complianceStandards` | No | Compliance standards (OWASP-Top-10, NIST-Cybersecurity-Framework, ISO-27001, SOC-2, GDPR, HIPAA, PCI-DSS) |
| `language` | No | Programming language of the code |
| `riskTolerance` | No | Risk tolerance level (low, medium, high) |
| `analysisScope` | No | Security areas to focus on (input-validation, authentication, authorization, etc.) |
| `outputFormat` | No | Output format (detailed, checklist, annotated-code) |
Security Focus Areas:
- Vulnerability analysis with OWASP Top 10 coverage
- Security hardening recommendations
- Compliance checking against industry standards
- Threat modeling and risk assessment
- Penetration testing guidance
Compliance Standards: OWASP Top 10, NIST Cybersecurity Framework, ISO 27001, SOC 2, GDPR, HIPAA, PCI-DSS
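A minimal invocation sketch; the values are illustrative, and passing complianceStandards and analysisScope as arrays is an assumption:

{
  "codeContext": "Express REST API handling user login and password reset",
  "securityFocus": "vulnerability-analysis",
  "complianceStandards": ["OWASP-Top-10"],
  "language": "typescript",
  "riskTolerance": "low",
  "analysisScope": ["input-validation", "authentication"],
  "outputFormat": "checklist"
}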
Mermaid Diagram Generator – Generate professional diagrams from text descriptions
Usage: mermaid-diagram-generator
Generates Mermaid diagrams with intelligent parsing of descriptions for rich, customizable visualizations.
| Parameter | Required | Description |
|---|---|---|
| `description` | Yes | Description of the system or process to diagram. Be detailed and specific for better diagram generation. |
| `diagramType` | Yes | Type: flowchart, sequence, class, state, gantt, pie, er, journey, quadrant, git-graph, mindmap, timeline |
| `theme` | No | Visual theme: default, dark, forest, neutral |
| `direction` | No | Flowchart direction: TD/TB (top-down), BT (bottom-top), LR (left-right), RL (right-left) |
| `strict` | No | If true, never emit invalid diagram; use fallback if needed (default: true) |
| `repair` | No | Attempt auto-repair on validation failure (default: true) |
| `accTitle` | No | Accessibility title (added as Mermaid comment) |
| `accDescr` | No | Accessibility description (added as Mermaid comment) |
| `customStyles` | No | Custom CSS/styling directives for advanced customization |
| `advancedFeatures` | No | Type-specific advanced features (e.g., {autonumber: true} for sequence diagrams) |
Enhanced Features:
- Intelligent Description Parsing: All diagram types now parse descriptions to extract relevant entities, relationships, and structures
- New Diagram Types:
  - er – Entity Relationship diagrams for database schemas
  - journey – User journey maps for UX workflows
  - quadrant – Quadrant/priority charts for decision matrices
  - git-graph – Git commit history visualization
  - mindmap – Hierarchical concept maps
  - timeline – Event timelines and roadmaps
- Advanced Customization: Direction control, themes, custom styles, and type-specific features
- Smart Fallbacks: Generates sensible default diagrams when description parsing is ambiguous
Examples:
# Sequence diagram with participants auto-detected from description
{
"description": "User sends login request to API. API queries Database for credentials. Database returns user data. API responds to User with token.",
"diagramType": "sequence",
"advancedFeatures": {"autonumber": true}
}
# Class diagram with relationships extracted
{
"description": "User has id and email. Order contains Product items. User places Order. Product has price and name.",
"diagramType": "class"
}
# ER diagram for database schema
{
"description": "Customer places Order. Order contains LineItem. Product is referenced in LineItem.",
"diagramType": "er"
}
# User journey map
{
"description": "Shopping Journey. Section Discovery: User finds product. User reads reviews. Section Purchase: User adds to cart. User completes checkout.",
"diagramType": "journey"
}
# Gantt chart with tasks from description
{
"description": "Project: Feature Development. Phase Planning: Research requirements. Design architecture. Phase Development: Implement backend. Create frontend. Phase Testing: QA validation.",
"diagramType": "gantt"
}
# Flowchart with custom direction
{
"description": "Receive request. Validate input. Process data. Return response.",
"diagramType": "flowchart",
"direction": "LR"
}

Memory Context Optimizer – Optimize prompt caching and context window usage
Usage: memory-context-optimizer
| Parameter | Required | Description |
|---|---|---|
| `contextContent` | Yes | Context content to optimize |
| `maxTokens` | No | Maximum token limit |
| `cacheStrategy` | No | Strategy: aggressive, conservative, balanced |
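A minimal invocation sketch (the placeholder content and token limit are illustrative):

{
  "contextContent": "<long conversation history or retrieved documents>",
  "maxTokens": 4000,
  "cacheStrategy": "balanced"
}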
Sprint Timeline Calculator – Calculate optimal development cycles and sprint timelines
Usage: sprint-timeline-calculator
| Parameter | Required | Description |
|---|---|---|
| `tasks` | Yes | List of tasks with estimates |
| `teamSize` | Yes | Number of team members |
| `sprintLength` | No | Sprint length in days |
| `velocity` | No | Team velocity (story points per sprint) |
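A minimal invocation sketch; the task fields (name, estimate, dependencies) are assumed for illustration and may differ from the actual schema:

{
  "tasks": [
    { "name": "API schema design", "estimate": 3 },
    { "name": "Implement endpoints", "estimate": 8, "dependencies": ["API schema design"] }
  ],
  "teamSize": 4,
  "sprintLength": 10,
  "velocity": 25
}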
Model Compatibility Checker – Recommend best AI models for specific tasks
Usage: model-compatibility-checker
| Parameter | Required | Description |
|---|---|---|
| `taskDescription` | Yes | Description of the task |
| `requirements` | No | Specific requirements (context length, multimodal, etc.) |
| `budget` | No | Budget constraints: low, medium, high |
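A minimal invocation sketch; the values are illustrative, and requirements as an array of strings is an assumption:

{
  "taskDescription": "Summarize long legal contracts and extract key clauses",
  "requirements": ["large context window", "strong reasoning"],
  "budget": "medium"
}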
Guidelines Validator – Validate development practices against established guidelines
Usage: guidelines-validator
| Parameter | Required | Description |
|---|---|---|
| `practiceDescription` | Yes | Description of the development practice |
| `category` | Yes | Category: prompting, code-management, architecture, visualization, memory, workflow |
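A minimal invocation sketch (the practice description is illustrative; category values come from the table above):

{
  "practiceDescription": "We generate Mermaid diagrams for every new service and review them in pull requests",
  "category": "visualization"
}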
- Node.js 20+ required (see engines in package.json).
- Tools are exposed by the MCP server and discoverable via client schemas.
- Mermaid diagrams render client-side (Markdown preview). No server rendering.
- Package version: 0.7.0 (matches internal resource versions).
- Tags vX.Y.Z trigger CI for NPM and Docker releases.
- Pin exact versions for production stability.
Use the Release Setup Issue Form to streamline the release process:
- Automated version management: Update version numbers across the codebase
- GitHub Copilot compatible: Structured form enables bot automation
- Quality gates: Pre-release checklist ensures reliability
- CI/CD integration: Supports existing NPM and Docker publishing workflow
To create a new release, open a release setup issue with the target version and release details.
Prerequisites:
- Node.js 20+
- npm 10+
Setup:
git clone https://github.com/Anselmoo/mcp-ai-agent-guidelines.git
cd mcp-ai-agent-guidelines
npm install
npm run build
npm start

Project structure:
/src - TypeScript source (tools, resources, server)
/tests - Test files and utilities
/scripts - Shell scripts and helpers
/demos - Demo scripts and generated artifacts
/.github - CI and community health files
Testing and quality:
npm run test:unit # Unit tests
npm run test:integration # Integration tests
npm run test:demo # Demo runner
npm run test:mcp # MCP smoke script
npm run test:coverage:unit # Unit test coverage (text-summary, lcov, html)
npm run quality # Type-check + Biome check
npm run audit # Security audit (production dependencies)
npm run audit:fix # Auto-fix vulnerabilities
npm run audit:production # Audit production dependencies only

Demo files are automatically regenerated when tools change via GitHub Actions:
- Trigger: Any changes to src/tools/**/*.ts in a pull request
- Action: Automatically runs npm run test:demo to regenerate demos
- Result: Updated demo files are committed to the PR automatically
Benefits:
- Documentation always stays in sync with code
- No manual steps to remember
- Reviewers can see demo changes alongside code changes
Workflow: .github/workflows/auto-regenerate-demos.yml
Manual regeneration (if needed):
npm run build
npm run test:demo

This project uses Lefthook for fast, reliable Git hooks that enforce code quality and security standards.
Mandatory for GitHub Copilot Agent: All quality gates must pass before commits and pushes.
Setup (automatic via npm install):
npm run hooks:install # Install lefthook git hooks
npm run hooks:uninstall # Remove lefthook git hooks
npx lefthook run pre-commit # Run pre-commit checks manually
npx lefthook run pre-push # Run pre-push checks manually

Pre-commit hooks (fast, parallel execution):
- Security: Gitleaks secret detection
- Code Quality: Biome formatting & linting
- Type Safety: TypeScript type checking
- Code Hygiene: Trailing whitespace & EOF fixes
Pre-push hooks (comprehensive validation):
- Security Audit: Dependency vulnerability scanning (moderate+ level)
- Testing: Full test suite (unit, integration, demo, MCP)
- Quality: Type checking + Biome validation
Why Lefthook?
- Fast: Written in Go, parallel execution
- Reliable: Better error handling than pre-commit
- CI Integration: Mandatory quality gates for GitHub Copilot Agent
- Simple: Single YAML configuration file
Configuration: lefthook.yml
- CI publishes a coverage summary in the job's Summary and uploads coverage/ as an artifact.
- Coverage is also uploaded to Codecov on Node 22 runs; see the badge above for status.
# Run with Docker
docker run -p 3000:3000 ghcr.io/anselmoo/mcp-ai-agent-guidelines:latest
# Build locally
docker build -t mcp-ai-agent-guidelines .
docker run -p 3000:3000 mcp-ai-agent-guidelines

VS Code + Docker settings:
{
"mcp": {
"servers": {
"mcp-ai-agent-guidelines": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"ghcr.io/anselmoo/mcp-ai-agent-guidelines:latest"
]
}
}
}
}

- Dependency Scanning: Automated vulnerability scanning runs on every PR and push to main
- Production dependencies: fails on moderate+ vulnerabilities
- All dependencies: audited and reported (dev dependencies don't block builds)
- Local audit: npm run audit or npm audit --audit-level=moderate
- Auto-fix: npm run audit:fix to automatically fix vulnerabilities when possible
- Pre-push hook: automatically checks for vulnerabilities before pushing code
- Secrets Protection: No secrets committed; releases use provenance where supported
- Supply Chain Security: Docker images are signed (Cosign); artifacts signed via Sigstore
- Vulnerability Reporting: Report security issues via GitHub Security tab or Issues
When vulnerabilities are detected:
- Review the vulnerability: npm audit provides details about affected packages
- Update dependencies: npm run audit:fix to apply automatic fixes
- Manual updates: If auto-fix doesn't work, update package.json manually:
  npm update <package-name>
  # or for major version updates
  npm install <package-name>@latest
- Test changes: Run npm run test:all to ensure updates don't break functionality
- Override if needed: For false positives or accepted risks, document in security policy
- MCP Specification: https://modelcontextprotocol.io/
- Tools implementation: see src/tools/ in this repo.
- Generated examples: see demos/ and links above.
This project references third-party tools, frameworks, APIs, and services for informational purposes. See DISCLAIMER.md for important information about external references, trademarks, and limitations of liability.
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
- Complete Documentation - Full documentation index
- Clean Code Standards - Quality requirements and scoring
- Error Handling Patterns - Best practices for error handling
- Architecture Guide - System architecture and integration patterns
- Type System Organization - TypeScript conventions
- TypeScript strict mode - All code must pass type checking
- 100% test coverage goal - See Clean Code Initiative
- Biome linting - Code must pass npm run quality
- Git hooks - Automated checks via Lefthook (see lefthook.yml)
Keep changes typed, linted, and include tests when behavior changes.
MIT © Anselmoo – see LICENSE.
For a comprehensive list of references, research papers, and detailed attribution, see docs/tips/references.md.
- Model Context Protocol team for the specification
- Anthropic for prompt caching research
- Mermaid community for diagram tooling
- @ruvnet/claude-flow - Inspired flow-based prompting features
- @oraios/serena - Influenced semantic analysis and mode switching
- All open-source contributors whose work has shaped this project
See docs/tips/references.md for the complete list of research papers, projects, and inspirations.