
[agent-performance] Enable GitHub MCP server for Agent Performance Analyzer #8026

Problem

The Agent Performance Analyzer workflow currently cannot access GitHub API data to analyze agent outputs, quality, and effectiveness. This severely limits its ability to:

  • Assess output quality (issues, PRs, comments created by agents)
  • Calculate task completion rates
  • Measure PR merge rates
  • Detect behavioral patterns (over/under-creation, duplication)
  • Map agent collaboration patterns
  • Generate meaningful performance scores

Current Limitation

The workflow documentation states:

The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.

Without GitHub API access, the analyzer can only perform configuration-based analysis, not actual performance assessment.

Solution

Add GitHub MCP server configuration to agent-performance-analyzer.md:

tools:
  agentic-workflows:
  github:
    toolsets: 
      - default      # Issues, PRs, discussions
      - actions      # Workflow runs, jobs, logs
      - repos        # Repository metadata
  repo-memory:
    branch-name: memory/meta-orchestrators
    file-glob: "**/*"

Expected Improvements

Once the GitHub MCP server is enabled (a sketch of the core calculations follows the list):

  1. Output Quality Analysis

    • Query issues/PRs with "AI generated by" attribution
    • Sample and rate clarity, accuracy, completeness
    • Calculate quality scores (0-100)
  2. Effectiveness Measurement

    • Track issue resolution rates
    • Measure PR merge rates
    • Calculate time-to-completion metrics
  3. Behavioral Pattern Detection

    • Identify over/under-creation patterns
    • Detect duplicate work
    • Flag scope creep instances
  4. Collaboration Mapping

    • Analyze cross-agent interactions
    • Find productive collaborations
    • Detect conflicts
  5. Data-Driven Rankings

    • Rank agents by quality scores
    • Identify top performers
    • Flag underperformers with evidence
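
As a rough illustration of the calculations implied by items 1, 2, and 5 above, the sketch below shows one way the analyzer might combine sampled quality ratings with resolution and merge rates into a 0-100 score and rank agents. The AgentOutputs fields, the 50/25/25 weighting, and the sampling approach are hypothetical placeholders, not part of the workflow definition; the point is that none of these rates can be computed without the API data the MCP server exposes.

# Illustrative only: field names and weights are hypothetical, not part of the workflow.
from dataclasses import dataclass

@dataclass
class AgentOutputs:
    issues_opened: int
    issues_resolved: int
    prs_opened: int
    prs_merged: int
    sampled_quality: list[float]   # per-sample ratings on a 0-100 scale

def quality_score(outputs: AgentOutputs) -> float:
    """Blend sampled output quality with resolution and merge rates into a 0-100 score."""
    avg_quality = sum(outputs.sampled_quality) / max(len(outputs.sampled_quality), 1)
    resolution_rate = outputs.issues_resolved / max(outputs.issues_opened, 1)
    merge_rate = outputs.prs_merged / max(outputs.prs_opened, 1)
    # Hypothetical weighting: half sampled quality, half observed effectiveness.
    return 0.5 * avg_quality + 25 * resolution_rate + 25 * merge_rate

def rank_agents(per_agent: dict[str, AgentOutputs]) -> list[tuple[str, float]]:
    """Rank agents by descending quality score."""
    scored = {name: quality_score(o) for name, o in per_agent.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)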

Impact

High - Transforms the workflow from configuration analysis to comprehensive performance assessment

Acceptance Criteria

  • GitHub MCP server configured in workflow frontmatter
  • Workflow successfully queries GitHub API for agent outputs
  • Performance report includes quality scores
  • Behavioral pattern detection functional
  • Agent rankings based on actual data

Priority

High - Core functionality for meta-orchestrator effectiveness

Estimated Effort

30 minutes (configuration + testing)

Related

  • Part of Agent Performance Analysis infrastructure
  • Complements Metrics Collector workflow data
  • Enables #aw_fix_empty_engines validation

Testing

After implementation, verify the following (a manual spot-check sketch for item 1 appears after the list):

  1. Workflow can query recent issues with agent attribution
  2. PR data accessible for merge rate analysis
  3. Workflow run data available for execution metrics
  4. Quality scoring logic produces reasonable results
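
For item 1, one way to confirm that attributed issues are actually discoverable is a manual spot-check against the GitHub REST search API. This is only a hedged verification sketch: OWNER/REPO and the GITHUB_TOKEN environment variable are placeholders, and the workflow itself would query through the MCP server tools rather than direct HTTP.

# Hypothetical spot-check, not part of the workflow itself.
import os
import requests

def attributed_issues(owner: str, repo: str, marker: str = "AI generated by") -> list[dict]:
    """Search for issues whose body carries the agent attribution marker."""
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f'repo:{owner}/{repo} is:issue in:body "{marker}"', "per_page": 20},
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]

if __name__ == "__main__":
    for issue in attributed_issues("OWNER", "REPO"):
        print(issue["number"], issue["title"])

If this returns a non-empty list, the same attribution marker should be reachable through the MCP server's issue tools once the configuration is in place.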

AI generated by Agent Performance Analyzer - Meta-Orchestrator
