Standardize metric naming across daily reports with shared glossary#10523

Merged
pelikhan merged 4 commits into main from copilot/standardize-metric-naming
Jan 18, 2026

Conversation

Contributor

Copilot AI commented Jan 18, 2026

Daily reports used inconsistent metric names ("issues_7d" vs "issues_opened_7d") and conflicting values ("Open Issues" reported as 13 in one report and 10 in another), making cross-report validation unreliable. Each workflow was developed independently without a shared vocabulary.

Changes

Created specs/metrics-glossary.md

  • 40+ canonical metric definitions with explicit scopes
  • Documents intentional differences (e.g., issues_analyzed: Daily Issues = 1000 total, Issue Arborist = 100 open without parent)
  • Cross-report comparison rules
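
The scope annotations in the glossary can also be mirrored in code so that validation logic can read them. A minimal sketch of such a registry, assuming a simple dataclass layout (the names and format here are illustrative, not the actual glossary schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDef:
    name: str
    scope: str        # "same_across_reports" or "report_specific"
    definition: str

# Hypothetical in-code mirror of specs/metrics-glossary.md
GLOSSARY = {
    "issues_opened_7d": MetricDef(
        "issues_opened_7d", "same_across_reports",
        "Issues created in the last 7 days"),
    "issues_analyzed": MetricDef(
        "issues_analyzed", "report_specific",
        "Daily Issues: 1000 total; Issue Arborist: 100 open without parent"),
}
```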

Standardized 7 daily reports

  • daily-issues-report.md: issues_7d → issues_opened_7d, added scope comments
  • daily-firewall-report.md: firewall_requests_total, firewall_requests_allowed, firewall_domains_blocked
  • daily-observability-report.md: firewall_enabled_workflows, runs_with_complete_logs
  • copilot-agent-analysis.md: agent_prs_total, agent_prs_merged, agent_success_rate
  • daily-regulatory.md: scope-aware validation logic

Example scope documentation

# Before
issues_7d = len(df[df['createdAt'] > now - timedelta(days=7)])

# After (with standardized name and scope comment)
# Scope: Issues created in last 7 days
issues_opened_7d = len(df[df['createdAt'] > now - timedelta(days=7)])

Regulatory validation improvements

# Now validates scope before flagging discrepancies
if metric.scope == "same_across_reports" and diff > tolerance:
    flag_discrepancy()
else:
    document_scope_difference()  # Don't flag issues_analyzed differences

All metrics now reference the glossary. The regulatory report checks scopes before flagging discrepancies, and Python/shell code includes scope comments.

Original prompt

This section describes the original issue you should resolve

<issue_title>[plan] Standardize metric naming across daily reports</issue_title>
<issue_description>## Objective

Create a shared metrics glossary and standardize metric names across all daily reports to improve consistency and cross-report analysis.

Context

The regulatory report identified inconsistent metric naming across different daily reports, making cross-report comparisons difficult and error-prone.

Examples of Inconsistencies:

  • "Total Issues" vs "Issues Analyzed"
  • "Open Issues" reported differently in Daily Issues (13) vs Issue Arborist (10)
  • Different scopes: Daily Issues analyzes 1000 total, Issue Arborist only 10 open

Root Cause

Each daily report workflow was developed independently without a shared vocabulary or data model for common metrics.

Approach

1. Inventory Current Metrics

Review all daily report discussions and document:

  • Metric names used
  • Definitions/scope for each metric
  • Report types using each metric
  • Any conflicts or overlaps

Reports to Review:

  • Daily Issues Report
  • Code Metrics Report
  • Firewall Report
  • Issue Arborist Report
  • Observability Coverage Report
  • Copilot Agent Analysis
  • Delight Audit Report
  • Prompt Analysis, Secrets Analysis, etc.

2. Create Metrics Glossary

Create a canonical definition file:

# Daily Reports Metrics Glossary

## Issue Metrics

- **total_issues**: Total number of issues in the repository (all states)
- **open_issues**: Count of issues with state=open
- **issues_analyzed**: Subset of issues included in analysis (specify scope)
- **closed_last_30d**: Issues closed in the last 30 days
...

## Workflow Metrics

- **workflows_with_firewall**: Total workflow files with firewall enabled
- **firewall_runs_analyzed**: Number of workflow runs analyzed in report period
...

Location: docs/metrics-glossary.md or similar

3. Update Report Workflows

For each daily report workflow:

  • Update metric names to match glossary
  • Add comments documenting scope/filters
  • Ensure consistent date ranges and selection criteria

Example Changes:

# Before
issues_found: 10

# After  
open_issues_analyzed: 10  # Scope: Only open issues created in last 7 days

4. Update Regulatory Validation

Modify the regulatory report workflow to:

  • Use standardized metric names for comparisons
  • Add scope-aware validation (don't compare metrics with different scopes)
  • Reference the glossary in discrepancy warnings
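
The scope-aware validation described above could be implemented roughly as follows; a hedged sketch, assuming each report exposes its metrics as a plain dict and the glossary maps metric names to scopes (all function and variable names here are illustrative):

```python
def validate_metric(name, values_by_report, scope_by_metric, tolerance=0):
    """Flag a discrepancy only when the glossary says the metric
    should be identical across reports; otherwise note the
    scope difference instead of flagging it."""
    scope = scope_by_metric.get(name, "report_specific")
    values = list(values_by_report.values())
    spread = max(values) - min(values)
    if scope == "same_across_reports" and spread > tolerance:
        return f"DISCREPANCY: {name} differs across reports: {values_by_report}"
    if scope != "same_across_reports" and spread > 0:
        return f"NOTE: {name} differs by design (scope: {scope})"
    return None

# Example: open_issues must match everywhere; issues_analyzed differs by design
scopes = {"open_issues": "same_across_reports",
          "issues_analyzed": "report_specific"}
msg = validate_metric("open_issues",
                      {"daily-issues": 13, "issue-arborist": 10}, scopes)
```

The key design choice is that the default scope is "report_specific", so an unknown metric is never flagged as a discrepancy until it has a glossary entry.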

Files to Create/Modify

  • Create: docs/metrics-glossary.md (or specs/metrics-glossary.md)
  • Update: Each daily report workflow in .github/workflows/
    • daily-issues.md
    • code-metrics.md
    • firewall-report.md
    • issue-arborist.md
    • observability.md
    • copilot-agent-analysis.md
    • delight-audit.md
    • Others as needed
  • Update: regulatory-report.md (validation logic)

Acceptance Criteria

  • Metrics glossary created with definitions for all common metrics
  • At least 5 daily report workflows updated to use standardized names
  • Regulatory report references glossary in validation
  • Cross-report discrepancies reduced by at least 50%
  • Documentation explains scope differences where metrics differ

Example Glossary Entry

### open_issues

**Definition**: Count of issues where `state = "open"` at the time of report generation

**Used By**: Daily Issues Report, Issue Arborist Report

**Scope**: All issues in repository, no time filter

**Notes**: May differ from `issues_analyzed` which applies additional filters

Related to #10515

AI generated by Plan Command for discussion #10496

Comments on the Issue (you are @copilot in this section)



Copilot AI and others added 3 commits January 18, 2026 02:50
Co-authored-by: mnkiefer <8320933+mnkiefer@users.noreply.github.com>
Copilot AI changed the title [WIP] Create metrics glossary for daily reports Standardize metric naming across daily reports with shared glossary Jan 18, 2026
Copilot AI requested a review from mnkiefer January 18, 2026 03:00
@pelikhan pelikhan marked this pull request as ready for review January 18, 2026 05:39
@pelikhan pelikhan merged commit 0bc73c3 into main Jan 18, 2026
@pelikhan pelikhan deleted the copilot/standardize-metric-naming branch January 18, 2026 05:40

Development

Successfully merging this pull request may close these issues.

[plan] Standardize metric naming across daily reports

3 participants