
# [workflow-style] Normalize report formatting for daily-testify-uber-super-expert #13093

Opened by @github-actions

## Workflow to Update

**Workflow File**: `.github/workflows/daily-testify-uber-super-expert.md`
**Issue**: This workflow generates testing analysis discussions but doesn't include markdown style guidelines.

## Required Changes

Update the workflow prompt to include formatting guidelines:

## Report Formatting Guidelines

**CRITICAL**: Follow these formatting guidelines to create well-structured, readable reports:

### 1. Header Levels
**Use h3 (###) or lower for all headers in your report to maintain proper document hierarchy.**

The discussion title serves as h1, so all content headers should start at h3 (see the short sketch after this list):
- Use `###` for main sections (e.g., "### Test Coverage Analysis", "### Recommendations")
- Use `####` for subsections (e.g., "#### Unit Tests", "#### Integration Tests")
- Never use `##` (h2) or `#` (h1) in the report body
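
For illustration, a report body following this hierarchy might use a skeleton like the one below (the section names are only examples taken from the bullets above):

``````markdown
### Test Coverage Analysis

#### Unit Tests

#### Integration Tests

### Recommendations
``````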

### 2. Progressive Disclosure
**Wrap long sections in `<details><summary><b>Section Name</b></summary>` tags to improve readability and reduce scrolling.**

Use collapsible sections for:
- Complete test file listings
- Detailed per-package test coverage
- Full code examples and best practices
- Verbose anti-pattern analysis

Example:
``````markdown
<details>
<summary><b>Detailed Test Coverage by Package</b></summary>

### pkg/cli

**Coverage**: 85%
**Test Files**: 12
**Anti-patterns Found**: 2

[Detailed breakdown...]

### pkg/workflow

[Similar breakdown...]

</details>
``````

### 3. Report Structure Pattern

Your discussion should follow this structure for optimal readability (a skeleton is sketched after the list):

1. **Executive Summary** (always visible): Brief overview of test health, coverage, key findings
2. **Key Statistics** (always visible): Total tests, coverage percentage, anti-patterns found
3. **Detailed Analysis** (in `<details>` tags): Per-package breakdown, test file listings, code examples
4. **Recommendations** (always visible): Top 5-10 actionable suggestions for improving tests
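
A minimal skeleton of that structure (the emoji and section titles are illustrative, borrowed from the template later in this issue, not mandated):

``````markdown
### 🧪 Executive Summary

[2-3 paragraph overview of test health]

### 📊 Key Statistics

[total tests, coverage percentage, anti-patterns found]

<details>
<summary><b>Detailed Analysis</b></summary>

[per-package breakdown, test file listings, code examples]

</details>

### ✅ Recommendations

[top 5-10 actionable suggestions]
``````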

### Design Principles

Create reports that:

- **Build trust through clarity**: Most important metrics (coverage, critical issues) immediately visible
- **Exceed expectations**: Add helpful context like best practices, code examples, comparison to standards
- **Create delight**: Use progressive disclosure to reduce overwhelm from detailed analysis
- **Maintain consistency**: Follow the same patterns as other code quality workflows

#### Add Report Template

``````markdown
## Discussion Report Template

### 🧪 Test Coverage Analysis Summary

Brief 2-3 paragraph overview of the testing landscape: overall coverage, test distribution, critical gaps, and adherence to testify/Uber best practices.

### 📊 Key Metrics

- **Total Test Files**: [NUMBER]
- **Total Test Functions**: [NUMBER]
- **Overall Coverage**: [PERCENT]%
- **Packages with Tests**: [NUMBER] / [TOTAL]
- **Anti-Patterns Found**: [NUMBER]
- **Best Practices Violations**: [NUMBER]

### 🎯 Coverage Breakdown

| Package | Tests | Coverage | Status |
|---------|-------|----------|--------|
| pkg/cli | 25 | 85% | ✅ Good |
| pkg/workflow | 38 | 72% | ⚠️ Needs improvement |
| pkg/parser | 15 | 45% | ❌ Critical |

### 🚨 Critical Findings

[Always visible - highlight most important issues]

1. **Low coverage in pkg/parser (45%)**
   - Impact: High risk of parser bugs
   - Recommendation: Add comprehensive parser tests

2. **Missing table-driven tests in pkg/cli**
   - Impact: Less maintainable tests
   - Recommendation: Convert to table-driven format

<details>
<summary><b>Detailed Package Analysis</b></summary>

### pkg/cli (Coverage: 85%)

**Test Files**: 12
**Test Functions**: 25
**Status**: ✅ Good coverage, minor improvements needed

#### Strengths
- Good use of `require.*` for setup assertions
- Table-driven tests in most files
- Clear test names with descriptive scenarios

#### Areas for Improvement
- 3 files using `assert.NotNil(t, err)` instead of `assert.Error(t, err)`
- Missing test coverage for error paths in 2 functions

#### Example Best Practice
```go
// ✅ GOOD - Table-driven test with require/assert
func TestCompile(t *testing.T) {
    tests := []struct {
        name      string
        input     string
        expected  string
        shouldErr bool
    }{
        {"valid workflow", "test.md", "test.lock.yml", false},
        {"missing file", "missing.md", "", true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            result, err := Compile(tt.input)
            
            if tt.shouldErr {
                assert.Error(t, err, "Should return error")
            } else {
                require.NoError(t, err, "Should not return error")
                assert.Equal(t, tt.expected, result)
            }
        })
    }
}
```

### pkg/workflow (Coverage: 72%)

[Similar detailed analysis...]


### pkg/parser (Coverage: 45%)

**Test Files**: 6
**Test Functions**: 15
**Status**: ❌ Critical - needs significant test coverage

#### Critical Gaps

- No tests for error handling in `ParseFrontmatter`
- Missing tests for YAML validation
- No edge case tests for malformed input

#### Recommended Tests to Add

1. `TestParseFrontmatter_MalformedYAML`
2. `TestParseFrontmatter_MissingRequiredFields`
3. `TestParseFrontmatter_InvalidEngineValue`

[Continue for all packages...]

</details>

### Anti-Patterns & Best Practice Violations

#### Using `assert.NotNil` instead of `assert.Error`

**Count**: 12 occurrences
**Impact**: Less clear error assertions

**Examples**:

```go
// ❌ BAD
assert.NotNil(t, err)

// ✅ GOOD
assert.Error(t, err, "Should return error for invalid input")
```

**Files Affected**:

- `pkg/cli/compile_test.go:45`
- `pkg/workflow/validator_test.go:123`
- [... full list ...]

#### Missing Assertion Messages

**Count**: 28 occurrences
**Impact**: Harder to debug failing tests

**Example**:

```go
// ❌ BAD
assert.Equal(t, expected, actual)

// ✅ GOOD
assert.Equal(t, expected, actual, "Should parse frontmatter correctly")
```

[Continue for all anti-patterns...]

### Testify/Uber Best Practices Reference

#### Recommended Patterns

**1. Use `require.*` for Setup, `assert.*` for Validations**

```go
func TestMyFunction(t *testing.T) {
    // Setup - use require (stops test on failure)
    config := NewConfig()
    require.NotNil(t, config, "Config should be created")

    // Validate - use assert (continues checking)
    result, err := MyFunction(config)
    assert.NoError(t, err, "Should not error on valid config")
    assert.Equal(t, expected, result, "Should return expected result")
}
```

**2. Table-Driven Tests with `t.Run()`**

[Example...]

**3. Clear Assertion Messages**

[Example...]

[Additional best practices...]

### ✅ Recommendations

#### High Priority

1. **Increase coverage in pkg/parser to 70%+**
   - Add tests for error handling
   - Test edge cases and malformed input
   - Estimated effort: 4-6 hours

2. **Fix anti-pattern: `assert.NotNil` → `assert.Error`**
   - Replace 12 occurrences
   - Automated with find/replace
   - Estimated effort: 30 minutes

#### Medium Priority

1. **Add assertion messages to all assertions**
   - Makes debugging easier
   - Start with new tests
   - Estimated effort: 2 hours

2. **Convert remaining tests to table-driven**
   - Improves maintainability
   - Focus on pkg/cli first
   - Estimated effort: 3 hours

#### Low Priority

[Additional recommendations...]

### 📈 Progress Tracking

- Fix critical anti-patterns (High Priority #1-2)
- Increase coverage in low-coverage packages (High Priority)
- Add assertion messages to existing tests (Medium Priority)
- Convert to table-driven tests (Medium Priority)

*Report generated automatically by the Daily Testify Uber Super Expert workflow*
``````

## Example Reference

See daily-code-metrics for a similar code quality analysis workflow with good progressive disclosure.

## Agent Task

Update the workflow file `.github/workflows/daily-testify-uber-super-expert.md` to include the formatting guidelines above.
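
As a rough orientation only (not a prescribed diff), the edit amounts to appending the guidelines and the report template to the workflow's markdown prompt; the placeholder comments below stand in for the existing file contents, which this issue does not reproduce:

``````markdown
<!-- .github/workflows/daily-testify-uber-super-expert.md (excerpt) -->

<!-- ...existing frontmatter and testing-analysis instructions, unchanged... -->

## Report Formatting Guidelines

**CRITICAL**: Follow these formatting guidelines to create well-structured, readable reports:

### 1. Header Levels
...

### 2. Progressive Disclosure
...

### 3. Report Structure Pattern
...

### Design Principles
...

## Discussion Report Template
...
``````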

*AI generated by Workflow Normalizer. Expires on Feb 8, 2026, 12:29 PM UTC.*
