Description
Overview
The test file pkg/cli/logs_flatten_test.go has been selected for quality improvement by the Testify Uber Super Expert. This 481-line test file provides solid coverage of artifact flattening functionality but has opportunities to adopt testify best practices more consistently. While the file already uses table-driven tests effectively, it relies heavily on manual error checking with t.Errorf() and t.Fatalf() instead of testify assertions. Converting to testify will improve error messages, reduce code verbosity, and align with the project's testing standards defined in specs/testing.md.
Current State:
- Test File: pkg/cli/logs_flatten_test.go (481 lines)
- Source File: pkg/cli/logs_download.go
- Test Functions: 5 comprehensive test functions
- Coverage: 3 of 8 functions in source file (37.5% function coverage)
- Pattern: Already uses excellent table-driven test structure
Test Quality Analysis
Strengths ✅
- Excellent table-driven test structure - The file already uses table-driven tests with descriptive test case names and comprehensive coverage of edge cases
- Good test organization - Tests are logically grouped by functionality (single-file, unified artifact, audit compatibility)
- Realistic test scenarios - Tests simulate actual artifact structures as they would appear in production (e.g., agent-artifacts/tmp/gh-aw/ paths)
Areas for Improvement 🎯
1. Testify Assertions
Current Issues:
The test file uses manual error checking throughout instead of testify assertions. This makes tests more verbose and produces less helpful error messages.
Current Pattern (used 24+ times):
// ❌ CURRENT (manual error checking)
if err := tt.setup(tmpDir); err != nil {
t.Fatalf("Setup failed: %v", err)
}
if err := flattenSingleFileArtifacts(tmpDir, true); err != nil {
t.Fatalf("flattenSingleFileArtifacts failed: %v", err)
}
if _, err := os.Stat(path); os.IsNotExist(err) {
t.Errorf("Expected file %s does not exist", file)
}
if err == nil {
t.Error("Expected error for non-existent directory, got nil")
}
Recommended Pattern:
// ✅ IMPROVED (testify assertions)
err := tt.setup(tmpDir)
require.NoError(t, err, "setup should succeed")
err = flattenSingleFileArtifacts(tmpDir, true)
require.NoError(t, err, "flattening should succeed")
_, err = os.Stat(path)
assert.False(t, os.IsNotExist(err), "expected file %s should exist", file)
err = flattenSingleFileArtifacts("/nonexistent/directory", false)
assert.Error(t, err, "should return error for non-existent directory")Why this matters:
- Testify provides clearer failure messages with actual vs expected values
- require.* clearly indicates critical setup steps that should stop the test
- assert.* clearly indicates validations that should continue checking (see the short sketch below)
- Follows the pattern used throughout the codebase (see specs/testing.md)
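A minimal sketch of the stop-vs-continue semantics inside a single subtest (the directory and file names here are illustrative, not taken from the actual test file):

// require aborts the subtest on failure: if setup fails, nothing below runs.
// assert records the failure and continues, so one run reports every broken check.
t.Run("flattens nested artifact", func(t *testing.T) {
	tmpDir := t.TempDir() // standard library temp-dir helper

	require.NoError(t, os.MkdirAll(filepath.Join(tmpDir, "agent-artifacts"), 0755), "setup should succeed")

	_, err := os.Stat(filepath.Join(tmpDir, "aw_info.json"))
	assert.NoError(t, err, "aw_info.json should exist after flattening")
	_, err = os.Stat(filepath.Join(tmpDir, "safe_output.jsonl"))
	assert.NoError(t, err, "safe_output.jsonl should exist after flattening")
})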
Specific Conversion Examples:
Lines 119-125 (setup and execution):
// ❌ CURRENT
if err := tt.setup(tmpDir); err != nil {
t.Fatalf("Setup failed: %v", err)
}
if err := flattenSingleFileArtifacts(tmpDir, true); err != nil {
t.Fatalf("flattenSingleFileArtifacts failed: %v", err)
}
// ✅ IMPROVED
err := tt.setup(tmpDir)
require.NoError(t, err, "setup should create test structure")
err = flattenSingleFileArtifacts(tmpDir, true)
require.NoError(t, err, "flattening should complete without errors")Lines 130-133 (file existence checks):
// ❌ CURRENT
path := filepath.Join(tmpDir, file)
if _, err := os.Stat(path); os.IsNotExist(err) {
t.Errorf("Expected file %s does not exist", file)
}
// ✅ IMPROVED
path := filepath.Join(tmpDir, file)
_, err := os.Stat(path)
assert.NoError(t, err, "expected file %s should exist", file)Lines 138-143 (directory checks):
// ❌ CURRENT
path := filepath.Join(tmpDir, dir)
info, err := os.Stat(path)
if os.IsNotExist(err) {
t.Errorf("Expected directory %s does not exist", dir)
} else if err == nil && !info.IsDir() {
t.Errorf("Expected %s to be a directory", dir)
}
// ✅ IMPROVED
path := filepath.Join(tmpDir, dir)
info, err := os.Stat(path)
require.NoError(t, err, "expected directory %s should exist", dir)
assert.True(t, info.IsDir(), "%s should be a directory", dir)
Lines 149-152 (non-existence checks):
// ❌ CURRENT
path := filepath.Join(tmpDir, file)
if _, err := os.Stat(path); err == nil {
t.Errorf("Unexpected file %s exists", file)
}
// ✅ IMPROVED
path := filepath.Join(tmpDir, file)
_, err := os.Stat(path)
assert.True(t, os.IsNotExist(err), "unexpected file %s should not exist", file)
Lines 167-170 (error expectations):
// ❌ CURRENT
err := flattenSingleFileArtifacts("/nonexistent/directory", false)
if err == nil {
t.Error("Expected error for non-existent directory, got nil")
}
// ✅ IMPROVED
err := flattenSingleFileArtifacts("/nonexistent/directory", false)
assert.Error(t, err, "should return error for non-existent directory")2. Test Coverage Gaps
Missing Tests:
The source file pkg/cli/logs_download.go has 8 functions, but only 3 are tested directly:
✅ Tested:
- flattenSingleFileArtifacts - Comprehensive coverage
- flattenUnifiedArtifact - Good coverage
- flattenAgentOutputsArtifact - Indirectly tested
❌ Not Tested:
- downloadWorkflowRunLogs - Critical function for downloading logs
- unzipFile - File extraction logic
- extractZipFile - Individual file extraction
- listArtifacts - Artifact discovery
- downloadRunArtifacts - Artifact download orchestration
Priority Functions to Test:
- downloadWorkflowRunLogs(runID int64, outputDir string, verbose bool) error
  - Why it's important: Main entry point for log downloading; orchestrates multiple operations
  - Recommended test cases:
    - Valid run ID downloads successfully
    - Invalid run ID returns appropriate error
    - Output directory creation
    - Verbose mode output formatting
- unzipFile(zipPath, destDir string, verbose bool) error
  - Why it's important: Handles archive extraction with error handling
  - Recommended test cases:
    - Valid zip file extracts successfully
    - Invalid/corrupted zip file returns error
    - Non-existent zip file returns error
    - Nested directory structures extract correctly
    - File permissions preserved
- listArtifacts(outputDir string) ([]string, error)
  - Why it's important: Artifact discovery affects audit command functionality
  - Recommended test cases:
    - Lists artifacts in directory correctly
    - Empty directory returns empty list
    - Non-existent directory returns error
    - Mixed artifacts and files handled correctly
Recommended Test Skeleton:
func TestDownloadWorkflowRunLogs(t *testing.T) {
// Note: This will require mocking gh CLI or marking as integration test
tests := []struct {
name string
runID int64
shouldErr bool
}{
{name: "invalid run ID", runID: -1, shouldErr: true},
{name: "zero run ID", runID: 0, shouldErr: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tmpDir := testutil.TempDir(t, "test-download-*")
err := downloadWorkflowRunLogs(tt.runID, tmpDir, false)
if tt.shouldErr {
assert.Error(t, err, "should return error for invalid run ID")
} else {
require.NoError(t, err, "should download logs successfully")
}
})
}
}
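As the skeleton's comment notes, downloadWorkflowRunLogs depends on the gh CLI, so it likely needs either a mocked CLI or an integration-test guard. One conventional option, sketched here under that assumption (the test name and skip message are illustrative), is the standard library's -short flag:

func TestDownloadWorkflowRunLogs_Integration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test that depends on the gh CLI and network access")
	}
	// ... exercise downloadWorkflowRunLogs against a known run ID here
}

With this guard, fast runs can use go test -short ./pkg/cli while a separate integration run exercises the real download path.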
func TestUnzipFile(t *testing.T) {
tests := []struct {
name string
setup func(string) (string, error) // Returns zip path
shouldErr bool
}{
{
name: "valid zip file extracts",
setup: func(dir string) (string, error) {
// Create a test zip file
zipPath := filepath.Join(dir, "test.zip")
// ... create zip with test content
return zipPath, nil
},
shouldErr: false,
},
{
name: "non-existent zip returns error",
setup: func(dir string) (string, error) {
return filepath.Join(dir, "nonexistent.zip"), nil
},
shouldErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tmpDir := testutil.TempDir(t, "test-unzip-*")
zipPath, err := tt.setup(tmpDir)
require.NoError(t, err, "test setup should succeed")
destDir := filepath.Join(tmpDir, "extracted")
err = unzipFile(zipPath, destDir, false)
if tt.shouldErr {
assert.Error(t, err, "should return error for invalid zip")
} else {
require.NoError(t, err, "should extract zip successfully")
// Verify extracted contents
}
})
}
}
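For the "valid zip file extracts" case, the "// ... create zip with test content" placeholder could be filled by a small helper built on the standard library's archive/zip package. A minimal sketch (the helper name, entry path, and content are illustrative):

// buildTestZip writes a single-entry archive into dir and returns its path.
// Hypothetical helper for the setup above; assumes "archive/zip" is imported.
func buildTestZip(dir string) (string, error) {
	zipPath := filepath.Join(dir, "test.zip")
	f, err := os.Create(zipPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	zw := zip.NewWriter(f)
	w, err := zw.Create("logs/step.txt") // path of the entry inside the archive
	if err != nil {
		return "", err
	}
	if _, err := w.Write([]byte("test content")); err != nil {
		return "", err
	}
	return zipPath, zw.Close() // Close flushes the archive's central directory
}

The "valid zip" table entry can then call buildTestZip(dir) from its setup function, and the post-extraction check can assert that logs/step.txt exists under destDir.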
func TestListArtifacts(t *testing.T) {
tests := []struct {
name string
setup func(string) error
expectedCount int
shouldErr bool
}{
{
name: "lists artifacts correctly",
setup: func(dir string) error {
return os.WriteFile(filepath.Join(dir, "test.txt"), []byte("test"), 0644)
},
expectedCount: 1,
shouldErr: false,
},
{
name: "empty directory returns empty list",
setup: func(dir string) error { return nil },
expectedCount: 0,
shouldErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tmpDir := testutil.TempDir(t, "test-list-*")
err := tt.setup(tmpDir)
require.NoError(t, err, "test setup should succeed")
artifacts, err := listArtifacts(tmpDir)
if tt.shouldErr {
assert.Error(t, err, "should return error")
} else {
require.NoError(t, err, "should list artifacts successfully")
assert.Len(t, artifacts, tt.expectedCount, "should return expected number of artifacts")
}
})
}
}
3. Helper Function Opportunities
Current Issues:
Several validation patterns are repeated across test cases:
// Repeated pattern 1: File existence check (lines 128-133, 234-244, 443-450)
for _, file := range tt.expectedFiles {
path := filepath.Join(tmpDir, file)
if _, err := os.Stat(path); os.IsNotExist(err) {
t.Errorf("Expected file %s does not exist", file)
}
}
// Repeated pattern 2: Directory existence check (lines 135-144, 247-253, 453-462)
for _, dir := range tt.expectedDirs {
path := filepath.Join(tmpDir, dir)
info, err := os.Stat(path)
if os.IsNotExist(err) {
t.Errorf("Expected directory %s does not exist", dir)
} else if err == nil && !info.IsDir() {
t.Errorf("Expected %s to be a directory", dir)
}
}
Recommended Helper Functions:
// assertFilesExist verifies that all expected files exist and are regular files
func assertFilesExist(t *testing.T, baseDir string, files []string) {
t.Helper()
for _, file := range files {
path := filepath.Join(baseDir, file)
info, err := os.Stat(path)
assert.NoError(t, err, "expected file %s should exist", file)
if err == nil {
assert.False(t, info.IsDir(), "expected %s to be a file, not directory", file)
}
}
}
// assertDirsExist verifies that all expected directories exist
func assertDirsExist(t *testing.T, baseDir string, dirs []string) {
t.Helper()
for _, dir := range dirs {
path := filepath.Join(baseDir, dir)
info, err := os.Stat(path)
assert.NoError(t, err, "expected directory %s should exist", dir)
if err == nil {
assert.True(t, info.IsDir(), "expected %s to be a directory", dir)
}
}
}
// assertPathsNotExist verifies that paths (files or dirs) don't exist
func assertPathsNotExist(t *testing.T, baseDir string, paths []string) {
t.Helper()
for _, p := range paths {
path := filepath.Join(baseDir, p)
_, err := os.Stat(path)
assert.True(t, os.IsNotExist(err), "unexpected path %s should not exist", p)
}
}
// assertFileContent verifies file exists and contains expected content
func assertFileContent(t *testing.T, path string, expectedContent string) {
t.Helper()
content, err := os.ReadFile(path)
require.NoError(t, err, "should be able to read file %s", path)
assert.Contains(t, string(content), expectedContent, "file content should contain expected text")
}
Usage Example:
// ❌ CURRENT (lines 128-160)
for _, file := range tt.expectedFiles {
path := filepath.Join(tmpDir, file)
if _, err := os.Stat(path); os.IsNotExist(err) {
t.Errorf("Expected file %s does not exist", file)
}
}
for _, dir := range tt.expectedDirs {
path := filepath.Join(tmpDir, dir)
info, err := os.Stat(path)
if os.IsNotExist(err) {
t.Errorf("Expected directory %s does not exist", dir)
} else if err == nil && !info.IsDir() {
t.Errorf("Expected %s to be a directory", dir)
}
}
for _, file := range tt.unexpectedFiles {
path := filepath.Join(tmpDir, file)
if _, err := os.Stat(path); err == nil {
t.Errorf("Unexpected file %s exists", file)
}
}
for _, dir := range tt.unexpectedDirs {
path := filepath.Join(tmpDir, dir)
if _, err := os.Stat(path); err == nil {
t.Errorf("Unexpected directory %s exists", dir)
}
}
// ✅ IMPROVED (4 lines instead of 32)
assertFilesExist(t, tmpDir, tt.expectedFiles)
assertDirsExist(t, tmpDir, tt.expectedDirs)
assertPathsNotExist(t, tmpDir, tt.unexpectedFiles)
assertPathsNotExist(t, tmpDir, tt.unexpectedDirs)
Benefits:
- Reduces code duplication (32 lines → 4 lines per test case)
- Consistent validation logic across all tests
- t.Helper() provides accurate line numbers in failure messages
- Easier to add new validation logic in one place
4. Test Documentation
Current Issues:
- Some complex test scenarios lack inline comments explaining the "why"
- Setup functions don't document the artifact structure they create
Recommended Improvements:
// ✅ IMPROVED - Document complex artifact structures
{
name: "unified artifact with nested structure gets flattened",
setup: func(dir string) error {
// Create structure: agent-artifacts/tmp/gh-aw/...
// This simulates the old artifact format from gh CLI v2.40.0
// where all files were nested under tmp/gh-aw/
nestedPath := filepath.Join(dir, "agent-artifacts", "tmp", "gh-aw")
if err := os.MkdirAll(nestedPath, 0755); err != nil {
return err
}
// Create audit files (aw_info.json, safe_output.jsonl)
if err := os.WriteFile(filepath.Join(nestedPath, "aw_info.json"), []byte("test"), 0644); err != nil {
return err
}
// Create subdirectories (aw-prompts/, mcp-logs/)
// These should maintain their structure after flattening
promptDir := filepath.Join(nestedPath, "aw-prompts")
// ...
},
// ...
}
Implementation Guidelines
Priority Order
- High Priority: Convert all manual error checks to testify assertions (biggest impact on code quality)
- High Priority: Create helper functions to reduce duplication (improves maintainability)
- Medium Priority: Add tests for unzipFile and listArtifacts (improves coverage)
- Medium Priority: Add inline documentation for complex test scenarios
- Low Priority: Add tests for downloadWorkflowRunLogs and downloadRunArtifacts (may require mocking the gh CLI)
Best Practices from specs/testing.md
- ✅ Use require.* for critical setup that should stop the test on failure
- ✅ Use assert.* for test validations that should continue checking
- ✅ Write table-driven tests with t.Run() and descriptive names (already doing well!)
- ✅ Include helpful assertion messages for debugging
- ✅ Use t.Helper() in helper functions for accurate error reporting
Import Required
import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
Conversion Strategy
Phase 1: Convert Critical Setup (require.NoError)
- Lines 119, 123: Setup and function execution errors
- Lines 182-183, 284-285, 296-297: Setup failures
- Lines 434-435, 438-439: Setup and execution
Phase 2: Convert Existence Checks (assert.NoError/assert.True)
- Lines 130-133, 234-244, 443-450: File existence
- Lines 138-144, 247-253, 453-462: Directory existence
- Lines 149-152, 155-160, 464-478: Non-existence checks
Phase 3: Add Helper Functions
- Create 4 helper functions at top of file
- Replace repeated validation patterns in all test functions
Phase 4: Convert Content Validation
- Lines 237-242, 331-339: File content checks
- Convert to assert.Contains or assert.Equal, as sketched below
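For example, a sketch of that conversion (both snippets are illustrative, not quoted from the test file; the actual file names and expected text come from the existing checks at those lines):

// ❌ CURRENT (manual content check)
content, err := os.ReadFile(filepath.Join(tmpDir, "aw_info.json"))
if err != nil {
	t.Fatalf("Failed to read file: %v", err)
}
if !strings.Contains(string(content), "test") {
	t.Errorf("File content missing expected text")
}
// ✅ IMPROVED (testify)
content, err := os.ReadFile(filepath.Join(tmpDir, "aw_info.json"))
require.NoError(t, err, "should be able to read aw_info.json")
assert.Contains(t, string(content), "test", "content should include expected text")

Alternatively, the assertFileContent helper proposed above collapses both steps into a single call.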
Testing Commands
# Run tests for this file
go test -v ./pkg/cli -run "TestFlatten"
# Run with coverage
go test -cover ./pkg/cli -run "TestFlatten"
# Run all CLI package tests
make test-unit
# Verify formatting before commit
make fmt
Acceptance Criteria
- All manual error checks (t.Fatalf, t.Errorf, if err != nil) replaced with testify assertions
- Helper functions created for repeated validation patterns
- require.NoError used for setup/execution errors that should stop the test
- assert.NoError, assert.True, assert.Error used for validation checks
- All assertions include helpful messages
- Tests for unzipFile and listArtifacts functions added
- Test file reduced by ~50-100 lines due to helper function usage
- All tests pass: go test -v ./pkg/cli -run "TestFlatten"
- Code follows patterns in specs/testing.md
- No regressions: make test-unit passes
Expected Impact
Before:
- 481 lines with extensive manual error checking
- Repeated validation code (32 lines per test case for file/dir checks)
- Less clear distinction between setup failures and validation failures
- Generic error messages
After:
- ~380-400 lines (20% reduction due to helper functions)
- 4 lines per test case for file/dir validation
- Clear require.* vs assert.* semantics
- Descriptive error messages from testify
- Better test coverage with new tests for untested functions
Estimated Effort: Medium (3-4 hours)
- Phase 1 (critical setup): 30 minutes
- Phase 2 (existence checks): 1 hour
- Phase 3 (helper functions): 1 hour
- Phase 4 (new tests): 1-2 hours
Additional Context
- Repository Testing Guidelines: See specs/testing.md for comprehensive testing patterns
- Example Tests: Look at pkg/workflow/*_test.go for examples of testify usage
- Testify Documentation: https://github.com/stretchr/testify
- Helper Function Pattern: See pkg/cli/audit_command_test.go for helper function examples
Priority: Medium
Effort: Medium (3-4 hours)
Expected Impact: Improved test quality, reduced code duplication, better error messages, and alignment with project standards
Files Involved:
- Test file: pkg/cli/logs_flatten_test.go (481 lines)
- Source file: pkg/cli/logs_download.go (8 functions, 3 tested)
AI generated by Daily Testify Uber Super Expert