From 48dd7c382a49c418bc2985fd8cd55726ff014058 Mon Sep 17 00:00:00 2001
From: "claude[bot]" <41898282+claude[bot]@users.noreply.github.com>
Date: Sun, 2 Nov 2025 02:21:16 +0000
Subject: [PATCH] docs: Add comprehensive code review feedback guidance to
 AGENTS.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add new "Working with Code Review Feedback" section with guidance for
both Claude (AI coding agent) and CodeRabbit (AI code reviewer) to
improve collaboration quality and reduce incomplete implementations.

Key additions for Claude:
- Triage feedback into critical/important/optional categories
- Evaluate suggestions against existing codebase patterns
- Consider context and scope before implementing changes
- Verify completion before claiming work is ready
- Document deferrals with clear rationale

Key additions for CodeRabbit:
- Project-specific patterns (async-first, ManagedEntry, test mixins)
- Prioritization guidance for categorizing feedback by severity
- Context awareness for different code types (production vs debug)
- Pattern consistency checks before suggesting changes

Enhanced "Radical Honesty" section with specific guidance on documenting
unresolved items, acknowledging uncertainty, and sharing trade-offs.

Added common feedback categories section covering clock usage, connection
ownership, async patterns, test isolation, and type safety.

Resolves #198

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: William Easton
---
 AGENTS.md | 147 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 145 insertions(+), 2 deletions(-)

diff --git a/AGENTS.md b/AGENTS.md
index 31ff4509..d703cfc7 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -112,6 +112,139 @@ to the async package.
 
 You will need to include the generated code in your pull request. Nobody will
 generate it for you. This also means pull requests will contain two copies of
 your changes, this is intentional!
 
+## Working with Code Review Feedback
+
+This project uses AI-assisted code review (CodeRabbit) and development (Claude).
+This section provides guidance for both AI agents and human developers on how
+to handle automated code review feedback effectively.
+
+### For AI Coding Agents (Claude)
+
+When responding to CodeRabbit feedback on pull requests:
+
+#### 1. Triage Before Acting
+
+Always categorize feedback before implementation:
+
+- **Critical**: Security issues, data corruption, resource leaks, production bugs
+- **Important**: Type safety, error handling, performance issues, test failures
+- **Optional**: Style preferences, nitpicks, suggestions that conflict with
+  existing patterns
+
+Document your triage in the response to the user.
+
+#### 2. Evaluate Against Existing Patterns
+
+Before accepting suggestions:
+
+1. Search the codebase for similar patterns
+2. Check if other stores/wrappers handle this differently
+3. Preserve consistency over isolated "best practices"
+4. If uncertain, ask the repository owner
+
+**Example**: Test patterns should match existing `ContextManagerStoreTestMixin`
+usage rather than one-off suggestions for individual test files.
+
+#### 3. Consider Context and Scope
+
+Not all code has the same requirements:
+
+- **Production stores**: Prioritize correctness, performance, security
+- **Debug/development tools**: May defer async optimization and extensive validation
+- **Test code**: Clarity and coverage over production patterns
+- **Examples**: Simplicity and readability over comprehensive error handling
+
+#### 4. Verify Completion
+
+Before claiming work is "ready to merge" or "complete":
+
+- [ ] All critical issues addressed or documented as out-of-scope
+- [ ] All important issues addressed or explicitly deferred with rationale
+- [ ] No unrelated changes from bad merges
+- [ ] `make precommit` passes (lint, typecheck, codegen)
+- [ ] Tests pass
+
+Never claim completion with unresolved critical or important issues.
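
As a concrete illustration of the test-pattern point above, a store test built
on a shared context-manager mixin might look like the following. This is a
minimal sketch with hypothetical names and a toy in-memory store; the project's
real `ContextManagerStoreTestMixin` will differ.

```python
# Sketch: centralizing store setup/teardown in a shared mixin instead of
# copy-pasting one-off fixtures into each test file. All names here are
# illustrative, not the project's actual test API.
from contextlib import contextmanager


class MemoryStore:
    """Toy in-memory store used only for this sketch."""

    def __init__(self):
        self._data = {}
        self.closed = False

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def close(self):
        self.closed = True


class ContextManagerStoreTestMixin:
    """Shared behavior: every test gets a context-managed store."""

    def make_store(self):  # overridden by concrete test classes
        raise NotImplementedError

    @contextmanager
    def store(self):
        store = self.make_store()
        try:
            yield store
        finally:
            store.close()  # teardown lives in one place, not per test file


class TestMemoryStore(ContextManagerStoreTestMixin):
    def make_store(self):
        return MemoryStore()

    def test_roundtrip(self):
        with self.store() as s:
            s.put("k", "v")
            assert s.get("k") == "v"
```

A reviewer suggestion that adds bespoke setup/teardown to a single test file
would cut against this kind of shared pattern, which is why consistency checks
come before implementation.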
+
+#### 5. Document Deferrals
+
+If feedback is not implemented, explain why:
+
+- Conflicts with established pattern (cite similar code)
+- Out of scope for this component's purpose
+- Trade-off not worth the complexity
+- Requires design decision from repository owner
+
+### For AI Code Reviewers (CodeRabbit)
+
+When reviewing pull requests, please consider:
+
+#### Project-Specific Patterns
+
+- **Async-first architecture**: All changes originate in `key-value/key-value-aio/`
+  and are code-generated into `key-value/key-value-sync/`. PRs will naturally
+  contain duplicate changes - this is intentional.
+- **Test patterns**: The project uses `ContextManagerStoreTestMixin` for store
+  tests. Look for consistency with existing test implementations.
+- **ManagedEntry wrapper**: Values are never stored directly but are wrapped in
+  `ManagedEntry` objects. This is by design, not a mistake.
+
+#### Prioritization Guidance
+
+When providing feedback, please categorize suggestions by severity:
+
+- **Critical**: Issues that could cause data loss, security vulnerabilities,
+  resource leaks, or production failures
+- **Important**: Type safety issues, missing error handling, performance problems,
+  test coverage gaps
+- **Minor/Optional**: Style preferences, minor optimizations, suggestions that
+  may conflict with existing patterns
+
+This helps human developers and AI agents prioritize their work effectively.
+
+#### Context Awareness
+
+Consider the purpose and scope of the code being reviewed:
+
+- **Production stores** (DynamoDB, Redis, PostgreSQL, etc.): Apply strict
+  standards for correctness, performance, security, and resource management
+- **Debug/development tools** (FileTreeStore, LoggingWrapper): More lenient on
+  performance optimizations; prioritize clarity and simplicity
+- **Test code**: Focus on clarity, coverage, and maintainability over production
+  patterns
+- **Example code**: Prioritize readability and educational value over
+  comprehensive error handling
+
+#### Pattern Consistency
+
+Before suggesting changes:
+
+1. Check if similar patterns exist elsewhere in the codebase
+2. If the pattern exists in multiple stores/wrappers, it's likely intentional
+3. Suggest consistency improvements across all implementations rather than
+   one-off changes
+
+### Common Feedback Categories
+
+**Clock usage**: Prefer monotonic clocks (`time.monotonic()`) for measuring
+intervals, especially in wrappers like rate limiters and circuit breakers.
+Wall-clock time (`time.time()`) is vulnerable to system clock changes.
+
+**Connection ownership**: Track whether stores own their client connections to
+avoid closing externally-provided resources. Use flags like `_owns_client` to
+distinguish between internally-created and externally-provided connections.
+
+**Async patterns**: Production stores should use genuinely asynchronous I/O
+rather than blocking calls hidden behind `run_in_executor()` or polling loops
+built on `asyncio.sleep()`. Debug-only tools may use simpler patterns for
+clarity.
+
+**Test isolation**: Ensure tests clean up resources (connections, temp files,
+etc.) and don't interfere with each other. Use context managers and proper
+teardown.
+
+**Type safety**: This project uses strict type checking (Basedpyright). Address
+type annotation issues to maintain type safety guarantees.
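
The clock-usage guidance above can be sketched as follows. `CooldownGate` is a
hypothetical example class, not project code; it shows why interval checks in
rate-limiter-style wrappers should use `time.monotonic()`.

```python
# Sketch: measuring intervals with a monotonic clock. time.monotonic()
# never jumps backwards, so NTP corrections, DST shifts, or manual clock
# changes cannot corrupt the elapsed-time calculation the way they would
# with time.time() deltas. `CooldownGate` is illustrative, not real code.
import time


class CooldownGate:
    """Allows an action at most once per `interval` seconds."""

    def __init__(self, interval: float):
        self._interval = interval
        self._last: float | None = None

    def allow(self) -> bool:
        now = time.monotonic()  # monotonic: safe for elapsed-time math
        if self._last is None or now - self._last >= self._interval:
            self._last = now
            return True
        return False
```

The same substitution applies to circuit-breaker reset timers and retry
backoff: wall-clock time is for timestamps shown to humans, monotonic time is
for durations.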
+
 ## Make Commands Reference
 
 | Command | Purpose |
@@ -242,5 +375,15 @@ make bump-version-dry VERSION=1.2.3 # Dry run
 
 ## Radical Honesty
 
-Agents should be honest! Properly document any problems encountered, share
-feedback, and be transparent about your AI-assisted work.
+Agents should be honest! When working with code review feedback:
+
+- **Document unresolved items**: List any feedback that wasn't addressed and why
+- **Acknowledge uncertainty**: If unclear whether to implement a suggestion, ask
+- **Report problems**: Document issues encountered during implementation
+- **Share trade-offs**: Explain reasoning for rejecting or deferring feedback
+- **Admit limitations**: If unable to verify a fix works correctly, say so
+
+Never claim work is complete if you have doubts about correctness or completeness.
+
+Properly document any problems encountered, share feedback, and be transparent
+about your AI-assisted work.