…to the async package. You will need to include the generated code in your pull
request. Nobody will generate it for you. This also means pull requests will contain
two copies of your changes; this is intentional!

## Working with Code Review Feedback

This project uses AI-assisted code review (CodeRabbit) and development (Claude).
This section provides guidance for both AI agents and human developers on how
to handle automated code review feedback effectively.

### For AI Coding Agents (Claude)

When responding to CodeRabbit feedback on pull requests:

#### 1. Triage Before Acting

Always categorize feedback before implementation:

- **Critical**: Security issues, data corruption, resource leaks, production bugs
- **Important**: Type safety, error handling, performance issues, test failures
- **Optional**: Style preferences, nitpicks, suggestions that conflict with
existing patterns

Document your triage in the response to the user.

#### 2. Evaluate Against Existing Patterns

Before accepting suggestions:

1. Search the codebase for similar patterns
2. Check if other stores/wrappers handle this differently
3. Preserve consistency over isolated "best practices"
4. If uncertain, ask the repository owner

**Example**: Test patterns should match existing `ContextManagerStoreTestMixin`
usage rather than one-off suggestions for individual test files.
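
For illustration only, a sketch of what mixin-based reuse might look like; the
import paths and the `store` fixture name are assumptions, not this project's
actual API:

```python
import pytest

# Hypothetical paths: the real mixin and store live in this repository's
# test suite and packages; both imports here are assumptions.
from tests.stores.base import ContextManagerStoreTestMixin
from key_value.aio.stores.memory import MemoryStore


class TestMemoryStore(ContextManagerStoreTestMixin):
    """Inherit the shared store test cases instead of redefining them."""

    @pytest.fixture
    def store(self) -> MemoryStore:
        # The shared tests consume whatever store this fixture provides.
        return MemoryStore()
```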

#### 3. Consider Context and Scope

Not all code has the same requirements:

- **Production stores**: Prioritize correctness, performance, security
- **Debug/development tools**: Can defer async optimization and extensive validation
- **Test code**: Clarity and coverage over production patterns
- **Examples**: Simplicity and readability over comprehensive error handling

#### 4. Verify Completion

Before claiming work is "ready to merge" or "complete":

- [ ] All critical issues addressed or documented as out-of-scope
- [ ] All important issues addressed or explicitly deferred with rationale
- [ ] No unrelated changes from bad merges
- [ ] `make precommit` passes (lint, typecheck, codegen)
- [ ] Tests pass

Never claim completion with unresolved critical or important issues.

#### 5. Document Deferrals

If feedback is not implemented, explain why:

- Conflicts with established pattern (cite similar code)
- Out of scope for this component's purpose
- Trade-off not worth the complexity
- Requires design decision from repository owner

### For AI Code Reviewers (CodeRabbit)

When reviewing pull requests, please consider:

#### Project-Specific Patterns

- **Async-first architecture**: All changes originate in `key-value/key-value-aio/`
and are code-generated into `key-value/key-value-sync/`. PRs will naturally
contain duplicate changes; this is intentional.
- **Test patterns**: The project uses `ContextManagerStoreTestMixin` for store
tests. Look for consistency with existing test implementations.
- **ManagedEntry wrapper**: Values are never stored directly but are wrapped in
`ManagedEntry` objects. This is by design, not a mistake.
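
As a rough illustration of that wrapper idea (the class and field names below
are invented for the sketch, not the real `ManagedEntry` API):

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class ManagedEntrySketch:
    """Invented stand-in for ManagedEntry: stores persist this envelope,
    never the raw value, so metadata like TTL travels with the data."""

    value: dict[str, object]
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl: timedelta | None = None

    @property
    def is_expired(self) -> bool:
        # Entries without a TTL never expire.
        if self.ttl is None:
            return False
        return datetime.now(timezone.utc) >= self.created_at + self.ttl
```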

#### Prioritization Guidance

When providing feedback, please categorize suggestions by severity:

- **Critical**: Issues that could cause data loss, security vulnerabilities,
resource leaks, or production failures
- **Important**: Type safety issues, missing error handling, performance problems,
test coverage gaps
- **Minor/Optional**: Style preferences, minor optimizations, suggestions that
may conflict with existing patterns

This helps human developers and AI agents prioritize their work effectively.

#### Context Awareness

Consider the purpose and scope of the code being reviewed:

- **Production stores** (DynamoDB, Redis, PostgreSQL, etc.): Apply strict
standards for correctness, performance, security, and resource management
- **Debug/development tools** (FileTreeStore, LoggingWrapper): Be more lenient about
performance optimizations; prioritize clarity and simplicity
- **Test code**: Focus on clarity, coverage, and maintainability over production
patterns
- **Example code**: Prioritize readability and educational value over
comprehensive error handling

#### Pattern Consistency

Before suggesting changes:

1. Check if similar patterns exist elsewhere in the codebase
2. If the pattern exists in multiple stores/wrappers, it's likely intentional
3. Suggest consistency improvements across all implementations rather than
one-off changes

### Common Feedback Categories

**Clock usage**: Prefer monotonic clocks (`time.monotonic()`) for intervals,
especially in wrappers like rate limiters and circuit breakers. Wall-clock time
(`time.time()`) is vulnerable to system clock changes.
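
A minimal sketch of why this matters, using an invented rate-limiter fragment:

```python
import time


class IntervalGate:
    """Invented example: allow at most one call per interval."""

    def __init__(self, interval_seconds: float) -> None:
        self._interval = interval_seconds
        self._last_allowed = float("-inf")  # permit the first call

    def allow(self) -> bool:
        # time.monotonic() never jumps backwards, so an NTP correction or a
        # manual clock change cannot re-open or wedge shut the gate; the same
        # logic written with time.time() would misbehave on clock changes.
        now = time.monotonic()
        if now - self._last_allowed >= self._interval:
            self._last_allowed = now
            return True
        return False
```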

**Connection ownership**: Track whether stores own their client connections to
avoid closing externally-provided resources. Use flags like `_owns_client` to
distinguish between internally-created and externally-provided connections.
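
Sketched with an invented store class (the `_owns_client` name comes from the
guideline above; everything else is illustrative):

```python
from __future__ import annotations

from typing import Any


class ClientBackedStore:
    """Invented example of connection-ownership tracking."""

    def __init__(self, client: Any | None = None) -> None:
        if client is None:
            client = self._create_default_client()
            self._owns_client = True   # we created it, so we must close it
        else:
            self._owns_client = False  # the caller manages its lifecycle
        self._client = client

    def _create_default_client(self) -> Any:
        raise NotImplementedError  # construct the store's own connection here

    async def close(self) -> None:
        # Only close connections we created; never close a borrowed client.
        if self._owns_client:
            await self._client.close()
```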

**Async patterns**: Production stores should use actual async I/O (not
`asyncio.sleep()` or `run_in_executor()`). Debug-only tools may use simpler
patterns for clarity.
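
A hedged sketch of the distinction, with `Protocol` stand-ins for real drivers:

```python
from __future__ import annotations

import asyncio
from typing import Protocol


class SyncClient(Protocol):
    def get(self, key: str) -> bytes | None: ...


class AsyncClient(Protocol):
    async def get(self, key: str) -> bytes | None: ...


# Tolerable in debug tools: a blocking driver pushed onto a worker thread.
async def get_via_executor(client: SyncClient, key: str) -> bytes | None:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, client.get, key)


# Preferred in production stores: a natively async driver awaits the socket.
async def get_native(client: AsyncClient, key: str) -> bytes | None:
    return await client.get(key)
```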

**Test isolation**: Ensure tests clean up resources (connections, temp files,
etc.) and don't interfere with each other. Use context managers and proper
teardown.
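
For example, a self-contained pytest fixture that guarantees cleanup (generic,
not tied to this project's fixtures):

```python
import tempfile
from collections.abc import Iterator
from pathlib import Path

import pytest


@pytest.fixture
def temp_dir() -> Iterator[Path]:
    # The context manager removes the directory even if the test fails,
    # so one test's files can never leak into another test's run.
    with tempfile.TemporaryDirectory() as name:
        yield Path(name)


def test_store_writes_entry(temp_dir: Path) -> None:
    target = temp_dir / "entry.json"
    target.write_text("{}")
    assert target.exists()
```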

**Type safety**: This project uses strict type checking (Basedpyright). Address
type annotation issues to maintain type safety guarantees.
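
For instance (an invented function, but representative of what strict mode flags):

```python
from __future__ import annotations


def parse_ttl(raw: str | None) -> float:
    """Invented example: under strict checking, passing `raw` straight to
    float() would be rejected because `raw` may be None."""
    if raw is None:
        return 0.0
    return float(raw)
```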

## Make Commands Reference

| Command | Purpose |
`make bump-version-dry VERSION=1.2.3` # Dry run

## Radical Honesty

Agents should be honest! When working with code review feedback:

- **Document unresolved items**: List any feedback that wasn't addressed and why
- **Acknowledge uncertainty**: If unclear whether to implement a suggestion, ask
- **Report problems**: Document issues encountered during implementation
- **Share trade-offs**: Explain reasoning for rejecting or deferring feedback
- **Admit limitations**: If unable to verify a fix works correctly, say so

Never claim work is complete if you have doubts about correctness or completeness.

Properly document any problems encountered, share feedback, and be transparent
about your AI-assisted work.