patrob/rpi-strategy

RPI Strategy for Agentic Engineering

A framework for delivering software at scale using AI coding agents through a disciplined Research > Plan > Implement approach.

Overview

This repository documents the RPI Strategy methodology for engineering teams working with AI coding agents. The three-phase approach gives teams a repeatable discipline for delivering software projects while leveraging the capabilities of modern AI development tools.

Research > Plan > Implement forms the core framework for:

  • Understanding requirements and constraints
  • Architecting solutions with AI assistance
  • Implementing and deploying with confidence

What's Included

  • Documentation of the RPI Strategy methodology
  • Example workflows for popular AI coding tools:
    • Claude Code: "Analyze the authentication bug in user/session.py:245. Use Research phase: identify problem scope, validate with FAR scale ≥4.0, then plan atomic fixes."
    • GitHub Copilot: Use structured comments like // RESEARCH: User login fails intermittently - need factual evidence from logs before requesting code suggestions
    • Cursor: Apply FACTS scale validation to generated task breakdowns: "Validate this implementation plan using FACTS scale - is each task <4hrs and independently testable?"

Documentation Structure

Core Framework

Quality Framework

Quick Start (5 minutes)

Try this now with your next AI coding task:

  1. Research: Before asking for code, prompt: "Help me understand the problem scope first. What's the specific issue, where in the codebase, and what evidence supports this?"

  2. Validate: Score your findings using the FAR scale: Factual ≥4, Actionable ≥3, Relevant ≥3

  3. Plan: Request: "Break this into atomic tasks (single command calls, file edits, etc.). Validate each task is testable independently."

  4. Validate: Check tasks against the FACTS scale: mean ≥3.0 across all dimensions

  5. Implement: Execute one task, measure results, iterate

Example: Instead of "Fix the login bug", try: "Research: User login fails on mobile Chrome. Evidence: 3 support tickets, error 'session undefined' in console logs. Validate this against FAR scale before planning solution."
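The two validation gates above can be sketched as simple threshold checks. This is a minimal illustration, not part of the framework itself: the function names, dictionary keys, and example scores are hypothetical, while the thresholds (Factual ≥4, Actionable ≥3, Relevant ≥3 for FAR; mean ≥3.0 for FACTS) come from the Quick Start steps.

```python
# Hypothetical helpers for the Quick Start validation steps.
# Thresholds are from the Quick Start; names and scores are illustrative.

FAR_THRESHOLDS = {"factual": 4.0, "actionable": 3.0, "relevant": 3.0}

def far_passes(scores: dict[str, float]) -> bool:
    """Step 2: every FAR dimension must meet its minimum score."""
    return all(scores.get(dim, 0.0) >= minimum
               for dim, minimum in FAR_THRESHOLDS.items())

def facts_passes(scores: dict[str, float], mean_min: float = 3.0) -> bool:
    """Step 4: the mean across all FACTS dimensions must be >= 3.0."""
    return sum(scores.values()) / len(scores) >= mean_min

# Example: scoring the mobile-login research findings from the prompt above
research = {"factual": 4.5, "actionable": 3.0, "relevant": 4.0}
print(far_passes(research))  # True: all three thresholds are met
```

If a check fails, return to the corresponding phase and gather more evidence (or split tasks further) before moving on, rather than lowering the bar.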

Getting Started

  1. Start with the RPI Strategy Phases overview to understand the Research → Plan → Implement framework
  2. Apply the Validation Scales to improve the quality of your AI interactions and outputs
  3. Explore phase-specific documentation and example prompts for your preferred AI coding environment

Inspiration to work on this came from watching this video.
