
# Release Notes: Failure-Driven Enforcement Loop v1.0.0

**Release Date:** 2025-01-08
**Status:** Production-ready (Minimal Closed-Loop System)
**Scope:** Phase 1 Only (Minimal Loop Engine)


## Announcement

Introducing the Failure-Driven Enforcement Loop — a self-correcting engineering system that learns from failures and improves over time.

"The loop starts with failures, not theoretical planning."

This is not a process framework. This is not a checklist. This is a closed-loop system that:

  1. Detects recurring failures
  2. Encodes patterns into enforceable rules
  3. Validates compliance automatically
  4. Visualizes exactly what to fix
  5. Repeats — producing fewer failures each cycle

## What Makes This Different

| Traditional Systems | This System |
| --- | --- |
| Hope-based compliance | Automated enforcement |
| Forgotten lessons | Encoded patterns |
| "Something is missing" | "File X at line Y is missing" |
| Static rules | Evolving rules |
| Human memory | Machine detection |

The core insight: The system doesn't just catch failures — it learns from them and prevents recurrence.


## Phase 1 Scope: What's Included

| Component | Type | Status |
| --- | --- | --- |
| Feedback_Tracker | Primitive | ✅ Complete |
| Traceability Enforcer | Enforcer | ✅ Complete |
| Gap Visualizer | Primitive | ✅ Complete |

**What's NOT Included:**

| Item | Reason |
| --- | --- |
| TDD/ATDD/BDD methodologies | Development practices, not loop engine |
| Code review processes | Team workflows, not loop engine |
| General refactoring patterns | Too broad for this scope |
| ImpactAnalyzer | Planned for Phase 2, when needed |
| CuraOps framework skills | Separate product, not part of this loop |

**Why This Scope:**

The loop is complete and self-sufficient. Adding more components would dilute its identity as a minimal, failure-driven system.


## Quick Adoption (5 Minutes)

### Installation

```bash
# Install from source
git clone https://github.com/WietRob/failure-driven-loop.git
cd failure-driven-loop
pip install -e .

# Verify installation
fdl-validate-naming --help
```

### Your First Loop

```bash
# 1. Detect a naming violation
fdl-validate-naming --file examples/minimal/tests/unit/test_auth.py
# ❌ FAIL: Invalid naming format

# 2. Log a failure
fdl-log-feedback \
  --type mistake \
  --context "Wrong test filename" \
  --feedback "Use TC-UT-XXX_name.py pattern" \
  --category Naming \
  --severity medium

# 3. Analyze patterns (after 3 similar errors)
fdl-analyze-patterns

# 4. Fix it (loop closes)
mv examples/minimal/tests/unit/test_auth.py examples/minimal/tests/unit/TC-UT-001_auth.py

# 5. Verify
fdl-validate-naming --file examples/minimal/tests/unit/TC-UT-001_auth.py
# ✅ PASS
```
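For reference, the entry logged in step 2 carries five fields: type, context, feedback, category, and severity. How the tracker stores entries is internal to the tool; purely as an illustration (the `feedback.yaml` file name and schema below are assumptions, not the actual format), such a record could look like this:

```python
# Hypothetical shape of one feedback record; the field names mirror
# the fdl-log-feedback flags used above, but the store file and
# schema are assumptions for illustration only.
import yaml
from pathlib import Path

STORE = Path("feedback.yaml")  # assumed store read by fdl-analyze-patterns

entry = {
    "type": "mistake",
    "context": "Wrong test filename",
    "feedback": "Use TC-UT-XXX_name.py pattern",
    "category": "Naming",
    "severity": "medium",
}

# Read-modify-write keeps the store a single valid YAML list
entries = (yaml.safe_load(STORE.read_text()) or []) if STORE.exists() else []
entries.append(entry)
STORE.write_text(yaml.safe_dump(entries))
```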

### See It Learn

```bash
# Log the same failure pattern 3 times
fdl-log-feedback --type mistake --context "Naming violation" ...

# Analyze patterns
fdl-analyze-patterns

# Output: Pattern detected, rule suggested
# The system now "remembers" this pattern
```

See `examples/minimal/README.md` for a complete copy-and-paste demo with expected outputs.


## Core Components

### 1. Feedback_Tracker (Primitive)

Detects recurring error patterns and encodes them into rules.

```bash
# Log a failure
fdl-log-feedback --interactive

# Analyze patterns weekly
fdl-analyze-patterns --weekly
```

Not a logger — it only detects patterns that justify new rules.
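Conceptually, the analysis step is a frequency count with a threshold. A minimal sketch, assuming the YAML store from the earlier illustration and a threshold of three similar entries (the number used in the Quick Adoption demo); the real detection logic may be richer:

```python
# Sketch of threshold-based pattern detection over logged feedback.
# Assumptions: entries are a YAML list with "category" and "feedback"
# fields, and 3 similar entries justify suggesting a rule.
from collections import Counter
import yaml

THRESHOLD = 3

def analyze_patterns(feedback_path: str = "feedback.yaml") -> list[str]:
    with open(feedback_path) as f:
        entries = yaml.safe_load(f) or []
    # Count occurrences of each (category, suggested fix) pair
    counts = Counter((e["category"], e["feedback"]) for e in entries)
    return [
        f"Pattern detected ({n}x) in {cat}: consider encoding rule '{rule}'"
        for (cat, rule), n in counts.items()
        if n >= THRESHOLD
    ]

if __name__ == "__main__":
    for suggestion in analyze_patterns():
        print(suggestion)
```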

### 2. Traceability Enforcer (Enforcer)

Validates naming conventions and bidirectional links.

```bash
# Validate naming
fdl-validate-naming --all

# Validate links
fdl-validate-links --requirement SW-REQ-001
```

Not a style guide — violations fail CI/CD.
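To make "violations fail CI/CD" concrete, here is a hedged sketch of what such a check could boil down to. Only the `TC-UT-XXX_name.py` pattern comes from the demo above; the exact regex and exit-code contract are assumptions:

```python
# Sketch of a naming check whose nonzero exit status fails a CI job.
# Assumption: unit test files must match TC-UT-XXX_name.py, where XXX
# is a three-digit number (pattern taken from the Quick Adoption demo).
import re
import sys
from pathlib import Path

NAMING = re.compile(r"^TC-UT-\d{3}_[A-Za-z0-9_]+\.py$")

def valid_name(path: str) -> bool:
    return NAMING.match(Path(path).name) is not None

if __name__ == "__main__":
    failures = [p for p in sys.argv[1:] if not valid_name(p)]
    for p in failures:
        print(f"FAIL: invalid naming format: {p}")
    sys.exit(1 if failures else 0)  # nonzero exit rejects the change in CI
```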

### 3. Gap Visualizer (Primitive)

Shows exactly where traceability chains are broken.

```bash
# Visualize requirement chain
fdl-tree-analyzer --us US-A1

# JSON output for automation
fdl-tree-analyzer --us US-A1 --json
```

Not a dashboard — it shows specific files to create.
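The value over a dashboard is that the output names concrete files. As a rough sketch of the idea (the chain data, file layout, and JSON shape here are invented for illustration; the tool's actual data model may differ):

```python
# Sketch of gap detection over a traceability chain: a user story maps
# to requirements, each requirement expects a test file; files missing
# on disk are reported as the exact gaps to close.
import json
from pathlib import Path

# Invented example chain for US-A1; real chains come from the project
CHAIN = {
    "US-A1": {
        "SW-REQ-001": "tests/unit/TC-UT-001_auth.py",
        "SW-REQ-002": "tests/unit/TC-UT-002_session.py",
    }
}

def find_gaps(us_id: str) -> dict:
    missing = {
        req: test
        for req, test in CHAIN[us_id].items()
        if not Path(test).exists()
    }
    return {"user_story": us_id, "missing": missing}

if __name__ == "__main__":
    # JSON output mirrors the --json flag, so CI tooling can consume it
    print(json.dumps(find_gaps("US-A1"), indent=2))
```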


## The Loop in Action

```
FAILURE OCCURS
     │
     ▼
┌──────────────────┐
│ Feedback_Tracker │ ← Detects pattern (e.g., 3x naming violations)
└────────┬─────────┘
         │
         ▼
┌─────────────────┐
│ Rule Encoded    │ ← TDD skill updated with naming warning
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ CI/CD Fails     │ ← New violations rejected automatically
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Gap Visualizer  │ ← Points to exact missing files
└────────┬────────┘
         │
         ▼
FEWER FAILURES → LOOP CONTINUES
```
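Read end to end, the cycle is small enough to express as a toy program. The following is purely illustrative wiring of the five stages, not the shipped implementation:

```python
# Toy end-to-end cycle: log failures, detect a pattern, encode a rule,
# then let enforcement reject the next occurrence automatically.
from collections import Counter

failures = ["naming violation", "naming violation", "naming violation"]

# Stages 1-2: three similar failures cross the threshold, become a rule
rules = {f for f, n in Counter(failures).items() if n >= 3}

# Stage 3: enforcement consults the encoded rules
def ci_accepts(event: str) -> bool:
    return event not in rules

assert not ci_accepts("naming violation")  # recurrence is now rejected
# Stages 4-5: the gap report names the exact fix, and the next cycle
# starts with fewer failures.
print(f"encoded rules: {rules}")
```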

## What This Is NOT

- ❌ A project management tool (Jira, Asana, Linear)
- ❌ A test framework (pytest, unittest, Jest)
- ❌ A metrics dashboard (Datadog, Grafana, Prometheus)
- ❌ A documentation generator (Sphinx, Javadoc)
- ❌ An observability platform (Sentry, ELK stack)

If you need any of these, use the appropriate tool. This system focuses on closing the loop between failure and prevention.


## Documentation

| Document | Purpose |
| --- | --- |
| LOOP_EXPLANATION.md | System overview (read first) |
| QUICK_START.md | 5-minute getting started guide |
| EXAMPLES.md | Real-world failure scenarios |
| CONTRACTS.md | Component specifications |

## CLI Reference

| Command | Description |
| --- | --- |
| `fdl-validate-naming` | Validate test file naming conventions |
| `fdl-validate-links` | Validate bidirectional requirement-test links |
| `fdl-log-feedback` | Log failures for pattern detection |
| `fdl-analyze-patterns` | Detect recurring patterns in feedback |
| `fdl-tree-analyzer` | Visualize traceability chain gaps |

## Requirements

- Python 3.10+
- `pyyaml`

## Installation Verification

```bash
# Run verification tests
pytest tests/ -v
```

## Contributing

This is an open-source extraction from the CuraOps Framework. Contributions welcome:

- Report issues: GitHub Issues
- Suggest patterns: Pattern detection rules
- Improve skills: Update skill documentation

## License

MIT License


## Citation

If you use this system in academic or professional contexts:

> Failure-Driven Enforcement Loop v1.0.0. A self-correcting engineering system that learns from failures. Available at: https://github.com/curaops/failure-driven-loop


*The loop is closed. The system learns. Failures decrease.*