Agent Pattern System (APS)

Agent Pattern System (APS) is a design-level pattern language for building agentic AI systems with predictable reasoning, explicit control flow, and auditable decision-making.

APS defines reusable reasoning patterns, not an execution framework. It provides a conceptual abstraction layer that can be implemented on top of any agent runtime (for example, graph-based orchestrators or multi-agent workflow engines) without coupling system behavior to a specific tool or SDK.


Overview

Modern agentic AI systems often focus on how agents are orchestrated—pipelines, supervisors, parallel execution, or message passing. APS addresses a different problem:

How should agents reason, evaluate, adapt, and explain their decisions—independent of how they are executed?

APS introduces a structured vocabulary for:

  • Reasoning behaviors (generation, critique, verification)
  • Control decisions (routing, fallback, escalation)
  • Quality enforcement (evaluation gates, ranking)
  • Transparency (traces and artifacts)

This allows agent systems to be designed intentionally, rather than emergently shaped by prompt complexity or framework defaults.


Design Goals

APS is designed around the following principles:

  • Architecture-agnostic

    Patterns describe intent, not implementation. They can be mapped onto any runtime or orchestration model.

  • Explicit reasoning contracts

    Inputs, outputs, and invariants are defined at the pattern level to reduce ambiguity.

  • Composable by default

    Patterns are designed to be combined, nested, and reused across workflows.

  • Traceable and inspectable

    Reasoning steps produce artifacts and traces suitable for debugging, evaluation, and governance.

  • Strategy-independent

    Execution strategies (LLMs, rules, heuristics, retrieval methods) can be swapped without changing the pattern itself.


Intended Audience

APS is intended for:

  • AI system architects designing agentic workflows
  • Platform and framework engineers building agent runtimes
  • Researchers exploring structured reasoning in LLM systems
  • Teams seeking consistency, auditability, and maintainability in agent behavior

Core Concepts

Pattern

A Pattern defines a reusable unit of reasoning behavior with a clear purpose, contract, and expected outcome.

Patterns specify what kind of reasoning occurs, not how it is executed.

Example

An agent generating user-facing content applies a Refinement Loop pattern:

  1. Produce an initial draft
  2. Evaluate against explicit quality criteria
  3. Revise deficient sections
  4. Repeat until acceptance thresholds are met

The pattern remains constant even if the underlying evaluation or generation strategy changes.
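The steps above can be sketched as a higher-order function that accepts generate, evaluate, and revise strategies as inputs. The string-length strategies below are toy stand-ins chosen for illustration, not part of APS:

```python
def refinement_loop(task, generate, evaluate, revise, threshold=0.8, max_rounds=3):
    """Draft, evaluate, and revise until acceptance criteria are met."""
    draft = generate(task)
    trace = []
    for round_no in range(max_rounds):
        score, issues = evaluate(draft)
        trace.append({"round": round_no, "score": score, "issues": issues})
        if score >= threshold:  # acceptance threshold met
            break
        draft = revise(draft, issues)  # revise only the deficient parts
    return draft, trace

# Toy strategies: grow a string until a length-based score passes the gate.
draft_out, trace_out = refinement_loop(
    "greet",
    generate=lambda t: t + "!",
    evaluate=lambda d: (len(d) / 10, ["too short"] if len(d) < 10 else []),
    revise=lambda d, issues: d + " hello",
)
```

Swapping in an LLM-based evaluator or reviser changes only the callables passed in; the loop itself, and its contract, stay fixed.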


Strategy

A Strategy is a concrete method used to realize a pattern’s intent.

Strategies are interchangeable and do not alter the pattern’s external contract.

Example

A Quality Gate pattern may use:

  • An LLM-based evaluator with a rubric
  • A deterministic rule checklist
  • A hybrid scoring model

The decision logic changes, but the pattern’s role in the system does not.
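One way to make this interchangeability concrete is to pin down the gate's external contract and pass strategies in as callables. The deterministic checklist below is an illustrative stand-in for any of the strategies listed above:

```python
from typing import Callable, List, Tuple

# External contract of a Quality Gate strategy: candidate in, (pass?, reasons) out.
GateStrategy = Callable[[str], Tuple[bool, List[str]]]

def rule_checklist(candidate: str) -> Tuple[bool, List[str]]:
    """Deterministic rule-checklist strategy (illustrative checks only)."""
    reasons = []
    if len(candidate) < 5:
        reasons.append("too short")
    if "TODO" in candidate:
        reasons.append("unfinished content")
    return (not reasons, reasons)

def quality_gate(candidate: str, strategy: GateStrategy) -> dict:
    """The pattern: apply whichever strategy was supplied, report the verdict."""
    ok, reasons = strategy(candidate)
    return {"accepted": ok, "reasons": reasons}

result = quality_gate("final answer text", rule_checklist)
```

An LLM-backed rubric evaluator or hybrid scorer would slot into the same `GateStrategy` signature without touching callers of `quality_gate`.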


Policy

A Policy defines persistent constraints that shape agent behavior across multiple patterns.

Policies are treated as explicit inputs rather than implicit prompt instructions.

Examples

  • Safety constraints (e.g., prohibited advice domains)
  • Tone and brand alignment rules
  • Compliance or jurisdictional requirements

Policies are applied consistently across generation, evaluation, and routing decisions.
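Treating policies as explicit inputs might look like the sketch below, where a policy object is checked the same way at every stage. The field names and substring check are illustrative assumptions, not an APS-mandated schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Policy:
    """Explicit policy object passed into patterns (illustrative fields)."""
    prohibited_topics: frozenset = frozenset()
    required_tone: str = "neutral"

def violations(policy: Policy, text: str) -> List[str]:
    """Return prohibited topics mentioned in the text (toy substring check)."""
    return [topic for topic in policy.prohibited_topics if topic in text.lower()]

policy = Policy(prohibited_topics=frozenset({"medical advice"}))
issues = violations(policy, "Here is some medical advice: ...")
```

Because the same `Policy` object flows into generation, evaluation, and routing, the constraints cannot drift between stages the way duplicated prompt text can.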


Artifact

An Artifact is a structured data object produced or consumed by patterns.

Artifacts make intermediate reasoning steps explicit and machine-processable.

Example

A task decomposition artifact may include:

  • Original task statement
  • Subtasks
  • Assumptions
  • Success criteria

Downstream patterns consume this artifact directly rather than re-interpreting unstructured text.
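A task decomposition artifact of this shape can be sketched as a typed data object; the field names mirror the bullets above, and the exact schema is an illustrative choice:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskDecomposition:
    """Machine-processable task decomposition artifact (illustrative schema)."""
    original_task: str
    subtasks: List[str]
    assumptions: List[str] = field(default_factory=list)
    success_criteria: List[str] = field(default_factory=list)

artifact = TaskDecomposition(
    original_task="Write a product FAQ",
    subtasks=["collect common questions", "draft answers", "review tone"],
    assumptions=["English-speaking audience"],
    success_criteria=["every question has an answer"],
)

# Downstream patterns read fields directly instead of re-parsing prose.
first_step = artifact.subtasks[0]
```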


Trace

A Trace is a human-readable explanation of decisions made during execution.

Traces capture why an outcome was accepted, rejected, or modified.

Traces support:

  • Debugging and iteration
  • Quality audits
  • Explainability and trust
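A minimal trace entry might record the step, the decision, and the rationale, and serialize cleanly for audits. The structure below is one illustrative shape, not a prescribed format:

```python
from dataclasses import dataclass, asdict

@dataclass
class TraceEntry:
    """One recorded decision: what happened at which step, and why."""
    step: str
    decision: str
    rationale: str

trace = [
    TraceEntry("quality_gate", "reject", "missing citation for claim 2"),
    TraceEntry("revise", "accept", "citation added, rubric score above threshold"),
]

# Serializable for quality audits; readable as-is for debugging.
audit_log = [asdict(entry) for entry in trace]
```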

Pattern Families

APS groups patterns by the type of reasoning behavior they encode.


Reasoning Patterns

Patterns responsible for producing or transforming candidate outputs.

  • Draft Generator

    Produces an initial response without quality judgment.

    flowchart TD
      A[Input task] --> B[Generate first draft]
      B --> C[Draft output]
      C --> D[Trace: rationale]
    
  • Candidate Generator

    Produces multiple alternative outputs for downstream evaluation.

    flowchart TD
      A[Input task] --> B[Generate multiple candidates]
      B --> C1[Candidate A]
      B --> C2[Candidate B]
      B --> C3[Candidate C]
      C1 --> T[Trace: why options differ]
      C2 --> T
      C3 --> T
    
  • Critique and Revise

    Identifies weaknesses in an output and selectively improves it.

    flowchart TD
      A[Initial draft] --> B[Critique against criteria]
      B --> C[Revision plan]
      C --> D[Revise draft]
      D --> E[Improved output]
      B --> T[Trace: issues found]
      D --> T[Trace: changes made]
    
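The Critique and Revise flow above can be sketched with two cooperating functions; the specific critique rules here are illustrative stand-ins for an LLM critic:

```python
from typing import List

def critique(draft: str) -> List[str]:
    """Identify weaknesses against explicit criteria (toy rules)."""
    issues = []
    if "very " in draft:
        issues.append("weak intensifier 'very'")
    if not draft.endswith("."):
        issues.append("missing final period")
    return issues

def revise(draft: str, issues: List[str]) -> str:
    """Selectively fix only the issues the critique found."""
    if "weak intensifier 'very'" in issues:
        draft = draft.replace("very ", "")
    if "missing final period" in issues:
        draft += "."
    return draft

draft = "This is a very good summary"
issues = critique(draft)
improved = revise(draft, issues)
trace = {"issues_found": issues, "changes_made": improved != draft}
```

The key property is selectivity: untouched parts of the draft pass through unchanged, and the trace records both what was found and whether anything changed.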

Evaluation Patterns

Patterns responsible for assessing correctness, quality, or alignment.

  • Quality Gate

    Determines whether an output meets predefined acceptance criteria.

    flowchart TD
      A[Candidate output] --> B[Evaluate with rubric]
      B -->|Pass| C[Accept output]
      B -->|Fail| D[Reject or request revision]
      B --> T[Trace: scores and reasons]
    
  • Ranker

    Orders candidates based on relevance or usefulness.

    flowchart TD
      A[Candidate set] --> B[Score each candidate]
      B --> C[Sort by score]
      C --> D[Top ranked output]
      B --> T[Trace: ranking rationale]
    
  • Verifier

    Confirms factual claims against retrieved or trusted evidence.

    flowchart TD
      A[Draft with claims] --> B[Extract claims]
      B --> C[Retrieve evidence]
      C --> D[Check claim vs evidence]
      D -->|Supported| E[Mark verified]
      D -->|Unsupported| F[Flag or revise]
      D --> T[Trace: evidence links]
    
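The Verifier flow above can be sketched as a claim-by-claim check against an evidence store. The substring match is a deliberately crude stand-in for real retrieval and entailment checking:

```python
from typing import List

def verify(claims: List[str], evidence_store: List[str]) -> List[dict]:
    """Mark each claim supported or unsupported against trusted evidence."""
    report = []
    for claim in claims:
        # Stand-in for retrieval + entailment: naive substring containment.
        supported = any(claim.lower() in doc.lower() for doc in evidence_store)
        report.append({"claim": claim, "supported": supported})
    return report

report = verify(
    ["APS defines reasoning patterns", "APS is an execution framework"],
    evidence_store=["APS defines reasoning patterns, not an execution framework."],
)
```

Unsupported claims would then be flagged or routed back through a revision pattern, with the evidence links recorded in the trace.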

Task Structuring Patterns

Patterns that reduce complexity by transforming vague requests into structured problems.

  • Task Decomposer

    Breaks a high-level objective into executable subtasks.

    flowchart TD
      A[High level objective] --> B[Identify subtasks]
      B --> C[Order and dependencies]
      C --> D[Structured task plan artifact]
      D --> T[Trace: assumptions]
    
  • Query Decomposer

    Splits complex information needs into focused retrieval queries.

    flowchart TD
      A[Complex information need] --> B[Split into subquestions]
      B --> C1[Query 1]
      B --> C2[Query 2]
      B --> C3[Query 3]
      C1 --> D[Retrieve per query]
      C2 --> D
      C3 --> D
      D --> E[Merge evidence]
      E --> T[Trace: coverage and gaps]
    
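The Query Decomposer flow above can be sketched as a splitter that turns one compound question into focused subquestions. Splitting on the connective "and" is an illustrative stand-in for an LLM- or parser-based strategy:

```python
from typing import List

def decompose_query(question: str) -> List[str]:
    """Split a compound question into focused retrieval queries (toy rule)."""
    parts = [p.strip() for p in question.replace("?", "").split(" and ")]
    return [p + "?" for p in parts if p]

queries = decompose_query("What is APS and how does it relate to orchestration?")
```

Each subquery is then retrieved independently, and the merged evidence, plus any coverage gaps, goes into the trace.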

Routing and Control Patterns

Patterns that determine execution flow.

  • Intent Router

    Selects an appropriate reasoning pathway based on task type.

    flowchart TD
      A[User request] --> B[Classify intent]
      B -->|Write| C[Reasoning path: generate]
      B -->|Answer| D[Reasoning path: retrieve then respond]
      B -->|Decide| E[Reasoning path: compare options]
      B --> T[Trace: routing decision]
    
  • Policy Router

    Applies different constraints or behaviors based on context.

    flowchart TD
      A[Task + context] --> B[Select applicable policies]
      B --> C[Apply constraints]
      C --> D[Proceed with constrained execution]
      B --> T[Trace: policy set applied]
    
  • Fallback Handler

    Alters strategy when repeated attempts fail or confidence is low.

    flowchart TD
      A[Attempt primary strategy] --> B{Success}
      B -->|Yes| C[Return result]
      B -->|No| D[Switch strategy]
      D --> E[Retry]
      E --> F{Success}
      F -->|Yes| C
      F -->|No| G[Escalate to human or fail safe]
      D --> T[Trace: failure reasons]
      G --> T
    
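The Fallback Handler flow above can be sketched as trying strategies in order and escalating when all are exhausted. In this toy version, strategies signal failure by raising exceptions; a real system might use confidence scores instead:

```python
def with_fallback(task, strategies, escalate):
    """Try each named strategy in order; escalate if all of them fail."""
    trace = []
    for name, strategy in strategies:
        try:
            result = strategy(task)
            trace.append(f"{name}: success")
            return result, trace
        except Exception as exc:
            trace.append(f"{name}: failed ({exc})")
    return escalate(task), trace

def flaky(task):
    raise RuntimeError("low confidence")

result, trace = with_fallback(
    "classify ticket",
    [("primary", flaky), ("backup", lambda t: f"handled: {t}")],
    escalate=lambda t: f"escalated: {t}",
)
```

The trace accumulates one line per attempt, so the failure reasons are available even when the backup ultimately succeeds.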

Memory and State Patterns

Patterns that manage continuity across interactions.

  • Memory Updater

    Records explicit user preferences or constraints.

    flowchart TD
      A[Interaction] --> B[Detect explicit preference]
      B --> C[Write memory entry]
      C --> D[Use memory in future steps]
      C --> T[Trace: what was stored]
    
  • Preference Learner

    Infers implicit preferences from observed behavior.

    flowchart TD
      A[User behavior signals] --> B[Infer preference hypothesis]
      B --> C[Validate over time]
      C -->|Confirmed| D[Persist preference]
      C -->|Uncertain| E[Keep tentative]
      D --> T[Trace: evidence for preference]
    
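The Memory Updater flow above can be sketched as detecting an explicit preference and writing a memory entry. The "always ..." trigger is an illustrative detection rule, not a prescribed one:

```python
def update_memory(memory: dict, utterance: str) -> dict:
    """Record explicit user preferences signaled by 'always ...' (toy rule)."""
    marker = "always "
    if marker in utterance.lower():
        preference = utterance.lower().split(marker, 1)[1].rstrip(".")
        memory["preferences"] = memory.get("preferences", []) + [preference]
    return memory

memory = update_memory({}, "Please always reply in bullet points.")
```

Later patterns consume `memory` as an explicit input, and the trace records exactly what was stored and why.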

Positioning and Scope

APS operates at a higher abstraction level than agent orchestration frameworks.

  • Execution frameworks define how agents run
  • APS defines how agents reason

APS can be mapped onto:

  • Graph-based orchestrators
  • Multi-agent workflow engines
  • Custom runtime environments

It does not replace these systems; it complements them by providing a shared reasoning vocabulary.


Why APS Is Distinct

APS formalizes reasoning semantics rather than execution topology.

This separation enables:

  • Clearer system design
  • Safer experimentation
  • Improved maintainability
  • Stronger audit and governance capabilities

APS is best understood as a design system for agent cognition, not an SDK.


When to Use APS (and When Not To)

1. Use APS When

  • You care about predictable reasoning
  • You need evaluation, verification, or refinement loops
  • You must produce artifacts, traces, or audits
  • You are designing a platform, not a one-off agent
  • Multiple agents or steps must follow consistent reasoning contracts

2. Do NOT Use APS For

  • One-shot chatbots
  • Simple RAG Q&A
  • Latency-critical paths with no evaluation
  • Prompt-only experimentation
  • Systems where reasoning transparency does not matter

3. What APS Is NOT

  • Not an execution framework
  • Not a prompt library
  • Not a replacement for orchestration tools
  • Not required for all agent systems

Contributions

Contributions are welcome once the repository is publicly opened.

Areas of interest include:

  • New pattern definitions
  • Clarified invariants and contracts
  • Reference mappings to existing runtimes
  • Evaluation and traceability guidelines
