Agent Pattern System (APS) is a design-level pattern language for building agentic AI systems with predictable reasoning, explicit control flow, and auditable decision-making.
APS defines reusable reasoning patterns, not an execution framework. It provides a conceptual abstraction layer that can be implemented on top of any agent runtime (for example, graph-based orchestrators or multi-agent workflow engines) without coupling system behavior to a specific tool or SDK.
Modern agentic AI systems often focus on how agents are orchestrated—pipelines, supervisors, parallel execution, or message passing. APS addresses a different problem:
> How should agents reason, evaluate, adapt, and explain their decisions—independent of how they are executed?
APS introduces a structured vocabulary for:
- Reasoning behaviors (generation, critique, verification)
- Control decisions (routing, fallback, escalation)
- Quality enforcement (evaluation gates, ranking)
- Transparency (traces and artifacts)
This allows agent systems to be designed intentionally, rather than emergently shaped by prompt complexity or framework defaults.
APS is designed around the following principles:
- **Architecture-agnostic**: Patterns describe intent, not implementation. They can be mapped onto any runtime or orchestration model.
- **Explicit reasoning contracts**: Inputs, outputs, and invariants are defined at the pattern level to reduce ambiguity.
- **Composable by default**: Patterns are designed to be combined, nested, and reused across workflows.
- **Traceable and inspectable**: Reasoning steps produce artifacts and traces suitable for debugging, evaluation, and governance.
- **Strategy-independent**: Execution strategies (LLMs, rules, heuristics, retrieval methods) can be swapped without changing the pattern itself.
APS is intended for:
- AI system architects designing agentic workflows
- Platform and framework engineers building agent runtimes
- Researchers exploring structured reasoning in LLM systems
- Teams seeking consistency, auditability, and maintainability in agent behavior
A Pattern defines a reusable unit of reasoning behavior with a clear purpose, contract, and expected outcome.
Patterns specify what kind of reasoning occurs, not how it is executed.
**Example**
An agent generating user-facing content applies a Refinement Loop pattern:
1. Produce an initial draft
2. Evaluate against explicit quality criteria
3. Revise deficient sections
4. Repeat until acceptance thresholds are met
The pattern remains constant even if the underlying evaluation or generation strategy changes.
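The loop above can be sketched as a pattern whose strategies are injected. Everything named here (`generate`, `evaluate`, `revise`, the "concise" rubric) is a hypothetical stand-in, not part of APS itself; in practice each callable would be backed by an LLM call or other strategy.

```python
from dataclasses import dataclass, field

@dataclass
class RefinementResult:
    draft: str
    passed: bool
    iterations: int
    trace: list = field(default_factory=list)

def refinement_loop(task, generate, evaluate, revise, max_iterations=3):
    """Generic Refinement Loop: draft, evaluate, revise until accepted."""
    draft = generate(task)
    trace = [f"initial draft produced for: {task}"]
    for i in range(1, max_iterations + 1):
        passed, issues = evaluate(draft)
        if passed:
            trace.append(f"iteration {i}: accepted")
            return RefinementResult(draft, True, i, trace)
        trace.append(f"iteration {i}: issues {issues}")
        draft = revise(draft, issues)
    return RefinementResult(draft, False, max_iterations, trace)

# Hypothetical strategies; a real system would swap in LLM-backed ones
# without touching refinement_loop itself.
generate = lambda task: f"Draft about {task}."
evaluate = lambda d: ("concise" in d, [] if "concise" in d else ["not concise"])
revise = lambda d, issues: d + " (made concise)"

result = refinement_loop("pricing page copy", generate, evaluate, revise)
```

Swapping `evaluate` for a different strategy changes nothing about the loop's contract, which is the point of the pattern.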
A Strategy is a concrete method used to realize a pattern’s intent.
Strategies are interchangeable and do not alter the pattern’s external contract.
**Example**
A Quality Gate pattern may use:
- An LLM-based evaluator with a rubric
- A deterministic rule checklist
- A hybrid scoring model
The decision logic changes, but the pattern’s role in the system does not.
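One way to sketch this interchangeability, with both strategies as hypothetical stand-ins (the rule checklist and the "LLM rubric" stub are illustrative, not a prescribed API):

```python
from typing import Callable, List, Tuple

# The Quality Gate's external contract: text in, (passed, reasons) out.
Evaluator = Callable[[str], Tuple[bool, List[str]]]

def quality_gate(text: str, evaluator: Evaluator):
    passed, reasons = evaluator(text)
    trace = {"decision": "pass" if passed else "fail", "reasons": reasons}
    return passed, trace

# Strategy 1: deterministic rule checklist.
def rule_checklist(text):
    reasons = []
    if len(text) > 280:
        reasons.append("too long")
    if "TODO" in text:
        reasons.append("unfinished content")
    return (not reasons, reasons)

# Strategy 2: stand-in for an LLM rubric evaluator (hypothetical score).
def llm_rubric_stub(text):
    score = 0.9 if text.endswith(".") else 0.4  # pretend model output
    return (score >= 0.7, [] if score >= 0.7 else ["low rubric score"])

ok_rules, trace_rules = quality_gate("Short and done.", rule_checklist)
ok_llm, trace_llm = quality_gate("Short and done.", llm_rubric_stub)
```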
A Policy defines persistent constraints that shape agent behavior across multiple patterns.
Policies are treated as explicit inputs rather than implicit prompt instructions.
**Examples**
- Safety constraints (e.g., prohibited advice domains)
- Tone and brand alignment rules
- Compliance or jurisdictional requirements
Policies are applied consistently across generation, evaluation, and routing decisions.
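A minimal sketch of a policy as an explicit input rather than a hidden prompt instruction; the field names and the `generate`/`evaluate` steps are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """Persistent constraints passed explicitly to every pattern."""
    prohibited_topics: tuple
    required_tone: str

def generate(task, policy):
    # Hypothetical generation step that consults the policy up front.
    if any(t in task.lower() for t in policy.prohibited_topics):
        return None, {"decision": "refused", "reason": "prohibited topic"}
    return f"[{policy.required_tone}] response to: {task}", {"decision": "generated"}

def evaluate(text, policy):
    # The same policy object shapes evaluation, so the two stay consistent.
    return text is not None and text.startswith(f"[{policy.required_tone}]")

policy = Policy(prohibited_topics=("medical advice",), required_tone="neutral")
text, trace = generate("summarize the release notes", policy)
refused, refusal_trace = generate("give medical advice", policy)
```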
An Artifact is a structured data object produced or consumed by patterns.
Artifacts make intermediate reasoning steps explicit and machine-processable.
**Example**
A task decomposition artifact may include:
- Original task statement
- Subtasks
- Assumptions
- Success criteria
Downstream patterns consume this artifact directly rather than re-interpreting unstructured text.
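As a sketch, the artifact above maps naturally onto a plain dataclass (field names are taken from the list; the surrounding usage is illustrative):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DecompositionArtifact:
    """Structured output of a Task Decomposer, consumed downstream as data."""
    original_task: str
    subtasks: list
    assumptions: list = field(default_factory=list)
    success_criteria: list = field(default_factory=list)

artifact = DecompositionArtifact(
    original_task="Write a migration guide",
    subtasks=["outline sections", "draft each section", "verify commands"],
    assumptions=["readers know the old API"],
    success_criteria=["every breaking change is covered"],
)

# Downstream patterns read fields directly instead of re-parsing prose.
next_step = artifact.subtasks[0]
serialized = asdict(artifact)  # machine-processable for storage or audit
```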
A Trace is a human-readable explanation of decisions made during execution.
Traces capture why an outcome was accepted, rejected, or modified.
Traces support:
- Debugging and iteration
- Quality audits
- Explainability and trust
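A trace can be as simple as an append-only log of decision records. This sketch assumes nothing beyond the concept above; the entry fields are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TraceEntry:
    pattern: str
    decision: str
    reason: str
    timestamp: str

class Trace:
    """Accumulates human-readable decision records during execution."""
    def __init__(self):
        self.entries = []

    def record(self, pattern, decision, reason):
        self.entries.append(TraceEntry(
            pattern, decision, reason,
            datetime.now(timezone.utc).isoformat(),
        ))

    def render(self):
        # Human-readable view for debugging, audits, and explainability.
        return "\n".join(
            f"[{e.pattern}] {e.decision}: {e.reason}" for e in self.entries)

trace = Trace()
trace.record("QualityGate", "reject", "missing success criteria")
trace.record("CritiqueAndRevise", "accept", "criteria satisfied after revision")
```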
APS groups patterns by the type of reasoning behavior they encode.
Patterns responsible for producing or transforming candidate outputs.
- **Draft Generator**: Produces an initial response without quality judgment.

  ```mermaid
  flowchart TD
      A[Input task] --> B[Generate first draft]
      B --> C[Draft output]
      C --> D[Trace: rationale]
  ```
- **Candidate Generator**: Produces multiple alternative outputs for downstream evaluation.

  ```mermaid
  flowchart TD
      A[Input task] --> B[Generate multiple candidates]
      B --> C1[Candidate A]
      B --> C2[Candidate B]
      B --> C3[Candidate C]
      C1 --> T[Trace: why options differ]
      C2 --> T
      C3 --> T
  ```
- **Critique and Revise**: Identifies weaknesses in an output and selectively improves it.

  ```mermaid
  flowchart TD
      A[Initial draft] --> B[Critique against criteria]
      B --> C[Revision plan]
      C --> D[Revise draft]
      D --> E[Improved output]
      B --> T1[Trace: issues found]
      D --> T2[Trace: changes made]
  ```
Patterns responsible for assessing correctness, quality, or alignment.
- **Quality Gate**: Determines whether an output meets predefined acceptance criteria.

  ```mermaid
  flowchart TD
      A[Candidate output] --> B[Evaluate with rubric]
      B -->|Pass| C[Accept output]
      B -->|Fail| D[Reject or request revision]
      B --> T[Trace: scores and reasons]
  ```
- **Ranker**: Orders candidates based on relevance or usefulness.

  ```mermaid
  flowchart TD
      A[Candidate set] --> B[Score each candidate]
      B --> C[Sort by score]
      C --> D[Top ranked output]
      B --> T[Trace: ranking rationale]
  ```
- **Verifier**: Confirms factual claims against retrieved or trusted evidence.

  ```mermaid
  flowchart TD
      A[Draft with claims] --> B[Extract claims]
      B --> C[Retrieve evidence]
      C --> D[Check claim vs evidence]
      D -->|Supported| E[Mark verified]
      D -->|Unsupported| F[Flag or revise]
      D --> T[Trace: evidence links]
  ```
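The Verifier's contract can be sketched independently of any retrieval stack. Here the corpus and the word-overlap `retrieve` function are deliberately naive stand-ins for a real retriever; only the in/out shape is the point:

```python
def verify(claims, evidence_index, retrieve):
    """Check each claim against retrieved evidence; flag unsupported ones."""
    verified, flagged, trace = [], [], []
    for claim in claims:
        evidence = retrieve(claim, evidence_index)
        supported = any(claim.lower() in doc.lower() for doc in evidence)
        (verified if supported else flagged).append(claim)
        trace.append({"claim": claim, "evidence": evidence,
                      "supported": supported})
    return verified, flagged, trace

# Hypothetical retrieval: word overlap over a tiny corpus stands in for
# a real retriever.
corpus = ["The service supports regional failover.", "Backups run nightly."]
retrieve = lambda claim, docs: [
    d for d in docs
    if set(claim.lower().split()) & set(d.lower().split())
]

verified, flagged, trace = verify(
    ["Backups run nightly", "Restores complete in one hour"],
    corpus, retrieve)
```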
Patterns that reduce complexity by transforming vague requests into structured problems.
- **Task Decomposer**: Breaks a high-level objective into executable subtasks.

  ```mermaid
  flowchart TD
      A[High level objective] --> B[Identify subtasks]
      B --> C[Order and dependencies]
      C --> D[Structured task plan artifact]
      D --> T[Trace: assumptions]
  ```
- **Query Decomposer**: Splits complex information needs into focused retrieval queries.

  ```mermaid
  flowchart TD
      A[Complex information need] --> B[Split into subquestions]
      B --> C1[Query 1]
      B --> C2[Query 2]
      B --> C3[Query 3]
      C1 --> D[Retrieve per query]
      C2 --> D
      C3 --> D
      D --> E[Merge evidence]
      E --> T[Trace: coverage and gaps]
  ```
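A Query Decomposer's contract, sketched with a deliberately naive splitting heuristic (a real system would use an LLM or a parser; the conjunction split below is a stand-in):

```python
def decompose_query(information_need):
    """Split a compound information need into focused sub-queries.

    Splitting on 'and' is a naive placeholder strategy that preserves the
    pattern's contract: one compound need in, a list of focused queries
    plus a trace out.
    """
    parts = [p.strip() for p in
             information_need.replace("?", "").split(" and ")]
    queries = [p for p in parts if p]
    trace = {"original": information_need, "query_count": len(queries)}
    return queries, trace

queries, trace = decompose_query(
    "What changed in v2 and how do I migrate existing configs?")
```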
Patterns that determine execution flow.
- **Intent Router**: Selects an appropriate reasoning pathway based on task type.

  ```mermaid
  flowchart TD
      A[User request] --> B[Classify intent]
      B -->|Write| C[Reasoning path: generate]
      B -->|Answer| D[Reasoning path: retrieve then respond]
      B -->|Decide| E[Reasoning path: compare options]
      B --> T[Trace: routing decision]
  ```
- **Policy Router**: Applies different constraints or behaviors based on context.

  ```mermaid
  flowchart TD
      A[Task + context] --> B[Select applicable policies]
      B --> C[Apply constraints]
      C --> D[Proceed with constrained execution]
      B --> T[Trace: policy set applied]
  ```
- **Fallback Handler**: Alters strategy when repeated attempts fail or confidence is low.

  ```mermaid
  flowchart TD
      A[Attempt primary strategy] --> B{Success}
      B -->|Yes| C[Return result]
      B -->|No| D[Switch strategy]
      D --> E[Retry]
      E --> F{Success}
      F -->|Yes| C
      F -->|No| G[Escalate to human or fail safe]
      D --> T[Trace: failure reasons]
      G --> T
  ```
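The Fallback Handler's retry-then-escalate flow can be sketched as follows. The two strategies are hypothetical (the primary deliberately fails to exercise the fallback path), and failure is signaled via exceptions, though a confidence threshold could trigger the same switch:

```python
def with_fallback(task, strategies, max_attempts_each=2):
    """Try strategies in order; escalate if all fail.

    Returns (result, trace). A None result means the fail-safe path:
    surface to a human instead of guessing.
    """
    trace = []
    for name, strategy in strategies:
        for attempt in range(1, max_attempts_each + 1):
            try:
                result = strategy(task)
                trace.append(f"{name} attempt {attempt}: success")
                return result, trace
            except Exception as exc:
                trace.append(f"{name} attempt {attempt}: failed ({exc})")
    trace.append("escalated: all strategies exhausted")
    return None, trace

# Hypothetical strategies: the primary always fails, the fallback succeeds.
def primary(task):
    raise RuntimeError("model timeout")

def fallback(task):
    return f"fallback answer for: {task}"

result, trace = with_fallback(
    "classify ticket", [("primary", primary), ("fallback", fallback)])
```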
Patterns that manage continuity across interactions.
- **Memory Updater**: Records explicit user preferences or constraints.

  ```mermaid
  flowchart TD
      A[Interaction] --> B[Detect explicit preference]
      B --> C[Write memory entry]
      C --> D[Use memory in future steps]
      C --> T[Trace: what was stored]
  ```
- **Preference Learner**: Infers implicit preferences from observed behavior.

  ```mermaid
  flowchart TD
      A[User behavior signals] --> B[Infer preference hypothesis]
      B --> C[Validate over time]
      C -->|Confirmed| D[Persist preference]
      C -->|Uncertain| E[Keep tentative]
      D --> T[Trace: evidence for preference]
  ```
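The two memory patterns above can share one sketch: explicit preferences persist immediately, while inferred ones stay tentative until repeated evidence confirms them. The class, its field names, and the confirmation threshold are all hypothetical:

```python
class PreferenceMemory:
    """Stores explicit preferences immediately; holds inferred ones as
    tentative until repeated observations confirm them."""

    def __init__(self, confirm_after=2):
        self.confirmed = {}
        self.tentative = {}
        self.confirm_after = confirm_after
        self.trace = []

    def record_explicit(self, key, value):
        # Memory Updater: explicit statements are persisted directly.
        self.confirmed[key] = value
        self.trace.append(f"stored explicit preference {key}={value}")

    def observe(self, key, value):
        # Preference Learner: inferred signals accumulate until confirmed.
        count = self.tentative.get((key, value), 0) + 1
        self.tentative[(key, value)] = count
        if count >= self.confirm_after:
            self.confirmed[key] = value
            self.trace.append(
                f"confirmed inferred preference {key}={value} "
                f"after {count} observations")
        else:
            self.trace.append(
                f"tentative preference {key}={value} ({count} observation)")

memory = PreferenceMemory()
memory.record_explicit("tone", "formal")
memory.observe("format", "bullet points")
memory.observe("format", "bullet points")
```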
APS operates at a higher abstraction level than agent orchestration frameworks.
- Execution frameworks define how agents run
- APS defines how agents reason
APS can be mapped onto:
- Graph-based orchestrators
- Multi-agent workflow engines
- Custom runtime environments
It does not replace these systems; it complements them by providing a shared reasoning vocabulary.
APS formalizes reasoning semantics rather than execution topology.
This separation enables:
- Clearer system design
- Safer experimentation
- Improved maintainability
- Stronger audit and governance capabilities
APS is best understood as a design system for agent cognition, not an SDK.
APS is a good fit when:

- You care about predictable reasoning
- You need evaluation, verification, or refinement loops
- You must produce artifacts, traces, or audits
- You are designing a platform, not a one-off agent
- Multiple agents or steps must follow consistent reasoning contracts
APS is likely unnecessary for:

- One-shot chatbots
- Simple RAG Q&A
- Latency-critical paths with no evaluation
- Prompt-only experimentation
- Systems where reasoning transparency does not matter
To set expectations, APS is:

- Not an execution framework
- Not a prompt library
- Not a replacement for orchestration tools
- Not required for all agent systems
Contributions will be welcome once the repository is made public.
Areas of interest include:
- New pattern definitions
- Clarified invariants and contracts
- Reference mappings to existing runtimes
- Evaluation and traceability guidelines