
Epic 2: Integrate structured output into core workflows (M28) #549

@bug-ops

Description


Context

M27 (#474) added chat_typed<T>() to LlmProvider and an Extractor<T> utility, but neither is used by any core workflow yet. This epic integrates structured output where it provides measurable benefit.
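The M27 signatures are not reproduced in this epic; as a rough std-only sketch of the intended calling pattern (the LlmProvider and MockProvider names come from the issue text, but the method signature and the parser-function parameter are assumptions, not the actual API):

```rust
// Hypothetical sketch of the chat_typed calling pattern. The real M27 API
// presumably deserializes JSON via serde; a plain parser function stands in
// here so the example stays std-only.
pub trait LlmProvider {
    /// Ask the model for a reply and parse it into `T`; `None` on parse failure.
    fn chat_typed<T>(&self, prompt: &str, parse: fn(&str) -> Option<T>) -> Option<T>;
}

/// Test double that returns a canned reply instead of calling a model.
pub struct MockProvider {
    pub canned: String,
}

impl LlmProvider for MockProvider {
    fn chat_typed<T>(&self, _prompt: &str, parse: fn(&str) -> Option<T>) -> Option<T> {
        parse(&self.canned)
    }
}
```

The point of the typed entry point is that callers receive a value of a known shape or a clean failure, never free text they must re-parse.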

Analysis

Not candidates (plain text output, no JSON needed):

  • Self-learning (learning.rs) — LLM returns Markdown skill body, not structured data
  • Summarization (semantic.rs) — LLM returns free-form text summary
  • Legacy fenced blocks (MCP/scrape executors) — these parse JSON out of a free-text response after the fact; chat_typed operates at the request level, not at the text-extraction level
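To make the request-level vs text-extraction distinction concrete: text-level extraction digs a JSON payload out of an already-generated reply, roughly like the sketch below (a hypothetical helper, not the executors' actual code), whereas chat_typed constrains the request itself.

```rust
/// Text-level extraction as the legacy path does it: pull the JSON payload
/// out of a ```json fenced block in a free-form model reply.
fn extract_fenced_json(reply: &str) -> Option<&str> {
    // Locate the opening fence, then the next closing fence after it.
    let start = reply.find("```json")? + "```json".len();
    let end = reply[start..].find("```")? + start;
    Some(reply[start..end].trim())
}
```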

Candidates for integration:

  1. Skill matching / intent classification — when the agent decides which skill to invoke, a structured response with { skill_name, confidence, reasoning } would be more reliable than regex/text heuristics
  2. Orchestrator model selection — the router currently selects models; structured output would ensure deterministic { model, reason } responses
  3. Structured summarization — instead of free-form summaries, extract { key_facts, entities, sentiment } for richer semantic memory
  4. Self-learning evaluation — before improving a skill, evaluate with structured { should_improve, issues, severity } instead of heuristics
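Sketched concretely for candidate 1 (field names taken from the list above; the key=value wire format is an illustrative std-only stand-in for the JSON the real Extractor<T> would handle):

```rust
/// Typed result of intent classification (candidate 1). Field names follow
/// the epic text; the parsing below is a stand-in for serde deserialization.
#[derive(Debug, PartialEq)]
pub struct SkillMatch {
    pub skill_name: String,
    pub confidence: f32,
    pub reasoning: String,
}

impl SkillMatch {
    /// Parse a "key=value" line format; `None` if any field is missing.
    pub fn parse(text: &str) -> Option<SkillMatch> {
        let mut skill_name = None;
        let mut confidence = None;
        let mut reasoning = None;
        for line in text.lines() {
            let (key, value) = line.split_once('=')?;
            match key.trim() {
                "skill_name" => skill_name = Some(value.trim().to_string()),
                "confidence" => confidence = value.trim().parse::<f32>().ok(),
                "reasoning" => reasoning = Some(value.trim().to_string()),
                _ => {}
            }
        }
        Some(SkillMatch {
            skill_name: skill_name?,
            confidence: confidence?,
            reasoning: reasoning?,
        })
    }
}
```

The other three candidates follow the same pattern with their own response structs.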

Issues

Priority

Intent classification (#550) > model selection (#551) > summarization (#552) > self-learning eval (#553)

Verification

Each issue must demonstrate that structured output produces more reliable results than the current approach, with unit tests using MockProvider.
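As an illustration of the kind of reliability test each issue could carry (hypothetical helpers, not the M27 API): a free-text heuristic can mis-pick a skill that is merely mentioned in the model's prose, while reading a dedicated structured field is unambiguous.

```rust
/// Naive text heuristic: pick the first known skill mentioned anywhere in the reply.
fn heuristic_pick(reply: &str, skills: &[&str]) -> Option<String> {
    skills.iter().find(|s| reply.contains(*s)).map(|s| s.to_string())
}

/// Structured pick: read only the dedicated field, ignoring surrounding prose.
fn structured_pick(reply: &str) -> Option<String> {
    reply
        .lines()
        .find_map(|l| l.strip_prefix("skill_name="))
        .map(|s| s.trim().to_string())
}
```

The real tests would drive the same comparison through MockProvider with canned replies.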


Labels

M28 (Milestone 28: VectorStore Abstraction), epic (Milestone-level tracking issue), llm (LLM provider related)
