Conversation

@dixoxib dixoxib commented Jan 31, 2026

Fixes #11497

Summary

  • Add OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX flag to preserve agent prefix cache during compaction
  • Add OPENCODE_EXPERIMENTAL_COMPACTION_PROMPT environment variable for customizable compaction prompts
  • Update compaction logic to use detailed session summary format when no custom prompt is provided

Why Prefix Preservation Matters

When enabled, OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX reuses the original agent's prefix cache (tools + system prompts) instead of switching to the generic compaction agent. This provides significant benefits for providers that implement prefix caching:

  1. Cache Hit Optimization: Providers such as Anthropic Claude and OpenAI GPT-4 implement prefix caching to reduce token cost and latency. Preserving the agent prefix during compaction lets the request reuse the cached prefix (~99% cache hits) instead of invalidating it entirely (a 100% cache miss), avoiding a fresh cache build-up, improving performance (especially with slower local LLMs), and reducing costs by up to 90% (https://platform.openai.com/docs/guides/prompt-caching).

  2. Consistent Tool Context: Maintaining the same tool definitions and system prompts ensures the LLM continues with the exact same capabilities and behavior context, avoiding context-switching overhead.

  3. Seamless Continuation: The session continues with identical agent characteristics, preserving specialized instructions or model-specific optimizations from the original session.
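The cache-hit benefit above can be sketched in a few lines. This is a hypothetical illustration, not code from this PR: `cachedPrefixLength`, `agentPrefix`, and `compactionPrefix` are made-up names modeling how a provider's prefix cache reuses only the longest matching leading span of a request.

```typescript
// Hypothetical sketch: why keeping the agent prefix preserves cache hits.
// Providers cache the longest matching leading token span of a request;
// swapping to a generic compaction agent changes the very first token,
// so nothing in the cache matches.

function cachedPrefixLength(previous: string[], next: string[]): number {
  // Length of the shared leading span, i.e. what a prefix cache can reuse.
  let n = 0;
  while (n < previous.length && n < next.length && previous[n] === next[n]) n++;
  return n;
}

const agentPrefix = ["system: coding agent", "tool: read", "tool: edit"];
const compactionPrefix = ["system: summarizer"]; // generic compaction agent

const session = [...agentPrefix, "user: long history..."];

// Without the flag: the prefix is replaced, so the cache misses entirely.
const missed = cachedPrefixLength(session, [...compactionPrefix, "user: summarize"]);

// With OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX: same prefix, cache hits.
const hit = cachedPrefixLength(session, [...agentPrefix, "user: summarize"]);

console.log(missed, hit); // prints "0 3"
```

Under this toy model, reusing the agent prefix keeps the full three-entry prefix cached, while switching agents throws all of it away.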

Changes

  1. Flag additions: Two new experimental flags for compaction configuration
  2. Prefix cache preservation: When enabled, compaction reuses the original agent's prefix cache (tools + system prompts)
  3. Customizable default prompt: Lets users supply their own session summary prompt, adapted to their project's workflow
  4. API export: Made SessionPrompt.resolveTools exportable for reuse in compaction
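The two flags could be resolved along these lines. This is an illustrative sketch only: the environment variable names come from the PR description, but `compactionConfig` and `DEFAULT_COMPACTION_PROMPT` are hypothetical names, and the actual resolution logic in opencode may differ.

```typescript
// Hypothetical sketch of reading the two experimental compaction flags.
// The env var names match the PR; everything else is illustrative.

const DEFAULT_COMPACTION_PROMPT =
  "Provide a detailed summary of the session so far."; // placeholder default

function compactionConfig(env: Record<string, string | undefined>) {
  return {
    // Flag is opt-in: anything other than the literal "true" leaves it off.
    preservePrefix:
      env.OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX === "true",
    // A custom prompt overrides the default; blank values fall through.
    prompt:
      env.OPENCODE_EXPERIMENTAL_COMPACTION_PROMPT?.trim() ||
      DEFAULT_COMPACTION_PROMPT,
  };
}

const cfg = compactionConfig({
  OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX: "true",
});
console.log(cfg.preservePrefix); // prints "true"
console.log(cfg.prompt === DEFAULT_COMPACTION_PROMPT); // prints "true"
```

Treating the unset prompt as "use the detailed default" matches the summary's description that the detailed session summary format applies when no custom prompt is provided.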

Testing

Enable with:

export OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX=true
export OPENCODE_EXPERIMENTAL_COMPACTION_PROMPT="Custom prompt if needed"

Note

This is an experimental feature behind feature flags, allowing for gradual rollout and user feedback before potential stabilization.

@github-actions
Contributor

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Based on my search results, I found one potentially related PR that might be addressing a similar compaction issue:

PR #11453 - fix(opencode): prevent context overflow during compaction
#11453

This PR appears to be related to compaction and context handling, which is closely connected to the new compaction prompt and prefix preservation features in PR #11492.

However, the other compaction-related PRs (#4710, #4616, #7104, #7824) appear to be older and addressing different aspects of the compaction feature (token metadata, context arguments, hybrid pipelines, branching).

PR #11492 is the current PR you're checking, so it's excluded from the duplicate consideration.

dixoxib and others added 5 commits January 31, 2026 18:07
reverted change to default compaction prompt
usePrefixCache does not shorten the context by removing tools, so the "compact" flag was still set on the turn without recalculation. This fix sets it manually to "continue"; long term, the recalculation should be performed before setting the compact flag.
Development

Successfully merging this pull request may close these issues.

Experimental compaction improvements (prefix preservation + prompt customization)