feat: add experimental compaction prompt and preserve prefix support #11492
Fixes #11497
Summary
- `OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX` flag to preserve the agent prefix cache during compaction
- `OPENCODE_EXPERIMENTAL_COMPACTION_PROMPT` environment variable for customizable compaction prompts
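A minimal sketch of how these variables might be read at startup; only the environment variable names come from this PR, while the helper shape, default prompt text, and truthiness rule are assumptions:

```ts
// Sketch only: the env var names are from this PR; everything else
// (config object, default prompt, value parsing) is illustrative.
const DEFAULT_COMPACTION_PROMPT =
  "Summarize the conversation so far, preserving key decisions, file paths, and open tasks."

export const compactionConfig = {
  // Opt in to reusing the original agent's prefix (tools + system prompts)
  // during compaction instead of switching to the generic compaction agent.
  preservePrefix: process.env["OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX"] === "true",
  // Custom compaction prompt, falling back to the built-in default.
  prompt: process.env["OPENCODE_EXPERIMENTAL_COMPACTION_PROMPT"] ?? DEFAULT_COMPACTION_PROMPT,
}
```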
Why Prefix Preservation Matters
When enabled, `OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX` reuses the original agent's prefix cache (tools + system prompts) instead of switching to the generic compaction agent (see the sketch after this list). This brings significant benefits for providers that implement prefix caching:
- Cache Hit Optimization: Providers such as Anthropic (Claude) and OpenAI (GPT-4) use prefix caching to reduce token cost and latency. Preserving the agent prefix means the compaction request is served almost entirely from cache hits instead of entirely from cache misses, so no new cache has to be built up, performance improves (especially with slower local LLMs), and costs drop by up to 90% (https://platform.openai.com/docs/guides/prompt-caching).
- Consistent Tool Context: Maintaining the same tool definitions and system prompts ensures the LLM continues with the exact same capabilities and behavior context, avoiding context-switching overhead.
- Seamless Continuation: The session continues with identical agent characteristics, preserving specialized instructions or model-specific optimizations from the original session.
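A rough sketch of the branch this flag introduces, using hypothetical `AgentPrefix` and `ToolDef` shapes; the actual opencode implementation will differ in detail:

```ts
// Hypothetical types and names; only the flag's described behavior comes from this PR.
interface ToolDef {
  name: string
  description: string
}

interface AgentPrefix {
  system: string[] // system prompts sent at the start of every request
  tools: ToolDef[] // tool definitions included in the cached prefix
}

function compactionPrefix(
  originalAgent: AgentPrefix,
  genericCompactionAgent: AgentPrefix,
  preservePrefix: boolean,
): AgentPrefix {
  // With the flag on, keep the original agent's prefix so the provider's
  // prefix cache (system prompts + tools) is reused rather than rebuilt.
  if (preservePrefix) return originalAgent
  // Default behavior: switch to the generic compaction agent's prefix.
  return genericCompactionAgent
}
```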
Changes
- `SessionPrompt.resolveTools` made exportable for reuse in compaction
Testing
Enable with:
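```
# Illustrative values only; the exact truthy format and prompt text are assumptions.
export OPENCODE_EXPERIMENTAL_COMPACTION_PRESERVE_PREFIX=true
export OPENCODE_EXPERIMENTAL_COMPACTION_PROMPT="Summarize the session, keeping file paths and open tasks."
```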
Note
This is an experimental feature behind feature flags, allowing for gradual rollout and user feedback before potential stabilization.