README.md: 4 changes (2 additions & 2 deletions)
@@ -19,8 +19,6 @@ Add to your OpenCode config:

Using `@latest` ensures you always get the newest version automatically when OpenCode starts.

-> **Note:** If you use OAuth plugins (e.g., for Google or other services), place this plugin last in your `plugin` array to avoid interfering with their authentication flows.
-
Restart OpenCode. The plugin will automatically start optimizing your sessions.
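
For reference, a minimal sketch of what the resulting config entry might look like, assuming OpenCode's JSON config and its `plugin` array; the package name `opencode-dcp` is a placeholder, not taken from this diff:

```json
{
  "plugin": ["opencode-dcp@latest"]
}
```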

## How Pruning Works
@@ -49,6 +47,8 @@ LLM providers like Anthropic and OpenAI cache prompts based on exact prefix matching.

**Trade-off:** You lose some cache read benefits but gain larger token savings from reduced context size and performance improvements through reduced context poisoning. In most cases, the token savings outweigh the cache miss cost—especially in long sessions where context bloat becomes significant.

+> **Note:** In testing, cache hit rates were approximately 65% with DCP enabled vs 85% without.
+
**Best use case:** Providers that meter usage per request, such as GitHub Copilot and Google Antigravity, see no negative price impact.
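
As a rough sketch of why pruning trades cache hits for a smaller context under exact-prefix matching (illustrative only; the message values are made up and this is not the plugin's actual logic):

```typescript
// Providers reuse cached work only for the longest leading run of
// messages that exactly matches the previously cached request.
const cached = ["system", "user-1", "assistant-1", "user-2"];
const pruned = ["system", "user-1", "user-2"]; // "assistant-1" pruned away

let hits = 0;
while (hits < pruned.length && pruned[hits] === cached[hits]) hits++;
console.log(hits); // 2 -> everything after the pruned message misses the cache
```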

## Configuration