
Vertex AI - Context Length / Comprehension Broken #5830

@TheFynx

Description


App Version

3.23.12

API Provider

Google Vertex AI

Model Used

Claude 4.0 Sonnet, Gemini 2.5 Pro

Roo Code Task Links (Optional)

No response

🔁 Steps to Reproduce

OS: macOS 15.5 (24F74)
VSCode: 1.101.1

Settings: Using Codebase Indexing, Reasoning Enabled, Context Condensing at 75%

Exact Actions: General use - any new task/conversation; every session's context is broken/reported incorrectly.

Context starts at 0 and then jumps around randomly, never going above ~1,500 (the highest I've seen so far). The last session reported 371 when it fell over due to prompt/context length.
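For reference, a minimal sketch of the arithmetic the 75% condensing setting implies (function and parameter names are hypothetical, not Roo Code's actual implementation), showing why a broken token count like 371 would never trigger condensing even when the real prompt is near the limit:

```python
def should_condense(context_tokens: int, context_window: int, threshold: float = 0.75) -> bool:
    # Condense once the conversation uses more than `threshold` of the window.
    return context_tokens > threshold * context_window

# With a 200K-token window, condensing should trigger at 150,000 tokens.
print(should_condense(371, 200_000))      # the broken reported count never triggers it
print(should_condense(192_514, 200_000))  # the actual prompt size would have triggered it
```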

I switched to OpenRouter as a test and everything went back to fully functional; this seems to affect only Vertex (at least among the services I have access to/use).

Normal conversations jump to 5-12K context on the first interaction - with Vertex it lists something like 5-7.

💥 Outcome Summary

Expected

Roo Code to condense the context and keep going.

Got

400 {"type":"error","error":{"type":"invalid_request_error","message":"input length and max_tokens exceed context limit: 192514 + 16384 > 200000, decrease input length or max_tokens and try again"}}

Attempted to manually condense context

Failed to condense context
Context size increased during condensing; skipping this attempt

📄 Relevant Logs or Errors (Optional)

Metadata

Labels: Issue - In Progress (Someone is actively working on this. Should link to a PR soon.), bug (Something isn't working)

Status: Done