fix: detect truncated LLM responses in apps extension #7354

Merged

DOsinga merged 2 commits into block:main from fresh3nough:fix/7239-cli-progress-max-tokens on Feb 23, 2026

Conversation

@fresh3nough
Contributor

Summary

Adds truncation detection to the apps extension's inner LLM calls. When the LLM response hits the token limit before generating complete app content, users now get a clear error message instead of the cryptic "missing field `html`" serde error.

Closes #7239

Changes

  • In generate_new_app_content() and generate_updated_app_content(), check if output_tokens >= max_tokens after the LLM call and return a descriptive error when truncation is detected
  • Added unit tests for extract_tool_response covering both truncated (missing html) and complete responses
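The check described above can be sketched roughly as follows. This is a minimal illustration, not the actual goose code: `Usage` and `check_truncation` are hypothetical names, and the real implementation presumably reads these counts from the provider's response struct inside generate_new_app_content() and generate_updated_app_content().

```rust
// Hypothetical sketch of the truncation check; names and types are
// illustrative, not taken from the goose codebase.
struct Usage {
    output_tokens: u32,
    max_tokens: u32,
}

fn check_truncation(usage: &Usage) -> Result<(), String> {
    // If the model emitted as many tokens as it was allowed to, the
    // response was almost certainly cut off mid-generation, so fail
    // with a descriptive error instead of letting serde report a
    // confusing "missing field" error later.
    if usage.output_tokens >= usage.max_tokens {
        return Err(
            "App content generation was truncated because the response hit the token limit"
                .to_string(),
        );
    }
    Ok(())
}
```

Failing fast here means the user sees a message about the token limit rather than a downstream parse error from the incomplete JSON.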

Note: the max_tokens = Some(16384) fix for the inner LLM calls was already merged to main via #7247.

How to test

cargo test -p goose -- agents::platform_extensions::apps::tests

Reproduction

  1. Use a provider whose default max output tokens is less than 16384 (e.g. Databricks with databricks-claude-opus-4-6)
  2. Ask goose to create a non-trivial app
  3. Before this fix: cryptic "Error: Failed to parse tool response: missing field `html`"
  4. After this fix: clear "App content generation was truncated because the response hit the token limit" error

@fresh3nough
Contributor Author

should return a meaningful error message now @DOsinga

Collaborator

@DOsinga DOsinga left a comment

thanks. can you remove the tests?

Cody and others added 2 commits February 23, 2026 16:01
Signed-off-by: Cody <166262726+kowirth@users.noreply.github.com>
Signed-off-by: Cody <251773092+fresh3nough@users.noreply.github.com>
Signed-off-by: Cody <251773092+fresh3nough@users.noreply.github.com>
@fresh3nough fresh3nough force-pushed the fix/7239-cli-progress-max-tokens branch from 369cf86 to ef69782 on February 23, 2026 21:01
@fresh3nough
Contributor Author

removed @DOsinga

@DOsinga DOsinga added this pull request to the merge queue Feb 23, 2026
@DOsinga
Collaborator

DOsinga commented Feb 23, 2026

thanks @fresh3nough !

@fresh3nough
Contributor Author

np :)

Merged via the queue into block:main with commit e870375 Feb 23, 2026
20 checks passed
michaelneale added a commit that referenced this pull request Feb 23, 2026
…xt-edit

* origin/main: (35 commits)
  docs: generate manpages (#7443)
  Blog/goose v1 25 0 release (#7433)
  fix: detect truncated LLM responses in apps extension (#7354)
  fix: removed unnecessary version for goose acp macro dependency (#7428)
  add flag to hide select voice providers (#7406)
  New navigation settings layout options and styling (#6645)
  refactor: MCP-compliant theme tokens and CSS class rename (#7275)
  Redirect llama.cpp logs through tracing to avoid polluting CLI stdout/stderr (#7434)
  refactor: change open recipe in new window to pass recipe id (#7392)
  fix: handle truncated tool calls that break conversation alternation (#7424)
  streamline some github actions (#7430)
  Enable bedrock prompt cache (#6710)
  fix: use BEGIN IMMEDIATE to prevent SQLite deadlocks (#7429)
  Display working dir (#7419)
  dev: add cmake to hermitized env (#7399)
  refactor: remove allows_unlisted_models flag, always allow custom model entry (#7255)
  feat: expose context window utilization to agent via MOIM (#7418)
  Small model naming (#7394)
  chore(deps): bump ajv in /documentation (#7416)
  doc: groq models (#7404)
  ...
lifeizhou-ap added a commit that referenced this pull request Feb 24, 2026
* main:
  Simplified custom model flow with canonical models (#6934)
  feat: simplify the text editor to be more like pi (#7426)
  docs: add YouTube short embed to Neighborhood extension tutorial (#7456)
  fix: flake.nix build failure and deprecation warning (#7408)
  feat(claude-code): add permission prompt routing for approve mode (#7420)
  docs: generate manpages (#7443)
  Blog/goose v1 25 0 release (#7433)
  fix: detect truncated LLM responses in apps extension (#7354)
  fix: removed unnecessary version for goose acp macro dependency (#7428)
  add flag to hide select voice providers (#7406)
  New navigation settings layout options and styling (#6645)
  refactor: MCP-compliant theme tokens and CSS class rename (#7275)
  Redirect llama.cpp logs through tracing to avoid polluting CLI stdout/stderr (#7434)
  refactor: change open recipe in new window to pass recipe id (#7392)
  fix: handle truncated tool calls that break conversation alternation (#7424)
  streamline some github actions (#7430)
  Enable bedrock prompt cache (#6710)
  fix: use BEGIN IMMEDIATE to prevent SQLite deadlocks (#7429)
  Display working dir (#7419)

Development

Successfully merging this pull request may close these issues.

Bug: Apps extension inner LLM call has no max_tokens, causing truncation and 'missing field html' error
