
Fix agent output validation to prevent false verified status #807

Merged
gsxdsm merged 5 commits into AutoMaker-Org:v0.15.0rc from gsxdsm:fix/cursor-fix
Feb 25, 2026

Conversation

gsxdsm (Collaborator) commented Feb 24, 2026

Summary

  • Add agent output validation to prevent features from being marked as 'verified' when CLI providers (like Cursor) exit without doing meaningful work
  • Check for tool usage markers ('🔧 Tool:') and minimum output length (200 chars) to determine if agent performed real implementation work
  • Ensure features with insufficient agent work are routed to 'waiting_approval' for user review instead of being automatically verified
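The decision rule described above can be sketched as follows. This is an illustrative reconstruction from the PR description, not the exact code in execution-service.ts (the real implementation also reads agent-output.md from disk and emits warning logs):

```typescript
// Constants mirror the PR description; the surrounding service code is assumed.
const TOOL_USE_MARKER = "🔧 Tool:";
const MIN_MEANINGFUL_OUTPUT_LENGTH = 200;

// An agent "did work" only if it used at least one tool AND produced
// enough output to plausibly represent a real implementation.
function agentDidWork(agentOutput: string): boolean {
  const trimmed = agentOutput.trim();
  const hasToolUsage = trimmed.includes(TOOL_USE_MARKER);
  const isOutputTooShort = trimmed.length < MIN_MEANINGFUL_OUTPUT_LENGTH;
  return hasToolUsage && !isOutputTooShort;
}

// skipTests takes priority: even valid output routes to user review.
function finalStatus(
  skipTests: boolean,
  agentOutput: string
): "waiting_approval" | "verified" {
  return skipTests || !agentDidWork(agentOutput)
    ? "waiting_approval"
    : "verified";
}
```

With this shape, a quick Cursor exit (no tool marker, little output) can never land on `verified`, which is the core fix.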

Changes

  • execution-service.ts:

    • Added two constants: TOOL_USE_MARKER ('🔧 Tool:') and MIN_MEANINGFUL_OUTPUT_LENGTH (200 characters)
    • Moved agent output reading earlier in the execution flow (before final status determination)
    • Implemented agent work validation logic that checks for tool usage markers and output length
    • Updated final status logic: features are marked 'waiting_approval' if skipTests is true OR if agent output lacks tool usage/sufficient length, otherwise 'verified'
    • Added warning log when agent produces insufficient output with detailed metrics
    • Removed duplicate agent output reading from the summary extraction section
  • execution-service.test.ts:

    • Updated default mock to include realistic agent output with tool usage markers and sufficient length
    • Added comprehensive test suite "executeFeature - agent output validation" with 15 test cases covering:
      • Output validation with varying tool usage and length combinations
      • Edge cases (empty output, whitespace-only, file missing)
      • Boundary testing at 200-character threshold
      • skipTests priority over output validation
      • Success recording and summary extraction behavior with invalid output
      • Event emission with waiting_approval routing
      • Realistic scenarios for Cursor CLI (quick exit) and Claude SDK (multiple tool uses)
      • Correct file path and encoding for agent output reading
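The boundary and whitespace cases in that suite reduce to how the trimmed output length compares to the 200-character threshold. A minimal sketch of that check (the constant name matches the PR description; the helper itself is illustrative):

```typescript
const MIN_MEANINGFUL_OUTPUT_LENGTH = 200;

// Whitespace is trimmed before measuring, so whitespace-only output
// never counts as meaningful regardless of its raw length.
function meetsLengthThreshold(agentOutput: string): boolean {
  return agentOutput.trim().length >= MIN_MEANINGFUL_OUTPUT_LENGTH;
}
```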

Summary by CodeRabbit

  • New Features

    • Agent execution now validates work by detecting tool usage and output meaningfulness, improving final status decisions.
    • Thinking-level normalization utility applied across UI flows for consistent model settings.
  • Bug Fixes / UX

    • More informative provider error messages and stronger executor error handling.
    • Popovers made non-modal and click handlers hardened to avoid accidental interactions.
  • Style

    • Updated chart palette with more vibrant, distinct colors.
  • Tests

    • Extensive unit and end-to-end coverage for agent-output validation, error handling, and thinking-level behavior.
  • Chores

    • Test/CI ports and test env/configs updated; default test timeouts increased for stability.

coderabbitai bot (Contributor) commented Feb 24, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc38439 and 94c9f08.

📒 Files selected for processing (3)
  • .github/workflows/e2e-tests.yml
  • apps/server/.env.example
  • apps/ui/src/electron/constants.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • .github/workflows/e2e-tests.yml
  • apps/ui/src/electron/constants.ts

📝 Walkthrough

Adds agent-output-based validation to set feature final status (tool marker + length checks), exports normalizeThinkingLevelForModel and updates UI usage, enriches provider/executor error messages, shifts test/server ports to 3107/3108, updates chart tokens, and adds extensive unit and E2E tests.

Changes

Cohort / File(s) — Summary

  • Execution service (apps/server/src/services/execution-service.ts): Adds TOOL_USE_MARKER and MIN_MEANINGFUL_OUTPUT_LENGTH; reads agent-output.md, detects tool usage and output length, computes agentDidWork, and sets the final status to waiting_approval or verified with warning logs and comments.
  • Agent output & execution tests (apps/server/tests/unit/services/agent-output-validation.test.ts, apps/server/tests/unit/services/execution-service.test.ts): New and expanded tests for marker format, length thresholds, skipTests interactions, ENOENT/whitespace handling, multi-tool realistic outputs, summary extraction, auto-mode flows, and boundary conditions.
  • Agent executor error handling (apps/server/src/services/agent-executor.ts): Improves provider message error handling: sanitizes and logs errors, explicitly handles error subtypes, and throws sanitized, contextual errors.
  • Provider error enrichment (apps/server/src/providers/copilot-provider.ts, .../cursor-provider.ts, .../gemini-provider.ts): Builds richer fallback enrichedError messages from the available fields (message, code, duration_ms, session_id) instead of a generic "Unknown error".
  • Thinking-level normalization, types & usage (libs/types/src/settings.ts, libs/types/src/index.ts, apps/server/tests/unit/lib/thinking-level-normalization.test.ts): Adds and re-exports normalizeThinkingLevelForModel(model, thinkingLevel); unit tests cover Opus and non-Opus model behavior.
  • UI model-change updates (apps/ui/src/components/dialogs/pr-comment-resolution-dialog.tsx, apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx): Replaces ad-hoc adaptive-thinking checks with normalizeThinkingLevelForModel and updates imports.
  • UI behavior & event handling (apps/ui/src/components/views/settings-view/phase-model-selector.tsx): Adds modal={false} to Popovers and updates onClick handlers to stop propagation and prevent default, avoiding parent interactions.
  • Test timeouts & utilities (apps/ui/tests/utils/core/waiting.ts, apps/ui/tests/utils/views/board.ts, apps/ui/tests/features/planning-mode-fix-verification.spec.ts): Introduces DEFAULT_ELEMENT_TIMEOUT_MS = 10000, replaces hard-coded timeouts with the constant, and makes the add-feature flow more robust.
  • E2E / Playwright & test ports (apps/ui/tests/features/opus-thinking-level-none.spec.ts, apps/ui/playwright.config.ts, apps/ui/scripts/kill-test-servers.mjs, .github/workflows/e2e-tests.yml, start-automaker.sh, docker-compose.dev.yml, apps/server/.env.example): Adds an Opus thinking-level "none" E2E test; changes default test ports from 3007/3008 to 3107/3108 and propagates the env vars across configs and CI.
  • UI constants & test helpers (apps/ui/tests/utils/core/constants.ts, apps/ui/tests/utils/api/client.ts, apps/ui/tests/features/*): Introduces WEB_BASE_URL, derives API_BASE_URL from TEST_SERVER_PORT, and updates tests to import the centralized base URLs.
  • Styling (apps/ui/src/styles/themes/paper.css): Updates --chart-1 through --chart-5 to vibrant oklch color tokens with inline comments.
  • Provider message types (libs/types/src/provider.ts): Extends the ProviderMessage.subtype union with error_during_execution and error_max_budget_usd.
  • Provider & executor tests (apps/server/tests/unit/providers/*.test.ts, apps/server/tests/unit/services/agent-executor.test.ts): Adds tests verifying enriched error fallbacks for Copilot/Cursor/Gemini and extensive AgentExecutor streaming error scenarios.
  • Public exports update (libs/types/src/index.ts): Re-exports normalizeThinkingLevelForModel.
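The provider-error enrichment summarized above can be illustrated like this. The field names (message, code, duration_ms, session_id) come from the walkthrough, but the exact message format the providers build is an assumption:

```typescript
interface ProviderErrorEvent {
  message?: string;
  code?: number;
  duration_ms?: number;
  session_id?: string;
}

// Prefer the provider's own message; otherwise assemble whatever
// context is available instead of falling back to "Unknown error".
function buildEnrichedError(event: ProviderErrorEvent): string {
  if (event.message) return event.message;
  const parts: string[] = [];
  if (event.code !== undefined) parts.push(`code=${event.code}`);
  if (event.duration_ms !== undefined) parts.push(`duration_ms=${event.duration_ms}`);
  if (event.session_id) parts.push(`session_id=${event.session_id}`);
  return parts.length > 0
    ? `Provider error (${parts.join(", ")})`
    : "Unknown error";
}
```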

Sequence Diagram

sequenceDiagram
    participant Executor as Agent Executor
    participant OutputFile as agent-output.md
    participant Service as Execution Service
    participant Validator as Output Validator
    participant StatusDecider as Status Decider

    Executor->>OutputFile: write tool-marked output
    Service->>Validator: validateAgentOutput(feature, skipTests)
    Validator->>OutputFile: read agent-output.md (utf-8)
    Validator->>Validator: detect "🔧 Tool:" marker
    Validator->>Validator: measure output length against MIN_MEANINGFUL_OUTPUT_LENGTH
    Validator->>StatusDecider: return agentDidWork (hasTool && length >= MIN)
    StatusDecider->>StatusDecider: if skipTests OR !agentDidWork -> waiting_approval else -> verified
    StatusDecider-->>Service: finalStatus
    Service-->>Service: continue downstream processing (summary, merge/notify)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

Bug, Tests

Poem

🐰
I sniffed the logs and counted every mark,
Checked tools and lengths until it was dark.
I hopped through ports and colors so bright,
Tests green, errors clearer — I’m happy tonight.
Binky-bop! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — docstring coverage is 76.92%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — the title clearly and concisely describes the main change: fixing agent output validation to prevent a false 'verified' status when work wasn't actually done.



Summary of Changes

Hello @gsxdsm, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the reliability of agent-driven feature completion by introducing robust validation for agent output. It addresses scenarios where CLI-based agents might exit prematurely without performing substantial work, preventing features from being automatically marked as verified. By centralizing and applying thinking level normalization, the PR also enhances consistency in model configuration across the application, ensuring that selected thinking levels are always compatible with the chosen model.

Highlights

  • Agent Output Validation: Implemented agent output validation to prevent features from being falsely marked as 'verified' when CLI providers exit without meaningful work.
  • Meaningful Work Detection: Introduced checks for tool usage markers ('🔧 Tool:') and a minimum output length (200 characters) to determine if an agent performed real implementation work.
  • Status Routing: Ensured features with insufficient agent work are routed to 'waiting_approval' for user review instead of automatic verification.
  • Thinking Level Normalization: Refactored model thinking level normalization logic into a shared utility function and applied it across relevant UI components.
  • Comprehensive Testing: Added extensive unit and end-to-end tests for both agent output validation and thinking level normalization to ensure robustness.
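The normalization helper highlighted above can be sketched as follows. The level names and the Opus heuristic here are assumptions for illustration — the real getThinkingLevelsForModel in libs/types/src/settings.ts drives the actual behavior:

```typescript
type ThinkingLevel = "none" | "low" | "medium" | "high";

// Stand-in for the repo's helper: assume Opus-style adaptive models
// support "none" while other models do not.
function getThinkingLevelsForModel(modelId: string): ThinkingLevel[] {
  return modelId.includes("opus")
    ? ["none", "low", "medium", "high"]
    : ["low", "medium", "high"];
}

// Keep the requested level when the model supports it; otherwise fall
// back to the model's first available level.
function normalizeThinkingLevelForModel(
  modelId: string,
  thinkingLevel: ThinkingLevel
): ThinkingLevel {
  const available = getThinkingLevelsForModel(modelId);
  return available.includes(thinkingLevel) ? thinkingLevel : available[0];
}
```

Centralizing this in one helper is what lets both UI dialogs drop their ad-hoc adaptive-thinking checks.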


Changelog
  • apps/server/src/services/execution-service.ts
    • Added constants for tool use marker and minimum meaningful output length.
    • Integrated agent output reading and validation logic into the feature execution flow.
    • Modified final status determination to consider agent work validation.
    • Removed redundant agent output reading from the summary extraction section.
  • apps/server/tests/unit/lib/thinking-level-normalization.test.ts
    • Added unit tests for the normalizeThinkingLevelForModel utility.
  • apps/server/tests/unit/services/agent-output-validation.test.ts
    • Added contract tests to verify agent output tool marker format and validation logic.
  • apps/server/tests/unit/services/execution-service.test.ts
    • Updated default mock for secureFs.readFile to include realistic agent output.
    • Added a new test suite for executeFeature focusing on agent output validation, covering various scenarios and edge cases.
  • apps/ui/src/components/dialogs/pr-comment-resolution-dialog.tsx
    • Refactored the handleModelChange function to utilize the new normalizeThinkingLevelForModel utility.
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx
    • Refactored the handleModelChange function to utilize the new normalizeThinkingLevelForModel utility.
  • apps/ui/src/styles/themes/paper.css
    • Updated the color definitions for chart variables to use more distinct hues.
  • apps/ui/tests/features/opus-thinking-level-none.spec.ts
    • Added a new end-to-end test to ensure the 'none' thinking level persists correctly for Claude Opus models.
  • libs/types/src/index.ts
    • Exported the new normalizeThinkingLevelForModel function.
  • libs/types/src/settings.ts
    • Implemented the normalizeThinkingLevelForModel function to standardize thinking level selection based on model capabilities.
Activity
  • The pull request was created by gsxdsm.
  • No human activity (comments, reviews, or progress updates) has been recorded on this pull request yet.

gemini-code-assist bot left a comment


Code Review

This pull request introduces a robust validation mechanism for agent output to prevent incorrect 'verified' statuses, which is a significant improvement. The logic is well-implemented and thoroughly tested with new unit and contract tests. Additionally, the refactoring of the model thinking level normalization logic simplifies the UI code and centralizes business logic, improving maintainability. I've left one minor suggestion in a test file to improve variable naming for clarity. Overall, this is a high-quality pull request.

Comment on lines +40 to +41
const hasMinimalOutput = agentOutput.trim().length < MIN_OUTPUT_LENGTH;
const agentDidWork = hasToolUsage && !hasMinimalOutput;

Severity: medium

The variable name hasMinimalOutput is a bit misleading. It's assigned true when agentOutput.trim().length < MIN_OUTPUT_LENGTH, which means the output lacks the minimal length. The production code in execution-service.ts uses isOutputTooShort, which is much clearer. For consistency and improved readability in these contract tests, I suggest renaming this variable.

Suggested change
const hasMinimalOutput = agentOutput.trim().length < MIN_OUTPUT_LENGTH;
const agentDidWork = hasToolUsage && !hasMinimalOutput;
const isOutputTooShort = agentOutput.trim().length < MIN_OUTPUT_LENGTH;
const agentDidWork = hasToolUsage && !isOutputTooShort;

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
libs/types/src/settings.ts (1)

352-371: Prefer model-capability lookup over string heuristics for thinking levels.

normalizeThinkingLevelForModel currently inherits isAdaptiveThinkingModel string checks via getThinkingLevelsForModel. Consider driving supported thinking levels from model definitions (e.g., per-model supportsThinking / maxThinkingLevel) to avoid drift as new models land.
Based on learnings: When modeling AI capabilities, add per-model flags to model definitions (e.g., supportsThinking: boolean) and check capabilities by model ID rather than assuming all models from a provider share the same features. Follow the pattern: extend model definitions with explicit flags and implement lookups in helpers (e.g., profileHasThinking) that query capabilities by model ID to determine support at runtime.
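A sketch of the capability-lookup pattern the reviewer suggests; the flag and helper names (supportsThinking, profileHasThinking) are taken from the comment itself and are assumptions, not the repo's current API:

```typescript
interface ModelDefinition {
  id: string;
  supportsThinking: boolean;
}

// Hypothetical model registry keyed by explicit capability flags
// rather than string heuristics on the model name.
const MODEL_DEFINITIONS: ModelDefinition[] = [
  { id: "claude-opus-4", supportsThinking: true },
  { id: "cursor-small", supportsThinking: false },
];

function profileHasThinking(modelId: string): boolean {
  return (
    MODEL_DEFINITIONS.find((m) => m.id === modelId)?.supportsThinking ?? false
  );
}
```

The advantage over string matching is that adding a new model forces an explicit capability declaration instead of silently inheriting heuristic behavior.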

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/types/src/settings.ts` around lines 352 - 371, The current
normalizeThinkingLevelForModel/getThinkingLevelsForModel logic relies on string
heuristics; update it to consult per-model capability flags instead: extend the
model definitions with supportsThinking:boolean and maxThinkingLevel (or
equivalent) for each model ID, add a helper like profileHasThinking(modelId)
that looks up those flags, and change getThinkingLevelsForModel to return levels
based on those flags/max level rather than provider name heuristics; then have
normalizeThinkingLevelForModel call that capability-driven
getThinkingLevelsForModel/profileHasThinking to pick the correct level
(preserving the selected level when supported, falling back to 'none' or the
model's minimum).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx`:
- Around line 30-32: The import removal of getThinkingLevelsForModel causes
runtime crashes because existing usages still call it; restore the import of
getThinkingLevelsForModel from '@automaker/types' at the top of
add-feature-dialog.tsx (or alternatively replace each call-site to use
normalizeThinkingLevelForModel/sufficient helpers) so that references used in
the component (the calls that currently reference getThinkingLevelsForModel) are
resolved; update imports to include getThinkingLevelsForModel alongside
supportsReasoningEffort and normalizeThinkingLevelForModel, or refactor the two
call sites to use normalizeThinkingLevelForModel and supportsReasoningEffort
consistently.

---

Nitpick comments:
In `@libs/types/src/settings.ts`:
- Around line 352-371: The current
normalizeThinkingLevelForModel/getThinkingLevelsForModel logic relies on string
heuristics; update it to consult per-model capability flags instead: extend the
model definitions with supportsThinking:boolean and maxThinkingLevel (or
equivalent) for each model ID, add a helper like profileHasThinking(modelId)
that looks up those flags, and change getThinkingLevelsForModel to return levels
based on those flags/max level rather than provider name heuristics; then have
normalizeThinkingLevelForModel call that capability-driven
getThinkingLevelsForModel/profileHasThinking to pick the correct level
(preserving the selected level when supported, falling back to 'none' or the
model's minimum).

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0330c70 and cb1da29.

📒 Files selected for processing (10)
  • apps/server/src/services/execution-service.ts
  • apps/server/tests/unit/lib/thinking-level-normalization.test.ts
  • apps/server/tests/unit/services/agent-output-validation.test.ts
  • apps/server/tests/unit/services/execution-service.test.ts
  • apps/ui/src/components/dialogs/pr-comment-resolution-dialog.tsx
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx
  • apps/ui/src/styles/themes/paper.css
  • apps/ui/tests/features/opus-thinking-level-none.spec.ts
  • libs/types/src/index.ts
  • libs/types/src/settings.ts

gsxdsm and others added 2 commits on February 24, 2026 at 17:25:
  • …s test failure, fix port change, move playwright tests to different port
  • …alog.tsx (Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>)
coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (3)
apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx (1)

1413-1415: stopPropagation/preventDefault missing on the equivalent Codex and grouped-model variant buttons.

The same pattern was applied to Claude and ClaudeCompatibleProvider thinking-level buttons (mobile inline and desktop PopoverContent), but the structurally identical Codex reasoning-effort buttons and grouped-model variant buttons were not updated:

  • Claude thinking level — mobile inline: ✅ fixed (lines 1761–1763); desktop PopoverContent: ✅ fixed (lines 1918–1920)
  • Provider thinking level — mobile inline: ✅ fixed (lines 1413–1415); desktop PopoverContent: ✅ fixed (lines 1556–1558)
  • Codex reasoning effort — mobile inline: ❌ lines 916–925; desktop PopoverContent: ❌ lines 1069–1078
  • Grouped model variants — mobile inline: ❌ lines 2023–2026; desktop PopoverContent: ❌ lines 2127–2130

The mobile inline case matters most: without stopPropagation, a click on a variant/effort option can bubble through the DOM and potentially interact with an ancestor interactive element before setOpen(false) runs.

♻️ Apply the same pattern to Codex mobile inline buttons (lines 916–925)
-                  onClick={() => {
+                  onClick={(e) => {
+                    e.stopPropagation();
+                    e.preventDefault();
                     onChange({
                       model: model.id as CodexModelId,
                       reasoningEffort: effort,
                     });
♻️ Apply the same pattern to grouped-model mobile variant buttons (lines 2023–2026)
-                  onClick={() => {
+                  onClick={(e) => {
+                    e.stopPropagation();
+                    e.preventDefault();
                     onChange({ model: variant.id });
                     setExpandedGroup(null);

Also applies to: 1556-1558, 1761-1763, 1918-1920

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx`
around lines 1413 - 1415, Add the same click-guard pattern used for the Provider
thinking-level buttons (the onClick handler that calls e.stopPropagation() and
e.preventDefault() before calling setOpen(false) / updating state) to the Codex
"reasoning-effort" option buttons and the grouped-model "variant" option buttons
in both their mobile-inline and PopoverContent desktop variants; locate the
Codex reasoning-effort handlers (the buttons rendered in the Codex section) and
the grouped-model variant buttons (the buttons rendered for grouped models) and
prepend e.stopPropagation() and e.preventDefault() to their onClick callbacks so
clicks don't bubble to ancestor interactive elements before the menu closes.
apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx (1)

310-313: Unify thinking-level fallback behavior across all initialization paths.

Line 311 correctly uses normalizeThinkingLevelForModel, but dialog open/reset paths still use manual fallback logic (getThinkingLevelsForModel + first element). That can resolve invalid defaults differently depending on entry path. Reusing the helper in all three paths would keep behavior consistent.

♻️ Suggested consistency refactor
-      const availableLevels = getThinkingLevelsForModel(modelId);
-      const effectiveThinkingLevel = availableLevels.includes(defaultThinkingLevel)
-        ? defaultThinkingLevel
-        : availableLevels[0];
+      const effectiveThinkingLevel = normalizeThinkingLevelForModel(
+        modelId,
+        defaultThinkingLevel
+      );
       setModelEntry({
         ...effectiveDefaultFeatureModel,
         thinkingLevel: effectiveThinkingLevel,
       });
-    const resetAvailableLevels = getThinkingLevelsForModel(resetModelId);
-    const resetThinkingLevel = resetAvailableLevels.includes(defaultThinkingLevel)
-      ? defaultThinkingLevel
-      : resetAvailableLevels[0];
+    const resetThinkingLevel = normalizeThinkingLevelForModel(
+      resetModelId,
+      defaultThinkingLevel
+    );
     setModelEntry({
       ...effectiveDefaultFeatureModel,
       thinkingLevel: resetThinkingLevel,
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx`
around lines 310 - 313, Other initialization paths still derive a fallback
thinkingLevel manually with getThinkingLevelsForModel + first element causing
inconsistent defaults; update those paths to call
normalizeThinkingLevelForModel(modelId, entry.thinkingLevel) instead and pass
the result into setModelEntry (or the equivalent state setter used in the dialog
open/reset flows) so all three initialization branches use the same
normalization logic; locate usages referencing getThinkingLevelsForModel and
replace the fallback computation with a call to normalizeThinkingLevelForModel
using the same modelId extraction as in the shown code.
apps/server/tests/unit/services/agent-executor.test.ts (1)

688-990: Add one parity test for executeTasksLoop subtype-error handling.

This suite covers execute() well, but the new subtype-error logic was also added in task execution. A targeted task-loop regression test would lock that path too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/tests/unit/services/agent-executor.test.ts` around lines 688 -
990, Add a unit test that mirrors the existing "result subtype" failure cases
but targets the task-loop path by calling AgentExecutor.executeTasksLoop (on an
AgentExecutor instance) instead of execute; create a mockProvider with getName
and executeQuery that yields a { type: 'result', subtype:
'error_during_execution', ... } event, pass the same
AgentExecutionOptions/callbacks shape used elsewhere, and assert
executeTasksLoop rejects with an error message containing the subtype (e.g.,
'error_during_execution'); this ensures the subtype-error handling added to
executeTasksLoop is covered and parallels the existing execute() tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/server/src/providers/copilot-provider.ts`:
- Around line 392-399: The error throw in executeQuery still uses the raw event
text (new Error(errorEvent.data.message)) and can bypass the enriched fallback;
locate the throw inside executeQuery's event handler that references
errorEvent.data.message and change it to use the same normalized value as the
enrichment logic (use the enrichedError value or build the same fallback string
using errorEvent.data.code when message is falsy) so thrown Errors always
contain the enriched message that session.error receives.

In `@apps/server/src/services/agent-executor.ts`:
- Around line 299-303: The current log call in AgentExecutor is printing raw
provider errors (msg.error) which may contain sensitive data; update the
logger.error invocations (the one using featureId, session_id and the other
occurrence around lines 473-475) to remove raw="${msg.error}" and only include
the sanitized output returned by AgentExecutor.sanitizeProviderError(msg.error)
along with contextual fields (featureId, session_id) — i.e., call logger.error
with a message that references the sanitized variable and context, not the raw
msg.error, and ensure any other logger calls in this file that include msg.error
are changed the same way.
- Around line 308-315: The error handling branch in agent-executor.ts that
checks msg.subtype?.startsWith('error') (and the similar branch around the other
occurrence) currently throws only the subtype string and logs without including
provider error details; update the logger.error and the thrown Error in the
execute flow (refer to variables featureId, msg.subtype, msg.session_id, and
msg.error) to append or include msg.error when present so the thrown Error and
log message contain both the subtype and the provider error detail for easier
debugging.

In `@apps/ui/tests/utils/views/board.ts`:
- Around line 100-104: The test selects the first DOM match with .first() which
can be hidden; update the locator to filter visible buttons by using the
':visible' pseudo-class on the '[data-testid="add-feature-button"]' selector
before calling .first(), then use DEFAULT_ELEMENT_TIMEOUT_MS for both waitFor
and click timeouts (update calls referencing addButton.waitFor and
addButton.click) so the test waits for a visible element and uses consistent
timeout values.

In `@apps/ui/vite.config.mts`:
- Line 256: The Vite dev server launched by the Playwright tests doesn't get
AUTOMAKER_SERVER_PORT set, so the proxy in vite.config.mts still targets port
3008; update the webServer env object in playwright.config.ts to include
AUTOMAKER_SERVER_PORT: String(serverPort) (where serverPort is the variable used
to set TEST_SERVER_PORT) so the Vite dev server's env block passes the correct
port to the proxy; locate the webServer configuration and add the
AUTOMAKER_SERVER_PORT entry alongside existing env keys.

---

Nitpick comments:
In `@apps/server/tests/unit/services/agent-executor.test.ts`:
- Around line 688-990: Add a unit test that mirrors the existing "result
subtype" failure cases but targets the task-loop path by calling
AgentExecutor.executeTasksLoop (on an AgentExecutor instance) instead of
execute; create a mockProvider with getName and executeQuery that yields a {
type: 'result', subtype: 'error_during_execution', ... } event, pass the same
AgentExecutionOptions/callbacks shape used elsewhere, and assert
executeTasksLoop rejects with an error message containing the subtype (e.g.,
'error_during_execution'); this ensures the subtype-error handling added to
executeTasksLoop is covered and parallels the existing execute() tests.

In `@apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx`:
- Around line 310-313: Other initialization paths still derive a fallback
thinkingLevel manually with getThinkingLevelsForModel + first element causing
inconsistent defaults; update those paths to call
normalizeThinkingLevelForModel(modelId, entry.thinkingLevel) instead and pass
the result into setModelEntry (or the equivalent state setter used in the dialog
open/reset flows) so all three initialization branches use the same
normalization logic; locate usages referencing getThinkingLevelsForModel and
replace the fallback computation with a call to normalizeThinkingLevelForModel
using the same modelId extraction as in the shown code.

In
`@apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx`:
- Around line 1413-1415: Add the same click-guard pattern used for the Provider
thinking-level buttons (the onClick handler that calls e.stopPropagation() and
e.preventDefault() before calling setOpen(false) / updating state) to the Codex
"reasoning-effort" option buttons and the grouped-model "variant" option buttons
in both their mobile-inline and PopoverContent desktop variants; locate the
Codex reasoning-effort handlers (the buttons rendered in the Codex section) and
the grouped-model variant buttons (the buttons rendered for grouped models) and
prepend e.stopPropagation() and e.preventDefault() to their onClick callbacks so
clicks don't bubble to ancestor interactive elements before the menu closes.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb1da29 and d4e74a1.

📒 Files selected for processing (18)
  • apps/server/src/providers/copilot-provider.ts
  • apps/server/src/providers/cursor-provider.ts
  • apps/server/src/providers/gemini-provider.ts
  • apps/server/src/services/agent-executor.ts
  • apps/server/tests/unit/providers/copilot-provider.test.ts
  • apps/server/tests/unit/providers/cursor-provider.test.ts
  • apps/server/tests/unit/providers/gemini-provider.test.ts
  • apps/server/tests/unit/services/agent-executor.test.ts
  • apps/ui/playwright.config.ts
  • apps/ui/scripts/kill-test-servers.mjs
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx
  • apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx
  • apps/ui/tests/features/opus-thinking-level-none.spec.ts
  • apps/ui/tests/features/planning-mode-fix-verification.spec.ts
  • apps/ui/tests/utils/core/waiting.ts
  • apps/ui/tests/utils/views/board.ts
  • apps/ui/vite.config.mts
  • libs/types/src/provider.ts
✅ Files skipped from review due to trivial changes (1)
  • apps/ui/playwright.config.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/ui/tests/features/opus-thinking-level-none.spec.ts

Comment on lines +392 to +399
const enrichedError =
  errorEvent.data.message ||
  (errorEvent.data.code
    ? `Copilot agent error (code: ${errorEvent.data.code})`
    : 'Copilot agent error');
return {
  type: 'error',
-  error: errorEvent.data.message || 'Unknown error',
+  error: enrichedError,

⚠️ Potential issue | 🟠 Major

Enriched session.error text can be bypassed at runtime.

Lines 392-399 improve normalization, but executeQuery still throws from the event handler using new Error(errorEvent.data.message) before normalized events are consumed. An empty message still loses your new fallback.

🔧 Suggested patch (align throw-path with enrichment)
-        } else if (event.type === 'session.error') {
-          const errorEvent = event as SdkSessionErrorEvent;
-          sessionError = new Error(errorEvent.data.message);
+        } else if (event.type === 'session.error') {
+          const errorEvent = event as SdkSessionErrorEvent;
+          const enrichedError =
+            errorEvent.data.message ||
+            (errorEvent.data.code
+              ? `Copilot agent error (code: ${errorEvent.data.code})`
+              : 'Copilot agent error');
+          sessionError = new Error(enrichedError);
           sessionComplete = true;
           pushEvent(event);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/providers/copilot-provider.ts` around lines 392 - 399, The
error throw in executeQuery still uses the raw event text (new
Error(errorEvent.data.message)) and can bypass the enriched fallback; locate the
throw inside executeQuery's event handler that references
errorEvent.data.message and change it to use the same normalized value as the
enrichment logic (use the enrichedError value or build the same fallback string
using errorEvent.data.code when message is falsy) so thrown Errors always
contain the enriched message that session.error receives.

Comment on lines +299 to +303
const sanitized = AgentExecutor.sanitizeProviderError(msg.error);
logger.error(
`[execute] Feature ${featureId} received error from provider. ` +
`raw="${msg.error}", sanitized="${sanitized}", session_id=${msg.session_id ?? 'none'}`
);

⚠️ Potential issue | 🟠 Major

Stop logging raw provider error payloads.

Lines 301 and 473 include raw="${msg.error}". Provider errors can contain sensitive content (tokens, file snippets, user data); log only the sanitized output.

🔧 Suggested patch
-          logger.error(
-            `[execute] Feature ${featureId} received error from provider. ` +
-              `raw="${msg.error}", sanitized="${sanitized}", session_id=${msg.session_id ?? 'none'}`
-          );
+          logger.error(
+            `[execute] Feature ${featureId} received error from provider. ` +
+              `error="${sanitized}", session_id=${msg.session_id ?? 'none'}`
+          );
-          logger.error(
-            `[executeTasksLoop] Feature ${featureId} task ${task.id} received error from provider. ` +
-              `raw="${msg.error}", sanitized="${sanitized}", session_id=${msg.session_id ?? 'none'}`
-          );
+          logger.error(
+            `[executeTasksLoop] Feature ${featureId} task ${task.id} received error from provider. ` +
+              `error="${sanitized}", session_id=${msg.session_id ?? 'none'}`
+          );

Also applies to: 473-475

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/services/agent-executor.ts` around lines 299 - 303, The
current log call in AgentExecutor is printing raw provider errors (msg.error)
which may contain sensitive data; update the logger.error invocations (the one
using featureId, session_id and the other occurrence around lines 473-475) to
remove raw="${msg.error}" and only include the sanitized output returned by
AgentExecutor.sanitizeProviderError(msg.error) along with contextual fields
(featureId, session_id) — i.e., call logger.error with a message that references
the sanitized variable and context, not the raw msg.error, and ensure any other
logger calls in this file that include msg.error are changed the same way.

Comment on lines +308 to +315
} else if (msg.subtype?.startsWith('error')) {
// Non-success result subtypes from the SDK (error_max_turns, error_during_execution, etc.)
logger.error(
`[execute] Feature ${featureId} ended with error subtype: ${msg.subtype}. ` +
`session_id=${msg.session_id ?? 'none'}`
);
throw new Error(`Agent execution ended with: ${msg.subtype}`);
} else {

⚠️ Potential issue | 🟡 Minor

Preserve provider error detail for result error subtypes.

Lines 314 and 486 throw only the subtype string. If msg.error is present, that context is dropped and debugging becomes harder.

🔧 Suggested patch
-            throw new Error(`Agent execution ended with: ${msg.subtype}`);
+            const detail = AgentExecutor.sanitizeProviderError(msg.error);
+            const suffix = detail !== 'Unknown error' ? ` - ${detail}` : '';
+            throw new Error(`Agent execution ended with: ${msg.subtype}${suffix}`);
-            throw new Error(`Agent execution ended with: ${msg.subtype}`);
+            const detail = AgentExecutor.sanitizeProviderError(msg.error);
+            const suffix = detail !== 'Unknown error' ? ` - ${detail}` : '';
+            throw new Error(`Agent execution ended with: ${msg.subtype}${suffix}`);

Also applies to: 481-487

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/services/agent-executor.ts` around lines 308 - 315, The error
handling branch in agent-executor.ts that checks
msg.subtype?.startsWith('error') (and the similar branch around the other
occurrence) currently throws only the subtype string and logs without including
provider error details; update the logger.error and the thrown Error in the
execute flow (refer to variables featureId, msg.subtype, msg.session_id, and
msg.error) to append or include msg.error when present so the thrown Error and
log message contain both the subtype and the provider error detail for easier
debugging.

Comment on lines +100 to +104
// There may be multiple add-feature buttons on the page (header, empty state).
// Use .first() to click the first visible one.
const addButton = page.locator('[data-testid="add-feature-button"]').first();
await addButton.waitFor({ state: 'visible', timeout: DEFAULT_ELEMENT_TIMEOUT_MS });
await addButton.click({ timeout: 5000 });

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Playwright locator first() visibility selector behavior CSS pseudo-selector

💡 Result:

  • locator.first() does not mean “first visible”—it returns a new locator targeting the first match in DOM order from the current locator’s match set, regardless of visibility. (playwright.dev)
  • If that first match is hidden, an action like click() will typically auto-wait for actionability (including being visible) and can timeout if it never becomes visible. (This is why relying on first() can be flaky when multiple matches exist.) (playwright.dev)
  • To select only visible elements, use either:
    • Playwright locator filtering: page.locator('button').filter({ visible: true }).first() (playwright.dev)
    • Playwright’s custom CSS pseudo-class :visible: page.locator('button:visible').first() (or just .click() if that becomes unique). (playwright.dev)
  • Note: :visible is Playwright-specific (not standard CSS), available in Playwright’s CSS selector engine. (playwright.dev)


.first() here does not guarantee "first visible" and can cause test flakiness.

At Line 102, .first() selects the first DOM match, not the first visible one. If the first element is hidden, Line 103's waitFor() can timeout even when another add button is visible elsewhere on the page. This is a reliability issue when multiple add-feature buttons exist (header and empty state).

Proposed fix
-  const addButton = page.locator('[data-testid="add-feature-button"]').first();
+  const addButton = page.locator('[data-testid="add-feature-button"]:visible').first();
   await addButton.waitFor({ state: 'visible', timeout: DEFAULT_ELEMENT_TIMEOUT_MS });
-  await addButton.click({ timeout: 5000 });
+  await addButton.click({ timeout: DEFAULT_ELEMENT_TIMEOUT_MS });

Use the :visible pseudo-class to filter only visible buttons before calling .first(). Also align the click timeout with DEFAULT_ELEMENT_TIMEOUT_MS for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/ui/tests/utils/views/board.ts` around lines 100 - 104, The test selects
the first DOM match with .first() which can be hidden; update the locator to
filter visible buttons by using the ':visible' pseudo-class on the
'[data-testid="add-feature-button"]' selector before calling .first(), then use
DEFAULT_ELEMENT_TIMEOUT_MS for both waitFor and click timeouts (update calls
referencing addButton.waitFor and addButton.click) so the test waits for a
visible element and uses consistent timeout values.

proxy: {
  '/api': {
-    target: 'http://localhost:' + (process.env.AUTOMAKER_SERVER_PORT ?? '5008'),
+    target: 'http://localhost:' + (process.env.AUTOMAKER_SERVER_PORT ?? '3008'),

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n "AUTOMAKER_SERVER_PORT" --type ts --type mjs --type js -C3

Repository: AutoMaker-Org/automaker

Length of output: 94


🏁 Script executed:

rg -n "AUTOMAKER_SERVER_PORT" -C3

Repository: AutoMaker-Org/automaker

Length of output: 2550


🏁 Script executed:

fd -n "playwright.config.ts" -x cat -n {}

Repository: AutoMaker-Org/automaker

Length of output: 297


🏁 Script executed:

fd "playwright.config.ts" -x cat {}

Repository: AutoMaker-Org/automaker

Length of output: 3110


🏁 Script executed:

fd "kill-test-servers.mjs" -x cat {}

Repository: AutoMaker-Org/automaker

Length of output: 1917


Add AUTOMAKER_SERVER_PORT=3108 to the Vite dev server environment in playwright.config.ts.

The proxy in vite.config.mts defaults to port 3008, but the test backend runs on port 3108 (via TEST_SERVER_PORT). Currently, playwright.config.ts does not pass AUTOMAKER_SERVER_PORT=3108 to the Vite dev server's env block, causing API calls to be proxied to the wrong port and fail with ECONNREFUSED. Add AUTOMAKER_SERVER_PORT: String(serverPort), to the Vite webServer env configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/ui/vite.config.mts` at line 256, The Vite dev server launched by the
Playwright tests doesn't get AUTOMAKER_SERVER_PORT set, so the proxy in
vite.config.mts still targets port 3008; update the webServer env object in
playwright.config.ts to include AUTOMAKER_SERVER_PORT: String(serverPort) (where
serverPort is the variable used to set TEST_SERVER_PORT) so the Vite dev
server's env block passes the correct port to the proxy; locate the webServer
configuration and add the AUTOMAKER_SERVER_PORT entry alongside existing env
keys.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
apps/ui/vite.config.mts (1)

252-260: Verify AUTOMAKER_SERVER_PORT is wired into the Playwright webServer env block.

The proxy target now correctly reads process.env.AUTOMAKER_SERVER_PORT ?? '3008'. However, if playwright.config.ts does not pass AUTOMAKER_SERVER_PORT into the Vite dev server's env block when spinning up the test frontend, the proxy will still target the default port (3008) rather than the test backend on 3108, causing ECONNREFUSED for all API calls in E2E tests.

#!/bin/bash
# Confirm AUTOMAKER_SERVER_PORT is set in the webServer env block of playwright.config.ts
fd "playwright.config.ts" -x grep -n "AUTOMAKER_SERVER_PORT" {}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/ui/vite.config.mts` around lines 252 - 260, The Vite dev server proxy in
vite.config.mts now reads process.env.AUTOMAKER_SERVER_PORT but Playwright may
not be passing that env into its webServer, so update playwright.config.ts to
include AUTOMAKER_SERVER_PORT in the webServer.env block (the same env name used
by the proxy) so the test frontend proxies to the test backend port (e.g., 3108)
instead of defaulting to 3008; locate the webServer configuration in
playwright.config.ts and add AUTOMAKER_SERVER_PORT with the correct value (or
forward process.env.AUTOMAKER_SERVER_PORT) to ensure the proxy target in
vite.config.mts ('/api' target using AUTOMAKER_SERVER_PORT) points to the test
backend during E2E runs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/e2e-tests.yml:
- Line 119: Fix the typo in the health-check curl command where a stray backtick
turns `08/api/health` into command substitution; update the command used in the
workflow step (the curl invocation) to use a proper URL (e.g.,
http://localhost:3108/api/health) so the shell does not try to execute a
subshell and the health endpoint is requested correctly.

In `@apps/server/.env.example`:
- Around line 58-59: The .env.example uses TEST_UI_PORT but the scripts and
compose expect TEST_PORT; rename the variable from TEST_UI_PORT to TEST_PORT in
the example so environment overrides are picked up by start-automaker.sh (which
reads TEST_PORT / TEST_WEB_PORT) and match docker-compose.dev.yml which sets
TEST_PORT=3107; update the key name only (preserve the value 3107) so consumers
of TEST_PORT behave correctly.

In `@apps/ui/src/electron/constants.ts`:
- Around line 23-29: Update the stale comment above the port parsing to match
the current env-var mapping: note that SERVER_PORT controls the backend and PORT
controls the static/vite server; ensure the comment references SERVER_PORT and
PORT (not PORT/TEST_PORT) and briefly mention that parsedServerPort and
parsedStaticPort are computed from process.env.SERVER_PORT and process.env.PORT
respectively, which then determine DEFAULT_SERVER_PORT and DEFAULT_STATIC_PORT.

---

Duplicate comments:
In `@apps/ui/vite.config.mts`:
- Around line 252-260: The Vite dev server proxy in vite.config.mts now reads
process.env.AUTOMAKER_SERVER_PORT but Playwright may not be passing that env
into its webServer, so update playwright.config.ts to include
AUTOMAKER_SERVER_PORT in the webServer.env block (the same env name used by the
proxy) so the test frontend proxies to the test backend port (e.g., 3108)
instead of defaulting to 3008; locate the webServer configuration in
playwright.config.ts and add AUTOMAKER_SERVER_PORT with the correct value (or
forward process.env.AUTOMAKER_SERVER_PORT) to ensure the proxy target in
vite.config.mts ('/api' target using AUTOMAKER_SERVER_PORT) points to the test
backend during E2E runs.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4e74a1 and bc38439.

📒 Files selected for processing (14)
  • .github/workflows/e2e-tests.yml
  • apps/server/.env.example
  • apps/ui/playwright.config.ts
  • apps/ui/scripts/kill-test-servers.mjs
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx
  • apps/ui/src/electron/constants.ts
  • apps/ui/tests/e2e-testing-guide.md
  • apps/ui/tests/features/feature-manual-review-flow.spec.ts
  • apps/ui/tests/features/running-task-card-display.spec.ts
  • apps/ui/tests/utils/api/client.ts
  • apps/ui/tests/utils/core/constants.ts
  • apps/ui/vite.config.mts
  • docker-compose.dev.yml
  • start-automaker.sh
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/ui/scripts/kill-test-servers.mjs
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx

@gsxdsm
Copy link
Collaborator Author

gsxdsm commented Feb 25, 2026

@coderabbitai review

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 25, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@gsxdsm gsxdsm merged commit 51e9a23 into AutoMaker-Org:v0.15.0rc Feb 25, 2026
6 checks passed
@gsxdsm gsxdsm deleted the fix/cursor-fix branch February 25, 2026 04:18