@st4rnin3 st4rnin3 commented Jan 9, 2026

Summary

Adds the ability to add multiple features to an existing project through a natural language conversation with Claude, similar to how initial spec creation works.

Problem: Currently, features must be added one at a time through the Add Feature modal, which is time-consuming when you need to add many related features.

Solution: New "Expand Project" feature that lets you describe changes in natural language, and Claude generates the features for you.

Features

  • Expand button in header with keyboard shortcut E
  • Full-screen chat interface for describing new features
  • Context-aware: Claude reads existing app_spec.txt to understand your project
  • Bulk creation: Features created directly in database after user approval
  • New API endpoint: POST /api/projects/{name}/features/bulk for batch operations
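The bulk endpoint's request shape isn't shown in this description, so here is a minimal sketch of how a client might assemble a payload for `POST /api/projects/{name}/features/bulk`. The field names (`features`, `starting_priority`, `title`, `description`) are assumptions drawn from the `FeatureBulkCreate` schema named below, not the verified wire format:

```python
# Hypothetical request body for POST /api/projects/{name}/features/bulk.
# Field names are illustrative; the real contract lives in server/schemas.py.
def build_bulk_payload(features, starting_priority=1):
    """Assemble a bulk-create payload from (title, description) pairs."""
    return {
        "starting_priority": starting_priority,
        "features": [
            {"title": title, "description": description}
            for title, description in features
        ],
    }

payload = build_bulk_payload(
    [("Dark mode", "Add a theme toggle"), ("Export CSV", "Download features as CSV")],
    starting_priority=5,
)
```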

How to Use

  1. Select a project with existing features
  2. Click "Expand" button (cyan) or press E
  3. Describe what features you want to add in natural language
  4. Claude asks clarifying questions about your requirements
  5. Review and approve the feature breakdown
  6. Features are created and appear in the Pending column

Files Changed

New files:

  • .claude/commands/expand-project.md - Claude skill for expansion conversations
  • server/services/expand_chat_session.py - WebSocket chat session service
  • server/routers/expand_project.py - WebSocket endpoint at /api/expand/ws/{project_name}
  • ui/src/components/ExpandProjectChat.tsx - Full chat UI component
  • ui/src/components/ExpandProjectModal.tsx - Modal wrapper
  • ui/src/hooks/useExpandChat.ts - WebSocket hook for expansion chat

Modified:

  • server/routers/features.py - Added bulk creation endpoint
  • server/schemas.py - Added FeatureBulkCreate schemas
  • ui/src/App.tsx - Integrated Expand button and modal

Test Plan

  • Select a project with existing features
  • Click "Expand" button or press E
  • Describe new features in natural language
  • Verify Claude reads existing spec and asks clarifying questions
  • Approve feature creation
  • Verify new features appear in Pending column
  • Verify agent can implement the new features

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • AI-powered project expansion: interactive streaming chat to propose and create features, with image attachments and completion summaries.
    • Bulk feature creation: add multiple features at once and auto-refresh the board.
    • UI: full-screen expansion modal, chat panel, header Expand button and keyboard shortcut (E).
  • Documentation

    • Added a user-facing workflow guide for expanding projects.
  • Improvements

    • Stronger project-name validation and improved session lifecycle cleanup.

✏️ Tip: You can customize this high-level summary in your review settings.

Adds the ability to add multiple features to an existing project through
a natural language conversation with Claude, similar to how initial spec
creation works.

Features:
- New "Expand" button in header (keyboard shortcut: E)
- Full-screen chat interface for describing new features
- Claude reads existing app_spec.txt for context
- Features created directly in database after user approval
- Bulk feature creation endpoint for batch operations

New files:
- .claude/commands/expand-project.md - Claude skill for expansion
- server/services/expand_chat_session.py - Chat session service
- server/routers/expand_project.py - WebSocket endpoint
- ui/src/components/ExpandProjectChat.tsx - Chat UI
- ui/src/components/ExpandProjectModal.tsx - Modal wrapper
- ui/src/hooks/useExpandChat.ts - WebSocket hook

Modified:
- Added POST /bulk endpoint to features router
- Added FeatureBulkCreate schemas
- Integrated Expand button and modal in App.tsx

Co-Authored-By: Claude <noreply@anthropic.com>
coderabbitai bot commented Jan 9, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Adds an interactive AI-driven "expand project" feature: new backend expand-chat session service with REST/WebSocket router, multimodal streaming integration with Claude, bulk feature creation and validation, session registry and cleanup, plus matching frontend modal, chat UI, hook, types, and API to run conversational project expansion and persist generated features.

Changes

Cohort / File(s) Summary
Specification
.claude/commands/expand-project.md
New doc describing the 3-phase conversational expansion flow, required inputs, <features_to_create> JSON contract, strict formatting/validation, feature quality/test guidance, and exact post-creation messaging.
Backend: Session Service
server/services/expand_chat_session.py
New ExpandChatSession and session registry: Claude client integration, multimodal message construction (images), streaming responses, detection/parsing of <features_to_create> blocks, bulk DB insertion, lifecycle (start/send/close) and cleanup utilities.
Backend: Router & Models
server/routers/expand_project.py, server/routers/__init__.py
New router with REST endpoints (list/status/cancel sessions) and WebSocket /api/expand/ws/{project_name} implementing start/message/ping/done lifecycle; adds ExpandSessionStatus model and exports router.
Backend: Features Bulk API & Schemas
server/routers/features.py, server/schemas.py
New bulk feature creation endpoint POST /projects/{name}/features/bulk and schemas FeatureBulkCreate / FeatureBulkCreateResponse; handles sequential priority assignment and batched DB insertion.
Backend: App Integration
server/main.py
Registers expand_project_router and awaits cleanup_all_expand_sessions() during shutdown.
Backend: Validation / Utils
server/utils/validation.py, server/utils/__init__.py
New validate_project_name helper enforcing allowed chars/length; package initializer added.
Frontend: Chat UI & Modal
ui/src/components/ExpandProjectChat.tsx, ui/src/components/ExpandProjectModal.tsx
New full-screen modal and chat UI: streaming messages, typing indicator, image attachment upload/preview/validation, send flow, error banner, and completion summary that triggers refresh.
Frontend: Hook & Types
ui/src/hooks/useExpandChat.ts, ui/src/lib/types.ts
New useExpandChat hook managing WebSocket lifecycle (reconnect/backoff, ping), streaming parsing of server messages and events; TS types for expand messages and bulk payloads.
Frontend: API & App Integration
ui/src/lib/api.ts, ui/src/App.tsx
New createFeaturesBulk API function; App adds ExpandProject modal state, keyboard shortcut (E), header Expand button, hides assistant while modal open, and invalidates features query when expansion adds features.
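The `<features_to_create>` JSON contract mentioned for the spec file isn't reproduced in this summary; the sketch below illustrates how such a block could be detected and parsed out of an assistant response. The tag name comes from the summaries above, while the feature fields and regex details are assumptions:

```python
import json
import re

# Minimal sketch: extract every <features_to_create> JSON array from a response.
FEATURES_RE = re.compile(r"<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>")

def parse_feature_blocks(response: str) -> list[dict]:
    """Return all feature definitions found in the response, in order."""
    features: list[dict] = []
    for match in FEATURES_RE.finditer(response):
        features.extend(json.loads(match.group(1)))
    return features

reply = (
    "Here is the breakdown:\n"
    '<features_to_create>[{"title": "Dark mode"}, {"title": "CSV export"}]'
    "</features_to_create>"
)
```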

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI as ExpandProjectChat
    participant WS as WebSocket Endpoint
    participant Service as ExpandChatSession
    participant Claude as Claude API
    participant DB as Database

    User->>UI: open modal / send message (+ optional image)
    UI->>WS: connect and send "start"
    WS->>Service: create or resume session
    Service->>Claude: send system prompt / multimodal user message
    Claude-->>Service: stream assistant chunks
    Service-->>WS: emit streaming chunks/events
    WS-->>UI: forward chunks/events to client
    UI->>User: render streaming assistant content
    Service->>Service: detect `<features_to_create>` block and parse JSON
    Service->>DB: bulk insert parsed features
    DB-->>Service: return created feature metadata
    Service-->>WS: emit features-created event
    WS-->>UI: notify UI of features added
    User->>UI: send "done"/close
    UI->>WS: send "done"
    WS->>Service: close session
    Service-->>WS: final completion event
    WS-->>UI: completion summary

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~65 minutes

"I hopped through files with carrot cheer,
Claude and features now appear,
Images, JSON, priorities set,
I planted features — no regrets! 🐇✨"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 78.13%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title 'feat: Add Expand Project for bulk AI-powered feature creation' clearly summarizes the main change: adding a feature for bulk AI-powered project expansion.



📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cdcbd11 and 56202fa.

📒 Files selected for processing (6)
  • server/main.py
  • server/routers/__init__.py
  • server/schemas.py
  • ui/src/App.tsx
  • ui/src/lib/api.ts
  • ui/src/lib/types.ts


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server/main.py (1)

38-47: Shutdown comment is now slightly misleading (“agents and sessions”): include “expand sessions” explicitly or keep generic.

🤖 Fix all issues with AI agents
In @.claude/commands/expand-project.md:
- Around line 145-173: The fenced code block containing the sample XML/JSON
hybrid starting with "<features_to_create>" should specify a language to satisfy
markdownlint — change the opening triple backticks to "```json" (i.e., use a
JSON code fence for the block that contains the features array) so the block is
highlighted as JSON; ensure the closing triple backticks remain and no other
content changes.

In @server/routers/expand_project.py:
- Around line 184-197: Remove detailed exception strings from WebSocket
responses and keep only generic messages while preserving server-side logs: in
the block that parses attachments (where ImageAttachment is used and
logger.warning is called) change the websocket.send_json payload to use
"content": "Invalid attachment" instead of including str(e); similarly, in the
other error handling block that calls logger.exception(...) and sends a server
error over websocket, change the websocket.send_json payload to use "content":
"Server error" rather than f"Server error: {str(e)}". Ensure
logger.warning/logger.exception remain unchanged so full details stay in server
logs.
- Around line 163-170: The "start" branch currently always calls
create_expand_session which replaces any existing session and loses state;
instead, first call get_expand_session(project_name, project_dir) and if it
returns a session reuse it, otherwise call create_expand_session to create a new
one; then stream the initial greeting from that session (avoid calling
session.start() on an existing session if start() resets state—call a
non-resetting method or make session.start() idempotent) so retries will not
replace/close an active session.

In @server/routers/features.py:
- Around line 302-374: The create_features_bulk function has a redundant
discarded query and a race condition when re-querying by priority after commit;
also starting_priority in FeatureBulkCreate lacks validation. Fix by validating
bulk.starting_priority (ensure positive int >=1) before use, then when creating
each Feature instance call session.add(...) followed by session.flush() and
session.refresh(db_feature) to get assigned IDs immediately (or collect the ORM
instances returned) instead of committing and re-querying by priority; build the
created_features list directly from these refreshed instances and finally commit
once. Use the symbols create_features_bulk, bulk.starting_priority,
current_priority, session.flush(), session.refresh(db_feature), and Feature to
locate and apply the changes.

In @server/services/expand_chat_session.py:
- Around line 216-309: The _query_claude method currently only searches a single
<features_to_create> block, may create duplicate features on repeated blocks,
and appends an assistant message per TextBlock; change it to iterate all feature
blocks (use re.finditer on full_response) and for each parsed feature JSON skip
any features whose IDs are already in self.created_feature_ids before calling
_create_features_bulk (or de-duplicate the features_data list), update
self.features_created and self.created_feature_ids only with newly created IDs,
and collapse streamed text by accumulating all TextBlock text into a single
assistant string and appending one message to self.messages at the end of the
response handling (keep streaming yields for text chunks but only persist one
assistant entry per turn). Ensure exceptions and yields for errors remain as-is
and reference symbols: _query_claude, receive_response, AssistantMessage,
TextBlock, full_response, _create_features_bulk, created_feature_ids,
features_created, created_feature_ids, and messages.

In @ui/src/components/ExpandProjectChat.tsx:
- Around line 90-126: handleFileSelect currently reads files with FileReader but
lacks error handling and a guard for extracting base64, which can push invalid
attachments; add a reader.onerror handler to call setError with the FileReader
error and abort adding that file, and in reader.onload validate e.target?.result
is a string and that dataUrl.split(',')[1] exists before creating the
ImageAttachment and calling setPendingAttachments; also ensure you clear the
input value after processing (so re-selecting the same file triggers onChange) —
relate changes to handleFileSelect, reader.onload, reader.onerror, base64Data,
and setPendingAttachments.

In @ui/src/hooks/useExpandChat.ts:
- Around line 104-117: The onclose handler schedules reconnects but disconnect()
doesn’t clear that pending timeout or mark the closure as intentional, so a
later timeout can call connect() and reopen the socket; update disconnect() to
clear reconnectTimeoutRef.current and set a manual-disconnect flag (e.g.,
manualDisconnectRef or isManualDisconnectRef.current = true), add a guard in
ws.onclose before scheduling (check !isManualDisconnectRef.current) and in
connect() at the top (return early if isManualDisconnectRef.current is true),
and ensure reconnectTimeoutRef.current is nulled when cleared.
🧹 Nitpick comments (7)
server/schemas.py (1)

99-109: Consider basic validation on bulk requests (empty list / invalid priority).

If you don’t want no-op bulks, add a min_length=1 constraint on features. Also consider constraining starting_priority to your allowed range (e.g., ge=1) to avoid negative/zero priorities slipping through.
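A stdlib sketch of the suggested constraints (with Pydantic these would be `Field(min_length=1)` on `features` and `Field(ge=1)` on `starting_priority`); the function name and error strings are illustrative:

```python
# Manual equivalent of the proposed schema constraints: reject empty bulk
# requests and non-positive starting priorities.
def validate_bulk_request(features: list, starting_priority: int) -> list[str]:
    errors = []
    if not features:
        errors.append("features must contain at least one item")
    if starting_priority < 1:
        errors.append("starting_priority must be >= 1")
    return errors
```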

.claude/commands/expand-project.md (1)

83-97: Minor doc polish: Phase 2 bullets repeat “What …”; consider varying the phrasing to scan faster.

ui/src/hooks/useExpandChat.ts (1)

244-258: start(): consider capping the CONNECTING poll loop (avoid unbounded timeouts if the socket never opens).

ui/src/App.tsx (2)

74-126: Keyboard shortcut wiring is fine; consider also ignoring contentEditable targets (and/or Select) to avoid surprise triggers.


94-99: Avoid repeating the “has features” sum expression; compute once (readability + fewer dependency surprises).

Proposed diff
 function App() {
@@
   const { data: features } = useFeatures(selectedProject)
@@
+  const hasFeatures =
+    !!features &&
+    (features.pending.length + features.in_progress.length + features.done.length) > 0
@@
       // E : Expand project with AI (when project selected and has features)
-      if ((e.key === 'e' || e.key === 'E') && selectedProject && features &&
-          (features.pending.length + features.in_progress.length + features.done.length) > 0) {
+      if ((e.key === 'e' || e.key === 'E') && selectedProject && hasFeatures) {
         e.preventDefault()
         setShowExpandProject(true)
       }
@@
-                  {features && (features.pending.length + features.in_progress.length + features.done.length) > 0 && (
+                  {hasFeatures && (
                     <button
                       onClick={() => setShowExpandProject(true)}

Also applies to: 176-190

server/routers/expand_project.py (1)

244-246: Session retention on disconnect is reasonable for “resume”, but add a pruning strategy (TTL or remove-on-complete) to avoid unbounded in-memory sessions.

server/services/expand_chat_session.py (1)

383-444: Session registry: threading.Lock is OK here, but prefer an async lock if contention is possible

You don’t await while holding _expand_sessions_lock (good), so correctness is fine. If this can be hit frequently under async load, threading.Lock can still block the event loop briefly; consider asyncio.Lock (or anyio.Lock) to keep it idiomatic.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a195d6d and 5f06dcf.

📒 Files selected for processing (13)
  • .claude/commands/expand-project.md
  • server/main.py
  • server/routers/__init__.py
  • server/routers/expand_project.py
  • server/routers/features.py
  • server/schemas.py
  • server/services/expand_chat_session.py
  • ui/src/App.tsx
  • ui/src/components/ExpandProjectChat.tsx
  • ui/src/components/ExpandProjectModal.tsx
  • ui/src/hooks/useExpandChat.ts
  • ui/src/lib/api.ts
  • ui/src/lib/types.ts
🧰 Additional context used
🧬 Code graph analysis (6)
ui/src/components/ExpandProjectModal.tsx (1)
ui/src/components/ExpandProjectChat.tsx (1)
  • ExpandProjectChat (25-375)
ui/src/lib/api.ts (1)
server/schemas.py (2)
  • FeatureBulkCreate (99-102)
  • FeatureBulkCreateResponse (105-108)
ui/src/App.tsx (2)
ui/src/lib/types.ts (1)
  • Feature (60-67)
ui/src/components/ExpandProjectModal.tsx (1)
  • ExpandProjectModal (17-41)
ui/src/components/ExpandProjectChat.tsx (5)
server/schemas.py (1)
  • ImageAttachment (187-208)
ui/src/hooks/useExpandChat.ts (1)
  • useExpandChat (38-323)
server/services/expand_chat_session.py (1)
  • start (82-169)
server/websocket.py (1)
  • disconnect (67-73)
ui/src/components/ChatMessage.tsx (1)
  • ChatMessage (15-192)
server/routers/features.py (4)
server/schemas.py (2)
  • FeatureBulkCreate (99-102)
  • FeatureBulkCreateResponse (105-108)
server/routers/projects.py (1)
  • validate_project_name (77-84)
server/websocket.py (1)
  • validate_project_name (107-109)
ui/src/lib/types.ts (1)
  • Feature (60-67)
server/main.py (5)
server/schemas.py (1)
  • SetupStatus (139-144)
server/services/assistant_chat_session.py (1)
  • cleanup_all_sessions (361-373)
server/services/spec_chat_session.py (1)
  • cleanup_all_sessions (476-488)
server/services/expand_chat_session.py (1)
  • cleanup_all_expand_sessions (432-444)
server/services/process_manager.py (1)
  • cleanup_all_managers (412-425)
🪛 LanguageTool
.claude/commands/expand-project.md

[style] ~91-~91: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ser take? - What's the success state? - What's the error state? Integration: - ...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🪛 markdownlint-cli2 (0.18.1)
.claude/commands/expand-project.md

147-147: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (12)
server/routers/__init__.py (1)

10-24: Router export wiring looks correct.

Importing router as expand_project_router and re-exporting via __all__ is consistent with the existing router pattern here; assuming server/routers/expand_project.py defines router, this should integrate cleanly.

ui/src/components/ExpandProjectModal.tsx (1)

23-39: Verify Tailwind v4 CSS-variable syntax for the modal background.

bg-[var(--color-neo-bg)] may be impacted by Tailwind v4’s arbitrary-value/CSS-variable syntax changes; if you hit build-time class parsing issues, prefer switching to a v4-friendly form (e.g. bg-(--color-neo-bg) if that’s the intended token).

ui/src/lib/types.ts (2)

303-321: ExpandChatServerMessage: confirm whether “question” messages can appear.

Given the expand flow includes “clarifying questions”, double-check whether the server ever emits SpecChatQuestionMessage (type: 'question'). If yes, include it in ExpandChatServerMessage to keep client-side handling type-safe and exhaustive.


323-331: Bulk feature create typings look consistent with the server schemas.

ui/src/lib/api.ts (1)

116-124: Bulk-create API wrapper is straightforward and consistent with existing helpers.

server/main.py (2)

18-30: Router + shutdown cleanup wiring looks consistent with existing session cleanup patterns.


92-99: Including expand_project_router in main router list is the right integration point.

ui/src/App.tsx (1)

276-287: Invalidating the features query on completion is the right refresh mechanism.

ui/src/components/ExpandProjectChat.tsx (1)

52-60: Effect deps are suppressed; OK if projectName is immutable while mounted—otherwise you’ll keep talking to the wrong session.

server/routers/expand_project.py (1)

64-101: REST session status/cancel endpoints are straightforward and match the WS session model.

server/services/expand_chat_session.py (2)

310-369: Database isolation is correct; no project scoping required in Feature model

The database isolation happens at the file level—each project_dir corresponds to a separate SQLite database file at {project_dir}/features.db. The Feature model does not need a project_id or project_name field because different projects maintain entirely separate database instances. This approach matches the existing bulk feature creation endpoint in server/routers/features.py (lines 302–374), which uses the identical pattern without issues.
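The file-level isolation described above can be sketched in a few lines: each project directory gets its own `features.db`, so rows never need a `project_id` column. The helper name and schema below are illustrative, not the project's actual code:

```python
import sqlite3
import tempfile
from pathlib import Path

def connect_project_db(project_dir: str) -> sqlite3.Connection:
    """Open (and initialize) the per-project SQLite database file."""
    conn = sqlite3.connect(Path(project_dir) / "features.db")
    conn.execute("CREATE TABLE IF NOT EXISTS features (id INTEGER PRIMARY KEY, title TEXT)")
    return conn

with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    conn_a = connect_project_db(a)
    conn_b = connect_project_db(b)
    conn_a.execute("INSERT INTO features (title) VALUES ('only in A')")
    conn_a.commit()
    # Separate files: project B never sees project A's rows.
    count_a = conn_a.execute("SELECT COUNT(*) FROM features").fetchone()[0]
    count_b = conn_b.execute("SELECT COUNT(*) FROM features").fetchone()[0]
```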


25-35: Clarify multimodal message payload structure for ClaudeSDKClient.query()

The async generator pattern itself is supported by query() (not an issue). However, the payload structure appears inconsistent with documented usage. The code wraps the message in a dict envelope with keys (type, message, parent_tool_use_id, session_id), whereas examples in claude-agent-sdk docs show a simpler structure with direct role and content keys. Confirm that this wrapper format is the correct input for multimodal messages in version 0.1.0, and verify whether hardcoding "session_id": "default" is appropriate or causes cross-session routing issues.

Comment on lines 184 to 197
            # Parse attachments if present
            attachments: list[ImageAttachment] = []
            raw_attachments = message.get("attachments", [])
            if raw_attachments:
                try:
                    for raw_att in raw_attachments:
                        attachments.append(ImageAttachment(**raw_att))
                except (ValidationError, Exception) as e:
                    logger.warning(f"Invalid attachment data: {e}")
                    await websocket.send_json({
                        "type": "error",
                        "content": f"Invalid attachment: {str(e)}"
                    })
                    continue

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd /tmp && find . -name "expand_project.py" -type f 2>/dev/null | head -5

Repository: leonvanzyl/autocoder

Length of output: 46


🏁 Script executed:

find . -name "expand_project.py" -o -name "*expand*" -type f 2>/dev/null | grep -E "\.py$"

Repository: leonvanzyl/autocoder

Length of output: 139


🏁 Script executed:

git ls-files | grep -i expand

Repository: leonvanzyl/autocoder

Length of output: 281


🏁 Script executed:

wc -l server/routers/expand_project.py

Repository: leonvanzyl/autocoder

Length of output: 100


🏁 Script executed:

sed -n '184,197p' server/routers/expand_project.py

Repository: leonvanzyl/autocoder

Length of output: 841


🏁 Script executed:

sed -n '234,242p' server/routers/expand_project.py

Repository: leonvanzyl/autocoder

Length of output: 363


Remove exception details from WebSocket error messages sent to clients; log details server-side instead.

Exception strings expose internal system details (paths, library versions, implementation specifics) to end users. This information should only be logged server-side. The client should receive generic error messages.

Two instances need fixing:

  • Lines 184-197: "content": f"Invalid attachment: {str(e)}""content": "Invalid attachment"
  • Lines 234-242: "content": f"Server error: {str(e)}""content": "Server error"

The server-side logging (via logger.warning() and logger.exception()) is already in place and should be preserved.
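The recommended split (full detail to the server log, a generic string to the client) can be sketched as follows; `send_json` here is a stand-in for the real `websocket.send_json` coroutine, and the helper name is hypothetical:

```python
import logging

logger = logging.getLogger("expand_project")

def handle_attachment_error(send_json, exc: Exception) -> dict:
    # Full exception detail stays server-side.
    logger.warning("Invalid attachment data: %s", exc)
    # Client only sees a generic message with no internal details.
    payload = {"type": "error", "content": "Invalid attachment"}
    send_json(payload)
    return payload

sent = []
out = handle_attachment_error(sent.append, ValueError("bad base64 payload in /tmp/upload.png"))
```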


Comment on lines 216 to 309
async def _query_claude(
    self,
    message: str,
    attachments: list[ImageAttachment] | None = None
) -> AsyncGenerator[dict, None]:
    """
    Internal method to query Claude and stream responses.

    Handles text responses and detects feature creation blocks.
    """
    if not self.client:
        return

    # Build the message content
    if attachments and len(attachments) > 0:
        content_blocks = []
        if message:
            content_blocks.append({"type": "text", "text": message})
        for att in attachments:
            content_blocks.append({
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": att.mimeType,
                    "data": att.base64Data,
                }
            })
        await self.client.query(_make_multimodal_message(content_blocks))
        logger.info(f"Sent multimodal message with {len(attachments)} image(s)")
    else:
        await self.client.query(message)

    # Accumulate full response to detect feature blocks
    full_response = ""

    # Stream the response
    async for msg in self.client.receive_response():
        msg_type = type(msg).__name__

        if msg_type == "AssistantMessage" and hasattr(msg, "content"):
            for block in msg.content:
                block_type = type(block).__name__

                if block_type == "TextBlock" and hasattr(block, "text"):
                    text = block.text
                    if text:
                        full_response += text
                        yield {"type": "text", "content": text}

                        self.messages.append({
                            "role": "assistant",
                            "content": text,
                            "timestamp": datetime.now().isoformat()
                        })

    # Check for feature creation block in full response
    features_match = re.search(
        r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
        full_response
    )

    if features_match:
        try:
            features_json = features_match.group(1)
            features_data = json.loads(features_json)

            if features_data and isinstance(features_data, list):
                # Create features via REST API
                created = await self._create_features_bulk(features_data)

                if created:
                    self.features_created += len(created)
                    self.created_feature_ids.extend([f["id"] for f in created])

                    yield {
                        "type": "features_created",
                        "count": len(created),
                        "features": created
                    }

                    logger.info(f"Created {len(created)} features for {self.project_name}")
        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse features JSON: {e}")
            yield {
                "type": "error",
                "content": f"Failed to parse feature definitions: {str(e)}"
            }
        except Exception as e:
            logger.exception("Failed to create features")
            yield {
                "type": "error",
                "content": f"Failed to create features: {str(e)}"
            }


⚠️ Potential issue | 🟠 Major

Feature block parsing is duplication-prone and message persistence is too granular

  • Duplication risk: If Claude repeats a <features_to_create> block (or you re-send/refresh), _create_features_bulk() will run again and create duplicates; you already track created_feature_ids but don’t use it to prevent replays.
  • Parsing only first match: re.search(...) finds a single block; if the skill ever emits multiple blocks, you’ll ignore later ones.
  • Message history: you append an "assistant" message for every streamed TextBlock (Lines 265-269). That makes self.messages fragmented and hard to replay; usually you want one assistant message per turn (accumulate blocks, append once at end).
Suggested approach (loop matches + add replay protection)
@@
-        features_match = re.search(
-            r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
-            full_response
-        )
-
-        if features_match:
+        for features_match in re.finditer(
+            r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
+            full_response,
+        ):
             try:
                 features_json = features_match.group(1)
                 features_data = json.loads(features_json)
@@
-                    created = await self._create_features_bulk(features_data)
+                    created = await self._create_features_bulk(features_data)
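The replay-protection half of the suggestion (skip features already created in this session) can be sketched separately. Assuming each parsed feature carries an `id` or a comparable stable key — an assumption, since the actual contract isn't shown here — a seen-set filter looks like:

```python
# Illustrative replay guard: drop feature definitions whose key has already
# been created in this session, mutating the shared seen-set as it goes.
def filter_new_features(features_data: list[dict], created_ids: set) -> list[dict]:
    fresh = []
    for feat in features_data:
        key = feat.get("id") or feat.get("title")
        if key in created_ids:
            continue  # already created in an earlier block or turn
        created_ids.add(key)
        fresh.append(feat)
    return fresh

created: set = set()
first = filter_new_features([{"title": "Dark mode"}, {"title": "CSV export"}], created)
replay = filter_new_features([{"title": "Dark mode"}], created)
```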

Comment on lines 90 to 126
  // File handling for image attachments
  const handleFileSelect = useCallback((files: FileList | null) => {
    if (!files) return

    Array.from(files).forEach((file) => {
      // Validate file type
      if (!ALLOWED_TYPES.includes(file.type)) {
        setError(`Invalid file type: ${file.name}. Only JPEG and PNG are supported.`)
        return
      }

      // Validate file size
      if (file.size > MAX_FILE_SIZE) {
        setError(`File too large: ${file.name}. Maximum size is 5 MB.`)
        return
      }

      // Read and convert to base64
      const reader = new FileReader()
      reader.onload = (e) => {
        const dataUrl = e.target?.result as string
        const base64Data = dataUrl.split(',')[1]

        const attachment: ImageAttachment = {
          id: `${Date.now()}-${Math.random().toString(36).substring(2, 9)}`,
          filename: file.name,
          mimeType: file.type as 'image/jpeg' | 'image/png',
          base64Data,
          previewUrl: dataUrl,
          size: file.size,
        }

        setPendingAttachments((prev) => [...prev, attachment])
      }
      reader.readAsDataURL(file)
    })
  }, [])

⚠️ Potential issue | 🟠 Major

Attachment ingestion needs FileReader error handling + base64 extraction guard (avoid pushing undefined/invalid payloads).

Proposed diff
   const handleFileSelect = useCallback((files: FileList | null) => {
     if (!files) return
 
     Array.from(files).forEach((file) => {
@@
       // Read and convert to base64
       const reader = new FileReader()
+      reader.onerror = () => {
+        setError(`Failed to read file: ${file.name}`)
+      }
       reader.onload = (e) => {
-        const dataUrl = e.target?.result as string
-        const base64Data = dataUrl.split(',')[1]
+        const dataUrl = e.target?.result
+        if (typeof dataUrl !== 'string') {
+          setError(`Failed to read file: ${file.name}`)
+          return
+        }
+        const base64Data = dataUrl.includes(',') ? dataUrl.split(',')[1] : null
+        if (!base64Data) {
+          setError(`Invalid file encoding: ${file.name}`)
+          return
+        }
@@
         setPendingAttachments((prev) => [...prev, attachment])
       }
       reader.readAsDataURL(file)
     })
   }, [])
@@
             <input
               ref={fileInputRef}
               type="file"
               accept="image/jpeg,image/png"
               multiple
               onChange={(e) => handleFileSelect(e.target.files)}
               className="hidden"
             />

Also consider clearing the input value so selecting the same file again re-triggers onChange:

Optional follow-up diff
- onChange={(e) => handleFileSelect(e.target.files)}
+ onChange={(e) => {
+   handleFileSelect(e.target.files)
+   e.currentTarget.value = ''
+ }}

Also applies to: 301-308

🤖 Prompt for AI Agents
In @ui/src/components/ExpandProjectChat.tsx around lines 90 - 126,
handleFileSelect currently reads files with FileReader but lacks error handling
and a guard for extracting base64, which can push invalid attachments; add a
reader.onerror handler to call setError with the FileReader error and abort
adding that file, and in reader.onload validate e.target?.result is a string and
that dataUrl.split(',')[1] exists before creating the ImageAttachment and
calling setPendingAttachments; also ensure you clear the input value after
processing (so re-selecting the same file triggers onChange) — relate changes to
handleFileSelect, reader.onload, reader.onerror, base64Data, and
setPendingAttachments.

Comment on lines 104 to 117
ws.onclose = () => {
  setConnectionStatus('disconnected')
  if (pingIntervalRef.current) {
    clearInterval(pingIntervalRef.current)
    pingIntervalRef.current = null
  }

  // Attempt reconnection if not intentionally closed
  if (reconnectAttempts.current < maxReconnectAttempts && !isCompleteRef.current) {
    reconnectAttempts.current++
    const delay = Math.min(1000 * Math.pow(2, reconnectAttempts.current), 10000)
    reconnectTimeoutRef.current = window.setTimeout(connect, delay)
  }
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n ui/src/hooks/useExpandChat.ts | sed -n '95,125p'

Repository: leonvanzyl/autocoder



🏁 Script executed:

cat -n ui/src/hooks/useExpandChat.ts | sed -n '290,320p'



🏁 Script executed:

wc -l ui/src/hooks/useExpandChat.ts



🏁 Script executed:

cat -n ui/src/hooks/useExpandChat.ts | sed -n '1,90p'



🏁 Script executed:

cat -n ui/src/hooks/useExpandChat.ts | sed -n '50,103p'



disconnect() doesn't cancel a pending reconnect timeout, allowing the WebSocket to reopen after user disconnection.

When ws.onclose fires and schedules a reconnect timeout, if disconnect() is called before that timeout executes, the scheduled reconnection still fires and calls connect(). Since disconnect() nullifies wsRef.current, the check in connect() at line 79 (wsRef.current?.readyState === WebSocket.OPEN) doesn't prevent the reconnection—the WebSocket is recreated anyway.

Proposed diff (add manual disconnect guard + clear pending timeout)
 export function useExpandChat({
@@
   const reconnectTimeoutRef = useRef<number | null>(null)
   const isCompleteRef = useRef(false)
+  const manuallyDisconnectedRef = useRef(false)
@@
   const connect = useCallback(() => {
+    if (manuallyDisconnectedRef.current) return
     if (wsRef.current?.readyState === WebSocket.OPEN) {
       return
     }
@@
     ws.onopen = () => {
       setConnectionStatus('connected')
       reconnectAttempts.current = 0
+      manuallyDisconnectedRef.current = false
@@
     ws.onclose = () => {
       setConnectionStatus('disconnected')
@@
       // Attempt reconnection if not intentionally closed
-      if (reconnectAttempts.current < maxReconnectAttempts && !isCompleteRef.current) {
+      if (
+        !manuallyDisconnectedRef.current &&
+        reconnectAttempts.current < maxReconnectAttempts &&
+        !isCompleteRef.current
+      ) {
         reconnectAttempts.current++
         const delay = Math.min(1000 * Math.pow(2, reconnectAttempts.current), 10000)
         reconnectTimeoutRef.current = window.setTimeout(connect, delay)
       }
     }
@@
   const disconnect = useCallback(() => {
+    manuallyDisconnectedRef.current = true
     reconnectAttempts.current = maxReconnectAttempts // Prevent reconnection
+    if (reconnectTimeoutRef.current) {
+      clearTimeout(reconnectTimeoutRef.current)
+      reconnectTimeoutRef.current = null
+    }
     if (pingIntervalRef.current) {
       clearInterval(pingIntervalRef.current)
       pingIntervalRef.current = null
     }
     if (wsRef.current) {
       wsRef.current.close()
       wsRef.current = null
     }
     setConnectionStatus('disconnected')
   }, [])

Also applies to: 299-310

🤖 Prompt for AI Agents
In @ui/src/hooks/useExpandChat.ts around lines 104 - 117, The onclose handler
schedules reconnects but disconnect() doesn’t clear that pending timeout or mark
the closure as intentional, so a later timeout can call connect() and reopen the
socket; update disconnect() to clear reconnectTimeoutRef.current and set a
manual-disconnect flag (e.g., manualDisconnectRef or
isManualDisconnectRef.current = true), add a guard in ws.onclose before
scheduling (check !isManualDisconnectRef.current) and in connect() at the top
(return early if isManualDisconnectRef.current is true), and ensure
reconnectTimeoutRef.current is nulled when cleared.

- Add language specifier to fenced code block in expand-project.md
- Remove detailed exception strings from WebSocket responses (security)
- Make WebSocket "start" message idempotent to avoid session reset
- Fix race condition in bulk feature creation with row-level lock
- Add validation for starting_priority (must be >= 1)
- Fix _query_claude to handle multiple feature blocks and deduplicate
- Add FileReader error handling in ExpandProjectChat
- Fix disconnect() to clear pending reconnect timeout
- Enable sandbox mode and validate CLI path in expand_chat_session
- Clean up temporary settings file on session close

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🤖 Fix all issues with AI agents
In @server/routers/expand_project.py:
- Around line 46-48: The project has four inconsistent validate_project_name
implementations (one in expand_project.py returning bool and others returning
str/raising HTTPException); consolidate them by creating a single shared
validator (e.g., validate_project_name) in a common util module and update all
routers (expand_project, features, agent, projects) to import and call that
validator; the shared function should perform the regex check
r'^[a-zA-Z0-9_-]{1,50}$' and raise an HTTPException with a 400 status and clear
message on failure, otherwise return the validated name so callers don't
duplicate error handling.

In @server/services/expand_chat_session.py:
- Around line 186-230: Concurrent send_message() calls can interleave
_query_claude() streams and corrupt responses; introduce a per-session
asyncio.Lock (e.g., self._query_lock) initialized when session starts and use
"async with self._query_lock" around calls to _query_claude() (and the analogous
call in receive_response/other send_message overloads) so only one query/stream
runs at a time, ensuring serialized streaming and correct feature parsing.
- Around line 334-393: The _create_features_bulk function is racy and re-queries
by priority; instead, add proper transaction handling: wrap the insert loop in
the session transaction, collect created ORM objects (e.g., db_feature) into a
local list, call session.flush() after adding them to populate IDs (no re-query
by priority), build and return the created_features from those flushed objects,
and ensure any exception calls session.rollback() in the except block before
re-raising; also document that robust concurrency requires either a DB
unique/sequence on priority or a locking strategy around the max-priority
selection if you must keep the max+1 approach.

In @ui/src/components/ExpandProjectChat.tsx:
- Around line 52-59: The start function is unstable because its dependency
connect depends on an inline onError callback recreated each render; memoize the
error handler by creating a stable callback (e.g., wrap setError in useCallback
as handleError) and pass that handler as onError to useExpandChat so
connect/start remain stable; keep the mount/unmount useEffect with the existing
eslint-disable comment.
🧹 Nitpick comments (2)
.claude/commands/expand-project.md (1)

87-92: Consider varying sentence starters for better readability.

Three consecutive questions begin with "What's" which slightly impacts flow. Consider rephrasing one for variety, e.g., "How does success look?" instead of "What's the success state?"

server/services/expand_chat_session.py (1)

407-468: Registry uses threading.Lock inside async fns; consider asyncio.Lock if this runs on the event loop.

The critical section is short, so it’s probably fine, but threading.Lock can still block the loop under contention.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5f06dcf and 75f2bf2.

📒 Files selected for processing (6)
  • .claude/commands/expand-project.md
  • server/routers/expand_project.py
  • server/routers/features.py
  • server/services/expand_chat_session.py
  • ui/src/components/ExpandProjectChat.tsx
  • ui/src/hooks/useExpandChat.ts
🧰 Additional context used
🧬 Code graph analysis (4)
ui/src/hooks/useExpandChat.ts (4)
ui/src/components/ChatMessage.tsx (1)
  • ChatMessage (15-192)
server/schemas.py (1)
  • ImageAttachment (187-208)
server/websocket.py (2)
  • connect (58-65)
  • disconnect (67-73)
server/services/expand_chat_session.py (1)
  • start (90-184)
server/routers/features.py (7)
server/schemas.py (2)
  • FeatureBulkCreate (99-102)
  • FeatureBulkCreateResponse (105-108)
server/routers/expand_project.py (2)
  • validate_project_name (46-48)
  • _get_project_path (35-43)
server/routers/agent.py (2)
  • validate_project_name (35-42)
  • _get_project_path (18-26)
server/routers/projects.py (1)
  • validate_project_name (77-84)
server/websocket.py (2)
  • validate_project_name (107-109)
  • _get_project_path (26-34)
api/database.py (1)
  • Feature (19-44)
ui/src/lib/types.ts (1)
  • Feature (60-67)
server/routers/expand_project.py (4)
server/schemas.py (1)
  • ImageAttachment (187-208)
server/services/expand_chat_session.py (11)
  • ExpandChatSession (41-404)
  • create_expand_session (418-433)
  • get_expand_session (412-415)
  • list_expand_sessions (450-453)
  • remove_expand_session (436-447)
  • is_complete (398-400)
  • get_features_created (394-396)
  • get_messages (402-404)
  • close (72-88)
  • start (90-184)
  • send_message (186-229)
server/routers/features.py (2)
  • _get_project_path (30-38)
  • validate_project_name (59-66)
registry.py (1)
  • get_project_path (195-213)
server/services/expand_chat_session.py (3)
server/schemas.py (1)
  • ImageAttachment (187-208)
ui/src/lib/types.ts (1)
  • Feature (60-67)
api/database.py (1)
  • create_database (76-94)
🪛 LanguageTool
.claude/commands/expand-project.md

[style] ~91-~91: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ser take? - What's the success state? - What's the error state? Integration: - ...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🔇 Additional comments (9)
server/routers/features.py (1)

302-385: LGTM - Well-designed bulk creation endpoint.

The implementation properly handles:

  • Row-level locking with with_for_update() to prevent race conditions when determining starting priority
  • Sequential priority assignment with proper validation
  • Efficient ID-based retrieval of created features
  • Appropriate error handling and empty input handling

The concurrency safety and validation logic are particularly well-executed.

ui/src/hooks/useExpandChat.ts (3)

78-242: LGTM - Robust WebSocket connection management.

The implementation includes:

  • Exponential backoff for reconnection (capped at 10s)
  • Keep-alive ping every 30 seconds
  • Proper cleanup of intervals
  • Respects completion state to prevent unnecessary reconnections
  • Comprehensive handling of all server message types

260-297: LGTM - Message sending properly implemented.

Correctly validates connection state, builds payload with optional attachments, and maintains chat state.


299-314: LGTM - Clean disconnect implementation.

Properly prevents reconnection and cleans up all resources (timers and WebSocket).

ui/src/components/ExpandProjectChat.tsx (2)

91-129: LGTM - Robust file upload handling.

Properly validates file type and size, handles FileReader errors, and correctly extracts base64 data from the data URL.


181-377: LGTM - Well-designed chat UI component.

The component demonstrates excellent UX patterns:

  • Clear connection status indicators
  • Drag-and-drop support with visual feedback
  • Keyboard shortcuts (Enter to send, Shift+Enter for newlines)
  • Auto-scroll and auto-focus behavior
  • Comprehensive error handling with dismissible error banner
  • Clear completion state with feature count
server/routers/expand_project.py (2)

107-256: LGTM - Robust WebSocket implementation.

The endpoint demonstrates solid design:

  • Validates project name, path, and spec existence before accepting connection
  • Idempotent start message handling (resumes existing sessions)
  • Comprehensive message type handling (ping/pong, start, message, done)
  • Proper attachment validation using Pydantic schemas
  • Intentionally preserves session on disconnect to enable resume
  • Good error handling for JSON decode errors and validation failures

89-100: No issue found. The remove_expand_session function is defined as async in server/services/expand_chat_session.py, so the await usage on line 99 is correct.

Likely an incorrect or invalid review comment.

server/services/expand_chat_session.py (1)

25-34: Verify the multimodal message envelope matches claude_agent_sdk expectations (and avoid hard-coded session_id="default").

If the SDK uses session_id to correlate streams, a constant value risks cross-session bleed if the underlying client shares state.

Comment on lines 52 to 71
def __init__(self, project_name: str, project_dir: Path):
    """
    Initialize the session.

    Args:
        project_name: Name of the project being expanded
        project_dir: Absolute path to the project directory
    """
    self.project_name = project_name
    self.project_dir = project_dir
    self.client: Optional[ClaudeSDKClient] = None
    self.messages: list[dict] = []
    self.complete: bool = False
    self.created_at = datetime.now()
    self._conversation_id: Optional[str] = None
    self._client_entered: bool = False
    self.features_created: int = 0
    self.created_feature_ids: list[int] = []
    self._settings_file: Optional[Path] = None


⚠️ Potential issue | 🟠 Major

self.complete is never set; expansion_complete is documented but never emitted.

Right now is_complete() will always be False, and callers expecting the "expansion_complete" event (per docstring) won’t ever receive it. Please either implement completion detection/emit or remove the API surface/doc.

Also applies to: 398-405

Comment on lines +72 to +89
async def close(self) -> None:
    """Clean up resources and close the Claude client."""
    if self.client and self._client_entered:
        try:
            await self.client.__aexit__(None, None, None)
        except Exception as e:
            logger.warning(f"Error closing Claude client: {e}")
        finally:
            self._client_entered = False
            self.client = None

    # Clean up temporary settings file
    if self._settings_file and self._settings_file.exists():
        try:
            self._settings_file.unlink()
        except Exception as e:
            logger.warning(f"Error removing settings file: {e}")


⚠️ Potential issue | 🟠 Major

Don’t overwrite a project’s existing .claude_settings.json; create a temp settings file and always clean it up on early returns.

This currently clobbers any user/project Claude settings (Line 131-135) and leaves the file behind if start() returns early before the session is closed.

Proposed fix (temp settings file + encoding + restrictive perms + early cleanup)
@@
 import json
 import logging
 import re
 import shutil
 import threading
+import uuid
 from datetime import datetime
 from pathlib import Path
 from typing import AsyncGenerator, Optional
@@
     async def start(self) -> AsyncGenerator[dict, None]:
@@
-        # Create security settings file
+        # Find and validate Claude CLI (before writing temp files)
+        system_cli = shutil.which("claude")
+        if not system_cli:
+            yield {
+                "type": "error",
+                "content": "Claude CLI not found. Please install Claude Code."
+            }
+            return
+
+        # Create *temporary* security settings file for this session
         security_settings = {
             "sandbox": {"enabled": True},
             "permissions": {
                 "defaultMode": "acceptEdits",
                 "allow": [
                     "Read(./**)",
                     "Glob(./**)",
                 ],
             },
         }
-        settings_file = self.project_dir / ".claude_settings.json"
+        settings_file = self.project_dir / f".claude_settings.expand.{uuid.uuid4().hex}.json"
         self._settings_file = settings_file
-        with open(settings_file, "w") as f:
+        with open(settings_file, "w", encoding="utf-8") as f:
             json.dump(security_settings, f, indent=2)
+        try:
+            settings_file.chmod(0o600)
+        except Exception:
+            logger.debug("Unable to chmod settings file", exc_info=True)
@@
-        # Find and validate Claude CLI
-        system_cli = shutil.which("claude")
-        if not system_cli:
-            yield {
-                "type": "error",
-                "content": "Claude CLI not found. Please install Claude Code."
-            }
-            return
         try:
@@
         except Exception as e:
             logger.exception("Failed to create Claude client")
             yield {
                 "type": "error",
-                "content": f"Failed to initialize Claude: {str(e)}"
+                "content": "Failed to initialize Claude."
             }
+            await self.close()
             return
@@
         except Exception as e:
             logger.exception("Failed to start expand chat")
             yield {
                 "type": "error",
-                "content": f"Failed to start conversation: {str(e)}"
+                "content": "Failed to start conversation."
             }
+            await self.close()

Also applies to: 90-185

Backend improvements:
- Create shared validation utility for project name validation
- Add asyncio.Lock to prevent concurrent _query_claude calls
- Fix _create_features_bulk: use flush() for IDs, add rollback on error
- Use unique temp settings file instead of overwriting .claude_settings.json
- Remove exception details from error messages (security)

Frontend improvements:
- Memoize onError callback in ExpandProjectChat for stable dependencies
- Add timeout to start() checkAndSend loop to prevent infinite retries
- Add manuallyDisconnectedRef to prevent reconnection after explicit disconnect
- Clear pending reconnect timeout in disconnect()

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @server/services/expand_chat_session.py:
- Around line 368-386: The code in the try block determines starting priority
with session.query(Feature).order_by(Feature.priority.desc()).first() which can
race under concurrent requests; change this to acquire a row-level lock on the
max-priority row via the session.query(...).with_for_update() (or equivalent ORM
for_update) call inside the same transaction before computing current_priority
so concurrent transactions serialize, or alternatively delegate priority
assignment to the router's bulk endpoint; update the priority-read logic (the
query that assigns current_priority) to use with_for_update() on the Feature
model so each transaction reserves the max priority before inserting new Feature
instances.
🧹 Nitpick comments (4)
ui/src/hooks/useExpandChat.ts (2)

97-108: Consider removing the reset of manuallyDisconnectedRef on connection open.

Setting manuallyDisconnectedRef.current = false on every successful connection (line 100) could allow unwanted reconnection attempts if the WebSocket happens to reconnect after a manual disconnect was initiated. The guard at line 81 prevents connect() from being called, but if a connection was already in progress when disconnect() was called, this reset would undo the manual disconnect flag.

Consider removing line 100 to ensure manual disconnect intent is preserved.

Suggested fix
     ws.onopen = () => {
       setConnectionStatus('connected')
       reconnectAttempts.current = 0
-      manuallyDisconnectedRef.current = false

       // Start ping interval to keep connection alive

254-275: Timeout chain in start() is not cleaned up on unmount.

The recursive setTimeout calls in checkAndSend are not tracked or cleared when the component unmounts. If the component unmounts while waiting for the connection, these timeouts will continue executing and may call onError or attempt to send messages on an unmounted component.

Consider tracking the timeout ID and clearing it in the cleanup effect.

Suggested approach
+  const startTimeoutRef = useRef<number | null>(null)
+
   // Clean up on unmount
   useEffect(() => {
     return () => {
       if (pingIntervalRef.current) {
         clearInterval(pingIntervalRef.current)
       }
       if (reconnectTimeoutRef.current) {
         clearTimeout(reconnectTimeoutRef.current)
       }
+      if (startTimeoutRef.current) {
+        clearTimeout(startTimeoutRef.current)
+      }
       if (wsRef.current) {
         wsRef.current.close()
       }
     }
   }, [])

Then use startTimeoutRef.current = window.setTimeout(checkAndSend, 100) in the start function.

server/services/expand_chat_session.py (1)

286-293: Multiple assistant messages created for a single response.

Each TextBlock in the Claude response creates a separate entry in self.messages. This could result in many fragmented assistant messages instead of a single coherent one. Consider accumulating the text and appending a single message after the response stream completes.

Suggested approach
+        # Accumulate assistant message
+        assistant_text = ""
+
         # Stream the response
         async for msg in self.client.receive_response():
             msg_type = type(msg).__name__

             if msg_type == "AssistantMessage" and hasattr(msg, "content"):
                 for block in msg.content:
                     block_type = type(block).__name__

                     if block_type == "TextBlock" and hasattr(block, "text"):
                         text = block.text
                         if text:
                             full_response += text
+                            assistant_text += text
                             yield {"type": "text", "content": text}

-                            self.messages.append({
-                                "role": "assistant",
-                                "content": text,
-                                "timestamp": datetime.now().isoformat()
-                            })
+
+        # Store complete assistant message
+        if assistant_text:
+            self.messages.append({
+                "role": "assistant",
+                "content": assistant_text,
+                "timestamp": datetime.now().isoformat()
+            })
ui/src/components/ExpandProjectChat.tsx (1)

150-182: ConnectionIndicator is recreated on every render.

The ConnectionIndicator function component is defined inside ExpandProjectChat, causing it to be recreated on every render. While React is smart about this, extracting it outside or using useMemo would be cleaner.

Option 1: Extract outside component
function ConnectionIndicator({ status }: { status: ConnectionStatus }) {
  switch (status) {
    case 'connected':
      return (/* ... */)
    // ... other cases
  }
}

Then use: <ConnectionIndicator status={connectionStatus} />

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 75f2bf2 and cdcbd11.

📒 Files selected for processing (7)
  • server/routers/expand_project.py
  • server/routers/features.py
  • server/services/expand_chat_session.py
  • server/utils/__init__.py
  • server/utils/validation.py
  • ui/src/components/ExpandProjectChat.tsx
  • ui/src/hooks/useExpandChat.ts
✅ Files skipped from review due to trivial changes (1)
  • server/utils/__init__.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • server/routers/expand_project.py
🧰 Additional context used
🧬 Code graph analysis (2)
ui/src/hooks/useExpandChat.ts (4)
ui/src/components/ChatMessage.tsx (1)
  • ChatMessage (15-192)
server/schemas.py (1)
  • ImageAttachment (187-208)
server/websocket.py (2)
  • connect (58-65)
  • disconnect (67-73)
server/services/expand_chat_session.py (1)
  • start (93-190)
server/routers/features.py (4)
server/schemas.py (4)
  • FeatureBulkCreate (99-102)
  • FeatureBulkCreateResponse (105-108)
  • FeatureCreate (76-78)
  • FeatureResponse (81-89)
server/utils/validation.py (1)
  • validate_project_name (10-28)
server/routers/expand_project.py (1)
  • _get_project_path (35-43)
api/database.py (1)
  • Feature (19-44)
🔇 Additional comments (13)
ui/src/hooks/useExpandChat.ts (2)

277-314: LGTM!

The sendMessage function correctly validates connection state, builds the message payload, and properly strips the previewUrl from attachments before sending to avoid unnecessary data transfer.


316-332: LGTM!

The disconnect function properly cleans up all resources, sets the manual disconnect flag, and prevents any reconnection attempts.

server/services/expand_chat_session.py (4)

27-37: LGTM!

The _make_multimodal_message helper correctly creates an async generator for multimodal content blocks.


54-74: LGTM!

The __init__ method properly initializes session state with appropriate tracking for messages, features, and concurrency control via asyncio.Lock.


192-238: LGTM!

The send_message method properly validates state, uses the query lock to prevent concurrent Claude queries, and handles errors gracefully.


423-449: LGTM!

The session registry functions are thread-safe and properly handle session lifecycle, including cleanup of existing sessions when creating new ones.

server/utils/validation.py (1)

10-28: LGTM!

The validate_project_name function provides solid protection against path traversal attacks by restricting to alphanumeric characters, hyphens, and underscores with a reasonable length limit.

server/routers/features.py (2)

14-21: LGTM!

The new imports for bulk creation schemas and shared validation utility are properly added.


292-375: LGTM!

The bulk creation endpoint is well-implemented with:

  • Proper input validation for starting_priority
  • Race condition prevention using with_for_update() when calculating max priority
  • Efficient ID retrieval via flush() without relying on priority range queries
  • Comprehensive error handling
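The flush-based ID retrieval noted above can be illustrated with a toy SQLAlchemy model and in-memory SQLite; `Feature` here is a simplified stand-in for the model in `api/database.py`, and the function signature is illustrative:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Feature(Base):  # simplified stand-in for api/database.py's Feature
    __tablename__ = "features"
    id = Column(Integer, primary_key=True)
    priority = Column(Integer, nullable=False)
    name = Column(String, nullable=False)


def create_features_bulk(session, features: list[dict]) -> list[Feature]:
    """Insert features with sequential priorities; read IDs via flush()."""
    try:
        row = session.query(Feature).order_by(Feature.priority.desc()).first()
        priority = (row.priority + 1) if row else 1
        created = []
        for f in features:
            obj = Feature(priority=priority, name=f["name"])
            session.add(obj)
            created.append(obj)
            priority += 1
        session.flush()  # assigns primary keys without committing
        session.commit()
        return created   # no fragile re-query by priority range
    except Exception:
        session.rollback()
        raise


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
rows = create_features_bulk(session, [{"name": "a"}, {"name": "b"}])
```

Because `flush()` populates `id` on the tracked ORM objects, the created features can be returned directly instead of being re-selected by priority, which is what made the earlier version racy.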
ui/src/components/ExpandProjectChat.tsx (4)

55-62: Intentional exhaustive-deps bypass is acceptable but consider documenting.

The empty dependency array with eslint-disable is intentional for one-time mount/unmount behavior. This is acceptable, though the start and disconnect functions should be stable (they are via useCallback). Consider adding a brief comment explaining why.


76-91: LGTM!

The handleSendMessage function properly handles both text-only and attachment-only messages, and the keyboard handler correctly implements Enter-to-send with Shift+Enter for newlines.


116-123: Type assertion for mimeType assumes validation passed.

The type assertion file.type as 'image/jpeg' | 'image/png' relies on the earlier validation check at line 99. While this should be safe given the control flow, a more defensive approach would use a type guard or mapped type.

This is acceptable given the validation occurs first, but worth noting for maintainability.


184-380: LGTM!

The JSX structure is well-organized with clear sections for header, error banner, messages, input, and completion footer. The conditional rendering logic properly handles all states, and accessibility concerns like disabled states are correctly applied.

Comment on lines +152 to +178
# Create Claude SDK client
try:
    self.client = ClaudeSDKClient(
        options=ClaudeAgentOptions(
            model="claude-opus-4-5-20251101",
            cli_path=system_cli,
            system_prompt=system_prompt,
            allowed_tools=[
                "Read",
                "Glob",
            ],
            permission_mode="acceptEdits",
            max_turns=100,
            cwd=str(self.project_dir.resolve()),
            settings=str(settings_file.resolve()),
        )
    )
    await self.client.__aenter__()
    self._client_entered = True
except Exception as e:
    logger.exception("Failed to create Claude client")
    yield {
        "type": "error",
        "content": "Failed to initialize Claude"
    }
    return


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Claude API valid model identifiers 2025

💡 Result:

Key Claude API model identifiers (examples and formats used in 2025):

  • claude-opus-4-20250514 (Claude Opus 4). [1][2]
  • claude-sonnet-4-20250514 (Claude Sonnet 4). [1][2]
  • Versioned snapshots follow the pattern claude-<family>-<version>-YYYYMMDD (e.g., claude-sonnet-4-20250514). [1]
  • Aliases (point to latest snapshot) — e.g., claude-opus-4, claude-sonnet-4, claude-3-7-sonnet-latest. Use aliases for experimentation; use full snapshot IDs in production. [1][3]

Source: Anthropic API docs (models list / model details / models overview). [1][2][3]


🏁 Script executed:

# Search for model identifier usage in the codebase
rg "claude-" --type py -A 2 -B 2

# Check for configuration files or settings
fd -t f -e json -e yaml -e toml | head -20

# Search for any hardcoded model references
rg "model\s*=" --type py | head -20



🏁 Script executed:

# Check if server services accept model as a parameter
rg "expand_chat_session|assistant_chat_session|spec_chat_session" --type py -A 5 | head -50

# Look for environment variables or config files for model
rg "MODEL|model.*env|getenv" --type py | grep -i model

# Check if there's a settings or config module
fd -t f '(config|settings)' | grep -v node_modules

# Look at the ClaudeAgentOptions to see if model can be overridden
rg "ClaudeAgentOptions" --type py -B 5 -A 10



Hardcoded model identifier across multiple service files should be configurable.

The model "claude-opus-4-5-20251101" is hardcoded identically in three service files (expand_chat_session.py, spec_chat_session.py, and assistant_chat_session.py). Unlike the demo script which accepts a configurable --model argument, these server services cannot override the model without code changes. Extract this to a configuration variable or environment variable to allow easier updates and testing with different models.
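A minimal sketch of the suggested extraction (the env var name `CLAUDE_MODEL`, the helper name, and the fallback constant are illustrative assumptions, not the project's actual config):

```python
import os

# Hypothetical names for illustration; the project may choose different ones.
DEFAULT_CLAUDE_MODEL = "claude-opus-4-5-20251101"

def get_claude_model() -> str:
    """Resolve the Claude model ID from the environment, with a pinned fallback."""
    return os.environ.get("CLAUDE_MODEL", DEFAULT_CLAUDE_MODEL)
```

Each of the three service files would then call `get_claude_model()` when building `ClaudeAgentOptions`, so switching models for testing becomes an environment change rather than a three-file edit.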

Comment on lines +368 to +386
try:
# Determine starting priority
max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1

created_rows: list = []

for f in features:
db_feature = Feature(
priority=current_priority,
category=f.get("category", "functional"),
name=f.get("name", "Unnamed feature"),
description=f.get("description", ""),
steps=f.get("steps", []),
passes=False,
)
session.add(db_feature)
created_rows.append(db_feature)
current_priority += 1

⚠️ Potential issue | 🟠 Major

Potential race condition when determining starting priority.

Unlike the bulk creation endpoint in features.py which uses with_for_update() to lock the max priority row, this method queries the max priority without any locking. Concurrent sessions could calculate the same starting priority, leading to duplicate priority values.

Consider adding row-level locking consistent with the router implementation, or coordinate through the router's bulk endpoint instead of direct DB access.

Suggested fix
         try:
             # Determine starting priority
-            max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
+            max_priority_feature = (
+                session.query(Feature)
+                .order_by(Feature.priority.desc())
+                .with_for_update()
+                .first()
+            )
             current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1
🤖 Prompt for AI Agents
In @server/services/expand_chat_session.py around lines 368 - 386, the try block
determines the starting priority with
session.query(Feature).order_by(Feature.priority.desc()).first(), which can race
under concurrent requests. Add with_for_update() (or the equivalent ORM FOR
UPDATE construct) to that query inside the same transaction, before computing
current_priority, so concurrent transactions serialize on the max-priority row;
alternatively, delegate priority assignment to the router's bulk endpoint
instead of accessing the database directly.
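One caveat worth noting for SQLite deployments: `SELECT ... FOR UPDATE` is not supported there, so SQLAlchemy's `with_for_update()` may be a no-op and not actually serialize writers. A hedged alternative, sketched below with a hypothetical minimal `Feature` model (not the project's actual schema), is to let the database compute the next priority atomically inside the INSERT via a scalar subquery:

```python
from sqlalchemy import Column, Integer, String, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Feature(Base):
    # Hypothetical minimal model for illustration only.
    __tablename__ = "features"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    priority = Column(Integer, nullable=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

def add_feature(session: Session, name: str) -> Feature:
    # COALESCE(MAX(priority), 0) + 1 is evaluated by the database inside
    # the INSERT itself, so two writers cannot observe the same max.
    next_priority = select(
        func.coalesce(func.max(Feature.priority), 0) + 1
    ).scalar_subquery()
    row = Feature(name=name, priority=next_priority)
    session.add(row)
    session.flush()  # executes the INSERT; priority is reloaded on access
    return row

with Session(engine) as session:
    first = add_feature(session, "login")
    second = add_feature(session, "signup")
    pa, pb = first.priority, second.priority
    session.commit()

print(pa, pb)
```

This sidesteps the read-then-write race entirely, at the cost of one subquery per insert; whether that trade-off fits depends on the deployment's database backend.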

@leonvanzyl leonvanzyl merged commit 0e7f8c6 into leonvanzyl:master Jan 10, 2026
1 check was pending
@leonvanzyl
Owner

Thank you.

leonvanzyl added a commit that referenced this pull request Jan 10, 2026
Post-merge fixes for PR #36 (expand-project-with-ai):

- Fix syntax error in App.tsx Escape handler (missing `} else`)
- Fix missing closing brace in types.ts FeatureBulkCreateResponse
- Remove unused exception variables flagged by ruff (F841)
- Make nav buttons minimalist: remove text labels, keep icons + shortcuts
  - "Add Feature" → icon + N shortcut, tooltip "Add new feature"
  - "Expand" → icon + E shortcut, tooltip "Expand project with AI"

All checks pass: ruff, security tests, ESLint, TypeScript build.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
rudiheydra added a commit to rudiheydra/AutoBuildr that referenced this pull request Jan 27, 2026
…l#36)

- Create StaticSpecAdapter class that wraps legacy agents as AgentSpecs
- Implement create_initializer_spec() method with:
  - Prompt loading from prompts/initializer_prompt.md
  - task_type set to "custom"
  - Tool policy with feature creation tools (feature_create_bulk, etc.)
  - max_turns=100, timeout_seconds=3600 for lengthy initialization
  - AcceptanceSpec with feature_count validator
- Also implement create_coding_spec() and create_testing_spec() for completeness
- Add comprehensive unit tests (45 tests, all passing)
- Add verification script with all 10 feature steps passing
- Add end-to-end database integration test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
rudiheydra added a commit to rudiheydra/AutoBuildr that referenced this pull request Jan 27, 2026