feat: Add Expand Project for bulk AI-powered feature creation #36
Conversation
Adds the ability to add multiple features to an existing project through a natural language conversation with Claude, similar to how initial spec creation works.

Features:
- New "Expand" button in header (keyboard shortcut: E)
- Full-screen chat interface for describing new features
- Claude reads existing app_spec.txt for context
- Features created directly in database after user approval
- Bulk feature creation endpoint for batch operations

New files:
- .claude/commands/expand-project.md - Claude skill for expansion
- server/services/expand_chat_session.py - Chat session service
- server/routers/expand_project.py - WebSocket endpoint
- ui/src/components/ExpandProjectChat.tsx - Chat UI
- ui/src/components/ExpandProjectModal.tsx - Modal wrapper
- ui/src/hooks/useExpandChat.ts - WebSocket hook

Modified:
- Added POST /bulk endpoint to features router
- Added FeatureBulkCreate schemas
- Integrated Expand button and modal in App.tsx

Co-Authored-By: Claude <noreply@anthropic.com>
Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds an interactive AI-driven "expand project" feature: new backend expand-chat session service with REST/WebSocket router, multimodal streaming integration with Claude, bulk feature creation and validation, session registry and cleanup, plus matching frontend modal, chat UI, hook, types, and API to run conversational project expansion and persist generated features.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as ExpandProjectChat
    participant WS as WebSocket Endpoint
    participant Service as ExpandChatSession
    participant Claude as Claude API
    participant DB as Database
    User->>UI: open modal / send message (+ optional image)
    UI->>WS: connect and send "start"
    WS->>Service: create or resume session
    Service->>Claude: send system prompt / multimodal user message
    Claude-->>Service: stream assistant chunks
    Service-->>WS: emit streaming chunks/events
    WS-->>UI: forward chunks/events to client
    UI->>User: render streaming assistant content
    Service->>Service: detect <features_to_create> block and parse JSON
    Service->>DB: bulk insert parsed features
    DB-->>Service: return created feature metadata
    Service-->>WS: emit features-created event
    WS-->>UI: notify UI of features added
    User->>UI: send "done"/close
    UI->>WS: send "done"
    WS->>Service: close session
    Service-->>WS: final completion event
    WS-->>UI: completion summary
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
📜 Recent review details

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (6)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server/main.py (1)
38-47: Shutdown comment is now slightly misleading (“agents and sessions”): include “expand sessions” explicitly or keep generic.
🤖 Fix all issues with AI agents
In @.claude/commands/expand-project.md:
- Around line 145-173: The fenced code block containing the sample XML/JSON
hybrid starting with "<features_to_create>" should specify a language to satisfy
markdownlint — change the opening triple backticks to "```json" (i.e., use a
JSON code fence for the block that contains the features array) so the block is
highlighted as JSON; ensure the closing triple backticks remain and no other
content changes.
In @server/routers/expand_project.py:
- Around line 184-197: Remove detailed exception strings from WebSocket
responses and keep only generic messages while preserving server-side logs: in
the block that parses attachments (where ImageAttachment is used and
logger.warning is called) change the websocket.send_json payload to use
"content": "Invalid attachment" instead of including str(e); similarly, in the
other error handling block that calls logger.exception(...) and sends a server
error over websocket, change the websocket.send_json payload to use "content":
"Server error" rather than f"Server error: {str(e)}". Ensure
logger.warning/logger.exception remain unchanged so full details stay in server
logs.
- Around line 163-170: The "start" branch currently always calls
create_expand_session which replaces any existing session and loses state;
instead, first call get_expand_session(project_name, project_dir) and if it
returns a session reuse it, otherwise call create_expand_session to create a new
one; then stream the initial greeting from that session (avoid calling
session.start() on an existing session if start() resets state—call a
non-resetting method or make session.start() idempotent) so retries will not
replace/close an active session.
In @server/routers/features.py:
- Around line 302-374: The create_features_bulk function has a redundant
discarded query and a race condition when re-querying by priority after commit;
also starting_priority in FeatureBulkCreate lacks validation. Fix by validating
bulk.starting_priority (ensure positive int >=1) before use, then when creating
each Feature instance call session.add(...) followed by session.flush() and
session.refresh(db_feature) to get assigned IDs immediately (or collect the ORM
instances returned) instead of committing and re-querying by priority; build the
created_features list directly from these refreshed instances and finally commit
once. Use the symbols create_features_bulk, bulk.starting_priority,
current_priority, session.flush(), session.refresh(db_feature), and Feature to
locate and apply the changes.
In @server/services/expand_chat_session.py:
- Around line 216-309: The _query_claude method currently only searches a single
<features_to_create> block, may create duplicate features on repeated blocks,
and appends an assistant message per TextBlock; change it to iterate all feature
blocks (use re.finditer on full_response) and for each parsed feature JSON skip
any features whose IDs are already in self.created_feature_ids before calling
_create_features_bulk (or de-duplicate the features_data list), update
self.features_created and self.created_feature_ids only with newly created IDs,
and collapse streamed text by accumulating all TextBlock text into a single
assistant string and appending one message to self.messages at the end of the
response handling (keep streaming yields for text chunks but only persist one
assistant entry per turn). Ensure exceptions and yields for errors remain as-is
and reference symbols: _query_claude, receive_response, AssistantMessage,
TextBlock, full_response, _create_features_bulk, created_feature_ids,
features_created, created_feature_ids, and messages.
In @ui/src/components/ExpandProjectChat.tsx:
- Around line 90-126: handleFileSelect currently reads files with FileReader but
lacks error handling and a guard for extracting base64, which can push invalid
attachments; add a reader.onerror handler to call setError with the FileReader
error and abort adding that file, and in reader.onload validate e.target?.result
is a string and that dataUrl.split(',')[1] exists before creating the
ImageAttachment and calling setPendingAttachments; also ensure you clear the
input value after processing (so re-selecting the same file triggers onChange) —
relate changes to handleFileSelect, reader.onload, reader.onerror, base64Data,
and setPendingAttachments.
In @ui/src/hooks/useExpandChat.ts:
- Around line 104-117: The onclose handler schedules reconnects but disconnect()
doesn’t clear that pending timeout or mark the closure as intentional, so a
later timeout can call connect() and reopen the socket; update disconnect() to
clear reconnectTimeoutRef.current and set a manual-disconnect flag (e.g.,
manualDisconnectRef or isManualDisconnectRef.current = true), add a guard in
ws.onclose before scheduling (check !isManualDisconnectRef.current) and in
connect() at the top (return early if isManualDisconnectRef.current is true),
and ensure reconnectTimeoutRef.current is nulled when cleared.
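The get-or-create fix suggested for the WebSocket "start" branch above can be sketched as follows. This is an illustration only: `ExpandSession`, `get_or_create_expand_session`, and `start_once` are hypothetical stand-ins for the project's actual session registry, assuming a per-project dict guarded by a lock.

```python
import threading

class ExpandSession:
    """Hypothetical stand-in for ExpandChatSession; real sessions hold Claude state."""
    def __init__(self, project_name: str) -> None:
        self.project_name = project_name
        self.started = False

    def start_once(self) -> None:
        # Idempotent start: a retried "start" must not reset an active session.
        if not self.started:
            self.started = True

_SESSIONS: dict[str, ExpandSession] = {}
_SESSIONS_LOCK = threading.Lock()

def get_or_create_expand_session(project_name: str) -> ExpandSession:
    """Reuse an existing session so a retried "start" never replaces live state."""
    with _SESSIONS_LOCK:
        session = _SESSIONS.get(project_name)
        if session is None:
            session = _SESSIONS[project_name] = ExpandSession(project_name)
    session.start_once()
    return session

a = get_or_create_expand_session("demo")
b = get_or_create_expand_session("demo")  # retry reuses the same session
print(a is b)  # True
```

The key property is that a second "start" for the same project returns the existing object instead of constructing (and thereby resetting) a new one.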
🧹 Nitpick comments (7)
server/schemas.py (1)
99-109: Consider basic validation on bulk requests (empty list / invalid priority). If you don’t want no-op bulks, add a `min_length=1` constraint on `features`. Also consider constraining `starting_priority` to your allowed range (e.g., `ge=1`) to avoid negative/zero priorities slipping through.

.claude/commands/expand-project.md (1)
83-97: Minor doc polish: Phase 2 bullets repeat “What …”; consider varying the phrasing to scan faster.

ui/src/hooks/useExpandChat.ts (1)
244-258: `start()`: consider capping the CONNECTING poll loop (avoid unbounded timeouts if the socket never opens).

ui/src/App.tsx (2)
74-126: Keyboard shortcut wiring is fine; consider also ignoring contentEditable targets (and/or Select) to avoid surprise triggers.
94-99: Avoid repeating the “has features” sum expression; compute once (readability + fewer dependency surprises).Proposed diff
```diff
 function App() {
@@
   const { data: features } = useFeatures(selectedProject)
@@
+  const hasFeatures =
+    !!features &&
+    (features.pending.length + features.in_progress.length + features.done.length) > 0
@@
   // E : Expand project with AI (when project selected and has features)
-  if ((e.key === 'e' || e.key === 'E') && selectedProject && features &&
-      (features.pending.length + features.in_progress.length + features.done.length) > 0) {
+  if ((e.key === 'e' || e.key === 'E') && selectedProject && hasFeatures) {
     e.preventDefault()
     setShowExpandProject(true)
   }
@@
-  {features && (features.pending.length + features.in_progress.length + features.done.length) > 0 && (
+  {hasFeatures && (
     <button
       onClick={() => setShowExpandProject(true)}
```

Also applies to: 176-190
server/routers/expand_project.py (1)
244-246: Session retention on disconnect is reasonable for “resume”, but add a pruning strategy (TTL or remove-on-complete) to avoid unbounded in-memory sessions.

server/services/expand_chat_session.py (1)
383-444: Session registry: `threading.Lock` is OK here, but prefer an async lock if contention is possible. You don’t `await` while holding `_expand_sessions_lock` (good), so correctness is fine. If this can be hit frequently under async load, `threading.Lock` can still block the event loop briefly; consider `asyncio.Lock` (or `anyio.Lock`) to keep it idiomatic.
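A minimal sketch of the `asyncio.Lock` variant, assuming a plain dict registry; `register_session` is illustrative, not the module's actual API:

```python
import asyncio

_sessions: dict[str, object] = {}
# Safe to construct outside a running loop on Python 3.10+.
_sessions_lock = asyncio.Lock()

async def register_session(name: str, session: object) -> object:
    # asyncio.Lock suspends the waiting task instead of blocking the event loop.
    async with _sessions_lock:
        return _sessions.setdefault(name, session)

async def main() -> None:
    first, second = object(), object()
    kept = await register_session("proj", first)
    again = await register_session("proj", second)  # existing entry wins
    print(kept is first, again is first)  # True True

asyncio.run(main())
```

Unlike `threading.Lock`, waiting on an `asyncio.Lock` yields control back to the event loop, so other coroutines keep running while a task queues for the registry.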
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
.claude/commands/expand-project.md
server/main.py
server/routers/__init__.py
server/routers/expand_project.py
server/routers/features.py
server/schemas.py
server/services/expand_chat_session.py
ui/src/App.tsx
ui/src/components/ExpandProjectChat.tsx
ui/src/components/ExpandProjectModal.tsx
ui/src/hooks/useExpandChat.ts
ui/src/lib/api.ts
ui/src/lib/types.ts
🧰 Additional context used
🧬 Code graph analysis (6)
ui/src/components/ExpandProjectModal.tsx (1)
ui/src/components/ExpandProjectChat.tsx (1)
ExpandProjectChat(25-375)
ui/src/lib/api.ts (1)
server/schemas.py (2)
FeatureBulkCreate (99-102)
FeatureBulkCreateResponse (105-108)
ui/src/App.tsx (2)
ui/src/lib/types.ts (1)
Feature (60-67)
ui/src/components/ExpandProjectModal.tsx (1)
ExpandProjectModal (17-41)
ui/src/components/ExpandProjectChat.tsx (5)
server/schemas.py (1)
ImageAttachment (187-208)
ui/src/hooks/useExpandChat.ts (1)
useExpandChat (38-323)
server/services/expand_chat_session.py (1)
start (82-169)
server/websocket.py (1)
disconnect (67-73)
ui/src/components/ChatMessage.tsx (1)
ChatMessage (15-192)
server/routers/features.py (4)
server/schemas.py (2)
FeatureBulkCreate (99-102)
FeatureBulkCreateResponse (105-108)
server/routers/projects.py (1)
validate_project_name (77-84)
server/websocket.py (1)
validate_project_name (107-109)
ui/src/lib/types.ts (1)
Feature (60-67)
server/main.py (5)
server/schemas.py (1)
SetupStatus (139-144)
server/services/assistant_chat_session.py (1)
cleanup_all_sessions (361-373)
server/services/spec_chat_session.py (1)
cleanup_all_sessions (476-488)
server/services/expand_chat_session.py (1)
cleanup_all_expand_sessions (432-444)
server/services/process_manager.py (1)
cleanup_all_managers (412-425)
🪛 LanguageTool
.claude/commands/expand-project.md
[style] ~91-~91: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ser take? - What's the success state? - What's the error state? Integration: - ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🪛 markdownlint-cli2 (0.18.1)
.claude/commands/expand-project.md
147-147: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (12)
server/routers/__init__.py (1)
10-24: Router export wiring looks correct. Importing `router as expand_project_router` and re-exporting via `__all__` is consistent with the existing router pattern here; assuming `server/routers/expand_project.py` defines `router`, this should integrate cleanly.

ui/src/components/ExpandProjectModal.tsx (1)
23-39: Verify Tailwind v4 CSS-variable syntax for the modal background.
`bg-[var(--color-neo-bg)]` may be impacted by Tailwind v4’s arbitrary-value/CSS-variable syntax changes; if you hit build-time class parsing issues, prefer switching to a v4-friendly form (e.g. `bg-(--color-neo-bg)` if that’s the intended token).

ui/src/lib/types.ts (2)
303-321: ExpandChatServerMessage: confirm whether “question” messages can appear. Given the expand flow includes “clarifying questions”, double-check whether the server ever emits `SpecChatQuestionMessage` (`type: 'question'`). If yes, include it in `ExpandChatServerMessage` to keep client-side handling type-safe and exhaustive.
323-331: Bulk feature create typings look consistent with the server schemas.ui/src/lib/api.ts (1)
116-124: Bulk-create API wrapper is straightforward and consistent with existing helpers.server/main.py (2)
18-30: Router + shutdown cleanup wiring looks consistent with existing session cleanup patterns.
92-99: Including expand_project_router in main router list is the right integration point.ui/src/App.tsx (1)
276-287: Invalidating the features query on completion is the right refresh mechanism.ui/src/components/ExpandProjectChat.tsx (1)
52-60: Effect deps are suppressed; OK if projectName is immutable while mounted—otherwise you’ll keep talking to the wrong session.server/routers/expand_project.py (1)
64-101: REST session status/cancel endpoints are straightforward and match the WS session model.server/services/expand_chat_session.py (2)
310-369: Database isolation is correct; no project scoping required in Feature modelThe database isolation happens at the file level—each
`project_dir` corresponds to a separate SQLite database file at `{project_dir}/features.db`. The `Feature` model does not need a `project_id` or `project_name` field because different projects maintain entirely separate database instances. This approach matches the existing bulk feature creation endpoint in `server/routers/features.py` (lines 302–374), which uses the identical pattern without issues.
25-35: Clarify multimodal message payload structure for `ClaudeSDKClient.query()`. The async generator pattern itself is supported by `query()` (not an issue). However, the payload structure appears inconsistent with documented usage. The code wraps the message in a dict envelope with keys (`type`, `message`, `parent_tool_use_id`, `session_id`), whereas examples in claude-agent-sdk docs show a simpler structure with direct `role` and `content` keys. Confirm that this wrapper format is the correct input for multimodal messages in version 0.1.0, and verify whether hardcoding `"session_id": "default"` is appropriate or causes cross-session routing issues.
```python
# Parse attachments if present
attachments: list[ImageAttachment] = []
raw_attachments = message.get("attachments", [])
if raw_attachments:
    try:
        for raw_att in raw_attachments:
            attachments.append(ImageAttachment(**raw_att))
    except (ValidationError, Exception) as e:
        logger.warning(f"Invalid attachment data: {e}")
        await websocket.send_json({
            "type": "error",
            "content": f"Invalid attachment: {str(e)}"
        })
        continue
```
🧩 Analysis chain

🏁 Scripts executed (Repository: leonvanzyl/autocoder):

```shell
cd /tmp && find . -name "expand_project.py" -type f 2>/dev/null | head -5
find . -name "expand_project.py" -o -name "*expand*" -type f 2>/dev/null | grep -E "\.py$"
git ls-files | grep -i expand
wc -l server/routers/expand_project.py
sed -n '184,197p' server/routers/expand_project.py
sed -n '234,242p' server/routers/expand_project.py
```
Remove exception details from WebSocket error messages sent to clients; log details server-side instead.
Exception strings expose internal system details (paths, library versions, implementation specifics) to end users. This information should only be logged server-side. The client should receive generic error messages.
Two instances need fixing:
- Lines 184-197:
"content": f"Invalid attachment: {str(e)}"→"content": "Invalid attachment" - Lines 234-242:
"content": f"Server error: {str(e)}"→"content": "Server error"
The server-side logging (via logger.warning() and logger.exception()) is already in place and should be preserved.
🤖 Prompt for AI Agents
In @server/routers/expand_project.py around lines 184 - 197, Remove detailed
exception strings from WebSocket responses and keep only generic messages while
preserving server-side logs: in the block that parses attachments (where
ImageAttachment is used and logger.warning is called) change the
websocket.send_json payload to use "content": "Invalid attachment" instead of
including str(e); similarly, in the other error handling block that calls
logger.exception(...) and sends a server error over websocket, change the
websocket.send_json payload to use "content": "Server error" rather than
f"Server error: {str(e)}". Ensure logger.warning/logger.exception remain
unchanged so full details stay in server logs.
```python
async def _query_claude(
    self,
    message: str,
    attachments: list[ImageAttachment] | None = None
) -> AsyncGenerator[dict, None]:
    """
    Internal method to query Claude and stream responses.

    Handles text responses and detects feature creation blocks.
    """
    if not self.client:
        return

    # Build the message content
    if attachments and len(attachments) > 0:
        content_blocks = []
        if message:
            content_blocks.append({"type": "text", "text": message})
        for att in attachments:
            content_blocks.append({
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": att.mimeType,
                    "data": att.base64Data,
                }
            })
        await self.client.query(_make_multimodal_message(content_blocks))
        logger.info(f"Sent multimodal message with {len(attachments)} image(s)")
    else:
        await self.client.query(message)

    # Accumulate full response to detect feature blocks
    full_response = ""

    # Stream the response
    async for msg in self.client.receive_response():
        msg_type = type(msg).__name__

        if msg_type == "AssistantMessage" and hasattr(msg, "content"):
            for block in msg.content:
                block_type = type(block).__name__

                if block_type == "TextBlock" and hasattr(block, "text"):
                    text = block.text
                    if text:
                        full_response += text
                        yield {"type": "text", "content": text}

                        self.messages.append({
                            "role": "assistant",
                            "content": text,
                            "timestamp": datetime.now().isoformat()
                        })

    # Check for feature creation block in full response
    features_match = re.search(
        r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
        full_response
    )

    if features_match:
        try:
            features_json = features_match.group(1)
            features_data = json.loads(features_json)

            if features_data and isinstance(features_data, list):
                # Create features via REST API
                created = await self._create_features_bulk(features_data)

                if created:
                    self.features_created += len(created)
                    self.created_feature_ids.extend([f["id"] for f in created])

                    yield {
                        "type": "features_created",
                        "count": len(created),
                        "features": created
                    }

                    logger.info(f"Created {len(created)} features for {self.project_name}")
        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse features JSON: {e}")
            yield {
                "type": "error",
                "content": f"Failed to parse feature definitions: {str(e)}"
            }
        except Exception as e:
            logger.exception("Failed to create features")
            yield {
                "type": "error",
                "content": f"Failed to create features: {str(e)}"
            }
```
Feature block parsing is duplication-prone and message persistence is too granular

- Duplication risk: If Claude repeats a `<features_to_create>` block (or you re-send/refresh), `_create_features_bulk()` will run again and create duplicates; you already track `created_feature_ids` but don’t use it to prevent replays.
- Parsing only first match: `re.search(...)` finds a single block; if the skill ever emits multiple blocks, you’ll ignore later ones.
- Message history: you append an `"assistant"` message for every streamed `TextBlock` (Lines 265-269). That makes `self.messages` fragmented and hard to replay; usually you want one assistant message per turn (accumulate blocks, append once at end).
Suggested approach (loop matches + add replay protection)

```diff
@@
-        features_match = re.search(
-            r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
-            full_response
-        )
-
-        if features_match:
+        for features_match in re.finditer(
+            r'<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>',
+            full_response,
+        ):
             try:
                 features_json = features_match.group(1)
                 features_data = json.loads(features_json)
@@
-                created = await self._create_features_bulk(features_data)
+                created = await self._create_features_bulk(features_data)
```
+ created = await self._create_features_bulk(features_data)🤖 Prompt for AI Agents
In @server/services/expand_chat_session.py around lines 216 - 309, The
_query_claude method currently only searches a single <features_to_create>
block, may create duplicate features on repeated blocks, and appends an
assistant message per TextBlock; change it to iterate all feature blocks (use
re.finditer on full_response) and for each parsed feature JSON skip any features
whose IDs are already in self.created_feature_ids before calling
_create_features_bulk (or de-duplicate the features_data list), update
self.features_created and self.created_feature_ids only with newly created IDs,
and collapse streamed text by accumulating all TextBlock text into a single
assistant string and appending one message to self.messages at the end of the
response handling (keep streaming yields for text chunks but only persist one
assistant entry per turn). Ensure exceptions and yields for errors remain as-is
and reference symbols: _query_claude, receive_response, AssistantMessage,
TextBlock, full_response, _create_features_bulk, created_feature_ids,
features_created, created_feature_ids, and messages.
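A minimal sketch of the multi-block parse with replay protection; `extract_new_features` and the `id`-keyed de-dup are hypothetical (the real session tracks server-assigned feature IDs after creation), shown only to illustrate `re.finditer` plus a seen-set:

```python
import json
import re

FEATURE_BLOCK = re.compile(
    r"<features_to_create>\s*(\[[\s\S]*?\])\s*</features_to_create>"
)

def extract_new_features(full_response: str, created_ids: set) -> list[dict]:
    """Parse every <features_to_create> block and drop features already created."""
    new_features: list[dict] = []
    for match in FEATURE_BLOCK.finditer(full_response):  # all blocks, not just the first
        for feature in json.loads(match.group(1)):
            fid = feature.get("id")
            if fid in created_ids:
                continue  # replay protection: skip anything already persisted
            created_ids.add(fid)
            new_features.append(feature)
    return new_features

response = (
    'Plan:\n<features_to_create>[{"id": "a", "name": "Login"}]</features_to_create>\n'
    '<features_to_create>[{"id": "a", "name": "Login"}, '
    '{"id": "b", "name": "Search"}]</features_to_create>'
)
created: set = set()
print([f["id"] for f in extract_new_features(response, created)])  # ['a', 'b']
```

A repeated block (or a retried turn with the same `created_ids` set) yields no duplicates, which is the property the review asks for.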
```tsx
// File handling for image attachments
const handleFileSelect = useCallback((files: FileList | null) => {
  if (!files) return

  Array.from(files).forEach((file) => {
    // Validate file type
    if (!ALLOWED_TYPES.includes(file.type)) {
      setError(`Invalid file type: ${file.name}. Only JPEG and PNG are supported.`)
      return
    }

    // Validate file size
    if (file.size > MAX_FILE_SIZE) {
      setError(`File too large: ${file.name}. Maximum size is 5 MB.`)
      return
    }

    // Read and convert to base64
    const reader = new FileReader()
    reader.onload = (e) => {
      const dataUrl = e.target?.result as string
      const base64Data = dataUrl.split(',')[1]

      const attachment: ImageAttachment = {
        id: `${Date.now()}-${Math.random().toString(36).substring(2, 9)}`,
        filename: file.name,
        mimeType: file.type as 'image/jpeg' | 'image/png',
        base64Data,
        previewUrl: dataUrl,
        size: file.size,
      }

      setPendingAttachments((prev) => [...prev, attachment])
    }
    reader.readAsDataURL(file)
  })
}, [])
```
Attachment ingestion needs FileReader error handling + base64 extraction guard (avoid pushing undefined/invalid payloads).
Proposed diff
```diff
 const handleFileSelect = useCallback((files: FileList | null) => {
   if (!files) return
   Array.from(files).forEach((file) => {
@@
     // Read and convert to base64
     const reader = new FileReader()
+    reader.onerror = () => {
+      setError(`Failed to read file: ${file.name}`)
+    }
     reader.onload = (e) => {
-      const dataUrl = e.target?.result as string
-      const base64Data = dataUrl.split(',')[1]
+      const dataUrl = e.target?.result
+      if (typeof dataUrl !== 'string') {
+        setError(`Failed to read file: ${file.name}`)
+        return
+      }
+      const base64Data = dataUrl.includes(',') ? dataUrl.split(',')[1] : null
+      if (!base64Data) {
+        setError(`Invalid file encoding: ${file.name}`)
+        return
+      }
@@
       setPendingAttachments((prev) => [...prev, attachment])
     }
     reader.readAsDataURL(file)
   })
 }, [])
@@
 <input
   ref={fileInputRef}
   type="file"
   accept="image/jpeg,image/png"
   multiple
   onChange={(e) => handleFileSelect(e.target.files)}
   className="hidden"
 />
```

Also consider clearing the input value so selecting the same file again re-triggers onChange:
Optional follow-up diff
```diff
-  onChange={(e) => handleFileSelect(e.target.files)}
+  onChange={(e) => {
+    handleFileSelect(e.target.files)
+    e.currentTarget.value = ''
+  }}
```

Also applies to: 301-308
🤖 Prompt for AI Agents
In @ui/src/components/ExpandProjectChat.tsx around lines 90 - 126,
handleFileSelect currently reads files with FileReader but lacks error handling
and a guard for extracting base64, which can push invalid attachments; add a
reader.onerror handler to call setError with the FileReader error and abort
adding that file, and in reader.onload validate e.target?.result is a string and
that dataUrl.split(',')[1] exists before creating the ImageAttachment and
calling setPendingAttachments; also ensure you clear the input value after
processing (so re-selecting the same file triggers onChange) — relate changes to
handleFileSelect, reader.onload, reader.onerror, base64Data, and
setPendingAttachments.
```typescript
ws.onclose = () => {
  setConnectionStatus('disconnected')
  if (pingIntervalRef.current) {
    clearInterval(pingIntervalRef.current)
    pingIntervalRef.current = null
  }

  // Attempt reconnection if not intentionally closed
  if (reconnectAttempts.current < maxReconnectAttempts && !isCompleteRef.current) {
    reconnectAttempts.current++
    const delay = Math.min(1000 * Math.pow(2, reconnectAttempts.current), 10000)
    reconnectTimeoutRef.current = window.setTimeout(connect, delay)
  }
}
```
🧩 Analysis chain

🏁 Scripts executed (Repository: leonvanzyl/autocoder):

```shell
cat -n ui/src/hooks/useExpandChat.ts | sed -n '95,125p'
cat -n ui/src/hooks/useExpandChat.ts | sed -n '290,320p'
wc -l ui/src/hooks/useExpandChat.ts
cat -n ui/src/hooks/useExpandChat.ts | sed -n '1,90p'
cat -n ui/src/hooks/useExpandChat.ts | sed -n '50,103p'
```
disconnect() doesn't cancel a pending reconnect timeout, allowing the WebSocket to reopen after user disconnection.
When ws.onclose fires and schedules a reconnect timeout, if disconnect() is called before that timeout executes, the scheduled reconnection still fires and calls connect(). Since disconnect() nullifies wsRef.current, the check in connect() at line 79 (wsRef.current?.readyState === WebSocket.OPEN) doesn't prevent the reconnection—the WebSocket is recreated anyway.
Proposed diff (add manual disconnect guard + clear pending timeout)

```diff
 export function useExpandChat({
@@
   const reconnectTimeoutRef = useRef<number | null>(null)
   const isCompleteRef = useRef(false)
+  const manuallyDisconnectedRef = useRef(false)
@@
   const connect = useCallback(() => {
+    if (manuallyDisconnectedRef.current) return
     if (wsRef.current?.readyState === WebSocket.OPEN) {
       return
     }
@@
     ws.onopen = () => {
       setConnectionStatus('connected')
       reconnectAttempts.current = 0
+      manuallyDisconnectedRef.current = false
@@
     ws.onclose = () => {
       setConnectionStatus('disconnected')
@@
       // Attempt reconnection if not intentionally closed
-      if (reconnectAttempts.current < maxReconnectAttempts && !isCompleteRef.current) {
+      if (
+        !manuallyDisconnectedRef.current &&
+        reconnectAttempts.current < maxReconnectAttempts &&
+        !isCompleteRef.current
+      ) {
         reconnectAttempts.current++
         const delay = Math.min(1000 * Math.pow(2, reconnectAttempts.current), 10000)
         reconnectTimeoutRef.current = window.setTimeout(connect, delay)
       }
     }
@@
   const disconnect = useCallback(() => {
+    manuallyDisconnectedRef.current = true
     reconnectAttempts.current = maxReconnectAttempts // Prevent reconnection
+    if (reconnectTimeoutRef.current) {
+      clearTimeout(reconnectTimeoutRef.current)
+      reconnectTimeoutRef.current = null
+    }
     if (pingIntervalRef.current) {
       clearInterval(pingIntervalRef.current)
       pingIntervalRef.current = null
     }
     if (wsRef.current) {
       wsRef.current.close()
       wsRef.current = null
     }
     setConnectionStatus('disconnected')
   }, [])
```

Also applies to: 299-310
🤖 Prompt for AI Agents
In @ui/src/hooks/useExpandChat.ts around lines 104 - 117, The onclose handler
schedules reconnects but disconnect() doesn’t clear that pending timeout or mark
the closure as intentional, so a later timeout can call connect() and reopen the
socket; update disconnect() to clear reconnectTimeoutRef.current and set a
manual-disconnect flag (e.g., manualDisconnectRef or
isManualDisconnectRef.current = true), add a guard in ws.onclose before
scheduling (check !isManualDisconnectRef.current) and in connect() at the top
(return early if isManualDisconnectRef.current is true), and ensure
reconnectTimeoutRef.current is nulled when cleared.
- Add language specifier to fenced code block in expand-project.md
- Remove detailed exception strings from WebSocket responses (security)
- Make WebSocket "start" message idempotent to avoid session reset
- Fix race condition in bulk feature creation with row-level lock
- Add validation for starting_priority (must be >= 1)
- Fix _query_claude to handle multiple feature blocks and deduplicate
- Add FileReader error handling in ExpandProjectChat
- Fix disconnect() to clear pending reconnect timeout
- Enable sandbox mode and validate CLI path in expand_chat_session
- Clean up temporary settings file on session close

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In @server/routers/expand_project.py:
- Around line 46-48: The project has four inconsistent validate_project_name
implementations (one in expand_project.py returning bool and others returning
str/raising HTTPException); consolidate them by creating a single shared
validator (e.g., validate_project_name) in a common util module and update all
routers (expand_project, features, agent, projects) to import and call that
validator; the shared function should perform the regex check
r'^[a-zA-Z0-9_-]{1,50}$' and raise an HTTPException with a 400 status and clear
message on failure, otherwise return the validated name so callers don't
duplicate error handling.
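The consolidation described above can be sketched as a small shared module. The `HTTPException` class below is a minimal stand-in for `fastapi.HTTPException` so the sketch runs standalone; the regex is the one quoted in the prompt:

```python
import re

class HTTPException(Exception):
    """Minimal stand-in for fastapi.HTTPException so the sketch runs standalone."""
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail
        super().__init__(detail)

PROJECT_NAME_RE = re.compile(r'^[a-zA-Z0-9_-]{1,50}$')

def validate_project_name(name: str) -> str:
    """Validate and return a project name, raising 400 on bad input.

    Restricting to [a-zA-Z0-9_-] also blocks path traversal ("..", "/").
    """
    if not PROJECT_NAME_RE.match(name):
        raise HTTPException(status_code=400, detail=f"Invalid project name: {name!r}")
    return name
```

Callers then import this one validator instead of duplicating the regex and error handling per router.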
In @server/services/expand_chat_session.py:
- Around line 186-230: Concurrent send_message() calls can interleave
_query_claude() streams and corrupt responses; introduce a per-session
asyncio.Lock (e.g., self._query_lock) initialized when session starts and use
"async with self._query_lock" around calls to _query_claude() (and the analogous
call in receive_response/other send_message overloads) so only one query/stream
runs at a time, ensuring serialized streaming and correct feature parsing.
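The serialization idea can be sketched with a toy session whose `_query_claude` simulates a streamed response; the class and chunk format are illustrative stand-ins, not the real SDK calls:

```python
import asyncio

class ExpandSessionSketch:
    """Toy stand-in for ExpandChatSession; _query_claude simulates streaming."""

    def __init__(self):
        self._query_lock = asyncio.Lock()
        self.log: list[str] = []

    async def _query_claude(self, tag: str):
        # Simulated streamed chunks, with await points where interleaving could occur.
        for i in range(3):
            await asyncio.sleep(0)
            self.log.append(f"{tag}:{i}")

    async def send_message(self, tag: str):
        # Serialize concurrent sends so streamed chunks never interleave.
        async with self._query_lock:
            await self._query_claude(tag)

async def main() -> list[str]:
    s = ExpandSessionSketch()
    await asyncio.gather(s.send_message("a"), s.send_message("b"))
    return s.log

log = asyncio.run(main())
```

Without the lock the two streams would interleave chunk by chunk; with it, each response's chunks stay contiguous.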
- Around line 334-393: The _create_features_bulk function is racy and re-queries
by priority; instead, add proper transaction handling: wrap the insert loop in
the session transaction, collect created ORM objects (e.g., db_feature) into a
local list, call session.flush() after adding them to populate IDs (no re-query
by priority), build and return the created_features from those flushed objects,
and ensure any exception calls session.rollback() in the except block before
re-raising; also document that robust concurrency requires either a DB
unique/sequence on priority or a locking strategy around the max-priority
selection if you must keep the max+1 approach.
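A minimal sketch of the read-IDs-at-insert pattern, illustrated with stdlib `sqlite3` rather than SQLAlchemy (in the ORM, `session.flush()` populating `db_feature.id` plays the role of `cursor.lastrowid` here); the table schema and column names are assumptions for the demo:

```python
import sqlite3

def create_features_bulk(conn: sqlite3.Connection, features: list[dict]) -> list[int]:
    """Insert features and return their IDs from the inserts themselves.

    Collecting IDs as rows are inserted avoids a racy re-query by priority
    range after commit; rollback on failure leaves no partial batch behind.
    """
    try:
        row = conn.execute("SELECT COALESCE(MAX(priority), 0) FROM features").fetchone()
        priority = row[0]
        ids = []
        for f in features:
            priority += 1
            cur = conn.execute(
                "INSERT INTO features (priority, name, passes) VALUES (?, ?, 0)",
                (priority, f.get("name", "Unnamed feature")),
            )
            ids.append(cur.lastrowid)
        conn.commit()
        return ids
    except Exception:
        conn.rollback()
        raise
```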
In @ui/src/components/ExpandProjectChat.tsx:
- Around line 52-59: The start function is unstable because its dependency
connect depends on an inline onError callback recreated each render; memoize the
error handler by creating a stable callback (e.g., wrap setError in useCallback
as handleError) and pass that handler as onError to useExpandChat so
connect/start remain stable; keep the mount/unmount useEffect with the existing
eslint-disable comment.
🧹 Nitpick comments (2)
.claude/commands/expand-project.md (1)
87-92: Consider varying sentence starters for better readability. Three consecutive questions begin with "What's", which slightly impacts flow. Consider rephrasing one for variety, e.g., "How does success look?" instead of "What's the success state?"
server/services/expand_chat_session.py (1)
407-468: Registry uses `threading.Lock` inside async fns; consider `asyncio.Lock` if this runs on the event loop. The critical section is short, so it's probably fine, but `threading.Lock` can still block the loop under contention.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .claude/commands/expand-project.md
- server/routers/expand_project.py
- server/routers/features.py
- server/services/expand_chat_session.py
- ui/src/components/ExpandProjectChat.tsx
- ui/src/hooks/useExpandChat.ts
🧰 Additional context used
🧬 Code graph analysis (4)
ui/src/hooks/useExpandChat.ts (4)
- ui/src/components/ChatMessage.tsx (1): ChatMessage (15-192)
- server/schemas.py (1): ImageAttachment (187-208)
- server/websocket.py (2): connect (58-65), disconnect (67-73)
- server/services/expand_chat_session.py (1): start (90-184)

server/routers/features.py (7)
- server/schemas.py (2): FeatureBulkCreate (99-102), FeatureBulkCreateResponse (105-108)
- server/routers/expand_project.py (2): validate_project_name (46-48), _get_project_path (35-43)
- server/routers/agent.py (2): validate_project_name (35-42), _get_project_path (18-26)
- server/routers/projects.py (1): validate_project_name (77-84)
- server/websocket.py (2): validate_project_name (107-109), _get_project_path (26-34)
- api/database.py (1): Feature (19-44)
- ui/src/lib/types.ts (1): Feature (60-67)

server/routers/expand_project.py (4)
- server/schemas.py (1): ImageAttachment (187-208)
- server/services/expand_chat_session.py (11): ExpandChatSession (41-404), create_expand_session (418-433), get_expand_session (412-415), list_expand_sessions (450-453), remove_expand_session (436-447), is_complete (398-400), get_features_created (394-396), get_messages (402-404), close (72-88), start (90-184), send_message (186-229)
- server/routers/features.py (2): _get_project_path (30-38), validate_project_name (59-66)
- registry.py (1): get_project_path (195-213)

server/services/expand_chat_session.py (3)
- server/schemas.py (1): ImageAttachment (187-208)
- ui/src/lib/types.ts (1): Feature (60-67)
- api/database.py (1): create_database (76-94)
🪛 LanguageTool
.claude/commands/expand-project.md
[style] ~91-~91: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ser take? - What's the success state? - What's the error state? Integration: - ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🔇 Additional comments (9)
server/routers/features.py (1)
302-385: LGTM - Well-designed bulk creation endpoint. The implementation properly handles:
- Row-level locking with `with_for_update()` to prevent race conditions when determining starting priority
- Sequential priority assignment with proper validation
- Efficient ID-based retrieval of created features
- Appropriate error handling and empty input handling
The concurrency safety and validation logic are particularly well-executed.
ui/src/hooks/useExpandChat.ts (3)
78-242: LGTM - Robust WebSocket connection management. The implementation includes:
- Exponential backoff for reconnection (capped at 10s)
- Keep-alive ping every 30 seconds
- Proper cleanup of intervals
- Respects completion state to prevent unnecessary reconnections
- Comprehensive handling of all server message types
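For reference, the backoff schedule implemented by the hook (`Math.min(1000 * Math.pow(2, attempt), 10000)`) works out as follows, sketched in Python:

```python
def reconnect_delay_ms(attempt: int, base_ms: int = 1000, cap_ms: int = 10_000) -> int:
    """Delay before reconnect attempt N, mirroring min(1000 * 2**attempt, 10000)."""
    return min(base_ms * 2 ** attempt, cap_ms)

# The hook increments the attempt counter before computing the delay,
# so the first retry uses attempt == 1.
schedule = [reconnect_delay_ms(a) for a in range(1, 6)]
```

The cap means the fifth and later attempts all wait the full 10 seconds.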
260-297: LGTM - Message sending properly implemented. Correctly validates connection state, builds payload with optional attachments, and maintains chat state.
299-314: LGTM - Clean disconnect implementation. Properly prevents reconnection and cleans up all resources (timers and WebSocket).
ui/src/components/ExpandProjectChat.tsx (2)
91-129: LGTM - Robust file upload handling. Properly validates file type and size, handles FileReader errors, and correctly extracts base64 data from the data URL.
181-377: LGTM - Well-designed chat UI component. The component demonstrates excellent UX patterns:
- Clear connection status indicators
- Drag-and-drop support with visual feedback
- Keyboard shortcuts (Enter to send, Shift+Enter for newlines)
- Auto-scroll and auto-focus behavior
- Comprehensive error handling with dismissible error banner
- Clear completion state with feature count
server/routers/expand_project.py (2)
107-256: LGTM - Robust WebSocket implementation. The endpoint demonstrates solid design:
- Validates project name, path, and spec existence before accepting connection
- Idempotent start message handling (resumes existing sessions)
- Comprehensive message type handling (ping/pong, start, message, done)
- Proper attachment validation using Pydantic schemas
- Intentionally preserves session on disconnect to enable resume
- Good error handling for JSON decode errors and validation failures
89-100: No issue found. The `remove_expand_session` function is defined as `async` in `server/services/expand_chat_session.py`, so the `await` usage on line 99 is correct. Likely an incorrect or invalid review comment.
server/services/expand_chat_session.py (1)
25-34: Verify the multimodal message envelope matches `claude_agent_sdk` expectations (and avoid hard-coded `session_id="default"`).
session_idto correlate streams, a constant value risks cross-session bleed if the underlying client shares state.
```python
def __init__(self, project_name: str, project_dir: Path):
    """
    Initialize the session.

    Args:
        project_name: Name of the project being expanded
        project_dir: Absolute path to the project directory
    """
    self.project_name = project_name
    self.project_dir = project_dir
    self.client: Optional[ClaudeSDKClient] = None
    self.messages: list[dict] = []
    self.complete: bool = False
    self.created_at = datetime.now()
    self._conversation_id: Optional[str] = None
    self._client_entered: bool = False
    self.features_created: int = 0
    self.created_feature_ids: list[int] = []
    self._settings_file: Optional[Path] = None
```
self.complete is never set; expansion_complete is documented but never emitted.
Right now is_complete() will always be False, and callers expecting the "expansion_complete" event (per docstring) won’t ever receive it. Please either implement completion detection/emit or remove the API surface/doc.
Also applies to: 398-405
```python
async def close(self) -> None:
    """Clean up resources and close the Claude client."""
    if self.client and self._client_entered:
        try:
            await self.client.__aexit__(None, None, None)
        except Exception as e:
            logger.warning(f"Error closing Claude client: {e}")
        finally:
            self._client_entered = False
            self.client = None

    # Clean up temporary settings file
    if self._settings_file and self._settings_file.exists():
        try:
            self._settings_file.unlink()
        except Exception as e:
            logger.warning(f"Error removing settings file: {e}")
```
Don’t overwrite a project’s existing .claude_settings.json; create a temp settings file and always clean it up on early returns.
This currently clobbers any user/project Claude settings (Line 131-135) and leaves the file behind if start() returns early before the session is closed.
Proposed fix (temp settings file + encoding + restrictive perms + early cleanup)
@@
import json
import logging
import re
import shutil
import threading
+import uuid
from datetime import datetime
from pathlib import Path
from typing import AsyncGenerator, Optional
@@
async def start(self) -> AsyncGenerator[dict, None]:
@@
- # Create security settings file
+ # Find and validate Claude CLI (before writing temp files)
+ system_cli = shutil.which("claude")
+ if not system_cli:
+ yield {
+ "type": "error",
+ "content": "Claude CLI not found. Please install Claude Code."
+ }
+ return
+
+ # Create *temporary* security settings file for this session
security_settings = {
"sandbox": {"enabled": True},
"permissions": {
"defaultMode": "acceptEdits",
"allow": [
"Read(./**)",
"Glob(./**)",
],
},
}
- settings_file = self.project_dir / ".claude_settings.json"
+ settings_file = self.project_dir / f".claude_settings.expand.{uuid.uuid4().hex}.json"
self._settings_file = settings_file
- with open(settings_file, "w") as f:
+ with open(settings_file, "w", encoding="utf-8") as f:
json.dump(security_settings, f, indent=2)
+ try:
+ settings_file.chmod(0o600)
+ except Exception:
+ logger.debug("Unable to chmod settings file", exc_info=True)
@@
- # Find and validate Claude CLI
- system_cli = shutil.which("claude")
- if not system_cli:
- yield {
- "type": "error",
- "content": "Claude CLI not found. Please install Claude Code."
- }
- return
try:
@@
except Exception as e:
logger.exception("Failed to create Claude client")
yield {
"type": "error",
- "content": f"Failed to initialize Claude: {str(e)}"
+ "content": "Failed to initialize Claude."
}
+ await self.close()
return
@@
except Exception as e:
logger.exception("Failed to start expand chat")
yield {
"type": "error",
- "content": f"Failed to start conversation: {str(e)}"
+ "content": "Failed to start conversation."
}
+ await self.close()

Also applies to: 90-185
Backend improvements:
- Create shared validation utility for project name validation
- Add asyncio.Lock to prevent concurrent _query_claude calls
- Fix _create_features_bulk: use flush() for IDs, add rollback on error
- Use unique temp settings file instead of overwriting .claude_settings.json
- Remove exception details from error messages (security)

Frontend improvements:
- Memoize onError callback in ExpandProjectChat for stable dependencies
- Add timeout to start() checkAndSend loop to prevent infinite retries
- Add manuallyDisconnectedRef to prevent reconnection after explicit disconnect
- Clear pending reconnect timeout in disconnect()

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @server/services/expand_chat_session.py:
- Around line 368-386: The code in the try block determines starting priority
with session.query(Feature).order_by(Feature.priority.desc()).first() which can
race under concurrent requests; change this to acquire a row-level lock on the
max-priority row via the session.query(...).with_for_update() (or equivalent ORM
for_update) call inside the same transaction before computing current_priority
so concurrent transactions serialize, or alternatively delegate priority
assignment to the router's bulk endpoint; update the priority-read logic (the
query that assigns current_priority) to use with_for_update() on the Feature
model so each transaction reserves the max priority before inserting new Feature
instances.
🧹 Nitpick comments (4)
ui/src/hooks/useExpandChat.ts (2)
97-108: Consider removing the reset of `manuallyDisconnectedRef` on connection open. Setting
`manuallyDisconnectedRef.current = false` on every successful connection (line 100) could allow unwanted reconnection attempts if the WebSocket happens to reconnect after a manual disconnect was initiated. The guard at line 81 prevents `connect()` from being called, but if a connection was already in progress when `disconnect()` was called, this reset would undo the manual disconnect flag. Consider removing line 100 to ensure manual disconnect intent is preserved.
Suggested fix
```diff
  ws.onopen = () => {
    setConnectionStatus('connected')
    reconnectAttempts.current = 0
-   manuallyDisconnectedRef.current = false

    // Start ping interval to keep connection alive
```
254-275: Timeout chain in `start()` is not cleaned up on unmount. The recursive
`setTimeout` calls in `checkAndSend` are not tracked or cleared when the component unmounts. If the component unmounts while waiting for the connection, these timeouts will continue executing and may call `onError` or attempt to send messages on an unmounted component. Consider tracking the timeout ID and clearing it in the cleanup effect.
Suggested approach
```diff
+ const startTimeoutRef = useRef<number | null>(null)
+
  // Clean up on unmount
  useEffect(() => {
    return () => {
      if (pingIntervalRef.current) {
        clearInterval(pingIntervalRef.current)
      }
      if (reconnectTimeoutRef.current) {
        clearTimeout(reconnectTimeoutRef.current)
      }
+     if (startTimeoutRef.current) {
+       clearTimeout(startTimeoutRef.current)
+     }
      if (wsRef.current) {
        wsRef.current.close()
      }
    }
  }, [])
```

Then use
`startTimeoutRef.current = window.setTimeout(checkAndSend, 100)` in the `start` function.

server/services/expand_chat_session.py (1)
286-293: Multiple assistant messages created for a single response. Each
`TextBlock` in the Claude response creates a separate entry in `self.messages`. This could result in many fragmented assistant messages instead of a single coherent one. Consider accumulating the text and appending a single message after the response stream completes.
```diff
+ # Accumulate assistant message
+ assistant_text = ""
+
  # Stream the response
  async for msg in self.client.receive_response():
      msg_type = type(msg).__name__
      if msg_type == "AssistantMessage" and hasattr(msg, "content"):
          for block in msg.content:
              block_type = type(block).__name__
              if block_type == "TextBlock" and hasattr(block, "text"):
                  text = block.text
                  if text:
                      full_response += text
+                     assistant_text += text
                      yield {"type": "text", "content": text}
-                     self.messages.append({
-                         "role": "assistant",
-                         "content": text,
-                         "timestamp": datetime.now().isoformat()
-                     })
+
+ # Store complete assistant message
+ if assistant_text:
+     self.messages.append({
+         "role": "assistant",
+         "content": assistant_text,
+         "timestamp": datetime.now().isoformat()
+     })
```

ui/src/components/ExpandProjectChat.tsx (1)
150-182: `ConnectionIndicator` is recreated on every render. The
`ConnectionIndicator` function component is defined inside `ExpandProjectChat`, causing it to be recreated on every render. While React is smart about this, extracting it outside or using `useMemo` would be cleaner.

Option 1: Extract outside component
```tsx
function ConnectionIndicator({ status }: { status: ConnectionStatus }) {
  switch (status) {
    case 'connected':
      return (/* ... */)
    // ... other cases
  }
}
```

Then use:
```tsx
<ConnectionIndicator status={connectionStatus} />
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- server/routers/expand_project.py
- server/routers/features.py
- server/services/expand_chat_session.py
- server/utils/__init__.py
- server/utils/validation.py
- ui/src/components/ExpandProjectChat.tsx
- ui/src/hooks/useExpandChat.ts
✅ Files skipped from review due to trivial changes (1)
- server/utils/__init__.py
🚧 Files skipped from review as they are similar to previous changes (1)
- server/routers/expand_project.py
🧰 Additional context used
🧬 Code graph analysis (2)
ui/src/hooks/useExpandChat.ts (4)
- ui/src/components/ChatMessage.tsx (1): ChatMessage (15-192)
- server/schemas.py (1): ImageAttachment (187-208)
- server/websocket.py (2): connect (58-65), disconnect (67-73)
- server/services/expand_chat_session.py (1): start (93-190)

server/routers/features.py (4)
- server/schemas.py (4): FeatureBulkCreate (99-102), FeatureBulkCreateResponse (105-108), FeatureCreate (76-78), FeatureResponse (81-89)
- server/utils/validation.py (1): validate_project_name (10-28)
- server/routers/expand_project.py (1): _get_project_path (35-43)
- api/database.py (1): Feature (19-44)
🔇 Additional comments (13)
ui/src/hooks/useExpandChat.ts (2)
277-314: LGTM! The
`sendMessage` function correctly validates connection state, builds the message payload, and properly strips the `previewUrl` from attachments before sending to avoid unnecessary data transfer.
316-332: LGTM! The
`disconnect` function properly cleans up all resources, sets the manual disconnect flag, and prevents any reconnection attempts.

server/services/expand_chat_session.py (4)
27-37: LGTM! The
`_make_multimodal_message` helper correctly creates an async generator for multimodal content blocks.
54-74: LGTM! The
`__init__` method properly initializes session state with appropriate tracking for messages, features, and concurrency control via `asyncio.Lock`.
192-238: LGTM! The
`send_message` method properly validates state, uses the query lock to prevent concurrent Claude queries, and handles errors gracefully.
423-449: LGTM! The session registry functions are thread-safe and properly handle session lifecycle, including cleanup of existing sessions when creating new ones.
server/utils/validation.py (1)
10-28: LGTM! The
`validate_project_name` function provides solid protection against path traversal attacks by restricting to alphanumeric characters, hyphens, and underscores with a reasonable length limit.
14-21: LGTM!The new imports for bulk creation schemas and shared validation utility are properly added.
14-21: LGTM! The new imports for bulk creation schemas and shared validation utility are properly added.
- Proper input validation for
starting_priority- Race condition prevention using
with_for_update()when calculating max priority- Efficient ID retrieval via
flush()without relying on priority range queries- Comprehensive error handling
ui/src/components/ExpandProjectChat.tsx (4)
55-62: Intentional exhaustive-deps bypass is acceptable but consider documenting.The empty dependency array with eslint-disable is intentional for one-time mount/unmount behavior. This is acceptable, though the
startanddisconnectfunctions should be stable (they are via useCallback). Consider adding a brief comment explaining why.
76-91: LGTM!The
handleSendMessagefunction properly handles both text-only and attachment-only messages, and the keyboard handler correctly implements Enter-to-send with Shift+Enter for newlines.
116-123: Type assertion formimeTypeassumes validation passed.The type assertion
file.type as 'image/jpeg' | 'image/png'relies on the earlier validation check at line 99. While this should be safe given the control flow, a more defensive approach would use a type guard or mapped type.This is acceptable given the validation occurs first, but worth noting for maintainability.
184-380: LGTM!The JSX structure is well-organized with clear sections for header, error banner, messages, input, and completion footer. The conditional rendering logic properly handles all states, and accessibility concerns like disabled states are correctly applied.
```python
# Create Claude SDK client
try:
    self.client = ClaudeSDKClient(
        options=ClaudeAgentOptions(
            model="claude-opus-4-5-20251101",
            cli_path=system_cli,
            system_prompt=system_prompt,
            allowed_tools=[
                "Read",
                "Glob",
            ],
            permission_mode="acceptEdits",
            max_turns=100,
            cwd=str(self.project_dir.resolve()),
            settings=str(settings_file.resolve()),
        )
    )
    await self.client.__aenter__()
    self._client_entered = True
except Exception as e:
    logger.exception("Failed to create Claude client")
    yield {
        "type": "error",
        "content": "Failed to initialize Claude"
    }
    return
```
🧩 Analysis chain
🌐 Web query:
Claude API valid model identifiers 2025
💡 Result:
Key Claude API model identifiers (examples and formats used in 2025):
- claude-opus-4-20250514 (Claude Opus 4). [1][2]
- claude-sonnet-4-20250514 (Claude Sonnet 4). [1][2]
- Versioned snapshots follow the pattern: claude-{family}-{version}-YYYYMMDD (e.g., claude-sonnet-4-20250514). [1]
- Aliases (point to latest snapshot) — e.g., claude-opus-4, claude-sonnet-4, claude-3-7-sonnet-latest. Use aliases for experimentation; use full snapshot IDs in production. [1][3]
Source: Anthropic API docs (models list / model details / models overview). [1][2][3]
🏁 Script executed:
# Search for model identifier usage in the codebase
rg "claude-" --type py -A 2 -B 2
# Check for configuration files or settings
fd -t f -e json -e yaml -e toml | head -20
# Search for any hardcoded model references
rg "model\s*=" --type py | head -20

Repository: leonvanzyl/autocoder
Length of output: 4079
🏁 Script executed:
# Check if server services accept model as a parameter
rg "expand_chat_session|assistant_chat_session|spec_chat_session" --type py -A 5 | head -50
# Look for environment variables or config files for model
rg "MODEL|model.*env|getenv" --type py | grep -i model
# Check if there's a settings or config module
fd -t f -name "*config*" -o -name "*settings*" | grep -v node_modules
# Look at the ClaudeAgentOptions to see if model can be overridden
rg "ClaudeAgentOptions" --type py -B 5 -A 10

Repository: leonvanzyl/autocoder
Length of output: 10749
Hardcoded model identifier across multiple service files should be configurable.
The model "claude-opus-4-5-20251101" is hardcoded identically in three service files (expand_chat_session.py, spec_chat_session.py, and assistant_chat_session.py). Unlike the demo script which accepts a configurable --model argument, these server services cannot override the model without code changes. Extract this to a configuration variable or environment variable to allow easier updates and testing with different models.
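One way to make the model configurable is an environment-variable override with the current identifier as the pinned default; `CLAUDE_MODEL` is a hypothetical knob for this sketch, not an existing setting in the repo:

```python
import os

# Default mirrors the identifier currently hardcoded in the three services;
# CLAUDE_MODEL is a hypothetical override, not an existing config option.
DEFAULT_MODEL = "claude-opus-4-5-20251101"

def resolve_model() -> str:
    """Pick the model from the environment, falling back to the pinned default."""
    return os.environ.get("CLAUDE_MODEL", DEFAULT_MODEL)
```

Each service would then pass `model=resolve_model()` instead of the literal string.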
```python
try:
    # Determine starting priority
    max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
    current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1

    created_rows: list = []

    for f in features:
        db_feature = Feature(
            priority=current_priority,
            category=f.get("category", "functional"),
            name=f.get("name", "Unnamed feature"),
            description=f.get("description", ""),
            steps=f.get("steps", []),
            passes=False,
        )
        session.add(db_feature)
        created_rows.append(db_feature)
        current_priority += 1
```
Potential race condition when determining starting priority.
Unlike the bulk creation endpoint in features.py which uses with_for_update() to lock the max priority row, this method queries the max priority without any locking. Concurrent sessions could calculate the same starting priority, leading to duplicate priority values.
Consider adding row-level locking consistent with the router implementation, or coordinate through the router's bulk endpoint instead of direct DB access.
Suggested fix
try:
# Determine starting priority
- max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
+ max_priority_feature = (
+ session.query(Feature)
+ .order_by(Feature.priority.desc())
+ .with_for_update()
+ .first()
+ )
current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
try:
    # Determine starting priority
    max_priority_feature = (
        session.query(Feature)
        .order_by(Feature.priority.desc())
        .with_for_update()
        .first()
    )
    current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1

    created_rows: list = []

    for f in features:
        db_feature = Feature(
            priority=current_priority,
            category=f.get("category", "functional"),
            name=f.get("name", "Unnamed feature"),
            description=f.get("description", ""),
            steps=f.get("steps", []),
            passes=False,
        )
        session.add(db_feature)
        created_rows.append(db_feature)
        current_priority += 1
```
🤖 Prompt for AI Agents
In @server/services/expand_chat_session.py around lines 368 - 386, The code in
the try block determines starting priority with
session.query(Feature).order_by(Feature.priority.desc()).first() which can race
under concurrent requests; change this to acquire a row-level lock on the
max-priority row via the session.query(...).with_for_update() (or equivalent ORM
for_update) call inside the same transaction before computing current_priority
so concurrent transactions serialize, or alternatively delegate priority
assignment to the router's bulk endpoint; update the priority-read logic (the
query that assigns current_priority) to use with_for_update() on the Feature
model so each transaction reserves the max priority before inserting new Feature
instances.
Thank you.
Post-merge fixes for PR #36 (expand-project-with-ai):
- Fix syntax error in App.tsx Escape handler (missing `} else`)
- Fix missing closing brace in types.ts FeatureBulkCreateResponse
- Remove unused exception variables flagged by ruff (F841)
- Make nav buttons minimalist: remove text labels, keep icons + shortcuts
  - "Add Feature" → icon + N shortcut, tooltip "Add new feature"
  - "Expand" → icon + E shortcut, tooltip "Expand project with AI"

All checks pass: ruff, security tests, ESLint, TypeScript build.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…l#36)
- Create StaticSpecAdapter class that wraps legacy agents as AgentSpecs
- Implement create_initializer_spec() method with:
  - Prompt loading from prompts/initializer_prompt.md
  - task_type set to "custom"
  - Tool policy with feature creation tools (feature_create_bulk, etc.)
  - max_turns=100, timeout_seconds=3600 for lengthy initialization
- Also implement create_coding_spec() and create_testing_spec() for completeness
- Add comprehensive unit tests (45 tests, all passing)
- Add verification script with all 10 feature steps passing
- Add end-to-end database integration test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary
Adds the ability to add multiple features to an existing project through a natural language conversation with Claude, similar to how initial spec creation works.
Problem: Currently, adding features requires adding them one-by-one through the Add Feature modal, which is time-consuming when you need to add many related features.
Solution: New "Expand Project" feature that lets you describe changes in natural language, and Claude generates the features for you.
Features
- New "Expand" button in header (keyboard shortcut: `E`)
- Claude reads `app_spec.txt` to understand your project
- `POST /api/projects/{name}/features/bulk` for batch operations

How to Use
- Press `E` or click the Expand button

Files Changed
New files:
- .claude/commands/expand-project.md - Claude skill for expansion conversations
- server/services/expand_chat_session.py - WebSocket chat session service
- server/routers/expand_project.py - WebSocket endpoint at /api/expand/ws/{project_name}
- ui/src/components/ExpandProjectChat.tsx - Full chat UI component
- ui/src/components/ExpandProjectModal.tsx - Modal wrapper
- ui/src/hooks/useExpandChat.ts - WebSocket hook for expansion chat

Modified:
- server/routers/features.py - Added bulk creation endpoint
- server/schemas.py - Added FeatureBulkCreate schemas
- ui/src/App.tsx - Integrated Expand button and modal

Test Plan
- Press `E`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Improvements
✏️ Tip: You can customize this high-level summary in your review settings.