feat(performance): boost concurrency and vram limits #452
Conversation
…port
- Add Z.AI provider with full API integration
- Implement World Model native graph view (React Flow)
- Add ZhiPu AI GLM models support
- Enhanced provider factory with Z.AI routing
- UI improvements: provider icons, settings tabs
- Server: parallel executor, cluster manager support
- Fix Agent Runner API signature (thinkingLevel param)
📝 Walkthrough

Integrates Z.AI (GLM-4) provider alongside Claude, introduces a new provider abstraction layer with plugin registration, adds server multi-threading infrastructure (cluster manager, thread pool, worker threads), implements a Red Giant stellar evolution service, creates a World Model graph visualization view, and refactors provider queries to support provider-agnostic AI operations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant ProviderRegistry
    participant ZaiProvider
    participant ZaiTools
    participant ExternalAPI as Z.AI API

    Client->>Server: Request with model: 'glm-4.7'
    Server->>ProviderRegistry: getQueryService()
    ProviderRegistry->>ProviderRegistry: Resolve provider by model
    ProviderRegistry-->>Server: ZaiProvider instance
    Server->>ZaiProvider: executeQuery(options)
    ZaiProvider->>ZaiProvider: generateToken(apiKey)
    ZaiProvider->>ExternalAPI: POST /chat/completions (Bearer token)
    ExternalAPI-->>ZaiProvider: Streaming response
    ZaiProvider->>ZaiTools: executeTool(name, args)
    ZaiTools-->>ZaiProvider: Tool result
    ZaiProvider->>Server: Yield ProviderMessage (text/tool_calls)
    Server-->>Client: Stream response
```
```mermaid
sequenceDiagram
    participant UI as Board View
    participant Store as AppStore
    participant API as HTTP Client
    participant Server
    participant AutoMode as AutoModeService

    UI->>UI: onExpand(feature)
    UI->>API: POST /auto-mode/expand-feature
    API->>Server: POST /auto-mode/expand-feature
    Server->>AutoMode: expandKnowledgeGraph(projectPath, title, options)
    AutoMode->>AutoMode: Load feature ancestry
    AutoMode->>AutoMode: Build world-model prompt
    AutoMode->>ProviderRegistry: getQueryService()
    ProviderRegistry-->>AutoMode: QueryService (Claude or Z.AI)
    AutoMode->>QueryService: simpleQuery(prompt)
    QueryService-->>AutoMode: JSON response
    AutoMode->>AutoMode: Parse expansion terms
    AutoMode-->>Server: { terms, ancestry, layer }
    Server-->>API: Response
    API-->>UI: Expansion terms
    UI->>Store: Update feature with expanded terms
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Summary of Changes

Hello @GhostDragonAlpha, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request delivers substantial performance enhancements by optimizing concurrency and VRAM usage, while also introducing a new AI provider, Z.AI, through a refactored and more extensible provider architecture. It addresses critical server stability issues and adds features for knowledge graph expansion and project visualization.
Code Review
This pull request introduces significant performance enhancements, adds support for the Z.AI provider, and fixes a critical server crash. The introduction of a provider-agnostic QueryService and improved graceful shutdown logic are excellent architectural improvements. However, there are several critical security concerns due to hardcoded authentication bypasses and disabled sandboxing for development, which must be addressed before merging. Additionally, a large amount of new, unrelated code for a "Red Giant" service has been included, which should ideally be in a separate pull request.
```typescript
// FORCE BYPASS FOR LOCAL Z.AI DEV - Always authenticate WebSocket connections
return true;
```
This change completely bypasses WebSocket authentication by hardcoding return true;. While the comment indicates this is for local development, merging this would create a critical security vulnerability in production, allowing any client to establish a WebSocket connection without authentication. This must be removed or placed behind a strictly-controlled development flag that is disabled by default.
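As a minimal sketch of the gated check the review asks for: the flag name `DEV_WS_BYPASS` and the `verifyToken` helper below are illustrative stand-ins, not names from the PR.

```typescript
// Hedged sketch: dev-gated WebSocket auth. DEV_WS_BYPASS and verifyToken
// are illustrative names, not code from the PR under review.
type UpgradeRequest = { headers: Record<string, string | undefined> };

// Placeholder for the real production verifier (token lookup, session check, etc.).
function verifyToken(token: string | undefined): boolean {
  return token === 'valid-token';
}

export function authenticateWebSocket(
  req: UpgradeRequest,
  env: Record<string, string | undefined> = process.env
): boolean {
  // Bypass only when explicitly enabled for local development.
  if (env.DEV_WS_BYPASS === 'true') {
    console.warn('[dev] WebSocket auth bypass active (DEV_WS_BYPASS=true)');
    return true;
  }
  // Default production behavior: real verification, deny on failure.
  const auth = req.headers['authorization'];
  const token = auth?.startsWith('Bearer ') ? auth.slice(7) : undefined;
  return verifyToken(token);
}
```

The key property is that the default path denies unauthenticated connections; the bypass requires an explicit opt-in that is off unless set.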
```typescript
// FORCE BYPASS FOR LOCAL Z.AI DEV - Always authenticate
// This removes the login requirement for local development
useAuthStore.getState().setAuthState({ isAuthenticated: true, authChecked: true });
logger.info('[Z.AI DEV] Authentication bypassed for local development');
return;
```
```typescript
// FORCE ROUTER CONFIGURATION
const forcedEnv = buildEnv();
forcedEnv['ANTHROPIC_BASE_URL'] = 'http://127.0.0.1:3457';
forcedEnv['ANTHROPIC_API_KEY'] = 'sk-zai-router';
```
The configuration for the Claude provider is being hardcoded to point to a local router at http://127.0.0.1:3457. This will break the provider in any environment where this local router is not running. This configuration should be made dynamic, loaded from environment variables or a configuration file, rather than being hardcoded.
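A minimal sketch of the env-driven alternative, assuming hypothetical variable names `ZAI_ROUTER_URL` and `ZAI_ROUTER_API_KEY` (the review later suggests similar names; they are not in the PR itself):

```typescript
// Hedged sketch: pull router settings from the environment instead of
// hardcoding them. ZAI_ROUTER_URL / ZAI_ROUTER_API_KEY are illustrative.
interface ProviderEnv {
  ANTHROPIC_BASE_URL?: string;
  ANTHROPIC_API_KEY?: string;
}

export function applyRouterConfig(
  env: ProviderEnv,
  routerUrl = process.env.ZAI_ROUTER_URL,
  routerKey = process.env.ZAI_ROUTER_API_KEY
): ProviderEnv {
  // Route through the local proxy only when both values are configured;
  // otherwise keep the original direct-API credentials untouched.
  if (routerUrl && routerKey) {
    return { ...env, ANTHROPIC_BASE_URL: routerUrl, ANTHROPIC_API_KEY: routerKey };
  }
  return env;
}
```

With the variables unset, the provider falls back to whatever direct-API credentials `buildEnv()` produced, so nothing breaks in environments without the local router.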
```typescript
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('no-sandbox');
```
Disabling the GPU sandbox and the general process sandbox (disable-gpu-sandbox, no-sandbox) significantly weakens the security model of the Electron application. While this might be intended for stability or performance during development, it exposes the application and the user's system to potential risks from web content. This should be avoided in production builds or, at the very least, be controlled by a feature flag that is off by default. Also, disable-gpu-sandbox is duplicated.
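One way to express the suggested feature-flag gating, as a sketch (the opt-in parameter is illustrative, not an existing flag in the app):

```typescript
// Hedged sketch: compute Chromium switches, gating the sandbox-disabling
// flags behind an explicit opt-in that defaults to off, and deduplicating.
export function chromiumSwitches(unsafeDevOptIn: boolean): string[] {
  // Stability flags that do not weaken the security model.
  const switches = ['ignore-gpu-blacklist'];
  if (unsafeDevOptIn) {
    // Off by default; only for local debugging sessions.
    switches.push('disable-gpu-sandbox', 'no-sandbox');
  }
  // Returning a deduplicated list also fixes the duplicated switch.
  return [...new Set(switches)];
}
```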
apps/server/package.json (outdated)

```diff
 "main": "dist/index.js",
 "scripts": {
-  "dev": "tsx watch src/index.ts",
+  "dev": "set UV_THREADPOOL_SIZE=24 && tsx watch src/index.ts",
```
The dev script uses the set command, which is specific to Windows and will not work as expected on macOS or Linux. To ensure cross-platform compatibility for setting environment variables in npm scripts, it's highly recommended to use the cross-env package.
```diff
-"dev": "set UV_THREADPOOL_SIZE=24 && tsx watch src/index.ts",
+"dev": "cross-env UV_THREADPOOL_SIZE=24 tsx watch src/index.ts",
```
```typescript
logger.warn(`Worker ${workerId} (PID ${worker.process.pid}) died (${exitReason})`);

// Check restart rate limiting
const now = Date.now();
```
```typescript
enhancementModel: 'GLM-4.7', // Default to GLM-4.7 for feature enhancement
validationModel: 'GLM-4.7', // Default to GLM-4.7 for GitHub issue validation
```
The default values for enhancementModel and validationModel are hardcoded to a specific Z.AI model (GLM-4.7). To maintain provider-agnosticism and align with other changes in this PR, it would be better to set these to 'default'. This would allow the backend to resolve the appropriate model based on the user's global settings and provider priority.
```diff
-enhancementModel: 'GLM-4.7', // Default to GLM-4.7 for feature enhancement
-validationModel: 'GLM-4.7', // Default to GLM-4.7 for GitHub issue validation
+enhancementModel: 'default', // Default to 'default' to be resolved by backend
+validationModel: 'default', // Default to 'default' to be resolved by backend
```
Actionable comments posted: 20
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (12)
apps/ui/src/components/views/board-view/board-header.tsx (1)
79-87: Consider aligning slider max with the Node.js threadpool size limit.

The codebase sets `UV_THREADPOOL_SIZE=24` consistently across UI and server environments, and the app-store default for `maxConcurrency` is already 24 (aligned for a 24-core CPU). However, this slider allows users to set concurrency up to 50, which exceeds the threadpool capacity. While the backend doesn't enforce a limit, allowing values above 24 may cause performance degradation when threadpool operations are throttled. Recommend changing `max={50}` to `max={24}` to prevent users from setting values that exceed the available threadpool capacity.

apps/ui/src/components/views/setup-view/steps/providers-setup-step.tsx (1)

1443-1464: Missing `setZaiAuthStatus` in `useCallback` dependency array.

The `checkZai` function uses `setZaiAuthStatus`, but it's not included in the `checkAllProviders` dependency array. This creates a stale closure that could cause bugs if the `setZaiAuthStatus` reference changes.

🐛 Proposed fix

```diff
 }, [
   setClaudeCliStatus,
   setCursorCliStatus,
   setCodexCliStatus,
   setCodexAuthStatus,
   setOpencodeCliStatus,
+  setZaiAuthStatus,
 ]);
```

apps/ui/src/components/views/graph-view/components/dependency-edge.tsx (1)
132-132: Delete button will never render due to a stale guard condition.

The `onDeleteDependency` property was removed from `DependencyEdgeData` (line 15 comment), but the render condition on line 132 still checks `edgeData?.onDeleteDependency`. This will always be falsy, so the delete button is permanently hidden despite the handler being wired correctly via context. Update the condition to check the context-based action instead:

🐛 Proposed fix

```diff
-          {(isHovered || selected) && edgeData?.onDeleteDependency && (
+          {(isHovered || selected) && actions.onDeleteDependency && (
```

apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts (2)
101-109: Duplicate property assignments: `isMatched`, `isHighlighted`, `isDimmed` are set twice.

Lines 102-104 and 106-108 assign the same filter state properties twice. This appears to be a merge artifact or copy-paste error.

🐛 Fix the duplicate assignments

```diff
       data: {
         ...feature,
         isBlocked: blockingDeps.length > 0,
         isRunning,
         blockingDependencies: blockingDeps,
         // Filter states
         isMatched,
         isHighlighted,
         isDimmed,
-        // Filter states
-        isMatched,
-        isHighlighted,
-        isDimmed,
         // Action callbacks - REMOVED: Now handled via Context in TaskNode
       },
```

21-28: Remove unused `actionCallbacks` parameter and action callback properties from `TaskNodeData`.

The `actionCallbacks` parameter is accepted by the function (line 66) but never used. The comment on line 149 confirms this was intentionally removed ("REMOVED actionCallbacks from deps!"), and line 113 indicates the migration to a Context pattern. The action callback properties in `TaskNodeData` (lines 21-28) are also not populated into the node data, making them dead code. Remove the unused parameter from the function signature and the unused properties from `TaskNodeData`. The `NodeActionCallbacks` interface should remain since it's actively used in graph-view.tsx, graph-canvas.tsx, and exported from the hooks module.

Suggested cleanup. Remove from `TaskNodeData`:

```diff
   isMatched?: boolean;
   isHighlighted?: boolean;
   isDimmed?: boolean;
-  // Action callbacks
-  onViewLogs?: () => void;
-  onViewDetails?: () => void;
-  onStartTask?: () => void;
-  onStopTask?: () => void;
-  onResumeTask?: () => void;
-  onSpawnTask?: () => void;
-  onDeleteTask?: () => void;
 }
```

Remove from `UseGraphNodesProps` and the function signature:

```diff
 interface UseGraphNodesProps {
   features: Feature[];
   runningAutoTasks: string[];
   filterResult?: GraphFilterResult;
-  actionCallbacks?: NodeActionCallbacks;
 }

 export function useGraphNodes({
   features,
   runningAutoTasks,
   filterResult,
-  actionCallbacks,
 }: UseGraphNodesProps) {
```

apps/ui/src/lib/http-api-client.ts (1)
1909-1930: Type inconsistency between `getCredentials` and `updateCredentials`.

The `getCredentials` response now includes `zai: { configured: boolean; masked: string }` (line 1915), but the `updateCredentials` input type (lines 1920-1921) still only accepts `apiKeys?: { anthropic?: string; google?: string; openai?: string }` without `zai`.

🔧 Suggested fix

```diff
   updateCredentials: (updates: {
-    apiKeys?: { anthropic?: string; google?: string; openai?: string };
+    apiKeys?: { anthropic?: string; google?: string; openai?: string; zai?: string };
   }): Promise<{
```

apps/ui/src/components/ui/provider-icon.tsx (1)
144-171: Missing `zai` detection in `getUnderlyingModelIcon`.

The function maps model strings to icon keys but doesn't detect Z.AI models. If a model string contains `'zai'`, it will fall through to the default `'anthropic'` icon.

🔧 Add zai model detection

```diff
   if (modelStr.includes('cursor') || modelStr === 'auto' || modelStr === 'composer-1') {
     return 'cursor';
   }
+  if (modelStr.includes('zai')) {
+    return 'zai';
+  }

   // Default based on provider
   const provider = getProviderFromModel(model);
   if (provider === 'codex') return 'openai';
   if (provider === 'cursor') return 'cursor';
+  if (provider === 'zai') return 'zai';
   return 'anthropic';
```

apps/ui/src/components/views/board-view/hooks/use-board-actions.ts (1)
292-296: Redundant `title` assignment may cause unintended behavior.

Line 294 explicitly sets `title: updates.title` after spreading `updates`, which is redundant. If `updates.title` is `undefined`, this could unintentionally clear the feature's title. The spread already includes any provided title.

🔧 Remove redundant assignment

```diff
     const finalUpdates = {
       ...updates,
-      title: updates.title,
       branchName: finalBranchName,
     };
```

apps/ui/src/components/views/settings-view/api-keys/hooks/use-api-key-management.ts (1)
1-1: Remove `@ts-nocheck` and update the SetupAPI type definition to include missing properties.

The `@ts-nocheck` is masking a real TypeScript error. The hook code accesses `status.hasZaiKey` and `status.hasOpenaiKey` (lines 67-80 in use-api-key-management.ts), but the `SetupAPI.getApiKeys()` return type in apps/ui/src/lib/electron.ts only declares `hasAnthropicKey` and `hasGoogleKey`. Update the `SetupAPI.getApiKeys()` type to include `hasZaiKey` and `hasOpenaiKey`, then remove the `@ts-nocheck` directive.

apps/ui/src/components/views/profiles-view/components/profile-form.tsx (1)
179-192: Critical bug: `onSave` called twice for Codex provider.

The `codex` branch incorrectly calls `onSave` twice, once with `codexModel` and once with `opencodeModel`. This appears to be a copy-paste error where the OpenCode logic was placed inside the Codex block instead of as a separate `else if` branch.

🐛 Proposed fix

```diff
     } else if (formData.provider === 'codex') {
       onSave({
         ...baseProfile,
         codexModel: formData.codexModel,
       });
+    } else if (formData.provider === 'opencode') {
       onSave({
         ...baseProfile,
         opencodeModel: formData.opencodeModel,
       });
     } else if (formData.provider === 'zai') {
```

apps/ui/src/components/views/board-view/shared/model-selector.tsx (1)
1-1: Remove `@ts-nocheck` and fix the model ID case mismatch.

The `@ts-nocheck` directive hides a real type error. `ZAI_MODELS` in model-constants.ts uses PascalCase IDs (`'GLM-4.5-Flash'`, `'GLM-4-Plus'`), but `ZAI_MODEL_MAP` in libs/types/src/model.ts defines `ModelAlias` keys in lowercase (`'glm-4.5-flash'`, `'glm-4-plus'`). When a user selects a Z.AI model at line 189, the code passes an invalid model alias to `onModelSelect`. Fix this by normalizing the `ZAI_MODELS` IDs to lowercase or using `resolveModelString()` from @automaker/model-resolver to convert the model IDs before passing them to `onModelSelect`.

apps/server/src/routes/suggestions/generate-suggestions.ts (1)
255-263: Same malformed ID format in fallback path.

The fallback suggestion also uses the inconsistent ID format.

🔧 Suggested fix

```diff
     events.emit('suggestions:event', {
       type: 'suggestions_complete',
       suggestions: [
         {
-          id: `suggestion - ${Date.now()} -0`,
+          id: `suggestion-${Date.now()}-0`,
           category: 'Analysis',
```
🤖 Fix all issues with AI agents
In @apps/server/package.json:
- Line 14: The dev script uses Windows-only syntax ("set UV_THREADPOOL_SIZE=24
&& tsx watch src/index.ts"); add cross-env as a devDependency and update the
"dev" script to use cross-env (e.g., "cross-env UV_THREADPOOL_SIZE=24 tsx watch
src/index.ts") so the env var is set cross-platform; install cross-env as a
devDependency (npm/yarn/pnpm) and update the "dev" entry in package.json
accordingly.
In @apps/server/src/index.ts:
- Around line 268-271: authenticateWebSocket currently always returns true,
causing a critical auth bypass; change it to only auto-allow in a dev-only gated
path (e.g., check process.env.NODE_ENV === 'development' or a dedicated feature
flag like process.env.DEV_WS_BYPASS) and otherwise perform real authentication
by delegating to your production auth routine (e.g., call the existing
token/cookie/header verifier or a new verifyWebSocketAuth(request) function).
Ensure the gate logs a clear warning when bypass is used and that the default
production behavior denies the connection when authentication fails.
In @apps/server/src/providers/claude-provider.ts:
- Around line 76-82: The diff forces local routing and hardcodes credentials by
mutating the env from buildEnv() and setting
ANTHROPIC_BASE_URL='http://127.0.0.1:3457' and
ANTHROPIC_API_KEY='sk-zai-router'; revert this by reading optional router config
from environment (e.g., ZAI_ROUTER_URL and ZAI_ROUTER_API_KEY) rather than
hardcoding, preserve the original ANTHROPIC_API_KEY if present, and implement a
fallback in the Claude provider (the code around buildEnv/forcedEnv and any
functions that read ANTHROPIC_BASE_URL/ANTHROPIC_API_KEY) to use the direct
Claude API when router variables are unset or router healthcheck fails; ensure
no secret literals remain in the source and document the new env vars.
In @apps/server/src/providers/zai-provider.ts:
- Around line 192-198: The current expression treats [] as truthy and returns
all tools, ignoring the caller's allowlist; change the logic so: if allowedTools
is undefined then tools stays undefined (text-only), if allowedTools is an empty
array return an empty array (no tools), otherwise call ZaiTools.getTools() and
filter its result to only include tools whose identifier/name is present in
allowedTools (e.g., compare tool.name or tool.id against allowedTools). Update
the variable initialization around ZaiTools, getTools and allowedTools to
implement this explicit branching and filtering.
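The branching described above can be sketched as a small pure function; `Tool` and the tool list below are simplified stand-ins for the real `ZaiTools.getTools()` output:

```typescript
// Hedged sketch of the explicit allowlist branching the instruction asks for.
interface Tool {
  name: string;
}

export function resolveTools(allTools: Tool[], allowedTools?: string[]): Tool[] | undefined {
  if (allowedTools === undefined) return undefined; // no allowlist: text-only query
  if (allowedTools.length === 0) return [];         // empty allowlist: no tools at all
  const allowed = new Set(allowedTools);
  return allTools.filter((tool) => allowed.has(tool.name)); // honor the allowlist
}
```

The important distinction is that `undefined` and `[]` are treated differently, which is exactly what the original truthiness check got wrong.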
In @apps/server/src/routes/github/routes/validate-issue.ts:
- Line 320: The model default of 'default' will fail the validation because
VALID_CLAUDE_MODELS only contains ['opus','sonnet','haiku'] and the Cursor
check; either add 'default' to VALID_CLAUDE_MODELS or resolve 'default' to a
concrete model string before validation. Locate where model is set to 'default'
(the model variable in the validate-issue handler) and either (A) append
'default' to the VALID_CLAUDE_MODELS array, or (B) normalize the incoming model
value by mapping 'default' to your actual default alias (e.g., 'opus') prior to
the validation block that checks VALID_CLAUDE_MODELS and Cursor model IDs.
In @apps/server/src/services/parallel-executor.ts:
- Around line 216-228: In cancelAll(), before wiping this.queue, iterate the
queued task entries and invoke their pending resolvers (the _resolve callbacks)
so their promises settle (e.g., call each queuedTask._resolve with a
cancellation result), then clear this.queue; keep the existing loop that aborts
running tasks and emits 'cancelled' for each abortController, and finally emit
'all-cancelled' after both queued resolvers and running aborts are handled to
ensure no queued promise remains unresolved.
- Around line 166-187: executeTask currently ignores the provided _signal so
abort() does nothing; update the executor signature (the this.executor function
type) to accept an AbortSignal and pass the signal into the call inside
executeTask, and also check signal.aborted before invoking the executor and
after awaiting (or catch an AbortError) to return a cancelled/failed
ExecutionResult. Concretely, change the executor type to (task: FeatureTask,
signal: AbortSignal) => Promise<ExecutionResultLike>, call await
this.executor(task, _signal) inside executeTask, and add an early check that if
(_signal.aborted) return a cancelled result (featureId, success: false or
specific flag, error: 'aborted', durationMs), and handle thrown aborts similarly
in the catch block.
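The two parallel-executor fixes above can be sketched together; `FeatureTask` and `ExecutionResult` mirror the names in the review but are simplified stand-ins, not the project's actual types:

```typescript
// Hedged sketch: an abort-aware task runner plus a cancelAll helper that
// settles queued promises before clearing the queue.
interface FeatureTask { featureId: string; }
interface ExecutionResult { featureId: string; success: boolean; error?: string; }
type QueuedTask = { task: FeatureTask; resolve: (r: ExecutionResult) => void };

export async function executeTask(
  task: FeatureTask,
  signal: AbortSignal,
  executor: (t: FeatureTask, s: AbortSignal) => Promise<ExecutionResult>
): Promise<ExecutionResult> {
  // Cancelled while still queued: settle without running the executor.
  if (signal.aborted) {
    return { featureId: task.featureId, success: false, error: 'aborted' };
  }
  try {
    return await executor(task, signal);
  } catch (err) {
    // An abort that surfaced as a thrown error also becomes a cancelled result.
    if (signal.aborted) {
      return { featureId: task.featureId, success: false, error: 'aborted' };
    }
    throw err;
  }
}

export function cancelQueued(queue: QueuedTask[]): number {
  // Resolve every pending promise before clearing, so nothing awaits forever.
  const count = queue.length;
  for (const { task, resolve } of queue) {
    resolve({ featureId: task.featureId, success: false, error: 'cancelled' });
  }
  queue.length = 0;
  return count;
}
```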
In @apps/server/src/test_zhipu_api.ts:
- Around line 5-6: Replace the hardcoded secret in the test by removing the
API_KEY constant and reading it from an environment variable (e.g.,
process.env.ZHIPU_API_KEY) in the same module and fail fast with a clear error
if it's missing; update any code referencing API_KEY to use the new env var
accessor, and ensure the URL constant (URL) remains constant or is also
configurable via env if needed; remove the exposed value from the repo history
and rotate the leaked key, and consider moving this test file out of source or
adding it to .gitignore (or use a dedicated test config with dotenv) so secrets
are never stored in source.
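A fail-fast accessor for the env-based key can be sketched as below; `ZHIPU_API_KEY` follows the instruction's suggested variable name:

```typescript
// Hedged sketch: read a required secret from the environment and fail fast
// with a clear error instead of running with a missing or hardcoded key.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage in the test script (replacing the hardcoded constant):
// const API_KEY = requireEnv('ZHIPU_API_KEY');
```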
In @apps/server/tests/verification/e2e-api-test.ps1:
- Around line 96-100: The script prints the success message unconditionally;
change the control flow so that when $foundZai is false the script stops with a
non-zero exit (or returns) instead of falling through: update the if (-not
$foundZai) block that calls Write-Error to also call exit 1 (or throw) or add an
else branch wrapping the Write-Host "E2E VERIFICATION COMPLETE: ALL CHECKS
PASSED" so the success message only runs when $foundZai is true.
In @apps/ui/src/components/views/agent-view.tsx:
- Line 24: The initial state for modelSelection uses { model: 'default' } but
PhaseModelEntry.model does not include 'default', causing a type mismatch; fix
by either extending the type to include 'default' (e.g., make
PhaseModelEntry.model = 'default' | ModelAlias | CursorModelId | CodexModelId)
or mirror the existing pattern and use a type assertion when initializing state
(e.g., useState<PhaseModelEntry>({ model: 'default' as const })), targeting the
useState call that creates modelSelection and following the board-view.tsx
example for consistency.
In @apps/ui/src/components/views/board-view/shared/model-constants.ts:
- Around line 105-155: The ZAI_MODELS array in the UI uses mixed/uppercase ids
(e.g., 'GLM-4-Plus', 'GLM-4.7') which won't match provider detection in
isZaiModel (which checks model.startsWith('glm-')) nor the backend ZAI_MODELS
(which use lowercase); change every id value in the UI ZAI_MODELS constant to
the exact lowercase IDs used by the backend (e.g., 'glm-4-plus', 'glm-4.7',
etc.), leaving label/description/badge unchanged, and verify they exactly match
the entries used in libs/provider-zai/src/zai-query-service.ts so isZaiModel
correctly routes to the Z.AI provider.
In @apps/ui/src/components/views/board-view/shared/model-selector.tsx:
- Around line 59-66: The code mismatches Z.AI model ID casing:
onModelSelect('glm-4.5-flash') uses lowercase while ZAI_MODELS IDs are
PascalCase and isZaiSelected checks selectedModel.startsWith('glm-') in
lowercase, causing Z.AI selection and rendering to fail; fix by using the exact
ZAI model ID casing when switching provider (call onModelSelect with
'GLM-4.5-Flash') and update the check to selectedModel.startsWith('GLM-') or
alternatively normalize model IDs consistently (e.g., compare
selectedModel.toLowerCase().startsWith('glm-') and call onModelSelect with a
lowercased normalized ID) so ZAI_MODELS, selectedModel, selectedProvider,
onModelSelect and isZaiSelected remain consistent.
In @apps/ui/src/main.ts:
- Around line 46-51: The code disables Chromium sandboxing and duplicates a GPU
flag: remove calls that weaken security by deleting the
app.commandLine.appendSwitch('no-sandbox') and the duplicate
app.commandLine.appendSwitch('disable-gpu-sandbox') lines, leaving only safe
GPU-stability flags (retain app.commandLine.appendSwitch('ignore-gpu-blacklist')
and app.commandLine.appendSwitch('disable-gpu-process-crash-limit')); ensure any
rationale is documented in a comment and avoid reintroducing sandbox-disabling
switches elsewhere in the startup sequence.
In @apps/ui/src/routes/__root.tsx:
- Around line 264-270: The unconditional authentication bypass
(useAuthStore.getState().setAuthState({ isAuthenticated: true, authChecked: true
})) followed by an immediate return makes the original auth flow unreachable and
will enable unauthenticated access in production; wrap this bypass in a strict
environment or feature-flag guard (e.g., check NODE_ENV === 'development' or a
dedicated BUILD_DEV_BYPASS flag) so it only runs in local/dev builds, remove or
gate the unconditional return so the original logic (the code after the bypass)
executes in non-dev environments, and add a clear log message via logger.info
that includes the active guard/flag name to make the dev-only behavior explicit.
In @libs/provider-zai/src/zai-provider.ts:
- Around line 108-132: The generateToken method sets the JWT exp and timestamp
using Date.now() (milliseconds); change both to NumericDate (seconds) per RFC
7519 and Z.ai spec by computing const now = Math.floor(Date.now() / 1000) and
using exp: now + 3600 and timestamp: now before calling jwt.sign (in the
generateToken function that builds payload and calls jwt.sign); keep the rest of
the signing options and existing error handling intact.
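The seconds-vs-milliseconds fix can be sketched independently of `jwt.sign`; the claim field names follow the review's description and should be treated as illustrative:

```typescript
// Hedged sketch of the corrected claims: RFC 7519 NumericDate values are
// expressed in seconds since the epoch, not milliseconds.
export function buildTokenClaims(apiKeyId: string, nowMs: number = Date.now()) {
  const now = Math.floor(nowMs / 1000); // seconds, not Date.now() milliseconds
  return {
    api_key: apiKeyId,
    exp: now + 3600, // expires one hour from now
    timestamp: now,
  };
}
```

The resulting object would then be passed to `jwt.sign` unchanged, keeping the existing signing options and error handling.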
In @libs/types/src/model.ts:
- Line 69: The type alias ModelAlias references ZAI_MODEL_MAP before it is
declared, causing a block-scoped forward-reference error; fix by moving the
ZAI_MODEL_MAP const declaration so it appears above the export type ModelAlias
(ensure CLAUDE_MODEL_MAP and ZAI_MODEL_MAP are both defined before using them in
ModelAlias), then re-run TypeScript compile to verify no more "used before its
declaration" errors.
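The declaration-order fix above amounts to defining both maps before the alias that derives from them; the map values here are illustrative placeholders, not the project's real model IDs:

```typescript
// Hedged sketch: declare both const maps before the type alias that uses them.
const CLAUDE_MODEL_MAP = {
  opus: 'claude-opus-latest',
  sonnet: 'claude-sonnet-latest',
} as const;

const ZAI_MODEL_MAP = {
  'glm-4.7': 'glm-4.7',
  'glm-4.5-flash': 'glm-4.5-flash',
} as const;

// Safe now: both consts are declared above this alias, so there is no
// block-scoped forward reference.
export type ModelAlias = keyof typeof CLAUDE_MODEL_MAP | keyof typeof ZAI_MODEL_MAP;

export function isKnownAlias(m: string): m is ModelAlias {
  return m in CLAUDE_MODEL_MAP || m in ZAI_MODEL_MAP;
}
```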
🟠 Major comments (43)
libs/provider-zai/tsconfig.json-1-20 (1)
1-20: Extend from the monorepo's base TypeScript configuration.

This file should extend from `../tsconfig.base.json` to maintain consistency with all other library packages in the monorepo. Every other provider (types, prompts, utils, model-resolver, platform, etc.) uses this pattern. Additionally, the current configuration uses `ES2022` while the base config specifies `ES2020`, and it's missing the `"types": ["node"]` setting present in the base.

Suggested fix

```json
{
  "extends": "../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}
```

apps/ui/src/main.ts-66-70 (1)
66-70: 64GB V8 heap allocation may exceed available system RAM.

Setting `--max-old-space-size=65536` (64GB) on machines with less RAM can cause the OS to kill the process or trigger excessive swapping. The parallel GC flags are beneficial, but the heap size should be bounded by available memory.

♻️ Consider dynamic heap sizing or a configurable limit

```diff
-// CPU / RAM MAXIMIZATION: Unlock V8 Heap to 64GB + Turbo GC (Multi-threaded)
-app.commandLine.appendSwitch(
-  'js-flags',
-  '--max-old-space-size=65536 --parallel-marking --concurrent-sweeping --min-semi-space-size=64'
-);
+// V8 memory tuning: enable parallel GC, set heap based on available memory
+import os from 'os';
+const totalMemMb = Math.floor(os.totalmem() / (1024 * 1024));
+// Use ~50% of system RAM, capped at 16GB for safety
+const heapSizeMb = Math.min(Math.floor(totalMemMb * 0.5), 16384);
+app.commandLine.appendSwitch(
+  'js-flags',
+  `--max-old-space-size=${heapSizeMb} --parallel-marking --concurrent-sweeping --min-semi-space-size=64`
+);
```

apps/ui/src/main.ts-54-64 (1)
54-64: Resource limits assume high-end hardware and may crash on typical machines.

- `force-gpu-mem-available-mb=24576` (24GB VRAM) will cause issues on GPUs with less memory
- `renderer-process-limit=100` is excessive and wastes resources
- `max-active-webgl-contexts=64` may exhaust GPU resources

These hardcoded values should be configurable or detected dynamically, as they can cause crashes or severe performance degradation on machines that don't meet these assumptions.

♻️ Consider making these configurable or using sensible defaults

```diff
-// UNLEASH THE BEAST: Force 24GB VRAM allocation limits (Safe now that sandbox is off)
-app.commandLine.appendSwitch('force_high_performance_gpu');
-app.commandLine.appendSwitch('force-gpu-mem-available-mb', '24576');
-app.commandLine.appendSwitch('max-active-webgl-contexts', '64');
-app.commandLine.appendSwitch('renderer-process-limit', '100');
+// GPU optimization flags - use conservative defaults
+app.commandLine.appendSwitch('force_high_performance_gpu');
+// Only set VRAM limit if explicitly configured (avoids crashes on low-VRAM systems)
+const vramLimitMb = process.env.GPU_VRAM_LIMIT_MB;
+if (vramLimitMb) {
+  app.commandLine.appendSwitch('force-gpu-mem-available-mb', vramLimitMb);
+}
```

apps/ui/.tanstack/tmp/69f3d50c-e01ecd68a9d5cb8a3cbc59e117bd6b08-1-9 (1)
1-9: Add `.tanstack/` to `.gitignore` to exclude auto-generated build artifacts.

This file is tracked in git but is explicitly marked as auto-generated by TanStack Router (lines 7-9: "DO NOT make any changes in this file as it will be overwritten"). The apps/ui/.gitignore already excludes another TanStack Router generated file (src/routeTree.gen.ts), so the entire `.tanstack/tmp/` directory should follow the same pattern to prevent unnecessary churn and merge conflicts on every route change. Add `.tanstack/` to apps/ui/.gitignore (near the existing "# TanStack Router generated" comment).

apps/server/src/services/parallel-executor.ts-64-67 (1)
64-67: Concurrency cap of 10 contradicts PR objectives.

The PR description states it "increases concurrency slider limit (to 50)" and "Sets UV_THREADPOOL_SIZE=24", but this method hard-caps concurrency at 10. This limits the effectiveness of the concurrency improvements elsewhere in the PR.
Suggested fix
```diff
 setConcurrency(concurrency: number): void {
-  this.concurrency = Math.max(1, Math.min(10, concurrency));
+  this.concurrency = Math.max(1, Math.min(50, concurrency));
   logger.info(`Concurrency set to ${this.concurrency}`);
 }
```

apps/server/src/services/parallel-executor.ts-14-15 (1)
14-15: Use `createEventEmitter()` from `lib/events.ts` instead of Node's `EventEmitter`.

As per coding guidelines, server operations should use `createEventEmitter()` from `lib/events.ts` to emit events that stream to the frontend via WebSocket. This ensures executor events (queued, started, completed, failed, cancelled) are properly propagated to clients.

Suggested change

```diff
 import { createLogger } from '../utils/logger.js';
-import { EventEmitter } from 'events';
+import { createEventEmitter } from '../lib/events.js';
```

You'll also need to update the class to use the created emitter rather than extending `EventEmitter`.

apps/ui/src/components/views/graph-view/components/task-node.tsx-95-96 (1)
95-96: Move import to the top of the file.

The `useGraphActions` import is placed in the middle of the file after constant definitions. For consistency and readability, all imports should be at the top of the file.

📦 Suggested fix

Move this import to line 5 (after the other imports):

```diff
 import { cn } from '@/lib/utils';
 import { TaskNodeData } from '../hooks/use-graph-nodes';
+import { useGraphActions } from '../context/graph-actions-context';
 import { Button } from '@/components/ui/button';
```

And remove lines 95-96.
apps/ui/src/components/views/graph-view/components/task-node.tsx-16-28 (1)
16-28: Remove leftover developer notes.

This comment block appears to be internal thinking/debugging notes that were accidentally left in the code. These should be removed before merging.

🧹 Suggested cleanup

```diff
-// Extend TaskNodeData locally since we can't easily modify the hook file right now
-// or assume it's imported from a central type definition that we can edit.
-// Wait, the import is: import { TaskNodeData } from '../hooks/use-graph-nodes';
-// We should check if we can modify the source.
-// For now, let's cast or extend locally if possible, but actually let's just use the imported type.
-// If we can't modify the imported type easily, we might get a TS error.
-// Let's assume we will modify the hook file too, but for this step let's modify the usage.
-// Actually, earlier I tried to modify the interface in this file but it wasn't there?
-// Ah, line 20 says: import { TaskNodeData } from '../hooks/use-graph-nodes';
-// So the interface is NOT in this file. I need to modify '../hooks/use-graph-nodes.ts' first?
-// Or I can modify this file to export an extended type and use it.
-// Let's look at the file content again. It imports TaskNodeData.
-
-// Let's modify the imports to include Sparkles for the AI action
```

apps/ui/src/styles/global.css-1141-1143 (1)
1141-1143: Target `stroke-width` to the path element.

The `stroke-width` property must be applied to the SVG `<path>` element inside the edge container, not the container itself. The codebase already follows this pattern elsewhere (lines 1061 and 983).

💡 Suggested fix

```diff
 .graph-zoomed-out .react-flow__edge.graph-edge-highlighted {
   opacity: 1 !important;
-  stroke-width: 2px !important;
+}
+
+.graph-zoomed-out .react-flow__edge.graph-edge-highlighted path {
+  stroke-width: 2px !important;
 }
```

apps/ui/package.json-18-22 (1)
18-22: 32GB heap limit may be excessive for typical development machines.

Setting `--max-old-space-size=32768` allocates up to 32GB of heap memory. This could cause out-of-memory issues or system instability on machines with less RAM available.

Consider:
- Using a more conservative value like 8192 (8GB) or 16384 (16GB)
- Making this configurable via an environment variable with a sensible default
Note: The PR mentions "24GB VRAM" (GPU memory), but this setting affects Node.js heap (system RAM). Verify this is the intended change.
♻️ Suggested fix with configurable default
```diff
-    "dev": "cross-env NODE_OPTIONS=--max-old-space-size=32768 vite",
+    "dev": "cross-env NODE_OPTIONS=${NODE_OPTIONS:---max-old-space-size=8192} vite",
```

package.json-16-17 (1)
16-17: Duplicate `dev:web` key causes confusion and potential issues.

Static analysis tools are notorious for false positives because they lack the context of the codebase. However, this is a valid issue. Line 16 defines `dev:web`, but Line 23 also defines `dev:web`. In JSON, duplicate keys result in only the last definition taking effect (Line 23), so Line 16 is effectively dead code.

Additionally, Line 16's value (`npm run dev:web --workspace=apps/ui`) would cause infinite recursion if it were the active definition.

🔧 Suggested fix - remove the duplicate

```diff
   "fix:lockfile": "node scripts/fix-lockfile-urls.mjs",
   "dev": "node dev.mjs",
-  "dev:web": "npm run dev:web --workspace=apps/ui",
   "_dev:web": "npm run dev:web --workspace=apps/ui",
```

apps/server/tests/verification/e2e-api-test.ps1-33-40 (1)
33-40: Script continues after feature creation failure.

If `Invoke-RestMethod` fails, `Write-Error` is called but the script proceeds to step 3 with an undefined `$FeatureId`, causing downstream failures.

🐛 Proposed fix

```diff
 catch {
     Write-Error "Failed to create feature: $_"
+    exit 1
 }
```

apps/server/src/workers/thread-pool.ts-294-299 (1)
294-299: `beforeExit` may not fire on SIGTERM/SIGINT.

The `beforeExit` event only fires when the event loop empties naturally. It won't trigger on signals like `SIGTERM` or `SIGINT`, leaving workers orphaned on container shutdown or Ctrl+C.

🐛 Proposed fix: Add signal handlers

```diff
   // Shutdown on process exit
-  process.on('beforeExit', async () => {
-    if (globalPool) {
-      await globalPool.shutdown();
-    }
-  });
+  const shutdownHandler = async () => {
+    if (globalPool) {
+      await globalPool.shutdown();
+      globalPool = null;
+    }
+  };
+
+  process.on('beforeExit', shutdownHandler);
+  process.on('SIGTERM', () => shutdownHandler().then(() => process.exit(0)));
+  process.on('SIGINT', () => shutdownHandler().then(() => process.exit(0)));

   return globalPool;
 }
```

apps/server/src/workers/thread-pool.ts-88-102 (1)
88-102: Unawaited async calls in worker event handlers.

`recycleWorker` (line 93) and `spawnWorker` (line 100) are async but called without `await`. This can cause race conditions where the pool state becomes inconsistent before the async operation completes.

🐛 Proposed fix: Handle async operations properly

```diff
 worker.on('error', (error) => {
   logger.error(`Worker error: ${error.message}`);
   if (state.currentTask) {
     state.currentTask.reject(error);
   }
-  this.recycleWorker(state);
+  this.recycleWorker(state).catch((e) =>
+    logger.error(`Failed to recycle worker after error: ${e.message}`)
+  );
 });

 worker.on('exit', (code) => {
   if (!this.isShuttingDown && code !== 0) {
     logger.warn(`Worker exited with code ${code}, spawning replacement`);
     this.removeWorker(state);
-    this.spawnWorker();
+    this.spawnWorker().catch((e) =>
+      logger.error(`Failed to spawn replacement worker: ${e.message}`)
+    );
   }
 });
```

apps/ui/src/components/views/profiles-view/components/profile-form.tsx-69-70 (1)
69-70: Case mismatch: default `zaiModel` won't match `ZAI_MODELS` entries.

The default value `'glm-4.5'` uses lowercase, but `ZAI_MODELS` from `model-constants.ts` uses `'GLM-4.5'` (uppercase). This causes the model selection comparison on line 555 to fail, leaving no model visually selected by default.

🐛 Proposed fix

```diff
   // Z.AI-specific
-  zaiModel: profile.zaiModel || 'glm-4.5',
+  zaiModel: profile.zaiModel || 'GLM-4.5',
```

Apply the same fix on lines 90 and 112.
libs/provider-zai/src/register.ts-23-31 (1)
23-31: `autoRegister()` not invoked on module load (same issue as Claude).

The docstring says auto-registration happens "on import if ZAI_API_KEY is present," but `autoRegister()` is never called within this module. Add a side-effect call or ensure it's invoked elsewhere.

🐛 Proposed fix

```diff
 export function autoRegister(): void {
   if (process.env.ZAI_API_KEY) {
     register();
     console.log('[provider-zai] Auto-registered Z.AI provider');
   }
 }
+
+// Auto-register on module import
+autoRegister();
```

apps/server/tests/verification/direct-zai-test.ts-14-15 (1)
14-15: API key split may truncate secrets containing dots.

If the secret portion of the API key contains additional `.` characters, `split('.')` with destructuring will only capture up to the first dot in the secret, truncating the rest.

🐛 Proposed fix

```diff
-const [id, secret] = apiKey.split('.');
+const dotIndex = apiKey.indexOf('.');
+const id = apiKey.substring(0, dotIndex);
+const secret = apiKey.substring(dotIndex + 1);
 console.log('API Key ID:', id);
```

test_settings_load.ts-1-2 (1)
1-2: Import path uses `.ts` extension and non-standard relative path.

The import `./apps/server/src/services/settings-service.ts` uses a `.ts` extension, which is non-standard for TypeScript imports (should be `.js` for ESM, or omit the extension). Additionally, this relative path from the repo root may not resolve correctly depending on how the script is executed.

As per coding guidelines, prefer importing from shared packages when available:

```diff
-import { SettingsService } from './apps/server/src/services/settings-service.ts';
+import { SettingsService } from '@automaker/server/services/settings-service';
```

Or if running as a standalone script, ensure the path is correct for your execution context.
libs/provider-zai/src/register.ts-14-21 (1)
14-21: Missing idempotency guard for provider registration.

Unlike the Claude provider registration, which uses a `registered` flag, this implementation allows multiple registrations if `register()` is called more than once. This could cause duplicate providers in the registry.

Additionally, the comment says "Lower than Claude (10)" but Claude's registration uses priority `100`, not `10`.

🐛 Proposed fix

```diff
 import { registerProvider } from '@automaker/providers-core';
 import { ZaiQueryService } from './zai-query-service.js';

+let registered = false;
+
 /**
  * Register Z.AI provider with the registry
  */
 export function register(): void {
+  if (registered) return;
+
   registerProvider({
     name: 'zai',
     createQueryService: () => new ZaiQueryService(),
     aliases: ['glm', 'zhipu'],
-    priority: 5, // Lower than Claude (10) by default
+    priority: 50, // Lower than Claude (100) by default
   });
+
+  registered = true;
 }
```

apps/server/src/routes/auto-mode/routes/expand-feature.ts-37-40 (1)
37-40: Zod validation errors should return 400, not 500.

When `expandFeatureSchema.parse()` throws a `ZodError`, the handler returns a 500 status. Validation errors should return 400 (Bad Request) to distinguish client errors from server errors.

🔧 Suggested fix

```diff
+import { ZodError } from 'zod';
+
 export const createExpandFeatureHandler = (autoModeService: AutoModeService) => {
   return async (req: Request, res: Response) => {
     try {
       const {
         projectPath,
         seedTitle,
         depth,
         domainContext,
         focusArea,
         externalContext,
         subspecTemplate,
       } = expandFeatureSchema.parse(req.body);

       const result = await autoModeService.expandKnowledgeGraph(projectPath, seedTitle, {
         depth,
         domainContext,
         focusArea,
         externalContext,
         subspecTemplate,
       });

       res.json(result);
     } catch (error) {
-      console.error('Error expanding feature:', error);
-      res.status(500).json({ error: 'Failed to expand feature' });
+      if (error instanceof ZodError) {
+        res.status(400).json({ error: 'Invalid request', details: error.errors });
+        return;
+      }
+      console.error('Error expanding feature:', error);
+      res.status(500).json({ error: 'Failed to expand feature' });
     }
   };
 };
```

apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx-863-868 (1)
863-868: Type mismatch: ZAI model IDs are not `ModelAlias` types.

ZAI model IDs (e.g., `'GLM-4-Plus'`, `'GLM-4.7'`) are being cast to `ModelAlias`, but `ModelAlias` is defined as `keyof typeof CLAUDE_MODEL_MAP | keyof typeof ZAI_MODEL_MAP`. The frontend `ZAI_MODELS` uses PascalCase IDs while the backend `ZAI_MODEL_MAP` uses lowercase. This type assertion may mask runtime issues.

Consider creating a dedicated `ZaiModelId` type or ensuring the model ID is properly typed.

🔧 Suggested fix

```diff
 onSelect={() => {
-  onChange({
-    model: model.id as ModelAlias,
-    thinkingLevel: 'none',
-  });
+  onChange({
+    model: model.id,
+    thinkingLevel: 'none',
+  });
   setOpen(false);
 }}
```

Also verify that the `PhaseModelEntry.model` type accepts ZAI model IDs.

apps/ui/src/store/app-store.ts-1181-1181 (1)
1181-1181: Aggressive default concurrency increase may overwhelm typical systems.

Increasing `maxConcurrency` from 3 to 24 as a global default is aggressive. The recommendation is to set `UV_THREADPOOL_SIZE` to the number of logical cores your machine is running. Not all users have 24-core CPUs, and "More threads usually means more throughput but a higher memory footprint."

Consider detecting available cores at runtime or making this configurable based on system specs rather than hardcoding a high default.
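The core-detection approach recommended above can be sketched as a small helper. This is illustrative, not code from the PR; the function name and the 1..50 clamp (matching the PR's slider limit) are assumptions:

```typescript
import os from 'os';

// Hypothetical helper: derive a concurrency default from the machine's
// logical core count instead of hardcoding 24, clamped to the 1..50 range
// the PR's concurrency slider allows.
export function defaultConcurrency(
  cores: number = os.availableParallelism?.() ?? os.cpus().length
): number {
  return Math.max(1, Math.min(50, Math.floor(cores)));
}
```

`os.availableParallelism()` (Node 18.14+) accounts for container CPU limits, which `os.cpus().length` does not.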
libs/types/src/model.ts-63-67 (1)
63-67: Misleading key name: `claude` default is set to a Z.AI model.

`DEFAULT_MODELS.claude` is set to `'GLM-4.7'`, which is a Z.AI model, not a Claude model. This creates semantic confusion and may break existing code that expects the `claude` default to be an Anthropic model (like `sonnet`).

If Z.AI is intended as the new default provider, consider renaming the key to `default` or adding a separate `zai` key. If this is unintentional, revert to a Claude model alias.

apps/ui/src/store/app-store.ts-1193-1194 (1)
1193-1194: Model name case inconsistency in defaults.

Same issue as the Z.AI profile - these use `'GLM-4.7'` (uppercase) but `ZAI_MODEL_MAP` defines `'glm-4.7'` (lowercase).

```diff
-  enhancementModel: 'GLM-4.7', // Default to GLM-4.7 for feature enhancement
-  validationModel: 'GLM-4.7', // Default to GLM-4.7 for GitHub issue validation
+  enhancementModel: 'glm-4.7', // Default to GLM-4.7 for feature enhancement
+  validationModel: 'glm-4.7', // Default to GLM-4.7 for GitHub issue validation
```

apps/ui/src/store/app-store.ts-1145-1154 (1)
1145-1154: Model name case inconsistency: `'GLM-4.7'` vs `'glm-4.7'`.

The Z.AI profile uses `model: 'GLM-4.7'` (uppercase), but `ZAI_MODEL_MAP` in `libs/types/src/model.ts` defines keys as `'glm-4.7'` (lowercase). This mismatch may cause model resolution failures.

🐛 Proposed fix

```diff
 {
   id: 'profile-zai-reasoning',
   name: 'Z.AI Reasoning',
   description: 'Z.AI GLM-4.7 for reasoning and coding.',
-  model: 'GLM-4.7',
+  model: 'glm-4.7',
   provider: 'zai',
   isBuiltIn: true,
   icon: 'Zai',
 },
```

apps/server/tests/verification/test-all-zai-models.ts-21-37 (1)
21-37: Duplicated `generateToken` logic — extract to shared utility.

This token generation logic is duplicated in at least three places:

- `apps/server/src/providers/zai-provider.ts` (lines 122-150)
- `libs/provider-zai/src/zai-provider.ts` (lines 107-131)
- This test file

Per coding guidelines, import from shared packages (`@automaker/*`). Extract this to a shared utility in `@automaker/provider-zai` or `@automaker/utils` and import it here.

♻️ Suggested approach

Create a shared utility:

```typescript
// libs/provider-zai/src/token-utils.ts
export function generateZaiToken(apiKey: string): string {
  // ... shared implementation
}
```

Then in this test:

```diff
-import { createRequire } from 'module';
-const require = createRequire(import.meta.url);
-const jwt = require('jsonwebtoken');
+import { generateZaiToken } from '@automaker/provider-zai';

-function generateToken(apiKey: string): string {
-  const [id, secret] = apiKey.split('.');
-  if (!id || !secret) return apiKey;
-  // ...
-}

 // In main():
-const token = generateToken(apiKey);
+const token = generateZaiToken(apiKey);
```

apps/ui/src/components/views/world-model-view.tsx-262-279 (1)
262-279: Hardcoded localhost URL will fail in production.

The fetch URL `http://localhost:3000/api/auto-mode/expand-feature` is hardcoded. This should use an environment variable or API utility to ensure it works across different environments.

🔧 Suggested fix

```diff
+import { getApiBaseUrl } from '@/lib/api'; // or similar utility
+
 const handleRunSmartExpand = useCallback(
   async (seedFeature: Feature, options: any) => {
     try {
-      const response = await fetch('http://localhost:3000/api/auto-mode/expand-feature', {
+      const response = await fetch(`${getApiBaseUrl()}/api/auto-mode/expand-feature`, {
         method: 'POST',
         headers: {
           'Content-Type': 'application/json',
         },
```

apps/server/src/verify_zai_provider.ts-1-4 (1)
1-4: Hardcoded path is not portable.

The path `c:/Chimera/tools/AutoMaker/.env` is a Windows-specific absolute path that won't work on other machines or in CI. Consider using a relative path or environment variable.

🔧 Suggested fix

```diff
 import { config } from 'dotenv';
-config({ path: 'c:/Chimera/tools/AutoMaker/.env', override: true });
+import * as path from 'path';
+// Load .env from repository root or use environment-specific path
+config({ path: process.env.DOTENV_PATH || path.resolve(__dirname, '../../../.env'), override: true });
 import { ZaiProvider } from './providers/zai-provider.js';
-import * as path from 'path';
```

apps/server/src/providers/zai-tools.ts-315-316 (1)
315-316: ReDoS vulnerability from user-controlled regex pattern.

The `pattern` parameter is passed directly to `new RegExp()` without validation. A malicious or poorly crafted pattern could cause catastrophic backtracking. As flagged by static analysis, consider validating regex safety.

🔧 Suggested fix with timeout protection

```diff
 private async executeGrep(
   pattern: string,
   searchPath: string,
   include?: string
 ): Promise<string> {
   const resolvedPath = this.resolvePath(searchPath);
   const globPattern = include || '**/*';
   const files = await globAsync(globPattern, { cwd: resolvedPath, nodir: true });

   const results: string[] = [];
-  const regex = new RegExp(pattern);
+  let regex: RegExp;
+  try {
+    // Basic validation - limit pattern length and complexity
+    if (pattern.length > 500) {
+      throw new Error('Pattern too long');
+    }
+    regex = new RegExp(pattern);
+  } catch (e) {
+    return `Invalid regex pattern: ${e instanceof Error ? e.message : String(e)}`;
+  }

   for (const file of files) {
```

libs/provider-claude/src/claude-query-service.ts-136-141 (1)
136-141: Use `resolveModelString()` from `@automaker/model-resolver` instead of hardcoding the model.

Per coding guidelines for TypeScript files, use `resolveModelString()` to convert model aliases. The hardcoded default `'claude-sonnet-4-20250514'` should be resolved through the model resolver. Import `resolveModelString` from `@automaker/model-resolver` and call it with `undefined` to use the default, or with a model alias like `'sonnet'` for consistency across the codebase.

apps/server/src/cluster-manager.ts-24-83 (1)
24-83: Worker restart rate limiting is ineffective and can leak memory.

You key restart counts by `worker.id`, but each crash+fork yields a new id, so a crash loop won't be throttled; also the Map grows over time.

Suggested: track restart timestamps globally (or per "slot") and prune by time window.
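The time-window approach suggested above can be sketched as a small sliding-window limiter. The class and parameter names are illustrative, not from the PR:

```typescript
// Sketch: allow at most `maxRestarts` restarts per `windowMs`, keyed
// globally (or per slot) rather than by the ever-changing worker.id.
export class RestartLimiter {
  private restarts: number[] = [];

  constructor(
    private maxRestarts = 5,
    private windowMs = 60_000
  ) {}

  /** Returns true if another restart is allowed at time `now`. */
  tryRestart(now: number = Date.now()): boolean {
    // Prune timestamps outside the window so the list cannot grow unbounded.
    this.restarts = this.restarts.filter((t) => now - t < this.windowMs);
    if (this.restarts.length >= this.maxRestarts) return false;
    this.restarts.push(now);
    return true;
  }
}
```

The cluster manager would call `tryRestart()` before forking a replacement and give up (or back off) when it returns false.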
apps/server/src/index.ts-178-179 (1)
178-179: Validate SETTINGS_FILE input (path traversal / unexpected subpaths).

Passing `process.env.SETTINGS_FILE` into file path composition can allow `../` or nested paths that escape `DATA_DIR` or cause non-existent subdirs (write failures). Recommend enforcing "filename only" (basename) in `SettingsService` (or at the callsite).

apps/server/src/services/settings-service.ts-107-120 (1)
107-120: Validate `settingsFileName` (prevent traversal / unexpected subdirectories).

If `settingsFileName` contains `../` or path separators, `path.join(this.dataDir, this.settingsFileName)` can escape `dataDir` or fail atomic writes (missing subdir). Enforce basename-only.

Proposed fix

```diff
 export class SettingsService {
   private dataDir: string;
   private settingsFileName: string;

   constructor(dataDir: string, settingsFileName: string = 'settings.json') {
     this.dataDir = dataDir;
-    this.settingsFileName = settingsFileName;
+    const base = path.basename(settingsFileName);
+    if (base !== settingsFileName) {
+      throw new Error(`Invalid settingsFileName "${settingsFileName}" (must be a filename)`);
+    }
+    this.settingsFileName = base;
   }
```

Also applies to: 137-140, 267-272, 361-364
apps/server/src/providers/zai-provider.ts-226-233 (1)
226-233: Vision is likely broken: prompt arrays drop non-string parts.

If upstream passes structured prompt parts (e.g., images), this code collapses to strings only. That conflicts with `supportsVision: true` on models and will prevent image input from reaching the API.

apps/server/src/services/auto-mode-service.ts-3223-3387 (1)
3223-3387: expandKnowledgeGraph(): provider selection can mismatch the requested model.

`getQueryService()` returns the highest-priority active provider, but you pass `modelConfig.model`, which may correspond to a different provider. Either:

- select by provider derived from the resolved model (e.g., `getQueryServiceByName(...)`), or
- make QueryServices accept/route "foreign" models (unlikely).

Also: you already import `resolvePhaseModel` at the file top; the dynamic import inside this method is redundant.

apps/server/src/index.ts-35-36 (1)
35-36: /api/debug is mounted unauthenticated; gate it and/or protect it.

Right now it's reachable before `/api` auth middleware is applied, so it can expose internal state in production.

Proposed gating + protection

```diff
-// Register debug routes immediately
-app.use('/api/debug', createDebugRoutes());
+// Register debug routes only when explicitly enabled
+if (process.env.ENABLE_DEBUG_ROUTES === 'true') {
+  // Either mount after app.use('/api', authMiddleware) OR protect explicitly here.
+  app.use('/api/debug', authMiddleware, createDebugRoutes());
+}
```

Also applies to: 111-113
apps/server/src/cluster-manager.ts-13-18 (1)
13-18: Import should come from shared package, not a relative logger module.

This violates the guideline: `**/*.{ts,tsx,js,jsx}` should import from `@automaker/*`.

Proposed fix

```diff
-import { createLogger } from './utils/logger.js';
+import { createLogger } from '@automaker/utils';
```

Based on coding guidelines, avoid old relative paths.

apps/server/src/index.ts-188-202 (1)
apps/server/src/index.ts-188-202 (1)
188-202: Don’tdelete process.env.ANTHROPIC_API_KEYto “prioritize” ZAI.This is global process state and will break any Claude-dependent flows that still run (e.g., Claude routes, any direct SDK usage elsewhere). Prefer selecting providers via registry priority/config while leaving credentials intact.
Minimal safe change
- // Enforce Z.AI priority by disabling Claude if Z.AI key is present - if (process.env.ZAI_API_KEY) { - if (process.env.ANTHROPIC_API_KEY) { - logger.info( - 'Detected Z.AI Key: Explicitly disabling Claude provider by unsetting ANTHROPIC_API_KEY' - ); - delete process.env.ANTHROPIC_API_KEY; - } - } + // Prefer Z.AI via provider selection logic (priority/config), but keep credentials intact.apps/server/src/index.ts-9-12 (1)
9-12: UV_THREADPOOL_SIZE set in-module is unreliable under ESM (imports are hoisted).With ESM, dependency loading happens before module body runs, so this doesn’t guarantee “BEFORE any libs load”, and Node/libuv threadpool sizing is safest when set in the process environment at launch (or via a tiny bootstrap entrypoint that sets env then
await import(...)).Proposed adjustment (prefer launch-time env; remove misleading in-module set)
-// CPU MAXIMIZATION: Set thread pool size equal to core count (24) BEFORE any libs load -// This ensures the server process (separate from Electron) gets the full thread pool -process.env.UV_THREADPOOL_SIZE = process.env.UV_THREADPOOL_SIZE || '24';apps/server/src/services/auto-mode-service.ts-560-598 (1)
560-598: The `if (model === 'default')` block is unreachable dead code, and `settingsService` is nullable without a guard.

`resolveModelString()` converts the literal string `'default'` to the provided `defaultModel` before returning—it never returns `'default'` itself (see lines 50–54 of `libs/model-resolver/src/resolver.ts`). Therefore, the condition at line 563 will never be true.

Additionally, `settingsService` is typed as `SettingsService | null` (assigned as `null` in the constructor) but accessed directly as `this.settingsService.getGlobalSettings()` without a null check. Other parts of the codebase use optional chaining (`this.settingsService?.getGlobalSettings()`), indicating awareness of the nullability.

Remove the dead `if (model === 'default')` block entirely. If handling explicitly passed `'default'` strings is needed, resolve settings before calling `resolveModelString()`, then add a null guard for `settingsService`.

apps/server/src/providers/zai-provider.ts-242-255 (1)
242-255: Fix model string casing to match Z.AI API requirements.

Z.AI's OpenAI-compatible endpoint requires lowercase model IDs (e.g., `"glm-4.7"`, not `"GLM-4.7"`). The `modelString` values in `ZAI_MODELS_DEF` are currently uppercase, which will cause API requests to fail. Update all model strings to lowercase: `'glm-4.7'`, `'glm-4.6'`, `'glm-4-plus'`, etc.

apps/server/src/services/auto-mode-service.ts-1367-1393 (1)
1367-1393: The fallback at line 1371 is unreachable—`'default'` is always resolved to a concrete model.

When `resolvePhaseModel(phaseModelEntry)` is called without a second parameter, it defaults to `DEFAULT_MODELS.claude` (`'GLM-4.7'`). The internal `resolveModelString('default', 'GLM-4.7')` immediately converts the literal `'default'` string to `'GLM-4.7'`, so the condition `if (analysisModel === 'default')` can never be true.

To preserve the `'default'` sentinel for settings-based substitution, either pass an explicit `defaultModel` parameter (`resolvePhaseModel(phaseModelEntry, 'default')`) or refactor to substitute from settings before resolution.

libs/types/src/settings.ts-816-817 (1)
816-817: Type mismatch: `'GLM-4.7'` is not a valid `ModelAlias`.

These deprecated fields (`enhancementModel`, `validationModel`) are typed as `ModelAlias` (lines 503-505), which should be values like `'haiku'`, `'sonnet'`, `'opus'`. Assigning `'GLM-4.7'` violates the type definition and will cause TypeScript errors.

🐛 Proposed fix - restore Claude model defaults

```diff
   phaseModels: DEFAULT_PHASE_MODELS,
-  enhancementModel: 'GLM-4.7',
-  validationModel: 'GLM-4.7',
+  enhancementModel: 'haiku',
+  validationModel: 'sonnet',
   enabledCursorModels: getAllCursorModelIds(),
```

Per coding guidelines, if Z.AI models are intended as defaults, use `resolveModelString()` from `@automaker/model-resolver` for proper model alias resolution.
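Because the same uppercase/lowercase mismatch (`'GLM-4.7'` vs `'glm-4.7'`) recurs across several of the comments above, one low-risk mitigation is to normalize case at the lookup boundary. A sketch, where the map contents are an illustrative subset, not the real `ZAI_MODEL_MAP`:

```typescript
// Illustrative subset - the real ZAI_MODEL_MAP lives in libs/types/src/model.ts
const ZAI_MODEL_MAP: Record<string, string> = {
  'glm-4.7': 'glm-4.7',
  'glm-4.6': 'glm-4.6',
  'glm-4-plus': 'glm-4-plus',
};

// Normalize ids at the comparison boundary so 'GLM-4.7' and 'glm-4.7'
// resolve to the same key.
export function lookupZaiModel(id: string): string | undefined {
  return ZAI_MODEL_MAP[id.trim().toLowerCase()];
}
```

This does not replace fixing the stored defaults, but it prevents silent resolution failures while the two casings coexist.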
apps/server/package.json
Outdated
```diff
   "main": "dist/index.js",
   "scripts": {
-    "dev": "tsx watch src/index.ts",
+    "dev": "set UV_THREADPOOL_SIZE=24 && tsx watch src/index.ts",
```
Cross-platform incompatibility: `set` is Windows-only.

The `set UV_THREADPOOL_SIZE=24 &&` syntax only works on Windows CMD. On macOS/Linux, this will fail because `set` is not used to set environment variables for subsequent commands.
Use cross-env for cross-platform compatibility:
🐛 Proposed fix
Add cross-env to devDependencies:
```diff
+    "cross-env": "^7.0.3",
```

Update the script:
```diff
-    "dev": "set UV_THREADPOOL_SIZE=24 && tsx watch src/index.ts",
+    "dev": "cross-env UV_THREADPOOL_SIZE=24 tsx watch src/index.ts",
```

🤖 Prompt for AI Agents
In @apps/server/package.json at line 14, The dev script uses Windows-only syntax
("set UV_THREADPOOL_SIZE=24 && tsx watch src/index.ts"); add cross-env as a
devDependency and update the "dev" script to use cross-env (e.g., "cross-env
UV_THREADPOOL_SIZE=24 tsx watch src/index.ts") so the env var is set
cross-platform; install cross-env as a devDependency (npm/yarn/pnpm) and update
the "dev" entry in package.json accordingly.
```typescript
// PROBE: Log Provider Execution (Removed)

// FORCE ROUTER CONFIGURATION
const forcedEnv = buildEnv();
forcedEnv['ANTHROPIC_BASE_URL'] = 'http://127.0.0.1:3457';
forcedEnv['ANTHROPIC_API_KEY'] = 'sk-zai-router';
```
Hardcoded credentials and forced routing are a critical security and reliability concern.

This change hardcodes a localhost URL and API key, which:

- Security risk: The `sk-zai-router` key is committed to source code. Even if it's a local-only key, hardcoded credentials are a security anti-pattern.
- Breaks standalone Claude usage: All Claude requests are now forced through `127.0.0.1:3457`, requiring a local router service to be running.
- Overrides user configuration: The user's `ANTHROPIC_API_KEY` from environment is ignored.
- No fallback or configuration: There's no way to disable this behavior or use Claude directly.
If this router architecture is intentional, consider:
- Making the router URL configurable via environment variable (e.g., `ZAI_ROUTER_URL`)
- Not hardcoding the API key - use an environment variable or generate it dynamically
- Adding a fallback to direct Claude API when the router is unavailable
- Documenting this architectural change
🔧 Suggested configurable approach
- // FORCE ROUTER CONFIGURATION
- const forcedEnv = buildEnv();
- forcedEnv['ANTHROPIC_BASE_URL'] = 'http://127.0.0.1:3457';
- forcedEnv['ANTHROPIC_API_KEY'] = 'sk-zai-router';
+ // Router configuration (optional - falls back to direct Claude API)
+ const forcedEnv = buildEnv();
+ if (process.env.ZAI_ROUTER_URL) {
+ forcedEnv['ANTHROPIC_BASE_URL'] = process.env.ZAI_ROUTER_URL;
+ forcedEnv['ANTHROPIC_API_KEY'] = process.env.ZAI_ROUTER_KEY || forcedEnv['ANTHROPIC_API_KEY'];
+ }🤖 Prompt for AI Agents
In @apps/server/src/providers/claude-provider.ts around lines 76 - 82, The diff
forces local routing and hardcodes credentials by mutating the env from
buildEnv() and setting ANTHROPIC_BASE_URL='http://127.0.0.1:3457' and
ANTHROPIC_API_KEY='sk-zai-router'; revert this by reading optional router config
from environment (e.g., ZAI_ROUTER_URL and ZAI_ROUTER_API_KEY) rather than
hardcoding, preserve the original ANTHROPIC_API_KEY if present, and implement a
fallback in the Claude provider (the code around buildEnv/forcedEnv and any
functions that read ANTHROPIC_BASE_URL/ANTHROPIC_API_KEY) to use the direct
Claude API when router variables are unset or router healthcheck fails; ensure
no secret literals remain in the source and document the new env vars.
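The configurable approach suggested above can be sketched as a pure helper. The env var names follow the review's suggestion, and `applyRouterConfig` itself is hypothetical, not code from the PR:

```typescript
// Sketch: only override Anthropic routing when a router is explicitly
// configured, preserving the user's own key otherwise.
export function applyRouterConfig(
  env: Record<string, string | undefined>,
  routerUrl?: string, // e.g. process.env.ZAI_ROUTER_URL
  routerKey?: string // e.g. process.env.ZAI_ROUTER_API_KEY
): Record<string, string | undefined> {
  if (!routerUrl) return env; // no router configured: use direct Claude API
  return {
    ...env,
    ANTHROPIC_BASE_URL: routerUrl,
    // Keep the user's key unless the router supplies its own
    ANTHROPIC_API_KEY: routerKey ?? env.ANTHROPIC_API_KEY,
  };
}
```

Keeping the function pure (env in, env out) also makes the routing decision trivially unit-testable, unlike mutating `process.env` in place.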
```typescript
// Initialize Tools
const zaiTools = new ZaiTools(cwd);
// Only use tools if allowedTools is present (or empty array means none?)
// Standard AutoMaker behavior: if allowedTools is provided, use them.
// If not provided, assume text only? AutoModeService usually provides them.
const tools = allowedTools ? zaiTools.getTools() : undefined;
```
Tool allowlist is not enforced (empty array enables all tools).
`allowedTools ? zaiTools.getTools() : undefined`:

- treats `[]` as truthy → enables tools when caller likely intended "no tools"
- never filters to the requested allowlist → can expose unintended tools
Proposed fix (respect empty array + filter)
```diff
-  const tools = allowedTools ? zaiTools.getTools() : undefined;
+  const tools =
+    allowedTools && allowedTools.length > 0
+      ? zaiTools.getTools().filter((t) => allowedTools.includes(t.function.name))
+      : undefined;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current:

```typescript
// Initialize Tools
const zaiTools = new ZaiTools(cwd);
// Only use tools if allowedTools is present (or empty array means none?)
// Standard AutoMaker behavior: if allowedTools is provided, use them.
// If not provided, assume text only? AutoModeService usually provides them.
const tools = allowedTools ? zaiTools.getTools() : undefined;
```

Suggested:

```typescript
// Initialize Tools
const zaiTools = new ZaiTools(cwd);
// Only use tools if allowedTools is present (or empty array means none?)
// Standard AutoMaker behavior: if allowedTools is provided, use them.
// If not provided, assume text only? AutoModeService usually provides them.
const tools =
  allowedTools && allowedTools.length > 0
    ? zaiTools.getTools().filter((t) => allowedTools.includes(t.function.name))
    : undefined;
```
🤖 Prompt for AI Agents
In @apps/server/src/providers/zai-provider.ts around lines 192 - 198, The
current expression treats [] as truthy and returns all tools, ignoring the
caller's allowlist; change the logic so: if allowedTools is undefined then tools
stays undefined (text-only), if allowedTools is an empty array return an empty
array (no tools), otherwise call ZaiTools.getTools() and filter its result to
only include tools whose identifier/name is present in allowedTools (e.g.,
compare tool.name or tool.id against allowedTools). Update the variable
initialization around ZaiTools, getTools and allowedTools to implement this
explicit branching and filtering.
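The root cause described in the prompt comes from JavaScript truthiness: an empty array is truthy, so the original ternary enables every tool. A minimal sketch of the suggested branching follows; the `ToolDef` shape and `selectTools` helper are simplified stand-ins, not the real Z.AI types.

```typescript
// Simplified stand-in for the Z.AI tool shape (an assumption, not the real type).
interface ToolDef {
  function: { name: string };
}

// Mirrors the branching the prompt describes:
// undefined → text-only, [] → no tools, non-empty → allowlisted tools only.
function selectTools(
  allTools: ToolDef[],
  allowedTools?: string[],
): ToolDef[] | undefined {
  if (allowedTools === undefined) return undefined;
  return allTools.filter((t) => allowedTools.includes(t.function.name));
}

const all: ToolDef[] = [
  { function: { name: 'read_file' } },
  { function: { name: 'run_shell' } },
];

// The original bug: an empty array is truthy in JavaScript.
const empty: string[] = [];
console.log(empty ? 'truthy' : 'falsy'); // → truthy
console.log(selectTools(all, [])!.length); // → 0 (no tools)
console.log(selectTools(all, ['read_file'])!.map((t) => t.function.name));
```

Note this sketch treats `[]` as "no tools" (an empty array result) per the AI-agent prompt, rather than collapsing it to `undefined` as the committable suggestion does; either is fine as long as the empty case no longer enables everything.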
  issueBody,
  issueLabels,
- model = 'opus',
+ model = 'default',
Critical: Default model 'default' will fail validation.
The default model is set to 'default', but the validation logic at lines 370-379 only accepts models in VALID_CLAUDE_MODELS (['opus', 'sonnet', 'haiku']) or Cursor model IDs. The string 'default' matches neither condition, so all requests without an explicit model parameter will receive a 400 error.
Either:
- Add 'default' to the validation allowlist, or
- Resolve 'default' to an actual model alias before validation
🐛 Proposed fix
const VALID_CLAUDE_MODELS: readonly ModelAlias[] = ['opus', 'sonnet', 'haiku'] as const;
+const VALID_DEFAULT_ALIAS = 'default' as const;
// ... in the handler:
- if (!isValidClaudeModel && !isValidCursorModel) {
+ const isDefaultAlias = model === VALID_DEFAULT_ALIAS;
+ if (!isValidClaudeModel && !isValidCursorModel && !isDefaultAlias) {
res.status(400).json({
success: false,
- error: `Invalid model. Must be one of: ${VALID_CLAUDE_MODELS.join(', ')}, or a Cursor model ID`,
+ error: `Invalid model. Must be one of: ${VALID_CLAUDE_MODELS.join(', ')}, 'default', or a Cursor model ID`,
});
return;
  }

🤖 Prompt for AI Agents
In @apps/server/src/routes/github/routes/validate-issue.ts at line 320, The
model default of 'default' will fail the validation because VALID_CLAUDE_MODELS
only contains ['opus','sonnet','haiku'] and the Cursor check; either add
'default' to VALID_CLAUDE_MODELS or resolve 'default' to a concrete model string
before validation. Locate where model is set to 'default' (the model variable in
the validate-issue handler) and either (A) append 'default' to the
VALID_CLAUDE_MODELS array, or (B) normalize the incoming model value by mapping
'default' to your actual default alias (e.g., 'opus') prior to the validation
block that checks VALID_CLAUDE_MODELS and Cursor model IDs.
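Option (B) from the prompt, normalizing `'default'` to a concrete alias before the validation block runs, can be sketched like this. The `'opus'` fallback and the helper names are assumptions for illustration; the real default should come from product config.

```typescript
const VALID_CLAUDE_MODELS = ['opus', 'sonnet', 'haiku'] as const;
type ClaudeModelAlias = (typeof VALID_CLAUDE_MODELS)[number];

// Hypothetical fallback alias; substitute whatever the product default is.
const DEFAULT_ALIAS: ClaudeModelAlias = 'opus';

// Map the 'default' sentinel to a concrete alias before validation.
function normalizeModel(model: string): string {
  return model === 'default' ? DEFAULT_ALIAS : model;
}

function isValidClaudeModel(model: string): boolean {
  return (VALID_CLAUDE_MODELS as readonly string[]).includes(model);
}

console.log(isValidClaudeModel(normalizeModel('default'))); // → true
console.log(isValidClaudeModel(normalizeModel('haiku')));   // → true
console.log(isValidClaudeModel(normalizeModel('gpt-4')));   // → false
```

Normalizing once, before validation, keeps the allowlist itself unchanged and avoids leaking the `'default'` sentinel into downstream model resolution.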
private async executeTask(task: FeatureTask, _signal: AbortSignal): Promise<ExecutionResult> {
  if (!this.executor) {
    throw new Error('No executor set. Call setExecutor() first.');
  }

  const startTime = Date.now();

  try {
    const result = await this.executor(task);
    return {
      ...result,
      durationMs: Date.now() - startTime,
    };
  } catch (error) {
    return {
      featureId: task.featureId,
      success: false,
      error: error instanceof Error ? error.message : String(error),
      durationMs: Date.now() - startTime,
    };
  }
}
Abort signal is unused — cancellation won't stop running tasks.
The _signal parameter is ignored (underscore prefix confirms this). When cancel() calls abortController.abort(), nothing actually observes the signal, so the executor continues running to completion. Either pass the signal to the executor or check signal.aborted before/during execution.
Suggested fix — pass signal to executor
Update the type and pass the signal:
-type FeatureExecutorFn = (task: FeatureTask) => Promise<ExecutionResult>;
+type FeatureExecutorFn = (task: FeatureTask, signal: AbortSignal) => Promise<ExecutionResult>;
-private async executeTask(task: FeatureTask, _signal: AbortSignal): Promise<ExecutionResult> {
+private async executeTask(task: FeatureTask, signal: AbortSignal): Promise<ExecutionResult> {
if (!this.executor) {
throw new Error('No executor set. Call setExecutor() first.');
}
+ if (signal.aborted) {
+ return {
+ featureId: task.featureId,
+ success: false,
+ error: 'Task cancelled before execution',
+ durationMs: 0,
+ };
+ }
+
const startTime = Date.now();
try {
- const result = await this.executor(task);
+ const result = await this.executor(task, signal);

🤖 Prompt for AI Agents
In @apps/server/src/services/parallel-executor.ts around lines 166 - 187,
executeTask currently ignores the provided _signal so abort() does nothing;
update the executor signature (the this.executor function type) to accept an
AbortSignal and pass the signal into the call inside executeTask, and also check
signal.aborted before invoking the executor and after awaiting (or catch an
AbortError) to return a cancelled/failed ExecutionResult. Concretely, change the
executor type to (task: FeatureTask, signal: AbortSignal) =>
Promise<ExecutionResultLike>, call await this.executor(task, _signal) inside
executeTask, and add an early check that if (_signal.aborted) return a cancelled
result (featureId, success: false or specific flag, error: 'aborted',
durationMs), and handle thrown aborts similarly in the catch block.
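The behavior the prompt describes can be sketched end to end. The `FeatureTask` and `ExecutionResult` shapes below are simplified stand-ins for the real types, and the free function replaces the class method for a self-contained example.

```typescript
interface FeatureTask {
  featureId: string;
}

interface ExecutionResult {
  featureId: string;
  success: boolean;
  error?: string;
  durationMs: number;
}

// Executor now receives the signal so it can stop work mid-flight.
type FeatureExecutorFn = (task: FeatureTask, signal: AbortSignal) => Promise<ExecutionResult>;

async function executeTask(
  executor: FeatureExecutorFn,
  task: FeatureTask,
  signal: AbortSignal,
): Promise<ExecutionResult> {
  // Bail out early if cancellation already happened.
  if (signal.aborted) {
    return {
      featureId: task.featureId,
      success: false,
      error: 'Task cancelled before execution',
      durationMs: 0,
    };
  }

  const startTime = Date.now();
  try {
    const result = await executor(task, signal);
    return { ...result, durationMs: Date.now() - startTime };
  } catch (error) {
    // A well-behaved executor throws (e.g. an AbortError) when the signal fires.
    return {
      featureId: task.featureId,
      success: false,
      error: error instanceof Error ? error.message : String(error),
      durationMs: Date.now() - startTime,
    };
  }
}
```

With this shape, `cancel()` calling `abortController.abort()` actually reaches the executor, which can forward the signal to `fetch`, child processes, or its own polling loop.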
const zaiModel = ZAI_MODELS.find((m) => m.id === selectedModel);
if (zaiModel) {
  return {
    ...zaiModel,
    icon: Sparkles, // Or another icon
  };
}
🧩 Analysis chain
🏁 Script executed:
# First, locate and examine frontend ZAI_MODELS
echo "=== Looking for frontend model constants ==="
find apps/ui -name "*constants*" -type f | grep -E "\.(ts|tsx)$"

Repository: AutoMaker-Org/automaker
Length of output: 190
🏁 Script executed:
# Look for backend ZAI models
echo "=== Looking for backend ZAI model definitions ==="
find libs/provider-zai -name "*" -type f | head -20

Repository: AutoMaker-Org/automaker
Length of output: 325
🏁 Script executed:
# Search for ZAI_MODELS definition in frontend
echo "=== Frontend ZAI_MODELS ==="
rg -n "ZAI_MODELS" apps/ui --type=ts --type=tsx -A 5

Repository: AutoMaker-Org/automaker
Length of output: 139
🏁 Script executed:
# Search for ZAI model IDs in backend
echo "=== Backend ZAI model IDs ==="
rg -n "(glm|GLM)" libs/provider-zai --type=ts -B 2 -A 2 | head -50

Repository: AutoMaker-Org/automaker
Length of output: 3338
🏁 Script executed:
# Search for model constants in frontend using broader search
rg -n "model.*constant\|ZAI.*MODEL" apps/ui --type=ts -i

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Look for the model-constants file directly
find apps/ui -name "*constant*" -o -name "*model*" | grep -E "\.(ts|tsx)$" | head -20

Repository: AutoMaker-Org/automaker
Length of output: 1321
🏁 Script executed:
# Read the phase-model-selector.tsx file to see imports and usage
cat apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx | head -50

Repository: AutoMaker-Org/automaker
Length of output: 1324
🏁 Script executed:
# Search for ZAI_MODELS definition or import in apps/ui
rg "ZAI_MODELS" apps/ui -A 10 --type ts

Repository: AutoMaker-Org/automaker
Length of output: 15378
🏁 Script executed:
cat apps/ui/src/components/views/board-view/shared/model-constants.ts | head -100

Repository: AutoMaker-Org/automaker
Length of output: 2862
🏁 Script executed:
# Get all ZAI_MODELS IDs from frontend
rg "id:\s*['\"]GLM" apps/ui/src/components/views/board-view/shared/model-constants.ts -A 2

Repository: AutoMaker-Org/automaker
Length of output: 798
🏁 Script executed:
# Also check if there's any model resolution or transformation happening
rg "resolveModelString\|glm-" apps/ui/src --type ts

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Check the complete context around lines 185-191
sed -n '180,195p' apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx

Repository: AutoMaker-Org/automaker
Length of output: 466
🏁 Script executed:
# Check if resolveModelString is imported or used in the file
rg "resolveModelString\|@automaker/model-resolver" apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Check what the selectedModel value might be at that point
sed -n '1,200p' apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx | grep -A 10 -B 10 "selectedModel"

Repository: AutoMaker-Org/automaker
Length of output: 2552
Model ID case mismatch between frontend and backend ZAI models will cause selection failures.
The frontend ZAI_MODELS (from model-constants.ts) uses mixed-case IDs like 'GLM-4-Plus', 'GLM-4.7', 'GLM-4.6V', while the backend ZAI_MODELS (from libs/provider-zai/src/zai-query-service.ts) uses lowercase IDs like 'glm-4-plus', 'glm-4.7', 'glm-4.6v'. When the UI-selected model is passed to the backend, this case mismatch will prevent the backend from finding and processing the correct model.
Normalize the frontend ZAI model IDs to lowercase or apply case conversion when the model is sent to the backend.
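A minimal sketch of that normalization at the UI-to-server boundary. The helper name is illustrative; the mixed-case IDs are the ones the frontend constants file uses.

```typescript
// Frontend constants use mixed-case IDs; the backend expects lowercase.
const FRONTEND_ZAI_IDS = ['GLM-4-Plus', 'GLM-4.7', 'GLM-4.6V'];

// Illustrative helper: normalize once, right where the model is sent to the server.
function toBackendModelId(frontendId: string): string {
  return frontendId.toLowerCase();
}

console.log(FRONTEND_ZAI_IDS.map(toBackendModelId));
// → [ 'glm-4-plus', 'glm-4.7', 'glm-4.6v' ]
```

Alternatively, make the frontend constants lowercase at the source and do a case-insensitive compare in the selector, so no conversion is needed at the boundary at all.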
// STABILITY MODE: Disable security features that might kill the GPU process during heavy load
app.commandLine.appendSwitch('ignore-gpu-blacklist');
app.commandLine.appendSwitch('disable-gpu-process-crash-limit');
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('no-sandbox');
Critical: Disabling sandbox creates significant security vulnerabilities.
The no-sandbox and disable-gpu-sandbox flags disable Chromium's process sandboxing, which is a critical security boundary in Electron. This exposes the application to:
- Remote code execution if any loaded content (including web views or external resources) is compromised
- Privilege escalation attacks
- Memory corruption exploits
Additionally, line 50 is a duplicate of line 49.
🔒 Recommended: Remove sandbox-disabling flags
// STABILITY MODE: Disable security features that might kill the GPU process during heavy load
app.commandLine.appendSwitch('ignore-gpu-blacklist');
app.commandLine.appendSwitch('disable-gpu-process-crash-limit');
-app.commandLine.appendSwitch('disable-gpu-sandbox');
-app.commandLine.appendSwitch('disable-gpu-sandbox');
-app.commandLine.appendSwitch('no-sandbox');

If GPU stability is truly an issue, consider keeping only disable-gpu-process-crash-limit and ignore-gpu-blacklist while preserving the sandbox.
📝 Committable suggestion
// STABILITY MODE: Disable security features that might kill the GPU process during heavy load
app.commandLine.appendSwitch('ignore-gpu-blacklist');
app.commandLine.appendSwitch('disable-gpu-process-crash-limit');
🤖 Prompt for AI Agents
In @apps/ui/src/main.ts around lines 46 - 51, The code disables Chromium
sandboxing and duplicates a GPU flag: remove calls that weaken security by
deleting the app.commandLine.appendSwitch('no-sandbox') and the duplicate
app.commandLine.appendSwitch('disable-gpu-sandbox') lines, leaving only safe
GPU-stability flags (retain app.commandLine.appendSwitch('ignore-gpu-blacklist')
and app.commandLine.appendSwitch('disable-gpu-process-crash-limit')); ensure any
rationale is documented in a comment and avoid reintroducing sandbox-disabling
switches elsewhere in the startup sequence.
// FORCE BYPASS FOR LOCAL Z.AI DEV - Always authenticate
// This removes the login requirement for local development
useAuthStore.getState().setAuthState({ isAuthenticated: true, authChecked: true });
logger.info('[Z.AI DEV] Authentication bypassed for local development');
return;

// --- Original auth logic below (disabled) ---
Critical: Unconditional authentication bypass will allow unauthenticated access in production.
This code bypasses authentication for all users without any environment check. There is no guard like process.env.NODE_ENV === 'development' or a feature flag—this will run in production builds and grant authenticated access to everyone.
Additionally, the return on line 268 makes all original authentication logic (lines 271–367) unreachable dead code.
If this is intended for local development only, it must be guarded by an environment check and should not be merged to main. Consider using a build-time flag that is stripped in production builds.
Suggested approach if dev bypass is needed
try {
- // FORCE BYPASS FOR LOCAL Z.AI DEV - Always authenticate
- // This removes the login requirement for local development
- useAuthStore.getState().setAuthState({ isAuthenticated: true, authChecked: true });
- logger.info('[Z.AI DEV] Authentication bypassed for local development');
- return;
-
- // --- Original auth logic below (disabled) ---
+ // Optional: Dev-only auth bypass (ensure VITE_DEV_AUTH_BYPASS is NEVER set in production builds)
+ if (import.meta.env.DEV && import.meta.env.VITE_DEV_AUTH_BYPASS === 'true') {
+ useAuthStore.getState().setAuthState({ isAuthenticated: true, authChecked: true });
+ logger.warn('[DEV] Authentication bypassed - do not use in production');
+ return;
+ }
+
// Initialize API key for Electron mode
  await initApiKey();

🤖 Prompt for AI Agents
In @apps/ui/src/routes/__root.tsx around lines 264 - 270, The unconditional
authentication bypass (useAuthStore.getState().setAuthState({ isAuthenticated:
true, authChecked: true })) followed by an immediate return makes the original
auth flow unreachable and will enable unauthenticated access in production; wrap
this bypass in a strict environment or feature-flag guard (e.g., check NODE_ENV
=== 'development' or a dedicated BUILD_DEV_BYPASS flag) so it only runs in
local/dev builds, remove or gate the unconditional return so the original logic
(the code after the bypass) executes in non-dev environments, and add a clear
log message via logger.info that includes the active guard/flag name to make the
dev-only behavior explicit.
private generateToken(apiKey: string): string {
  try {
    const [id, secret] = apiKey.split('.');
    if (!id || !secret) return apiKey;

    const payload = {
      api_key: id,
      exp: Date.now() + 3600 * 1000,
      timestamp: Date.now(),
    };

    const token = jwt.sign(payload, secret, {
      algorithm: 'HS256',
      header: {
        alg: 'HS256',
        sign_type: 'SIGN',
      },
    } as any);

    return token;
  } catch (error) {
    logger.error('Failed to generate Z.AI JWT token', error);
    return apiKey;
  }
}
🧩 Analysis chain
🏁 Script executed:
# First, verify the file exists and check the code
find . -type f -name "zai-provider.ts" | head -5

Repository: AutoMaker-Org/automaker
Length of output: 150
🏁 Script executed:
# Check the actual code at the specified lines
if [ -f "libs/provider-zai/src/zai-provider.ts" ]; then
echo "=== File content (lines 100-140) ==="
sed -n '100,140p' "libs/provider-zai/src/zai-provider.ts"
fi

Repository: AutoMaker-Org/automaker
Length of output: 917
🏁 Script executed:
# Search for any Z.AI API documentation or comments in the codebase
rg -i "z\.ai|zai.*api" --type ts --type tsx -A 5 -B 5 | head -100

Repository: AutoMaker-Org/automaker
Length of output: 94
🏁 Script executed:
# Check for other JWT implementations in the codebase for comparison
rg "jwt\.sign\|exp.*Date\.now\|exp.*\/" --type ts --type tsx -B 3 -A 3

Repository: AutoMaker-Org/automaker
Length of output: 199
🌐 Web query:
Z.AI GLM API JWT authentication exp timestamp format specification
💡 Result:
Use the standard JWT "NumericDate" format (seconds since the Unix epoch, 1970-01-01T00:00:00Z) for the exp claim — i.e., an integer/number of seconds (not milliseconds). (datatracker.ietf.org)
Z.ai’s API uses standard HTTP Bearer authentication; when a JWT exp is required, follow the RFC‑7519 NumericDate convention above (watch out for languages/SDKs that return milliseconds). (docs.z.ai)
Citations:
- 1: https://datatracker.ietf.org/doc/rfc7519/?utm_source=openai
- 2: https://docs.z.ai/api-reference/introduction?utm_source=openai
🏁 Script executed:
# Check other JWT implementations in the codebase for correct pattern
rg "jwt\.sign" --type ts -B 3 -A 3

Repository: AutoMaker-Org/automaker
Length of output: 2465
🏁 Script executed:
# Look for any Z.AI API documentation or tests that might clarify timestamp format
rg -i "timestamp" ./libs/provider-zai --type ts -B 2 -A 2

Repository: AutoMaker-Org/automaker
Length of output: 367
🏁 Script executed:
# Check test_zhipu_api.ts to understand the useMs parameter
cat -n apps/server/src/test_zhipu_api.ts | head -100

Repository: AutoMaker-Org/automaker
Length of output: 2231
🏁 Script executed:
# Check the direct-zai-test.ts to understand how timestamp is being used
cat -n apps/server/tests/verification/direct-zai-test.ts | head -50

Repository: AutoMaker-Org/automaker
Length of output: 1706
🏁 Script executed:
# Let's check if there are any comments or docs about Z.ai API format requirements in the codebase
rg -i "zhipu|z\.ai" --type ts -B 3 -A 3 | grep -i "millis\|second\|exp\|timestamp" | head -40

Repository: AutoMaker-Org/automaker
Length of output: 2947
🏁 Script executed:
# Check if apps/server version has any differences from libs version
diff -u <(sed -n '100,135p' apps/server/src/providers/zai-provider.ts) <(sed -n '100,135p' libs/provider-zai/src/zai-provider.ts) || true

Repository: AutoMaker-Org/automaker
Length of output: 154
🏁 Script executed:
# Check the apps/server version directly
sed -n '100,140p' apps/server/src/providers/zai-provider.ts

Repository: AutoMaker-Org/automaker
Length of output: 1181
🏁 Script executed:
# Verify if the Zhipu SDK documentation or comments confirm milliseconds are required
rg "Zhipu\|official.*SDK" --type ts -B 3 -A 3

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Let's verify if the libs version might be intentionally different from apps version
git log --oneline -20 libs/provider-zai/src/zai-provider.ts 2>/dev/null | head -10 || echo "No git history available"

Repository: AutoMaker-Org/automaker
Length of output: 127
🏁 Script executed:
# Check if there are any recent changes or PRs discussing the TTL difference
rg "3600|210|TTL" libs/provider-zai/src/zai-provider.ts -B 2 -A 2

Repository: AutoMaker-Org/automaker
Length of output: 190
🏁 Script executed:
# Check the Z.ai API documentation reference more carefully - search for any inline docs
rg -i "z\.ai\|zhipu\|glm.*4\.7" --type ts libs/provider-zai/ -B 2 -A 2

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Check if there's a README or integration guide for Z.ai
find . -name "README*" -o -name "*.md" | xargs grep -l "z\.ai\|zhipu\|zai" 2>/dev/null | head -5

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Let's check if there's any integration test that validates the current implementation works
rg -i "zai.*test\|generateToken" libs/provider-zai/ --type ts

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# Check if there's a package.json or documentation in libs/provider-zai
ls -la libs/provider-zai/

Repository: AutoMaker-Org/automaker
Length of output: 355
🏁 Script executed:
# Let me look for any test files that might validate the token generation
find libs/provider-zai -name "*.test.ts" -o -name "*.spec.ts"

Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
# One more check - let's see if the ZAI constants or URL references help clarify
rg "ZAI_API_URL\|api\.z\.ai\|zhipu\.ai" libs/provider-zai/ -B 2 -A 2

Repository: AutoMaker-Org/automaker
Length of output: 49
Fix JWT exp claim to use seconds per RFC 7519 and Z.ai API specification.
Z.ai API documentation explicitly requires following RFC 7519 NumericDate format (seconds since Unix epoch), not milliseconds. Date.now() returns milliseconds, so the current code sets expiration ~41 days in the future instead of 1 hour. The same issue affects the timestamp field if the API expects it in seconds.
🐛 Proposed fix
private generateToken(apiKey: string): string {
try {
const [id, secret] = apiKey.split('.');
if (!id || !secret) return apiKey;
+ const nowSeconds = Math.floor(Date.now() / 1000);
const payload = {
api_key: id,
- exp: Date.now() + 3600 * 1000,
- timestamp: Date.now(),
+ exp: nowSeconds + 3600,
+ timestamp: nowSeconds,
      };

🤖 Prompt for AI Agents
In @libs/provider-zai/src/zai-provider.ts around lines 108 - 132, The
generateToken method sets the JWT exp and timestamp using Date.now()
(milliseconds); change both to NumericDate (seconds) per RFC 7519 and Z.ai spec
by computing const now = Math.floor(Date.now() / 1000) and using exp: now + 3600
and timestamp: now before calling jwt.sign (in the generateToken function that
builds payload and calls jwt.sign); keep the rest of the signing options and
existing error handling intact.
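The magnitude of the bug is easy to check numerically: the intended TTL is one hour, but an offset of 3600 * 1000 read as RFC 7519 NumericDate seconds is roughly 41.7 days.

```typescript
// RFC 7519 NumericDate is seconds since the Unix epoch.
const nowSeconds = Math.floor(Date.now() / 1000);

// Correct: one hour from now, in seconds.
const correctExp = nowSeconds + 3600;

// Buggy offset: 3600 * 1000 was meant as milliseconds, but read as
// NumericDate seconds it is ~41.7 days.
const buggyOffsetSeconds = 3600 * 1000;

console.log(correctExp - nowSeconds);     // → 3600
console.log(buggyOffsetSeconds / 86_400); // → ~41.67 (days)
```

Servers that validate `exp` strictly may also reject a raw `Date.now()` value outright, since as seconds it lands tens of thousands of years in the future.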
} as const;

- export type ModelAlias = keyof typeof CLAUDE_MODEL_MAP;
+ export type ModelAlias = keyof typeof CLAUDE_MODEL_MAP | keyof typeof ZAI_MODEL_MAP;
Forward reference to ZAI_MODEL_MAP before declaration.
ZAI_MODEL_MAP is used in the type definition at line 69, but it's declared later at lines 81-89. TypeScript const declarations are not hoisted like var, so this will cause a compile-time error ("Block-scoped variable 'ZAI_MODEL_MAP' used before its declaration").
Move ZAI_MODEL_MAP definition before this type alias.
🐛 Proposed fix
+/**
+ * Model alias mapping for Z.AI models
+ */
+export const ZAI_MODEL_MAP = {
+ 'glm-4-plus': 'glm-4-plus',
+ 'glm-4.7': 'glm-4.7',
+ 'glm-4.6': 'glm-4.6',
+ 'glm-4.5': 'glm-4.5',
+ 'glm-4.5-air': 'glm-4.5-air',
+ 'glm-4.5-flash': 'glm-4.5-flash',
+ 'glm-4.6v': 'glm-4.6v',
+} as const;
+
export type ModelAlias = keyof typeof CLAUDE_MODEL_MAP | keyof typeof ZAI_MODEL_MAP;
export type CodexModelId = (typeof CODEX_MODEL_MAP)[keyof typeof CODEX_MODEL_MAP];
/**
* AgentModel - Alias for ModelAlias for backward compatibility
* Represents available models across providers
*/
export type AgentModel = ModelAlias | CodexModelId;
-
-/**
- * Model alias mapping for Z.AI models
- */
-export const ZAI_MODEL_MAP = {
- 'glm-4-plus': 'glm-4-plus',
- 'glm-4.7': 'glm-4.7',
- 'glm-4.6': 'glm-4.6',
- 'glm-4.5': 'glm-4.5',
- 'glm-4.5-air': 'glm-4.5-air',
- 'glm-4.5-flash': 'glm-4.5-flash',
- 'glm-4.6v': 'glm-4.6v',
-} as const;

🤖 Prompt for AI Agents
In @libs/types/src/model.ts at line 69, The type alias ModelAlias references
ZAI_MODEL_MAP before it is declared, causing a block-scoped forward-reference
error; fix by moving the ZAI_MODEL_MAP const declaration so it appears above the
export type ModelAlias (ensure CLAUDE_MODEL_MAP and ZAI_MODEL_MAP are both
defined before using them in ModelAlias), then re-run TypeScript compile to
verify no more "used before its declaration" errors.
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In @apps/server/src/index.ts:
- Around line 267-270: The authenticateWebSocket function currently
unconditionally returns true, bypassing all WebSocket authentication; revert
this by restoring the original authentication logic (validate
cookies/headers/JWT/session) or, if you need a local-dev bypass, gate it
explicitly behind an environment check such as process.env.NODE_ENV ===
'development' or a dedicated flag like process.env.ZAI_DEV === 'true'; use
existing auth utilities used elsewhere in the app (e.g., any JWT/session
verification helpers) to validate the incoming import('http').IncomingMessage
and only return true on successful verification, otherwise return false and log
the failure.
- Around line 111-112: The stray comment "Register debug routes immediately"
indicates missing implementation; either remove that comment or add real debug
route registration. To implement, create a createDebugRoutes() function (or
equivalent router) following the same pattern as other handlers (e.g.,
instantiate an Express Router, attach any debug endpoints) and register it with
app.use('/api/debug', createDebugRoutes()) in apps/server/src/index.ts where the
comment is; if debug routes are not required, simply delete the comment to avoid
confusion. Ensure createDebugRoutes is exported/visible where registered and
matches existing middleware patterns (error handling, auth) used by other route
modules.
In @apps/server/src/providers/zai-provider.ts:
- Around line 383-385: The info-level log at the tool execution site exposes
potentially sensitive data by logging full args; change the call to either use
logger.debug (so it’s not logged in production) or sanitize/redact sensitive
fields in args before logging, keeping the descriptive context "Executing tool:
{toolCall.name}" and still calling zaiTools.executeTool(toolCall.name, args);
update the line referencing logger.info, and ensure any sanitization logic runs
on the args object prior to logging and prior to passing the original or
redacted args to zaiTools.executeTool as appropriate.
- Around line 254-262: The hardcoded max_tokens: 8192 in the
turnClient.chat.completions.create call can exceed model-specific limits (e.g.,
glm-4.5-flash has maxOutputTokens 4096); update the code in zai-provider where
you call turnClient.chat.completions.create (using apiModel, messages, tools) to
determine the model's allowed output tokens (from the model metadata or a
maintained modelLimits map keyed by apiModel, e.g., maxOutputTokens) and set
max_tokens to the model limit (or a safe minimum of your configured cap and the
model limit) instead of 8192, or omit max_tokens to let the API default—ensure
the logic falls back safely if metadata is unavailable.
- Around line 113-121: The initial OpenAI client created in initializeClient
uses the raw ZAI_API_KEY which will fail if the provider requires a signed JWT
(as handled in executeQuery); update initializeClient to detect when ZAI_API_KEY
is an ID.Secret-style key (e.g., contains a dot) and generate the same signed
JWT used in executeQuery, then create this.client with that JWT-based
authorization (mirror the header/token format executeQuery uses) so the first
API call is authenticated; locate initializeClient and reuse the JWT generation
logic from executeQuery to produce the token before instantiating the OpenAI
client.
🧹 Nitpick comments (7)
apps/server/src/index.ts (3)
9-11: Consider dynamic core detection or document the hardware assumption.

The hardcoded value of 24 assumes specific hardware. On machines with fewer cores, this may cause unnecessary thread contention. Consider using os.cpus().length for dynamic sizing, or document this as a tunable default.

+import os from 'os';
+
 // CPU MAXIMIZATION: Set thread pool size equal to core count BEFORE any libs load
-// This ensures the server process (separate from Electron) gets the full thread pool
-process.env.UV_THREADPOOL_SIZE = process.env.UV_THREADPOOL_SIZE || '24';
+// This ensures the server process gets an appropriate thread pool for the hardware
+process.env.UV_THREADPOOL_SIZE = process.env.UV_THREADPOOL_SIZE || String(Math.min(os.cpus().length, 128));
38-38: Remove orphaned comment.

This comment appears to be a leftover artifact and doesn't relate to the surrounding import statements.

-// ... (inside createServer or similar)
633-641: Consider consistent handling for unhandledRejection.

uncaughtException triggers graceful shutdown, but unhandledRejection only logs. This asymmetry could leave the server running in an inconsistent state after a critical promise rejection. Consider either triggering shutdown for unhandled rejections or documenting why this difference is intentional.

 process.on('unhandledRejection', (reason, promise) => {
   logger.error('Unhandled rejection at:', promise, 'reason:', reason);
+  // Treat unhandled rejections as fatal to match uncaughtException behavior
+  gracefulShutdown('unhandledRejection');
 });

apps/server/src/providers/zai-provider.ts (4)
1-24: Missing event emitter integration per coding guidelines.

The coding guidelines for apps/server/src/**/*.{ts,tsx} require using createEventEmitter() from lib/events.ts for server operations that stream to frontend via WebSocket. This provider yields messages directly but doesn't emit events for WebSocket streaming.

Consider importing and using the event emitter:

import { createEventEmitter } from '../lib/events.js';

Then emit events within executeQuery for tool executions and streaming updates. Based on coding guidelines.
48-101: Model definitions duplicated from shared library.

ZAI_MODELS_DEF is already defined in libs/provider-zai/src/zai-provider.ts but with different content:
- The shared library includes the glm-4-plus model, which is missing here
- This file adds extra fields (thinkingMode, customSystemPrompt, toolCallingNotes)

Consider importing from the shared library and extending with provider-specific fields:

import { ZAI_MODELS_DEF as BASE_MODELS } from '@automaker/provider-zai';

const ZAI_MODELS_DEF: ModelConfig[] = BASE_MODELS.map(m => ({
  ...m,
  thinkingMode: getThinkingMode(m.id),
  customSystemPrompt: getSystemPrompt(m.id),
  // ...
}));

This ensures model additions in the shared library are automatically available here.
239-247: New OpenAI client created on every turn.

Creating a new OpenAI client instance on every iteration is inefficient. While token refresh is needed due to the 3.5-minute TTL, consider caching the client and only recreating when the token is near expiration.

Suggested optimization

private tokenExpiry: number = 0;
private tokenClient: OpenAI | null = null;

private getClientWithFreshToken(): OpenAI {
  const apiKey = process.env.ZAI_API_KEY || '';
  const now = Date.now();

  // Refresh if within 30 seconds of expiry
  if (this.tokenClient && now < this.tokenExpiry - 30000) {
    return this.tokenClient;
  }

  if (apiKey.includes('.')) {
    const token = this.generateToken(apiKey);
    this.tokenExpiry = now + 210 * 1000;
    this.tokenClient = new OpenAI({ apiKey: token, baseURL: ZAI_API_URL });
  }

  return this.tokenClient ?? this.client!;
}
254-262: No timeout or abort signal for API calls.

The streaming API call has no timeout configuration. If Z.AI becomes unresponsive, the request could hang indefinitely. Consider adding a timeout:

```ts
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 120000); // 2 min timeout

try {
  const stream = await turnClient.chat.completions.create({
    model: apiModel,
    messages,
    max_tokens: modelDef?.maxOutputTokens ?? 8192,
    stream: true,
    tools: tools?.length ? tools : undefined,
    tool_choice: tools?.length ? 'auto' : undefined,
  }, { signal: controller.signal });
  // ... process stream
} finally {
  clearTimeout(timeout);
}
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/server/src/index.ts
- apps/server/src/providers/zai-provider.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always import from shared packages (@automaker/*), never from old relative paths
Files:
- apps/server/src/providers/zai-provider.ts
- apps/server/src/index.ts
apps/server/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `createEventEmitter()` from `lib/events.ts` for all server operations to emit events that stream to frontend via WebSocket
Files:
- apps/server/src/providers/zai-provider.ts
- apps/server/src/index.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names
Files:
- apps/server/src/providers/zai-provider.ts
- apps/server/src/index.ts
🧬 Code graph analysis (2)
apps/server/src/providers/zai-provider.ts (2)
libs/provider-zai/src/zai-provider.ts (1)
- ZAI_MODELS_DEF (34-84)

apps/server/src/providers/zai-tools.ts (1)
- ZaiTools (182-341)
apps/server/src/index.ts (1)
apps/server/src/services/settings-service.ts (1)
- SettingsService (107-797)
🔇 Additional comments (7)
apps/server/src/index.ts (3)
177-177: LGTM!

The `SettingsService` instantiation correctly matches the constructor signature and uses an appropriate fallback for the settings filename.

222-222: LGTM!

Correctly injects the `settingsService` dependency into `createSetupRoutes`, aligning with the updated route factory signature.
588-627: LGTM! Well-structured graceful shutdown mechanism.

The shutdown implementation properly:
- Sets a hard timeout with `unref()` to prevent process hang
- Cleans up resources in the correct order (services → WebSocket → HTTP)
- Uses optional chaining for `closeAllConnections` compatibility

apps/server/src/providers/zai-provider.ts (4)
123-151: Token generation logic looks correct.

The JWT generation follows the Zhipu AI SDK format with HS256 signing and appropriate TTL. The fallback to the raw key on error is good defensive coding.

Minor: `Math.round(Date.now())` on line 130 is unnecessary since `Date.now()` already returns an integer.
271-328: Stream processing handles multiple tool calls correctly.

The streaming logic properly accumulates partial tool call data across chunks and handles the index-based array for multiple simultaneous tool calls. The reasoning content extraction for GLM-4 specific fields is also well implemented.
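The index-based accumulation praised here can be illustrated with a small self-contained sketch (field names follow the OpenAI Chat Completions streaming format; the helper itself is hypothetical, not the reviewed code):

```typescript
interface ToolCallDelta {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}

// Merge streamed tool-call deltas into complete calls, keyed by their index
function accumulateToolCalls(
  chunks: ToolCallDelta[][]
): { id: string; name: string; arguments: string }[] {
  const calls: { id: string; name: string; arguments: string }[] = [];
  for (const deltas of chunks) {
    for (const d of deltas) {
      if (!calls[d.index]) calls[d.index] = { id: '', name: '', arguments: '' };
      const slot = calls[d.index];
      if (d.id) slot.id = d.id;
      if (d.function?.name) slot.name += d.function.name;
      if (d.function?.arguments) slot.arguments += d.function.arguments; // partial JSON accumulates
    }
  }
  return calls;
}
```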
422-433: LGTM!

The installation detection correctly identifies Z.AI as a cloud service and properly checks for API key presence.

438-444: LGTM!

The model mapping is straightforward. The tier cast on line 442 is technically redundant since `ModelConfig.tier` already has the correct type, but it doesn't cause issues.
apps/server/src/index.ts
Outdated
```ts
// Register debug routes immediately
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's read the file at the specified lines with context
wc -l apps/server/src/index.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 95

🏁 Script executed:

```shell
# Read around lines 111-112 with context
sed -n '100,130p' apps/server/src/index.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 996

🏁 Script executed:

```shell
# Search for createDebugRoutes in the codebase
rg "createDebugRoutes" --type ts -C 3
```

Repository: AutoMaker-Org/automaker
Length of output: 49

🏁 Script executed:

```shell
# Check if there's a lib/events.ts file and what debug-related functions exist
fd "debug" --type f -e ts -e tsx | head -20
```

Repository: AutoMaker-Org/automaker
Length of output: 49

🏁 Script executed:

```shell
# Search for route registration patterns in the file
rg "app\.(use|get|post)" apps/server/src/index.ts -n
```

Repository: AutoMaker-Org/automaker
Length of output: 2087

🏁 Script executed:

```shell
# Check if "debug" is mentioned anywhere else in the server
rg -i "debug.*route" apps/server/src --type ts
```

Repository: AutoMaker-Org/automaker
Length of output: 128

🏁 Script executed:

```shell
# Look at the full initialization section of index.ts to understand the structure
sed -n '80,200p' apps/server/src/index.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 4595
Remove incomplete debug routes comment or implement the feature.

The comment "Register debug routes immediately" at line 111 has no corresponding implementation. The `createDebugRoutes()` function does not exist in the codebase, and no debug routes are registered. Either implement debug route registration following the pattern of other route handlers (e.g., `app.use('/api/debug', createDebugRoutes())`), or remove the comment if debug routes are not needed.
🤖 Prompt for AI Agents
In @apps/server/src/index.ts around lines 111 - 112, The stray comment "Register
debug routes immediately" indicates missing implementation; either remove that
comment or add real debug route registration. To implement, create a
createDebugRoutes() function (or equivalent router) following the same pattern
as other handlers (e.g., instantiate an Express Router, attach any debug
endpoints) and register it with app.use('/api/debug', createDebugRoutes()) in
apps/server/src/index.ts where the comment is; if debug routes are not required,
simply delete the comment to avoid confusion. Ensure createDebugRoutes is
exported/visible where registered and matches existing middleware patterns
(error handling, auth) used by other route modules.
```ts
// Enforce Z.AI priority by disabling Claude if Z.AI key is present
if (process.env.ZAI_API_KEY) {
  if (process.env.ANTHROPIC_API_KEY) {
    logger.info(
      'Detected Z.AI Key: Explicitly disabling Claude provider by unsetting ANTHROPIC_API_KEY'
    );
    delete process.env.ANTHROPIC_API_KEY;
  }
}

// Load providers dynamically to ensure they see the updated environment
await import('@automaker/provider-claude');
await import('@automaker/provider-zai');
```
Side effect: mutating `process.env` may cause misleading warnings.

The `ANTHROPIC_API_KEY` check at lines 86-103 runs before this deletion occurs, so the warning about missing Claude authentication won't display when Z.AI is configured. However, mutating `process.env.ANTHROPIC_API_KEY` could confuse other parts of the application that may have cached or already read the value.

Consider either:
- Moving the Z.AI priority check before the warning block (lines 86-103), or
- Using a runtime flag instead of mutating `process.env`
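A runtime-flag alternative to the env mutation could look like this minimal sketch (the function name and return values are illustrative, not taken from the codebase):

```typescript
// Decide the active provider once at startup instead of deleting env vars
function resolveActiveProvider(
  env: Record<string, string | undefined>
): 'zai' | 'claude' | null {
  if (env.ZAI_API_KEY) return 'zai'; // Z.AI takes priority when configured
  if (env.ANTHROPIC_API_KEY) return 'claude';
  return null;
}
```

Downstream code would consult this flag rather than re-reading `process.env`.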
```ts
function authenticateWebSocket(request: import('http').IncomingMessage): boolean {
  const url = new URL(request.url || '', `http://${request.headers.host}`);

  // Convert URL search params to query object
  const query: Record<string, string | undefined> = {};
  url.searchParams.forEach((value, key) => {
    query[key] = value;
  });

  // Parse cookies from header
  const cookieHeader = request.headers.cookie;
  const cookies = cookieHeader ? cookie.parse(cookieHeader) : {};

  // Use shared authentication logic for standard auth methods
  if (
    checkRawAuthentication(
      request.headers as Record<string, string | string[] | undefined>,
      query,
      cookies
    )
  ) {
    return true;
  }

  // Additionally check for short-lived WebSocket connection token (WebSocket-specific)
  const wsToken = url.searchParams.get('wsToken');
  if (wsToken && validateWsConnectionToken(wsToken)) {
    return true;
  }

  return false;
  // FORCE BYPASS FOR LOCAL Z.AI DEV - Always authenticate WebSocket connections
  return true;
}
```
Critical: WebSocket authentication completely bypassed.

This function unconditionally returns `true`, completely disabling WebSocket authentication for all connections. The comment indicates this is for "local Z.AI dev" but there's no environment check to limit this bypass.

This is a critical security vulnerability that allows unauthenticated access to all WebSocket endpoints, including terminal access.
🔒 Proposed fix: Gate bypass behind environment check or restore original logic

```diff
 function authenticateWebSocket(request: import('http').IncomingMessage): boolean {
-  // FORCE BYPASS FOR LOCAL Z.AI DEV - Always authenticate WebSocket connections
-  return true;
+  // Allow bypass only in explicit local development mode
+  if (process.env.DISABLE_WS_AUTH === 'true' && process.env.NODE_ENV === 'development') {
+    return true;
+  }
+
+  // Check for valid authentication (API key, session token, or cookie)
+  const apiKey = request.headers['x-api-key'] as string | undefined;
+  const sessionToken = request.headers['x-session-token'] as string | undefined;
+  const cookies = cookie.parse(request.headers.cookie || '');
+
+  return checkRawAuthentication(apiKey, sessionToken, cookies);
 }
```

🤖 Prompt for AI Agents
In @apps/server/src/index.ts around lines 267 - 270, The authenticateWebSocket
function currently unconditionally returns true, bypassing all WebSocket
authentication; revert this by restoring the original authentication logic
(validate cookies/headers/JWT/session) or, if you need a local-dev bypass, gate
it explicitly behind an environment check such as process.env.NODE_ENV ===
'development' or a dedicated flag like process.env.ZAI_DEV === 'true'; use
existing auth utilities used elsewhere in the app (e.g., any JWT/session
verification helpers) to validate the incoming import('http').IncomingMessage
and only return true on successful verification, otherwise return false and log
the failure.
```ts
private initializeClient() {
  const apiKey = process.env.ZAI_API_KEY;
  if (apiKey) {
    this.client = new OpenAI({
      apiKey,
      baseURL: ZAI_API_URL,
    });
  }
}
```
Initial client may fail authentication if JWT is required.

The constructor creates a client with the raw API key, but the JWT-based authentication in `executeQuery` (lines 241-247) suggests Z.AI requires signed tokens when the key is in ID.Secret format. The initial `this.client` would fail on first API call.

Consider generating the JWT token during initialization as well:
Suggested fix

```diff
 private initializeClient() {
   const apiKey = process.env.ZAI_API_KEY;
   if (apiKey) {
+    const authToken = apiKey.includes('.') ? this.generateToken(apiKey) : apiKey;
     this.client = new OpenAI({
-      apiKey,
+      apiKey: authToken,
       baseURL: ZAI_API_URL,
     });
   }
 }
```

🤖 Prompt for AI Agents
In @apps/server/src/providers/zai-provider.ts around lines 113 - 121, The
initial OpenAI client created in initializeClient uses the raw ZAI_API_KEY which
will fail if the provider requires a signed JWT (as handled in executeQuery);
update initializeClient to detect when ZAI_API_KEY is an ID.Secret-style key
(e.g., contains a dot) and generate the same signed JWT used in executeQuery,
then create this.client with that JWT-based authorization (mirror the
header/token format executeQuery uses) so the first API call is authenticated;
locate initializeClient and reuse the JWT generation logic from executeQuery to
produce the token before instantiating the OpenAI client.
```ts
const messages: OpenAI.ChatCompletionMessageParam[] = [];

// Get model-specific configuration
const modelDef = ZAI_MODELS_DEF.find((m) => m.id === model);
```
Variable shadowing: `modelDef` declared twice.

`modelDef` is declared at line 194 for building the system prompt, then re-declared at line 235 inside the loop. The outer declaration becomes unused after the first iteration since the inner one shadows it.

Suggested fix

```diff
-    // Find model definition to ensure correct casing (GLM-4.7)
-    const modelDef = ZAI_MODELS_DEF.find((m) => m.id === model);
-    const apiModel = modelDef ? modelDef.modelString : model;
+    // Use modelDef from outer scope for correct casing (GLM-4.7)
+    const apiModel = modelDef ? modelDef.modelString : model;
```

Also applies to: 235-235
```ts
const stream = await turnClient.chat.completions.create({
  model: apiModel,
  messages,
  max_tokens: 8192, // Increased to support large file operations
  stream: true,
  // Only pass tools if we have them
  tools: tools && tools.length > 0 ? tools : undefined,
  tool_choice: tools && tools.length > 0 ? 'auto' : undefined,
});
```
Hardcoded max_tokens ignores model-specific limits.

`max_tokens: 8192` is hardcoded, but `glm-4.5-flash` has `maxOutputTokens: 4096`. This could cause API errors or unexpected truncation.

Suggested fix

```diff
 const stream = await turnClient.chat.completions.create({
   model: apiModel,
   messages,
-  max_tokens: 8192, // Increased to support large file operations
+  max_tokens: modelDef?.maxOutputTokens ?? 8192,
   stream: true,
   // Only pass tools if we have them
   tools: tools && tools.length > 0 ? tools : undefined,
   tool_choice: tools && tools.length > 0 ? 'auto' : undefined,
 });
```
});🤖 Prompt for AI Agents
In @apps/server/src/providers/zai-provider.ts around lines 254 - 262, The
hardcoded max_tokens: 8192 in the turnClient.chat.completions.create call can
exceed model-specific limits (e.g., glm-4.5-flash has maxOutputTokens 4096);
update the code in zai-provider where you call
turnClient.chat.completions.create (using apiModel, messages, tools) to
determine the model's allowed output tokens (from the model metadata or a
maintained modelLimits map keyed by apiModel, e.g., maxOutputTokens) and set
max_tokens to the model limit (or a safe minimum of your configured cap and the
model limit) instead of 8192, or omit max_tokens to let the API default—ensure
the logic falls back safely if metadata is unavailable.
```ts
supportsFeature(feature: string): boolean {
  const supportedFeatures = ['text', 'vision', 'tools', 'json_mode'];
  return supportedFeatures.includes(feature);
}
```
Feature support check doesn't account for model-specific capabilities.

This method returns `true` for `'vision'` unconditionally, but `glm-4.5-flash` has `supportsVision: false`. This could lead to incorrect behavior if callers rely on this method to determine model capabilities.

Consider either:
- Making this method model-aware: `supportsFeature(feature: string, modelId?: string)`
- Documenting that this returns provider-level capabilities (any model supports this feature)
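A model-aware variant of the first option might look like this sketch (the inlined capability table mirrors the model definitions quoted earlier in this review; the function shape is an assumption):

```typescript
interface ModelCaps {
  id: string;
  supportsVision: boolean;
  supportsTools: boolean;
}

const MODEL_CAPS: ModelCaps[] = [
  { id: 'glm-4.7', supportsVision: true, supportsTools: true },
  { id: 'glm-4.6', supportsVision: true, supportsTools: true },
  { id: 'glm-4.5-flash', supportsVision: false, supportsTools: true },
];

function supportsFeature(feature: string, modelId?: string): boolean {
  const providerLevel = ['text', 'vision', 'tools', 'json_mode'].includes(feature);
  if (!modelId) return providerLevel; // provider-level: some model supports it
  const model = MODEL_CAPS.find((m) => m.id === modelId);
  if (!model) return false;
  if (feature === 'vision') return model.supportsVision;
  if (feature === 'tools') return model.supportsTools;
  return providerLevel;
}
```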
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @apps/server/src/providers/zai-provider.ts:
- Around line 48-101: ZAI_MODELS_DEF is missing the 'glm-4-plus' model present
in the external provider; add a new model object to the ZAI_MODELS_DEF array
with id: 'glm-4-plus' (and corresponding name like "GLM 4 Plus"), modelString:
'GLM-4-Plus', provider: 'zai', and include the same canonical properties used by
other entries (description, contextWindow, maxOutputTokens, supportsVision,
supportsTools, tier, thinkingMode, customSystemPrompt, toolCallingNotes, and
default as appropriate) so the provider definitions match the external
libs/provider-zai implementation (use values consistent with glm-4.7/glm-4.6
where applicable).
- Around line 235-236: The inner redeclaration "const modelDef =
ZAI_MODELS_DEF.find((m) => m.id === model);" shadows the outer modelDef; remove
the redeclaration and reuse the existing modelDef when computing apiModel (i.e.,
ensure apiModel is set with the already-declared modelDef ? modelDef.modelString
: model). Also delete any duplicate lookup to ZAI_MODELS_DEF so only the
original modelDef lookup is used.
In @apps/server/src/routes/setup/routes/verify-claude-auth.ts:
- Around line 338-343: The cacheKey variable is out of scope when calling
verificationCache.set; declare cacheKey in the outer scope (above the inner try)
so it is available for both the verification flow and the final cache set, then
use that cacheKey in verificationCache.set. Also adjust the caching logic in the
verify-claude-auth flow (where verificationCache.set is called) to only store
successful authentications or non-retriable errors—do not cache
transient/retryable errors like rate limits or billing issues (inspect
errorMessage or error codes and skip caching for those cases).
- Around line 159-173: The bug is that cacheKey is declared inside the inner try
and later referenced out of scope and cleanupEnv is never invoked; fix by
declaring cacheKey (e.g., const cacheKey = `${authMethod}:${apiKey || 'none'}`)
outside/above the try so it’s available when setting verificationCache, and
ensure cleanupEnv() is called in the existing finally block (alongside any other
teardown) to avoid leaking the temporary environment override created by
createTempEnvOverride; update references to verificationCache.get/set
accordingly so they use the moved cacheKey.
In @apps/ui/src/store/app-store.ts:
- Around line 1193-1194: The enhancementModel and validationModel fields are
using the invalid uppercase key 'GLM-4.7' which doesn't match the ModelAlias
keys; change both values to the lowercase 'glm-4.7' so they align with the
ZAI_MODEL_MAP/ModelAlias types and pass type checking for enhancementModel and
validationModel.
🧹 Nitpick comments (7)
apps/server/src/routes/setup/routes/verify-claude-auth.ts (2)
79-83: Cache key exposes API keys and lacks eviction mechanism.

Two concerns with the cache design:

1. Security: The cache key at line 162 uses the raw `apiKey`, which could leak credentials if the Map is ever serialized, logged, or inspected in debugging. Consider hashing the key.
2. Memory: The Map grows unbounded. While entries become stale after 30 seconds, they're never deleted. Consider periodic cleanup or using a bounded cache.
♻️ Suggested improvement
```diff
+import { createHash } from 'crypto';
+
+function getCacheKey(authMethod: string | undefined, apiKey: string | undefined): string {
+  const keyPart = apiKey ? createHash('sha256').update(apiKey).digest('hex').slice(0, 16) : 'none';
+  return `${authMethod}:${keyPart}`;
+}
+
 // Simple in-memory cache to debounce repeated verification calls on UI reload
 const verificationCache = new Map<
   string,
   { authenticated: boolean; error?: string; timestamp: number }
 >();
+
+// Periodically clean up stale cache entries
+setInterval(() => {
+  const now = Date.now();
+  for (const [key, value] of verificationCache) {
+    if (now - value.timestamp > 30000) {
+      verificationCache.delete(key);
+    }
+  }
+}, 60000);
```
176-185: Use `resolveModelString()` from `@automaker/model-resolver` for model aliases.

The hardcoded model string `'claude-3-5-sonnet-latest'` should be replaced with `resolveModelString('sonnet')` to align with coding guidelines and maintain consistency with the rest of the codebase. This approach centralizes model resolution and ensures the latest sonnet variant is used.

♻️ Suggested change

```diff
+import { resolveModelString } from '@automaker/model-resolver';
+
 // Run a minimal query to verify authentication
-// Use standard model instead of experimental hardcoded one
 const stream = query({
   prompt: "Reply with only the word 'ok'",
   options: {
-    model: 'claude-3-5-sonnet-latest',
+    model: resolveModelString('sonnet'),
     maxTurns: 1,
     allowedTools: [],
     abortController,
   },
 });
```

apps/server/src/providers/zai-provider.ts (5)
129-130: Redundant `Math.round()` call.

`Date.now()` already returns an integer (milliseconds since epoch), so `Math.round()` is unnecessary.

Suggested fix

```diff
-    const now = Math.round(Date.now()); // Milliseconds
+    const now = Date.now(); // Milliseconds
```
238-247: Consider caching token and reusing client within TTL window.

A new `OpenAI` client is instantiated on every loop iteration (potentially up to 20 turns). While this ensures token freshness, it may accumulate resources if the SDK maintains connection pools. Consider caching the generated token with its expiry timestamp and only regenerating when nearing expiration.

```ts
// Class-level fields:
private cachedToken: string | null = null;
private tokenExpiresAt: number = 0;

// In executeQuery loop:
const apiKey = process.env.ZAI_API_KEY || '';
if (apiKey.includes('.')) {
  const now = Date.now();
  if (!this.cachedToken || now >= this.tokenExpiresAt - 30000) { // 30s buffer
    this.cachedToken = this.generateToken(apiKey);
    this.tokenExpiresAt = now + 210 * 1000;
    this.client = new OpenAI({ apiKey: this.cachedToken, baseURL: ZAI_API_URL });
  }
}
```
383-393: Consider adding timeout for tool execution.

Tool execution (line 393) has no timeout. If a tool call (e.g., a `bash` command) hangs indefinitely, the entire agent loop will block. Consider wrapping the execution with a timeout to prevent runaway processes.

```ts
// Example using Promise.race with timeout
const TOOL_TIMEOUT_MS = 60000; // 60 seconds
const result = await Promise.race([
  zaiTools.executeTool(toolCall.name, args),
  new Promise<string>((_, reject) =>
    setTimeout(() => reject(new Error('Tool execution timed out')), TOOL_TIMEOUT_MS)
  ),
]);
```
15-15: Unused import: `isZaiModel`.

`isZaiModel` is imported from `@automaker/types` but never used in this file. Consider removing it to keep imports clean.

Suggested fix

```diff
-import { isZaiModel, validateBareModelId, type ModelProvider } from '@automaker/types';
+import { validateBareModelId, type ModelProvider } from '@automaker/types';
```

Note: If `isZaiModel` is intended for future use, consider adding a TODO comment or removing until needed.
214-216: `conversationHistory` parameter is extracted but never used.

The `conversationHistory` parameter (line 180) is destructured from options, but the TODO at lines 214-216 indicates it's not yet implemented. This means multi-turn conversations will lose prior context. If this is intentional for the MVP, consider logging a warning or documenting the limitation.

Would you like me to help implement conversation history mapping from `ProviderMessage[]` to `OpenAI.ChatCompletionMessageParam[]`?
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- apps/server/src/lib/worktree-metadata.ts
- apps/server/src/providers/zai-provider.ts
- apps/server/src/routes/setup/routes/verify-claude-auth.ts
- apps/ui/src/store/app-store.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always import from shared packages (@automaker/*), never from old relative paths
Files:
- apps/server/src/lib/worktree-metadata.ts
- apps/server/src/providers/zai-provider.ts
- apps/ui/src/store/app-store.ts
- apps/server/src/routes/setup/routes/verify-claude-auth.ts
apps/server/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `createEventEmitter()` from `lib/events.ts` for all server operations to emit events that stream to frontend via WebSocket
Files:
- apps/server/src/lib/worktree-metadata.ts
- apps/server/src/providers/zai-provider.ts
- apps/server/src/routes/setup/routes/verify-claude-auth.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names
Files:
- apps/server/src/lib/worktree-metadata.ts
- apps/server/src/providers/zai-provider.ts
- apps/ui/src/store/app-store.ts
- apps/server/src/routes/setup/routes/verify-claude-auth.ts
🧠 Learnings (1)
📚 Learning: 2025-12-28T05:07:48.147Z
Learnt from: CR
Repo: AutoMaker-Org/automaker PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-28T05:07:48.147Z
Learning: Applies to **/*.{ts,tsx} : Use `resolveModelString()` from @automaker/model-resolver to convert model aliases (haiku, sonnet, opus) to full model names
Applied to files:
apps/server/src/providers/zai-provider.ts
🧬 Code graph analysis (1)
apps/server/src/providers/zai-provider.ts (5)
libs/provider-zai/src/zai-provider.ts (2)
- ZAI_MODELS_DEF (34-84)
- ZaiProvider (86-256)

libs/providers-core/src/index.ts (5)
- BaseProvider (14-14)
- ExecuteOptions (20-20)
- ProviderMessage (21-21)
- InstallationStatus (23-23)
- ModelDefinition (22-22)

libs/types/src/index.ts (5)
- ExecuteOptions (10-10)
- ProviderMessage (12-12)
- validateBareModelId (195-195)
- InstallationStatus (13-13)
- ModelDefinition (15-15)

apps/server/src/providers/zai-tools.ts (1)
- ZaiTools (182-341)

apps/ui/src/lib/electron.ts (1)
- ModelDefinition (187-187)
🔇 Additional comments (10)
apps/server/src/lib/worktree-metadata.ts (1)
151-171: LGTM! Good parallelization of I/O-bound operations.

The refactor correctly parallelizes metadata loading using `Promise.all`, which addresses the serial I/O bottleneck mentioned in the PR objectives. Each read operation has proper individual error handling, ensuring one failed read doesn't break the batch.

The `secure-fs` module provides throttling capabilities, which should manage concurrency at the file system level if needed.

apps/ui/src/store/app-store.ts (5)
123-128: LGTM!

The `zai` field addition to the `ApiKeys` interface is consistent with the existing pattern for other providers.

1170-1175: LGTM!

The `zai` key initialization is consistent with the pattern used for other API keys.

1181-1181: LGTM!

The conservative default of 2 concurrent agents is appropriate, and the slider remains unlocked for users who need higher concurrency.

1543-1561: LGTM! Good memory protection and idiomatic refactor.

The switch to a functional state updater is more idiomatic for Zustand. The 10,000-step cap effectively prevents OOM in long-running sessions while retaining recent history. Using `slice(-10000)` correctly preserves the most recent entries.
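The cap-and-keep-recent behavior described here distills to a one-line helper (the name is illustrative, not from the store):

```typescript
// Keep at most `max` steps, preferring the most recent entries
function capSteps<T>(steps: T[], max = 10_000): T[] {
  return steps.length > max ? steps.slice(-max) : steps;
}
```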
1145-1154: No issues found. The 'Zai' icon is properly defined in `apps/ui/src/components/ui/provider-icon.tsx` and correctly mapped in the `PROFILE_ICONS` constant. The profile structure is valid and will render without issues.

apps/server/src/providers/zai-provider.ts (4)
271-328: LGTM: Well-structured stream processing with proper tool call accumulation.

The streaming logic correctly handles:
- Incremental content chunks
- GLM-4 specific reasoning/thinking content
- Multiple simultaneous tool calls with partial argument streaming
- Proper indexing for tool call accumulation
417-424: LGTM: Comprehensive error handling.

The try-catch properly logs errors and yields an error message to the caller, allowing graceful degradation rather than crashing.

1-6: No action required — zai-provider correctly follows the established provider pattern.

All providers in the codebase (claude-provider, cursor-provider, codex-provider, etc.) use async generators to stream `ProviderMessage` objects via `yield`, not `createEventEmitter()`. The zai-provider is consistent with this pattern and requires no changes.

Likely an incorrect or invalid review comment.

454-460: `supportsFeature()` is a provider-level check and should not be relied upon for model-specific capabilities.

The method returns `true` for `'vision'` at the provider level, but individual models like `glm-4.5-flash` have `supportsVision: false`. This could be misleading if callers incorrectly assume provider-level feature support applies to all models.

The codebase currently handles this correctly by using `ProviderFactory.modelSupportsVision()` for model-specific checks. However, consider adding a JSDoc clarification that `supportsFeature()` is provider-level only, or renaming it to reflect this intent.
```ts
const ZAI_MODELS_DEF: ModelConfig[] = [
  {
    id: 'glm-4.7',
    name: 'GLM 4.7',
    modelString: 'GLM-4.7',
    provider: 'zai',
    description: 'New Flagship - Best reasoning & coding with Interleaved Thinking',
    contextWindow: 128000,
    maxOutputTokens: 8192,
    supportsVision: true,
    supportsTools: true,
    tier: 'premium',
    default: true,
    thinkingMode: 'interleaved', // Inserts reasoning before each tool call
    customSystemPrompt:
      'You are a highly capable coding assistant. Use your reasoning abilities to plan multi-step tasks carefully. Execute tools one at a time and verify results.',
    toolCallingNotes:
      'Supports Interleaved Thinking - automatically reasons before each tool call. Best for complex multi-step coding tasks.',
  },
  {
    id: 'glm-4.6',
    name: 'GLM 4.6 (Agentic)',
    modelString: 'GLM-4.6',
    provider: 'zai',
    description: 'Optimized for agentic workflows with streaming tool calls',
    contextWindow: 128000,
    maxOutputTokens: 8192,
    supportsVision: true,
    supportsTools: true,
    tier: 'premium',
    thinkingMode: 'none', // Uses streaming tool call mode instead
    customSystemPrompt:
      'You are an autonomous agent. Execute tasks efficiently using available tools. Prefer direct action over excessive planning.',
    toolCallingNotes:
      'Optimized for agents with streaming tool output. Autonomously decides when to use tools.',
  },
  {
    id: 'glm-4.5-flash',
    name: 'GLM 4.5 Flash',
    modelString: 'GLM-4.5-Flash',
    provider: 'zai',
    description: 'Fast lightweight model with dual thinking modes',
    contextWindow: 128000,
    maxOutputTokens: 4096,
    supportsVision: false,
    supportsTools: true,
    tier: 'basic',
    thinkingMode: 'none', // Non-thinking mode for speed
    customSystemPrompt:
      'You are a fast, efficient assistant. Complete tasks quickly and concisely.',
    toolCallingNotes:
      'Fastest model. Use for simple tasks where speed matters more than complex reasoning.',
  },
];
```
Missing `glm-4-plus` model definition.

The external `libs/provider-zai/src/zai-provider.ts` defines four models including `glm-4-plus`, but this file only defines three models (glm-4.7, glm-4.6, glm-4.5-flash). This inconsistency could cause issues if users attempt to use the `glm-4-plus` model through this provider.

Suggested addition for glm-4-plus
```diff
   },
+  {
+    id: 'glm-4-plus',
+    name: 'GLM 4 Plus',
+    modelString: 'GLM-4-Plus',
+    provider: 'zai',
+    description: 'High performance general model',
+    contextWindow: 128000,
+    maxOutputTokens: 4096,
+    supportsVision: true,
+    supportsTools: true,
+    tier: 'premium',
+    thinkingMode: 'none',
+    customSystemPrompt: 'You are a capable general-purpose assistant.',
+    toolCallingNotes: 'General purpose model with good tool calling support.',
+  },
 ];
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const ZAI_MODELS_DEF: ModelConfig[] = [ | |
| { | |
| id: 'glm-4.7', | |
| name: 'GLM 4.7', | |
| modelString: 'GLM-4.7', | |
| provider: 'zai', | |
| description: 'New Flagship - Best reasoning & coding with Interleaved Thinking', | |
| contextWindow: 128000, | |
| maxOutputTokens: 8192, | |
| supportsVision: true, | |
| supportsTools: true, | |
| tier: 'premium', | |
| default: true, | |
| thinkingMode: 'interleaved', // Inserts reasoning before each tool call | |
| customSystemPrompt: | |
| 'You are a highly capable coding assistant. Use your reasoning abilities to plan multi-step tasks carefully. Execute tools one at a time and verify results.', | |
| toolCallingNotes: | |
| 'Supports Interleaved Thinking - automatically reasons before each tool call. Best for complex multi-step coding tasks.', | |
| }, | |
| { | |
| id: 'glm-4.6', | |
| name: 'GLM 4.6 (Agentic)', | |
| modelString: 'GLM-4.6', | |
| provider: 'zai', | |
| description: 'Optimized for agentic workflows with streaming tool calls', | |
| contextWindow: 128000, | |
| maxOutputTokens: 8192, | |
| supportsVision: true, | |
| supportsTools: true, | |
| tier: 'premium', | |
| thinkingMode: 'none', // Uses streaming tool call mode instead | |
| customSystemPrompt: | |
| 'You are an autonomous agent. Execute tasks efficiently using available tools. Prefer direct action over excessive planning.', | |
| toolCallingNotes: | |
| 'Optimized for agents with streaming tool output. Autonomously decides when to use tools.', | |
| }, | |
| { | |
| id: 'glm-4.5-flash', | |
| name: 'GLM 4.5 Flash', | |
| modelString: 'GLM-4.5-Flash', | |
| provider: 'zai', | |
| description: 'Fast lightweight model with dual thinking modes', | |
| contextWindow: 128000, | |
| maxOutputTokens: 4096, | |
| supportsVision: false, | |
| supportsTools: true, | |
| tier: 'basic', | |
| thinkingMode: 'none', // Non-thinking mode for speed | |
| customSystemPrompt: | |
| 'You are a fast, efficient assistant. Complete tasks quickly and concisely.', | |
| toolCallingNotes: | |
| 'Fastest model. Use for simple tasks where speed matters more than complex reasoning.', | |
| }, | |
| { | |
| id: 'glm-4-plus', | |
| name: 'GLM 4 Plus', | |
| modelString: 'GLM-4-Plus', | |
| provider: 'zai', | |
| description: 'High performance general model', | |
| contextWindow: 128000, | |
| maxOutputTokens: 4096, | |
| supportsVision: true, | |
| supportsTools: true, | |
| tier: 'premium', | |
| thinkingMode: 'none', | |
| customSystemPrompt: 'You are a capable general-purpose assistant.', | |
| toolCallingNotes: 'General purpose model with good tool calling support.', | |
| }, | |
| ]; |
🤖 Prompt for AI Agents
In @apps/server/src/providers/zai-provider.ts around lines 48 - 101,
ZAI_MODELS_DEF is missing the 'glm-4-plus' model present in the external
provider; add a new model object to the ZAI_MODELS_DEF array with id:
'glm-4-plus' (and corresponding name like "GLM 4 Plus"), modelString:
'GLM-4-Plus', provider: 'zai', and include the same canonical properties used by
other entries (description, contextWindow, maxOutputTokens, supportsVision,
supportsTools, tier, thinkingMode, customSystemPrompt, toolCallingNotes, and
default as appropriate) so the provider definitions match the external
libs/provider-zai implementation (use values consistent with glm-4.7/glm-4.6
where applicable).
| const modelDef = ZAI_MODELS_DEF.find((m) => m.id === model); | ||
| const apiModel = modelDef ? modelDef.modelString : model; |
Variable modelDef shadows outer declaration.
modelDef is already declared at line 194 and used for system prompt configuration. Re-declaring it here at line 235 shadows the outer variable. Since both refer to the same lookup, remove the redeclaration and reuse the existing variable.
Suggested fix
- // Find model definition to ensure correct casing (GLM-4.7)
- const modelDef = ZAI_MODELS_DEF.find((m) => m.id === model);
- const apiModel = modelDef ? modelDef.modelString : model;
+ // Use existing modelDef from line 194 for correct casing (GLM-4.7)
+ const apiModel = modelDef ? modelDef.modelString : model;
🤖 Prompt for AI Agents
In @apps/server/src/providers/zai-provider.ts around lines 235 - 236, The inner
redeclaration "const modelDef = ZAI_MODELS_DEF.find((m) => m.id === model);"
shadows the outer modelDef; remove the redeclaration and reuse the existing
modelDef when computing apiModel (i.e., ensure apiModel is set with the
already-declared modelDef ? modelDef.modelString : model). Also delete any
duplicate lookup to ZAI_MODELS_DEF so only the original modelDef lookup is used.
| const cleanupEnv = createTempEnvOverride(authEnv); | ||
| // Check cache first to prevent heavy process spawning on reload | ||
| const cacheKey = `${authMethod}:${apiKey || 'none'}`; | ||
| const cached = verificationCache.get(cacheKey); | ||
| if (cached && Date.now() - cached.timestamp < 30000) { | ||
| // 30s cache | ||
| logger.info('[Setup] Using cached verification result'); | ||
| res.json({ | ||
| success: true, | ||
| authenticated: cached.authenticated, | ||
| error: cached.error, | ||
| }); | ||
| return; | ||
| } |
cacheKey is out of scope when setting cache; cleanupEnv is never called.
Two critical issues:
- ReferenceError: cacheKey is declared with const inside the inner try block (line 162), but it's used at line 339, which is outside that block's scope. This will cause a runtime error.
- Resource leak: cleanupEnv is created but never invoked. The temporary environment override is never cleaned up.
🐛 Proposed fix
Move cacheKey declaration outside the inner try block and ensure cleanupEnv is called in the finally block:
+ // Define cache key before try block so it's accessible for caching result
+ const cacheKey = `${authMethod}:${apiKey || 'none'}`;
+
+ // Check cache first to prevent heavy process spawning on reload
+ const cached = verificationCache.get(cacheKey);
+ if (cached && Date.now() - cached.timestamp < 30000) {
+ // 30s cache
+ logger.info('[Setup] Using cached verification result');
+ res.json({
+ success: true,
+ authenticated: cached.authenticated,
+ error: cached.error,
+ });
+ return;
+ }
+
// Create temporary environment override for SDK call
const cleanupEnv = createTempEnvOverride(authEnv);
- // Check cache first to prevent heavy process spawning on reload
- const cacheKey = `${authMethod}:${apiKey || 'none'}`;
- const cached = verificationCache.get(cacheKey);
- if (cached && Date.now() - cached.timestamp < 30000) {
- // 30s cache
- logger.info('[Setup] Using cached verification result');
- res.json({
- success: true,
- authenticated: cached.authenticated,
- error: cached.error,
- });
- return;
- }
+ try {
// Run a minimal query to verify authentication
// ... existing verification logic ...
+ } finally {
+ cleanupEnv();
+ }
Also ensure cleanupEnv() is called in the existing finally block at line 332.
🤖 Prompt for AI Agents
In @apps/server/src/routes/setup/routes/verify-claude-auth.ts around lines 159 -
173, The bug is that cacheKey is declared inside the inner try and later
referenced out of scope and cleanupEnv is never invoked; fix by declaring
cacheKey (e.g., const cacheKey = `${authMethod}:${apiKey || 'none'}`)
outside/above the try so it’s available when setting verificationCache, and
ensure cleanupEnv() is called in the existing finally block (alongside any other
teardown) to avoid leaking the temporary environment override created by
createTempEnvOverride; update references to verificationCache.get/set
accordingly so they use the moved cacheKey.
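The 30-second verification cache these comments revolve around is a simple TTL pattern. A minimal standalone sketch, with hypothetical names rather than the actual verify-claude-auth.ts code, and time passed in explicitly so it is testable:

```typescript
// TTL cache sketch: entries are valid for 30 seconds after being stored.
interface CachedResult {
  authenticated: boolean;
  error?: string;
  timestamp: number;
}

const TTL_MS = 30_000;
const cache = new Map<string, CachedResult>();

function getCached(key: string, now: number): CachedResult | undefined {
  const hit = cache.get(key);
  // Expired entries are treated as misses.
  return hit && now - hit.timestamp < TTL_MS ? hit : undefined;
}

function setCached(key: string, result: Omit<CachedResult, 'timestamp'>, now: number): void {
  cache.set(key, { ...result, timestamp: now });
}
```

Declaring the key and cache at this outer scope, before any try block, is exactly what avoids the ReferenceError described above.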
| // Cache the result | ||
| verificationCache.set(cacheKey, { | ||
| authenticated, | ||
| error: errorMessage || undefined, | ||
| timestamp: Date.now(), | ||
| }); |
Cache set will fail due to cacheKey being out of scope.
As noted above, cacheKey is defined inside the inner try block and is not accessible here. This code will throw a ReferenceError at runtime.
Additionally, consider whether caching errors like rate limits or billing issues is desirable—users might want to retry immediately after resolving an account issue.
See the fix proposed in the earlier comment (lines 159-173) to move cacheKey to the appropriate scope.
🤖 Prompt for AI Agents
In @apps/server/src/routes/setup/routes/verify-claude-auth.ts around lines 338 -
343, The cacheKey variable is out of scope when calling verificationCache.set;
declare cacheKey in the outer scope (above the inner try) so it is available for
both the verification flow and the final cache set, then use that cacheKey in
verificationCache.set. Also adjust the caching logic in the verify-claude-auth
flow (where verificationCache.set is called) to only store successful
authentications or non-retriable errors—do not cache transient/retryable errors
like rate limits or billing issues (inspect errorMessage or error codes and skip
caching for those cases).
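One way to act on the retry concern above is to classify failures before caching them. A hedged sketch; the error strings matched here are assumptions, not the server's actual error messages:

```typescript
// Transient failures (rate limits, billing, timeouts) should not be cached,
// so users can retry immediately after fixing their account.
const RETRYABLE_PATTERNS = [/rate.?limit/i, /billing/i, /overloaded/i, /timeout/i];

function isRetryableError(message: string): boolean {
  return RETRYABLE_PATTERNS.some((re) => re.test(message));
}

function shouldCacheResult(authenticated: boolean, error?: string): boolean {
  if (authenticated) return true; // always cache success
  if (!error) return false; // unknown failure: do not cache
  return !isRetryableError(error); // cache only permanent failures
}
```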
Force-pushed from 7c594d5 to e2fd343
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/ui/src/components/views/graph-view/components/dependency-edge.tsx (1)
132-132: Critical bug: Delete button will never render. The condition edgeData?.onDeleteDependency checks for a property that was removed from DependencyEdgeData (as noted in line 15). Since onDeleteDependency no longer exists on edge data, this condition will always be false and the delete button will never appear.
Proposed fix
- {(isHovered || selected) && edgeData?.onDeleteDependency && (
+ {(isHovered || selected) && actions.onDeleteDependency && (
apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts (2)
101-109: Duplicate filter state assignments. Lines 101-104 and 105-108 assign isMatched, isHighlighted, and isDimmed twice to the same object.
Proposed fix
  data: {
    ...feature,
    isBlocked: blockingDeps.length > 0,
    isRunning,
    blockingDependencies: blockingDeps,
    // Filter states
    isMatched,
    isHighlighted,
    isDimmed,
-   // Filter states
-   isMatched,
-   isHighlighted,
-   isDimmed,
    // Action callbacks - REMOVED: Now handled via Context in TaskNode
  },
40-49: Add onExpand to the NodeActionCallbacks interface. The task-node.tsx component calls actions.onExpand?.(data.id), but onExpand is not defined in this interface, causing a type error (suppressed with @ts-ignore in task-node.tsx).
Proposed fix
export interface NodeActionCallbacks {
  onViewLogs?: (featureId: string) => void;
  onViewDetails?: (featureId: string) => void;
  onStartTask?: (featureId: string) => void;
  onStopTask?: (featureId: string) => void;
  onResumeTask?: (featureId: string) => void;
  onSpawnTask?: (featureId: string) => void;
  onDeleteTask?: (featureId: string) => void;
  onDeleteDependency?: (sourceId: string, targetId: string) => void;
+ onExpand?: (featureId: string) => void;
}
apps/ui/src/hooks/use-settings-migration.ts (1)
716-720: The syncCredentialsToServer type does not include the zai credential. The hydrateStoreFromSettings function accepts and processes zai credentials (line 590), but syncCredentialsToServer only accepts anthropic, openai. Add zai?: string; to the type definition to maintain consistency between read and write operations.
🤖 Fix all issues with AI agents
In @apps/server/src/services/red-giant-service.ts:
- Around line 364-375: The calculateLuminosity function computes radiusMeters,
solarRadiusMeters, and massLuminosityRelation (and accepts mass) but never uses
them; either remove these dead variables and the mass parameter or incorporate
them into the luminosity calculation. Update calculateLuminosity to consistently
use solar units/metrics (e.g., use radiusMeters or normalize radius by
solarRadiusMeters and apply massLuminosityRelation to the base luminosity) or
drop mass and unused constants if the simple L ∝ R^2 T^4 form is intended;
ensure the returned luminosity uses the declared variables
(radiusMeters/solarRadiusMeters/massLuminosityRelation) or remove them and the
mass parameter to eliminate dead code.
- Around line 214-241: Intermediate-mass stars (initialMass >= 2.5 && < 8.0)
lack a terminal state: when progress >= 1.0 add a branch that calls
this.recordEvent (e.g., type 'planetary-nebula' or 'end-of-red-giant') with the
same details payload and set star.status = 'white-dwarf' and update
star.rgbPhase (or equivalent) to the post-horizontal-branch phase; place this
branch alongside the existing helium-flash and core-collapse checks so stars in
the 2.5–8.0 M☉ range transition correctly instead of remaining 'active'.
In
@apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx:
- Around line 262-300: The memo comparator areKanbanCardPropsEqual is missing
comparisons for props that affect rendering—add equality checks for
prev.glassmorphism vs next.glassmorphism, prev.cardBorderEnabled vs
next.cardBorderEnabled, prev.cardBorderOpacity vs next.cardBorderOpacity, and
prev.shortcutKey vs next.shortcutKey so changes to backdrop-blur, border
classes/opacity, and the CardActions shortcut prop correctly trigger re-renders;
update the function to return false when any of those props differ (keeping the
existing try/catch and other comparisons intact).
In @apps/ui/src/components/views/graph-view/components/task-node.tsx:
- Around line 307-318: The click handler uses @ts-ignore because onExpand is
missing from NodeActionCallbacks and the fallback to data.onExpand is invalid;
add an onExpand callback to the NodeActionCallbacks interface in
use-graph-nodes.ts (e.g., onExpand?: (id: string) => void), remove the fallback
reference to data.onExpand and the @ts-ignore in task-node.tsx, and update the
handler to simply call actions.onExpand?.(data.id) after stopping propagation.
- Around line 16-28: Remove the stray developer "thinking" comments in
task-node.tsx (the block above the import of TaskNodeData) so the file only
contains purposeful comments and code; specifically delete the multi-line notes
that discuss extending or modifying TaskNodeData and the decision process,
leaving the import statement (import { TaskNodeData } from
'../hooks/use-graph-nodes';) and the component code intact and well-commented
only where necessary.
In @apps/ui/src/hooks/use-settings-migration.ts:
- Around line 446-459: The code is force-casting the API response to
GlobalSettings because aiProfiles.model comes back as string aliases; instead of
as unknown as GlobalSettings, import resolveModelString from
'@automaker/model-resolver' and map/transform the returned globalResult.settings
into a proper GlobalSettings object by resolving each aiProfile.model alias into
the expected PhaseModelConfig (or concrete model config) using
resolveModelString (apply this when building serverSettings and whenever
aiProfiles are processed — e.g., where 'sonnet'/'opus' aliases are used), so
types align without unsafe casting and model aliases are normalized to the
expected shape.
- Around line 584-591: The hydration currently overwrites real keys in apiKeys
by assigning masked strings from credentials (variables: apiKeys,
current.apiKeys, credentials) which replaces usable config with display-only
values; change the logic to preserve existing real keys in current.apiKeys and
never replace them with credentials.*.masked—either skip setting apiKeys.* when
credentials.*.masked exists or populate a separate display-only object (e.g.,
maskedApiKeys) for UI use (keep apiKeys for real config values and add a new
masked store for presentation).
In @apps/ui/src/main.ts:
- Around line 49-51: Remove the duplicate call to
app.commandLine.appendSwitch('disable-gpu-sandbox') and stop disabling the
Chromium sandbox by default: leave
app.commandLine.appendSwitch('disable-gpu-sandbox') only once, remove
app.commandLine.appendSwitch('no-sandbox'), and make disabling the renderer
sandbox opt-in via an environment variable (e.g., ELECTRON_DISABLE_SANDBOX)
checked before calling app.commandLine.appendSwitch('no-sandbox'); also add a
concise runtime warning/log explaining the security risk when the env opt-in is
used so users are explicitly informed.
In @apps/ui/src/store/app-store.ts:
- Around line 1193-1194: The defaults enhancementModel and validationModel are
set to 'GLM-4.7' but may not be a valid ModelAlias and will break for users
without a Z.AI key; either add 'GLM-4.7' to the ModelAlias union/type definition
or change the defaults to a known ModelAlias (e.g., a Claude alias) and
implement runtime fallback logic in the code paths that use
enhancementModel/validationModel (check for a configured ZAI API key before
selecting a ZAI model and fall back to the Claude alias) so TypeScript and
runtime behavior remain safe; update references to enhancementModel,
validationModel, and the ModelAlias type accordingly.
- Line 1181: The store sets maxConcurrency: 24 while the UI enforces a 10-task
ceiling, causing a mismatch; update the store default to match the UI by
changing the maxConcurrency value used in the app store (symbol: maxConcurrency
in app-store.ts / the app-store default config) from 24 to 10, or alternatively
adjust the UI slider max to 24—pick one consistent change and ensure any related
docs or comments are updated to reflect the chosen default so store and UI
limits remain aligned.
In @libs/provider-zai/src/zai-query-service.ts:
- Around line 209-225: The resolveModel method currently ignores known aliases
and falls back to this.defaultModel; update it to import and call
resolveModelString() from @automaker/model-resolver to map aliases (e.g., haiku,
sonnet, opus) to actual Z.AI model IDs, then if the resolved string exists in
ZAI_MODELS return it, otherwise fall back to this.defaultModel; ensure the logic
stays inside the private resolveModel(model?: string): string function and
replace the "model in ZAI_MODELS" check with a check against the resolved value
returned by resolveModelString().
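The alias-resolution fix described in the last prompt can be sketched as follows. The alias table, model list, and function shape below are assumptions for illustration, not the real @automaker/model-resolver API:

```typescript
// Resolve an alias to a concrete model id first, then validate against the
// known model list, falling back to the default on any miss.
const ALIASES: Record<string, string> = {
  flash: 'glm-4.5-flash',
  flagship: 'glm-4.7',
};

const KNOWN_MODELS = new Set(['glm-4.7', 'glm-4.6', 'glm-4.5-flash']);
const DEFAULT_MODEL = 'glm-4.7';

function resolveModel(model?: string): string {
  if (!model) return DEFAULT_MODEL;
  const resolved = ALIASES[model] ?? model; // map alias → concrete id
  return KNOWN_MODELS.has(resolved) ? resolved : DEFAULT_MODEL;
}
```

The key ordering point from the review: resolve the alias before the membership check, otherwise every alias silently falls through to the default.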
🧹 Nitpick comments (13)
apps/ui/src/main.ts (2)
10-12: Consider dynamic thread pool sizing instead of a hardcoded value. The hardcoded value of 24 assumes a specific CPU configuration. On machines with fewer cores, this over-allocates threads (wasting memory), and on machines with more cores, it under-utilizes available parallelism.
Suggested improvement
-// CPU MAXIMIZATION: Set thread pool size equal to core count (24) BEFORE any libs load
-// This unlocks massive parallel I/O for the Node.js backend
-process.env.UV_THREADPOOL_SIZE = '24';
+// CPU MAXIMIZATION: Set thread pool size based on available cores BEFORE any libs load
+// This unlocks massive parallel I/O for the Node.js backend
+// eslint-disable-next-line @typescript-eslint/no-require-imports
+const cpuCount = require('os').cpus().length;
+process.env.UV_THREADPOOL_SIZE = String(Math.min(Math.max(cpuCount, 4), 128));
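The clamped sizing expression in the suggested improvement is easy to isolate and verify on its own:

```typescript
// Clamp the pool size: scale with core count, but never below the floor
// (keeps I/O responsive on small machines) or above the ceiling
// (libuv caps UV_THREADPOOL_SIZE at 1024; 128 is a conservative bound).
function threadPoolSize(cpuCount: number, floor = 4, ceiling = 128): number {
  return Math.min(Math.max(cpuCount, floor), ceiling);
}
```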
54-60: Hardcoded memory limits assume high-end hardware. The VRAM (24GB), WebGL contexts (32), renderer process limit (100), and V8 heap (64GB) values are hardcoded for high-end systems. On machines with less VRAM or RAM, these settings may cause issues:
- force-gpu-mem-available-mb=24000 could cause GPU allocation failures on systems with less VRAM.
- --max-old-space-size=65536 (64GB) may exceed available system RAM.
Consider making these values configurable via environment variables or detecting available system resources.
Example: Make configurable with sensible defaults
-// UNLEASH THE BEAST: Force 24GB VRAM allocation limits (Safe now that sandbox is off)
-app.commandLine.appendSwitch('force-gpu-mem-available-mb', '24000');
-app.commandLine.appendSwitch('max-active-webgl-contexts', '32');
-app.commandLine.appendSwitch('renderer-process-limit', '100');
-
-// CPU / RAM MAXIMIZATION: Unlock V8 Heap to 64GB
-app.commandLine.appendSwitch('js-flags', '--max-old-space-size=65536');
+// GPU memory configuration (defaults to 8GB, configurable via AUTOMAKER_GPU_MEM_MB)
+const gpuMemMb = process.env.AUTOMAKER_GPU_MEM_MB || '8000';
+app.commandLine.appendSwitch('force-gpu-mem-available-mb', gpuMemMb);
+app.commandLine.appendSwitch('max-active-webgl-contexts', '32');
+app.commandLine.appendSwitch('renderer-process-limit', '100');
+
+// V8 heap configuration (defaults to 8GB, configurable via AUTOMAKER_HEAP_SIZE_MB)
+const heapSizeMb = process.env.AUTOMAKER_HEAP_SIZE_MB || '8192';
+app.commandLine.appendSwitch('js-flags', `--max-old-space-size=${heapSizeMb}`);
apps/server/src/services/red-giant-service.ts (1)
397-399: substr is deprecated; use substring instead. String.prototype.substr is deprecated per the ECMAScript spec.
Proposed fix
  private generateId(): string {
-   return `red-giant-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
+   return `red-giant-${Date.now()}-${Math.random().toString(36).substring(2, 11)}`;
  }
apps/server/src/routes/red-giant/routes/create-star.ts (1)
16-30: Consider adding upper bound validation for mass. The validation ensures mass > 0, but there's no upper limit. Astrophysically, stars above ~150 solar masses are unstable due to the Eddington limit. Adding an upper bound would prevent unrealistic simulation states.
Proposed enhancement
  if (mass <= 0) {
    res.status(400).json({
      success: false,
      error: 'mass must be greater than 0',
    });
    return;
  }
+
+ if (mass > 150) {
+   res.status(400).json({
+     success: false,
+     error: 'mass cannot exceed 150 solar masses (Eddington limit)',
+   });
+   return;
+ }
libs/provider-zai/src/zai-query-service.ts (2)
50-53: Remove placeholder comment. Line 53 contains // ... which appears to be a leftover placeholder or incomplete marker. This should be removed.
🧹 Suggested fix
  getName(): string {
    return 'zai';
  }
- // ...
151-191: Consider extracting duplicate message-building logic. The message array construction (lines 153-165) is repeated across simpleQuery, streamQuery, and query. Extract to a private helper for DRY.
♻️ Suggested helper method
private buildMessages(prompt: string, options?: QueryOptions): OpenAI.ChatCompletionMessageParam[] {
  const messages: OpenAI.ChatCompletionMessageParam[] = [];
  if (options?.systemPrompt) {
    messages.push({ role: 'system', content: options.systemPrompt });
  }
  messages.push({ role: 'user', content: prompt });
  return messages;
}
Then replace each occurrence with
const messages = this.buildMessages(prompt, options);apps/ui/src/components/views/graph-view/context/graph-actions-context.tsx (1)
1-8: Clean context implementation, butonExpandis missing fromNodeActionCallbacks.The context setup is correct. However, according to the
NodeActionCallbacksinterface inuse-graph-nodes.ts(lines 40-49),onExpandis not defined. Thetask-node.tsxfile callsactions.onExpand?.(data.id)which will work at runtime due to optional chaining, but lacks type safety.Consider adding
onExpandtoNodeActionCallbacks:export interface NodeActionCallbacks { // ... existing callbacks onExpand?: (featureId: string) => void; }apps/ui/src/components/views/graph-view/graph-canvas.tsx (2)
228-230: Type the viewport parameter instead of usingany.The
_eventparameter usesany. ReactFlow provides proper types for the viewport callback.Suggested fix
- const handleMoveEnd = useCallback((_event: any, viewport: { zoom: number }) => { + const handleMoveEnd = useCallback((_event: React.MouseEvent | React.TouchEvent | null, viewport: { zoom: number }) => { setZoomLevel(viewport.zoom); }, []);Alternatively, import
OnMoveEndtype from@xyflow/reactfor full type coverage.
318-319: Move import to the top of the file. The import statement is placed after the component definition, which is unconventional and can reduce readability. Consider moving it to the imports section at the top of the file (near line 34).
apps/ui/src/components/views/graph-view/components/task-node.tsx (1)
408-454: Good render optimization, but try-catch is unusual here. The custom arePropsEqual function is a solid optimization that prevents re-renders from large data updates (logs, steps, etc.). However, wrapping props comparison in try-catch is defensive against scenarios that shouldn't occur with well-typed props.
If the types are correct, the try-catch can be removed. If kept for safety, consider logging more context:
arePropsEqualfunction is a solid optimization that prevents re-renders from large data updates (logs, steps, etc.). However, wrapping props comparison intry-catchis defensive against scenarios that shouldn't occur with well-typed props.If the types are correct, the try-catch can be removed. If kept for safety, consider logging more context:
console.error('TaskNode props comparison error:', e, { prev, next });apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts (1)
62-67: The actionCallbacks parameter is accepted but never used. The actionCallbacks prop is still in the function signature and UseGraphNodesProps interface (line 55), but it was removed from the useMemo dependency array (line 149) and is not used anywhere in the function body. Either remove the parameter entirely for clarity, or keep it if there's a plan to use it elsewhere.
Proposed fix (if removing)
  interface UseGraphNodesProps {
    features: Feature[];
    runningAutoTasks: string[];
    filterResult?: GraphFilterResult;
-   actionCallbacks?: NodeActionCallbacks;
  }

  export function useGraphNodes({
    features,
    runningAutoTasks,
    filterResult,
-   actionCallbacks,
  }: UseGraphNodesProps) {
apps/ui/src/hooks/use-settings-migration.ts (2)
94-108: Consider clearing the timeout on normal completion. The timeout is never cleared if migration completes normally before 5 seconds. While minor, this leaves a dangling timer. Additionally, if waitForMigrationComplete() is called multiple times before completion, each call creates a new timeout.
waitForMigrationComplete()is called multiple times before completion, each call creates a new timeout.♻️ Suggested improvement
+let migrationTimeoutId: ReturnType<typeof setTimeout> | null = null; + export function waitForMigrationComplete(): Promise<void> { // If migration already completed before anything started waiting, resolve immediately. if (migrationCompleted) { return Promise.resolve(); } if (!migrationCompletePromise) { migrationCompletePromise = new Promise((resolve) => { migrationCompleteResolve = resolve; }); } - // Safety Timeout: If migration takes longer than 5 seconds, proceed anyway. - // This prevents the "Waiting for migration to..." freeze if the migration hook fails/hangs. - const timeoutPromise = new Promise<void>((resolve) => { - setTimeout(() => { + // Safety Timeout: Only create one timeout across all callers + if (!migrationTimeoutId) { + migrationTimeoutId = setTimeout(() => { if (!migrationCompleted) { logger.warn('Migration wait timed out (5s). Forcing completion signal.'); migrationCompleted = true; if (migrationCompleteResolve) migrationCompleteResolve(); - resolve(); } }, 5000); - }); + } - return Promise.race([migrationCompletePromise, timeoutPromise]); + return migrationCompletePromise; }Also clear the timeout in
signalMigrationComplete():export function signalMigrationComplete(): void { migrationCompleted = true; + if (migrationTimeoutId) { + clearTimeout(migrationTimeoutId); + migrationTimeoutId = null; + } if (migrationCompleteResolve) { migrationCompleteResolve(); } }And in
resetMigrationState():export function resetMigrationState(): void { migrationCompleted = false; migrationCompletePromise = null; migrationCompleteResolve = null; + if (migrationTimeoutId) { + clearTimeout(migrationTimeoutId); + migrationTimeoutId = null; + } }
543-557: Duplicate JSDoc comment and inline type could be extracted. Lines 543-544 contain a duplicate JSDoc block. Additionally, the credentials type is verbose and could be extracted for reusability.
♻️ Suggested cleanup
-/** - * Hydrate the Zustand store from settings object - */ +/** Credentials structure returned by the settings API */ +interface ServerCredentials { + anthropic: { configured: boolean; masked: string }; + google: { configured: boolean; masked: string }; + openai: { configured: boolean; masked: string }; + zai?: { configured: boolean; masked: string }; +} + /** * Hydrate the Zustand store from settings object + * @param settings - Global settings to apply + * @param credentials - Optional server credentials for API key hydration */ export function hydrateStoreFromSettings( settings: GlobalSettings, - credentials?: { - anthropic: { configured: boolean; masked: string }; - google: { configured: boolean; masked: string }; - openai: { configured: boolean; masked: string }; - zai?: { configured: boolean; masked: string }; - } | null + credentials?: ServerCredentials | null ): void {
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
apps/server/src/routes/red-giant/routes/create-star.tsapps/server/src/routes/red-giant/routes/delete-star.tsapps/server/src/routes/red-giant/routes/evolve-star.tsapps/server/src/routes/red-giant/routes/get-events.tsapps/server/src/routes/red-giant/routes/get-metrics.tsapps/server/src/routes/red-giant/routes/get-star.tsapps/server/src/routes/red-giant/routes/list-stars.tsapps/server/src/services/red-giant-service.tsapps/ui/package.jsonapps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsxapps/ui/src/components/views/graph-view/components/dependency-edge.tsxapps/ui/src/components/views/graph-view/components/task-node.tsxapps/ui/src/components/views/graph-view/context/graph-actions-context.tsxapps/ui/src/components/views/graph-view/graph-canvas.tsxapps/ui/src/components/views/graph-view/hooks/use-graph-nodes.tsapps/ui/src/hooks/use-settings-migration.tsapps/ui/src/main.tsapps/ui/src/store/app-store.tsapps/ui/src/styles/global.csslibs/provider-zai/src/zai-query-service.ts
🚧 Files skipped from review as they are similar to previous changes (6)
- apps/server/src/routes/red-giant/routes/get-metrics.ts
- apps/server/src/routes/red-giant/routes/evolve-star.ts
- apps/ui/package.json
- apps/server/src/routes/red-giant/routes/get-events.ts
- apps/server/src/routes/red-giant/routes/list-stars.ts
- apps/ui/src/styles/global.css
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always import from shared packages (@automaker/*), never from old relative paths
Files:
- apps/ui/src/components/views/graph-view/graph-canvas.tsx
- apps/server/src/routes/red-giant/routes/delete-star.ts
- apps/ui/src/components/views/graph-view/context/graph-actions-context.tsx
- apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
- apps/server/src/routes/red-giant/routes/get-star.ts
- apps/ui/src/main.ts
- apps/ui/src/components/views/graph-view/components/task-node.tsx
- apps/server/src/routes/red-giant/routes/create-star.ts
- apps/ui/src/store/app-store.ts
- libs/provider-zai/src/zai-query-service.ts
- apps/ui/src/hooks/use-settings-migration.ts
- apps/server/src/services/red-giant-service.ts
- apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts
- apps/ui/src/components/views/graph-view/components/dependency-edge.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names
Files:
- apps/ui/src/components/views/graph-view/graph-canvas.tsx
- apps/server/src/routes/red-giant/routes/delete-star.ts
- apps/ui/src/components/views/graph-view/context/graph-actions-context.tsx
- apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
- apps/server/src/routes/red-giant/routes/get-star.ts
- apps/ui/src/main.ts
- apps/ui/src/components/views/graph-view/components/task-node.tsx
- apps/server/src/routes/red-giant/routes/create-star.ts
- apps/ui/src/store/app-store.ts
- libs/provider-zai/src/zai-query-service.ts
- apps/ui/src/hooks/use-settings-migration.ts
- apps/server/src/services/red-giant-service.ts
- apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts
- apps/ui/src/components/views/graph-view/components/dependency-edge.tsx
apps/server/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `createEventEmitter()` from `lib/events.ts` for all server operations to emit events that stream to frontend via WebSocket
Files:
- apps/server/src/routes/red-giant/routes/delete-star.ts
- apps/server/src/routes/red-giant/routes/get-star.ts
- apps/server/src/routes/red-giant/routes/create-star.ts
- apps/server/src/services/red-giant-service.ts
🧠 Learnings (4)
📚 Learning: 2025-12-30T01:02:07.114Z
Learnt from: illia1f
Repo: AutoMaker-Org/automaker PR: 324
File: apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx:122-131
Timestamp: 2025-12-30T01:02:07.114Z
Learning: Tailwind CSS v4 uses postfix syntax for the important modifier: append ! at the end of the utility class (e.g., backdrop-blur-[0px]! or hover:bg-red-500!). The older v3 style used a prefix (!) at the start (e.g., !backdrop-blur-[0px]); prefer the postfix form for consistency across TSX files.
Applied to files:
- apps/ui/src/components/views/graph-view/graph-canvas.tsx
- apps/ui/src/components/views/graph-view/context/graph-actions-context.tsx
- apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
- apps/ui/src/components/views/graph-view/components/task-node.tsx
- apps/ui/src/components/views/graph-view/components/dependency-edge.tsx
📚 Learning: 2026-01-07T15:21:11.144Z
Learnt from: webdevcody
Repo: AutoMaker-Org/automaker PR: 378
File: apps/server/src/lib/sdk-options.ts:141-148
Timestamp: 2026-01-07T15:21:11.144Z
Learning: AutoMaker intentionally uses `permissionMode: 'bypassPermissions'` with `allowDangerouslySkipPermissions: true` in base SDK options for fully autonomous operation. This is an accepted architectural decision, with Docker containerization as the primary security boundary.
Applied to files:
apps/ui/src/main.ts
📚 Learning: 2025-12-28T05:07:48.147Z
Learnt from: CR
Repo: AutoMaker-Org/automaker PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-28T05:07:48.147Z
Learning: Applies to **/*.{ts,tsx} : Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names
Applied to files:
apps/ui/src/store/app-store.ts
📚 Learning: 2025-12-28T05:07:48.147Z
Learnt from: CR
Repo: AutoMaker-Org/automaker PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-28T05:07:48.147Z
Learning: Applies to apps/server/src/**/*.{ts,tsx} : Use `createEventEmitter()` from `lib/events.ts` for all server operations to emit events that stream to frontend via WebSocket
Applied to files:
apps/server/src/services/red-giant-service.ts
🧬 Code graph analysis (5)
apps/ui/src/components/views/graph-view/context/graph-actions-context.tsx (1)
apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts (1)
`NodeActionCallbacks` (40-49)
apps/server/src/routes/red-giant/routes/create-star.ts (1)
apps/server/src/services/red-giant-service.ts (1)
`RedGiantService` (54-432)
apps/ui/src/hooks/use-settings-migration.ts (2)
apps/server/src/routes/settings/common.ts (1)
`logger` (12-12)
libs/types/src/settings.ts (1)
`GlobalSettings` (445-627)
apps/server/src/services/red-giant-service.ts (1)
apps/server/src/lib/events.ts (1)
`EventEmitter` (13-16)
apps/ui/src/components/views/graph-view/components/dependency-edge.tsx (2)
apps/ui/src/store/app-store.ts (1)
`Feature` (334-350)
apps/ui/src/components/views/graph-view/context/graph-actions-context.tsx (1)
`useGraphActions` (6-8)
🔇 Additional comments (21)
apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx (1)
260-260: Memo with custom comparator looks good. The integration correctly passes `areKanbanCardPropsEqual` as the second argument to `memo`. Correctness depends on the comparator covering all props that affect rendering.

apps/server/src/services/red-giant-service.ts (2)
17-17: LGTM on EventEmitter integration. The import of the `EventEmitter` type and its use via dependency injection in the constructor aligns well with the coding guidelines. The service correctly accepts an optional `EventEmitter` and uses it for event emission, allowing the caller to provide an instance created via `createEventEmitter()` from `lib/events.ts`.
404-431: Persistence stubs are adequate for scaffolding. The commented-out file I/O in `saveStars` and `loadStars` is acceptable as scaffolding. Ensure these are implemented or removed before production use to avoid silent data loss.

apps/server/src/routes/red-giant/routes/get-star.ts (1)
8-34: LGTM! The handler correctly validates input, handles the not-found case with 404, and follows the established pattern for Red Giant route handlers.
apps/server/src/routes/red-giant/routes/delete-star.ts (1)
8-34: LGTM! The handler follows the established pattern, properly validates input, and handles both success and failure cases appropriately. The service layer handles event emission for the delete operation.
apps/server/src/routes/red-giant/routes/create-star.ts (1)
32-36: LGTM on the core creation flow. The handler correctly delegates to `redGiantService.createRedGiant()` and returns the created star. Error handling is appropriate.

libs/provider-zai/src/zai-query-service.ts (5)
8-9: LGTM! Imports correctly use the shared `@automaker/providers-core` package as required by coding guidelines.
15-25: LGTM! Model definitions are well-structured with clear descriptions. The `as const` assertion enables proper type inference for `ZaiModel`.
58-89: LGTM! The method correctly handles optional system prompts, uses proper error wrapping, and safely handles missing content with a fallback to empty string.

94-146: LGTM! Good streaming implementation with reasonable buffering thresholds (20 chars / 50 ms). The final buffer flush ensures no content is lost.
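For readers unfamiliar with the buffering pattern praised above, a minimal standalone sketch of threshold-plus-interval buffering with a final flush. All names (`createChunkBuffer`, `flushThresholdChars`, `flushIntervalMs`) are illustrative assumptions, not the actual Z.AI service API:

```typescript
type Emit = (chunk: string) => void;

// Buffers incoming stream fragments and emits them once either the
// character threshold or the time interval is exceeded.
function createChunkBuffer(emit: Emit, flushThresholdChars = 20, flushIntervalMs = 50) {
  let buffer = '';
  let lastFlush = Date.now();

  return {
    push(text: string) {
      buffer += text;
      const now = Date.now();
      if (buffer.length >= flushThresholdChars || now - lastFlush >= flushIntervalMs) {
        emit(buffer);
        buffer = '';
        lastFlush = now;
      }
    },
    // Final flush so no trailing content is lost when the stream ends.
    flush() {
      if (buffer.length > 0) {
        emit(buffer);
        buffer = '';
      }
    },
  };
}
```

The key detail the review calls out is the final flush: without it, any fragment smaller than the threshold that arrives last would be silently dropped.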
196-204: Z.AI's OpenAI-compatible API does support the `GET /v1/models` endpoint, so `models.list()` will work correctly. No changes needed.

apps/ui/src/components/views/graph-view/graph-canvas.tsx (1)
321-328: LGTM! The provider integration is well-structured. The fallback to empty object `{}` ensures the context always has a valid value, which aligns with the optional callback pattern used by consumers.

apps/ui/src/components/views/graph-view/components/dependency-edge.tsx (1)
8-9: LGTM! Context hook import is correctly placed.
apps/ui/src/components/views/graph-view/hooks/use-graph-nodes.ts (1)
21-28: No changes needed — the callback properties are actively used. The `NodeActionCallbacks` interface (lines 40-49 in the same file) implements all of these callbacks, and they are actively invoked throughout the graph-view component. Specifically:

- graph-view.tsx creates a `nodeActionCallbacks` object that implements handlers for all these callbacks (onViewLogs, onViewDetails, onStartTask, onStopTask, onResumeTask, onSpawnTask, onDeleteTask)
- task-node.tsx actively calls these callbacks on button clicks (e.g., `actions.onViewLogs?.(data.id)`, `actions.onStartTask?.(data.id)`, etc.)
- Parent components (world-model-view.tsx, board-view.tsx) pass these callbacks down through the component tree

Removing them would break the current action handling in the graph view.

Likely an incorrect or invalid review comment.
apps/ui/src/store/app-store.ts (4)
123-128: LGTM! Clean extension of the `ApiKeys` interface to support the new Z.AI provider, following the existing pattern.
1170-1175: LGTM! Initialization of `zai: ''` is consistent with the `ApiKeys` interface extension.
1543-1560: LGTM! Good refactoring with functional state updater pattern and memory protection:

- Functional updater `set((state) => {...})` is safer for concurrent state updates
- Memory cap at 10,000 steps prevents unbounded growth in long-running sessions
- Using `slice(-10000)` correctly preserves the most recent log entries
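The capped-append pattern described above can be illustrated in isolation. The `cap` parameter is added here for testability; the actual store hardcodes 10,000:

```typescript
// Appends an entry and keeps only the most recent `cap` entries,
// mirroring the slice(-10000) memory cap in the store updater.
function appendStep(steps: string[], next: string, cap = 10_000): string[] {
  return [...steps, next].slice(-cap);
}
```

Because `Array.prototype.slice` with a negative index counts from the end, the oldest entries are the ones dropped once the cap is reached.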
1145-1154: No type mismatch with `model: 'GLM-4.7'`. The `ModelAlias` type includes both `keyof typeof CLAUDE_MODEL_MAP | keyof typeof ZAI_MODEL_MAP`. Since `'glm-4.7'` is defined in `ZAI_MODEL_MAP` (libs/types/src/model.ts:83), it is a valid `ModelAlias` value for the `AIProfile.model` field. The code is type-correct as written.

Likely an incorrect or invalid review comment.
apps/ui/src/hooks/use-settings-migration.ts (3)
420-425: Good parallelization of independent API calls. Using `Promise.all` to fetch settings, credentials, and status concurrently is a good optimization that reduces startup latency.
495-496: LGTM! The credentials are correctly passed to the hydration function, and the log message accurately reflects the updated behavior.

632-634: LGTM! The credential updates are correctly applied to the store state.
```ts
if (star.initialMass < 2.5 && progress >= 1.0) {
  this.recordEvent({
    type: 'helium-flash',
    starId,
    timestamp: now,
    details: {
      mass: star.currentMass,
      radius: star.currentRadius,
      luminosity: star.currentLuminosity,
    },
  });
  star.status = 'collapsed'; // Becomes helium-burning star
}

// Check for core collapse (high-mass stars)
if (star.initialMass >= 8.0 && progress >= 1.0) {
  this.recordEvent({
    type: 'core-collapse',
    starId,
    timestamp: now,
    details: {
      mass: star.currentMass,
      radius: star.currentRadius,
      luminosity: star.currentLuminosity,
    },
  });
  star.status = 'supernova';
}
```
Missing end state for intermediate-mass stars (2.5–8 M☉).
Stars with `initialMass` between 2.5 and 8.0 solar masses will remain `status: 'active'` with `rgbPhase: 'helium-flash'` indefinitely after `progress >= 1.0`. Low-mass stars (< 2.5) transition to `collapsed`, and high-mass stars (≥ 8.0) transition to `supernova`, but intermediate-mass stars have no terminal state.
Consider adding an explicit transition for this mass range (e.g., to 'white-dwarf' after the horizontal branch).
Proposed fix:

```diff
 // Check for core collapse (high-mass stars)
 if (star.initialMass >= 8.0 && progress >= 1.0) {
   this.recordEvent({
     type: 'core-collapse',
     starId,
     timestamp: now,
     details: {
       mass: star.currentMass,
       radius: star.currentRadius,
       luminosity: star.currentLuminosity,
     },
   });
   star.status = 'supernova';
 }
+
+// Intermediate-mass stars (2.5 <= M < 8.0) become white dwarfs after RGB
+if (star.initialMass >= 2.5 && star.initialMass < 8.0 && progress >= 1.0) {
+  star.status = 'white-dwarf';
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (star.initialMass < 2.5 && progress >= 1.0) {
  this.recordEvent({
    type: 'helium-flash',
    starId,
    timestamp: now,
    details: {
      mass: star.currentMass,
      radius: star.currentRadius,
      luminosity: star.currentLuminosity,
    },
  });
  star.status = 'collapsed'; // Becomes helium-burning star
}

// Check for core collapse (high-mass stars)
if (star.initialMass >= 8.0 && progress >= 1.0) {
  this.recordEvent({
    type: 'core-collapse',
    starId,
    timestamp: now,
    details: {
      mass: star.currentMass,
      radius: star.currentRadius,
      luminosity: star.currentLuminosity,
    },
  });
  star.status = 'supernova';
}

// Intermediate-mass stars (2.5 <= M < 8.0) become white dwarfs after RGB
if (star.initialMass >= 2.5 && star.initialMass < 8.0 && progress >= 1.0) {
  star.status = 'white-dwarf';
}
```
🤖 Prompt for AI Agents
In @apps/server/src/services/red-giant-service.ts around lines 214 - 241,
Intermediate-mass stars (initialMass >= 2.5 && < 8.0) lack a terminal state:
when progress >= 1.0 add a branch that calls this.recordEvent (e.g., type
'planetary-nebula' or 'end-of-red-giant') with the same details payload and set
star.status = 'white-dwarf' and update star.rgbPhase (or equivalent) to the
post-horizontal-branch phase; place this branch alongside the existing
helium-flash and core-collapse checks so stars in the 2.5–8.0 M☉ range
transition correctly instead of remaining 'active'.
```ts
private calculateLuminosity(mass: number, radius: number, temperature: number): number {
  // In solar units
  const radiusMeters = radius * this.SOLAR_RADIUS;
  const solarRadiusMeters = this.SOLAR_RADIUS;

  const luminosity = Math.pow(radius / 1.0, 2) * Math.pow(temperature / this.SOLAR_TEMP, 4);

  // Apply main sequence mass-luminosity relation correction
  const massLuminosityRelation = Math.pow(mass, 3.5);

  return luminosity;
}
```
Dead code: unused variables in `calculateLuminosity`.
`radiusMeters`, `solarRadiusMeters`, and `massLuminosityRelation` are computed but never used. The `mass` parameter is also unused in the final return value.
Proposed cleanup:

```diff
-private calculateLuminosity(mass: number, radius: number, temperature: number): number {
-  // In solar units
-  const radiusMeters = radius * this.SOLAR_RADIUS;
-  const solarRadiusMeters = this.SOLAR_RADIUS;
-
-  const luminosity = Math.pow(radius / 1.0, 2) * Math.pow(temperature / this.SOLAR_TEMP, 4);
-
-  // Apply main sequence mass-luminosity relation correction
-  const massLuminosityRelation = Math.pow(mass, 3.5);
-
-  return luminosity;
-}
+private calculateLuminosity(_mass: number, radius: number, temperature: number): number {
+  // L ∝ R² T⁴ (Stefan-Boltzmann in solar units)
+  return Math.pow(radius, 2) * Math.pow(temperature / this.SOLAR_TEMP, 4);
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
private calculateLuminosity(_mass: number, radius: number, temperature: number): number {
  // L ∝ R² T⁴ (Stefan-Boltzmann in solar units)
  return Math.pow(radius, 2) * Math.pow(temperature / this.SOLAR_TEMP, 4);
}
```
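As a sanity check, the simplified L ∝ R²T⁴ form can be exercised standalone. The `SOLAR_TEMP` value below (~5778 K) is an assumption, since the service's actual constant is not shown in this excerpt:

```typescript
const SOLAR_TEMP = 5778; // K — assumed solar effective temperature

// Stefan-Boltzmann in solar units: L/L☉ = (R/R☉)² · (T/T☉)⁴
function luminositySolarUnits(radius: number, temperature: number): number {
  return Math.pow(radius, 2) * Math.pow(temperature / SOLAR_TEMP, 4);
}
```

A star with solar radius and temperature yields 1 L☉; a 100 R☉ red giant at half the solar temperature yields 100² · 0.5⁴ = 625 L☉, which matches the qualitative expectation that red giants are far more luminous despite being cooler.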
🤖 Prompt for AI Agents
In @apps/server/src/services/red-giant-service.ts around lines 364 - 375, The
calculateLuminosity function computes radiusMeters, solarRadiusMeters, and
massLuminosityRelation (and accepts mass) but never uses them; either remove
these dead variables and the mass parameter or incorporate them into the
luminosity calculation. Update calculateLuminosity to consistently use solar
units/metrics (e.g., use radiusMeters or normalize radius by solarRadiusMeters
and apply massLuminosityRelation to the base luminosity) or drop mass and unused
constants if the simple L ∝ R^2 T^4 form is intended; ensure the returned
luminosity uses the declared variables
(radiusMeters/solarRadiusMeters/massLuminosityRelation) or remove them and the
mass parameter to eliminate dead code.
```ts
function areKanbanCardPropsEqual(prev: KanbanCardProps, next: KanbanCardProps) {
  try {
    // Compare view props
    if (prev.isCurrentAutoTask !== next.isCurrentAutoTask) return false;
    if (prev.isSelected !== next.isSelected) return false;
    if (prev.isSelectionMode !== next.isSelectionMode) return false;
    if (prev.opacity !== next.opacity) return false;
    if (prev.isOverlay !== next.isOverlay) return false;
    if (prev.hasContext !== next.hasContext) return false;

    // Compare feature identity and status
    const p = prev.feature;
    const n = next.feature;

    // Safety check
    if (!p || !n) return false;

    if (p.id !== n.id) return false;
    if (p.status !== n.status) return false;
    if (p.title !== n.title) return false;
    if (p.category !== n.category) return false;
    if (p.priority !== n.priority) return false;
    if (p.description !== n.description) return false;
    if (p.error !== n.error) return false;

    // Compare steps length (for progress bar)
    if (p.steps?.length !== n.steps?.length) return false;

    // Allow context content updates (summary etc)
    if (prev.contextContent !== next.contextContent) return false;
    if (prev.summary !== next.summary) return false;

    // Ignore logs, heavy metadata, etc.
    return true;
  } catch (e) {
    console.error('KanbanCard props comparison error:', e);
    return false;
  }
}
```
Missing comparisons for props that affect rendering.
The comparator omits several props that influence the rendered output:
| Prop | Used at | Effect if stale |
|---|---|---|
| `glassmorphism` | Lines 144, 172 | backdrop-blur class won't toggle |
| `cardBorderEnabled` | Lines 126, 147 | border styles/classes won't update |
| `cardBorderOpacity` | Line 126 | border opacity won't update |
| `shortcutKey` | Line 226 | CardActions receives stale value |
If any of these props change between renders, the component will incorrectly skip the update.
Suggested fix:

```diff
 function areKanbanCardPropsEqual(prev: KanbanCardProps, next: KanbanCardProps) {
   try {
     // Compare view props
     if (prev.isCurrentAutoTask !== next.isCurrentAutoTask) return false;
     if (prev.isSelected !== next.isSelected) return false;
     if (prev.isSelectionMode !== next.isSelectionMode) return false;
     if (prev.opacity !== next.opacity) return false;
     if (prev.isOverlay !== next.isOverlay) return false;
     if (prev.hasContext !== next.hasContext) return false;
+    if (prev.glassmorphism !== next.glassmorphism) return false;
+    if (prev.cardBorderEnabled !== next.cardBorderEnabled) return false;
+    if (prev.cardBorderOpacity !== next.cardBorderOpacity) return false;
+    if (prev.shortcutKey !== next.shortcutKey) return false;

     // Compare feature identity and status
     const p = prev.feature;
```

🤖 Prompt for AI Agents
In
@apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
around lines 262 - 300, The memo comparator areKanbanCardPropsEqual is missing
comparisons for props that affect rendering—add equality checks for
prev.glassmorphism vs next.glassmorphism, prev.cardBorderEnabled vs
next.cardBorderEnabled, prev.cardBorderOpacity vs next.cardBorderOpacity, and
prev.shortcutKey vs next.shortcutKey so changes to backdrop-blur, border
classes/opacity, and the CardActions shortcut prop correctly trigger re-renders;
update the function to return false when any of those props differ (keeping the
existing try/catch and other comparisons intact).
```ts
// Extend TaskNodeData locally since we can't easily modify the hook file right now
// or assume it's imported from a central type definition that we can edit.
// Wait, the import is: import { TaskNodeData } from '../hooks/use-graph-nodes';
// We should check if we can modify the source.
// For now, let's cast or extend locally if possible, but actually let's just use the imported type.
// If we can't modify the imported type easily, we might get a TS error.
// Let's assume we will modify the hook file too, but for this step let's modify the usage.
// Actually, earlier I tried to modify the interface in this file but it wasn't there?
// Ah, line 20 says: import { TaskNodeData } from '../hooks/use-graph-nodes';
// So the interface is NOT in this file. I need to modify '../hooks/use-graph-nodes.ts' first?
// Or I can modify this file to export an extended type and use it.
// Let's look at the file content again. It imports TaskNodeData.
```
🛠️ Refactor suggestion | 🟠 Major
Remove developer notes/thinking comments.
These comments appear to be stream-of-consciousness developer notes rather than documentation. They don't provide value to readers and should be removed.
Proposed fix:

```diff
-// Extend TaskNodeData locally since we can't easily modify the hook file right now
-// or assume it's imported from a central type definition that we can edit.
-// Wait, the import is: import { TaskNodeData } from '../hooks/use-graph-nodes';
-// We should check if we can modify the source.
-// For now, let's cast or extend locally if possible, but actually let's just use the imported type.
-// If we can't modify the imported type easily, we might get a TS error.
-// Let's assume we will modify the hook file too, but for this step let's modify the usage.
-// Actually, earlier I tried to modify the interface in this file but it wasn't there?
-// Ah, line 20 says: import { TaskNodeData } from '../hooks/use-graph-nodes';
-// So the interface is NOT in this file. I need to modify '../hooks/use-graph-nodes.ts' first?
-// Or I can modify this file to export an extended type and use it.
-// Let's look at the file content again. It imports TaskNodeData.
-
-// Let's modify the imports to include Sparkles for the AI action
```

🤖 Prompt for AI Agents
In @apps/ui/src/components/views/graph-view/components/task-node.tsx around
lines 16 - 28, Remove the stray developer "thinking" comments in task-node.tsx
(the block above the import of TaskNodeData) so the file only contains
purposeful comments and code; specifically delete the multi-line notes that
discuss extending or modifying TaskNodeData and the decision process, leaving
the import statement (import { TaskNodeData } from '../hooks/use-graph-nodes';)
and the component code intact and well-commented only where necessary.
```tsx
<DropdownMenuItem
  className="text-xs cursor-pointer text-purple-500 font-medium"
  onClick={(e) => {
    e.stopPropagation();
    // We need to verify if onExpand exists on data or actions
    // @ts-ignore
    actions.onExpand?.(data.id) ?? data.onExpand?.();
  }}
>
  <Sparkles className="w-3 h-3 mr-2" />
  Smart Expand
</DropdownMenuItem>
```
Remove @ts-ignore and fix the type properly.
The @ts-ignore suppresses a type error that indicates onExpand is not defined on the NodeActionCallbacks interface. The fallback data.onExpand?.() also references a non-existent property on TaskNodeData.
Fix this by adding onExpand to NodeActionCallbacks in use-graph-nodes.ts, then simplify:
Proposed fix:

```diff
 <DropdownMenuItem
   className="text-xs cursor-pointer text-purple-500 font-medium"
   onClick={(e) => {
     e.stopPropagation();
-    // We need to verify if onExpand exists on data or actions
-    // @ts-ignore
-    actions.onExpand?.(data.id) ?? data.onExpand?.();
+    actions.onExpand?.(data.id);
   }}
 >
```

🤖 Prompt for AI Agents
In @apps/ui/src/components/views/graph-view/components/task-node.tsx around
lines 307 - 318, The click handler uses @ts-ignore because onExpand is missing
from NodeActionCallbacks and the fallback to data.onExpand is invalid; add an
onExpand callback to the NodeActionCallbacks interface in use-graph-nodes.ts
(e.g., onExpand?: (id: string) => void), remove the fallback reference to
data.onExpand and the @ts-ignore in task-node.tsx, and update the handler to
simply call actions.onExpand?.(data.id) after stopping propagation.
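A hedged sketch of what the interface change plus simplified handler could look like. The callback names besides `onExpand` are abbreviated from the review discussion; the full `NodeActionCallbacks` interface in use-graph-nodes.ts is not reproduced here:

```typescript
// Assumed partial shape of NodeActionCallbacks after adding onExpand.
interface NodeActionCallbacks {
  onViewLogs?: (id: string) => void;
  onStartTask?: (id: string) => void;
  onExpand?: (id: string) => void; // new: removes the need for @ts-ignore
}

// Illustrative handler: invokes onExpand if present and reports whether it was defined.
function handleSmartExpand(actions: NodeActionCallbacks, id: string): boolean {
  actions.onExpand?.(id);
  return actions.onExpand !== undefined;
}
```

With `onExpand` declared on the interface, the optional-chained call type-checks cleanly and the invalid `data.onExpand` fallback becomes unnecessary.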
```ts
// Apply credentials if provided
const apiKeys = { ...current.apiKeys };
if (credentials) {
  if (credentials.anthropic?.configured) apiKeys.anthropic = credentials.anthropic.masked;
  if (credentials.google?.configured) apiKeys.google = credentials.google.masked;
  if (credentials.openai?.configured) apiKeys.openai = credentials.openai.masked;
  if (credentials.zai?.configured) apiKeys.zai = credentials.zai.masked;
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for how apiKeys is used in the codebase
rg -n 'apiKeys' apps/ui/src/hooks/use-settings-migration.ts -B3 -A3 | head -80Repository: AutoMaker-Org/automaker
Length of output: 1488
🏁 Script executed:
#!/bin/bash
# Check how apiKeys is used elsewhere - is it for display or API calls?
rg -n '\.apiKeys' --type ts -g '!*.test.*' -g '!*.spec.*' | head -100Repository: AutoMaker-Org/automaker
Length of output: 3433
🏁 Script executed:
#!/bin/bash
# Look for where the actual (unmasked) API keys are used for API calls
rg -n 'credentials\.(anthropic|google|openai|zai)' --type ts -g '!*.test.*' -g '!*.spec.*' -A2 | head -80Repository: AutoMaker-Org/automaker
Length of output: 695
🏁 Script executed:
#!/bin/bash
# Check the broader context of use-settings-migration.ts - what's this hook's purpose?
wc -l apps/ui/src/hooks/use-settings-migration.ts && head -50 apps/ui/src/hooks/use-settings-migration.tsRepository: AutoMaker-Org/automaker
Length of output: 2138
🏁 Script executed:
#!/bin/bash
# Look at the broader context of the migration function to understand what `current` is
sed -n '550,610p' apps/ui/src/hooks/use-settings-migration.tsRepository: AutoMaker-Org/automaker
Length of output: 2732
🏁 Script executed:
#!/bin/bash
# Find the function that contains this code block
rg -n 'const apiKeys = \{ \.\.\.current\.apiKeys \}' apps/ui/src/hooks/use-settings-migration.ts -B30 | head -60Repository: AutoMaker-Org/automaker
Length of output: 1205
🏁 Script executed:
#!/bin/bash
# Check if apiKeys are used for actual API calls anywhere
rg -n 'useAppStore.*apiKeys|getState.*apiKeys' apps/ui/src --type ts -A5 | grep -E '(api\.|request|fetch|call)' | head -40Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
#!/bin/bash
# Search for where apiKeys are actually used to make API requests
rg -n 'headers|authorization|Bearer|apiKey' apps/ui/src/lib --type ts -B3 -A3 | head -60Repository: AutoMaker-Org/automaker
Length of output: 4491
🏁 Script executed:
#!/bin/bash
# Check the http-api-client to see how credentials are used
cat -n apps/ui/src/lib/http-api-client.ts | head -100Repository: AutoMaker-Org/automaker
Length of output: 3723
🏁 Script executed:
#!/bin/bash
# Look for where actual keys (not masked) are obtained before making API calls
rg -n 'apiKeys\[' apps/ui/src --type ts -B2 -A2 | head -80Repository: AutoMaker-Org/automaker
Length of output: 49
🏁 Script executed:
#!/bin/bash
# Search for where apiKeys from store are accessed and used
rg -n 'apiKeys' apps/ui/src/lib/http-api-client.ts -B3 -A3Repository: AutoMaker-Org/automaker
Length of output: 324
🏁 Script executed:
#!/bin/bash
# Check what happens during initialization and where actual keys come from
rg -n 'initApiKey|cachedApiKey' apps/ui/src/lib/http-api-client.ts -B2 -A5 | head -80Repository: AutoMaker-Org/automaker
Length of output: 2095
🏁 Script executed:
#!/bin/bash
# Look at the function that calls hydrate with credentials parameter
rg -n 'hydrate.*credentials|credentials.*hydrate' apps/ui/src/hooks/use-settings-migration.ts -B10 -A10Repository: AutoMaker-Org/automaker
Length of output: 1078
🏁 Script executed:
#!/antml/bash
# Search for where the credentials object with masked values is created/passed
rg -n 'credentials.*masked|masked.*credentials' apps/ui/src --type ts -B5 -A2 | head -100Repository: AutoMaker-Org/automaker
Length of output: 1038
🏁 Script executed:
#!/bin/bash
# Check the server-side credentials storage to confirm masked values are intentional
rg -n 'maskKey|masked.*credential' apps/server/src/services/settings-service.ts -B3 -A3 | head -80Repository: AutoMaker-Org/automaker
Length of output: 1497
🏁 Script executed:
#!/bin/bash
# Verify the flow: where actual keys are sent to server vs where masked keys are stored
rg -n 'updateCredentials|syncCredentialsToServer' apps/ui/src --type ts -B5 -A8 | head -120Repository: AutoMaker-Org/automaker
Length of output: 5055
🏁 Script executed:
#!/antml:bash
# Check if there's any code that tries to use apiKeys from store to make actual API calls
rg -n 'getState.*apiKeys|useAppStore.*apiKeys.*\(' apps/ui/src --type ts -A4 | head -100Repository: AutoMaker-Org/automaker
Length of output: 612
🏁 Script executed:
#!/bin/bash
# Check where apiKeys.anthropic/google/openai/zai are actually USED in the code
rg -n 'apiKeys\.(anthropic|google|openai|zai)' apps/ui/src --type ts -B2 -A3 | grep -v 'setApiKeys\|apiKeys =' | head -100Repository: AutoMaker-Org/automaker
Length of output: 9160
🏁 Script executed:
#!/antml:bash
# Double-check: Look for any place where stored apiKeys are passed to actual API calls
rg -n 'updateCredentials.*apiKeys|syncCredentials.*apiKeys' apps/ui/src --type ts -B5 -A5Repository: AutoMaker-Org/automaker
Length of output: 1533
🏁 Script executed:
#!/bin/bash
# Final check: confirm that actual API key usage happens server-side, not from client store
rg -n 'anthropic.*api.*key|openai.*api.*key' apps/server/src --type ts -i | head -50
Length of output: 5475
🏁 Script executed:
#!/bin/bash
# Check if there's any initialization code that shows where real keys come from
grep -r "apiKeys\." apps/ui/src/hooks/use-settings-migration.ts -A20 | head -60
Length of output: 1488
The code stores masked display values ("sk-...****") instead of actual API keys.
Lines 587-590 overwrite apiKeys with masked-only strings from the credentials object. While the actual API calls use environment variables (avoiding immediate breakage), this is problematic because:
- It destructively replaces any actual keys in `current.apiKeys` with unusable display values
- It confuses the purpose of the `apiKeys` store (display vs. configuration)
- The pattern violates the principle that masked values should only be used for UI display
The fix is to either: (1) avoid overwriting apiKeys during hydration if display-only, or (2) keep actual keys separate and use a different store property for masked display values.
🤖 Prompt for AI Agents
In @apps/ui/src/hooks/use-settings-migration.ts around lines 584 - 591, The
hydration currently overwrites real keys in apiKeys by assigning masked strings
from credentials (variables: apiKeys, current.apiKeys, credentials) which
replaces usable config with display-only values; change the logic to preserve
existing real keys in current.apiKeys and never replace them with
credentials.*.masked—either skip setting apiKeys.* when credentials.*.masked
exists or populate a separate display-only object (e.g., maskedApiKeys) for UI
use (keep apiKeys for real config values and add a new masked store for
presentation).
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('no-sandbox');
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Duplicate switch and security implications of sandbox disabling.
- Duplicate line: `disable-gpu-sandbox` is appended twice (lines 49 and 50).
- Security concern: Disabling both `--no-sandbox` and `--disable-gpu-sandbox` removes Chromium's security isolation. While the retrieved learnings indicate Docker is the project's primary security boundary for the server, this is the Electron desktop app running directly on the user's machine, where a compromised renderer process could access system resources.
If this is an intentional trade-off for GPU stability, consider documenting the risk more explicitly and potentially making it opt-in via an environment variable for users who need maximum GPU performance.
Remove duplicate line:

 app.commandLine.appendSwitch('disable-gpu-sandbox');
-app.commandLine.appendSwitch('disable-gpu-sandbox');
 app.commandLine.appendSwitch('no-sandbox');

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('no-sandbox');

After:

app.commandLine.appendSwitch('disable-gpu-sandbox');
app.commandLine.appendSwitch('no-sandbox');
🤖 Prompt for AI Agents
In @apps/ui/src/main.ts around lines 49 - 51, Remove the duplicate call to
app.commandLine.appendSwitch('disable-gpu-sandbox') and stop disabling the
Chromium sandbox by default: leave
app.commandLine.appendSwitch('disable-gpu-sandbox') only once, remove
app.commandLine.appendSwitch('no-sandbox'), and make disabling the renderer
sandbox opt-in via an environment variable (e.g., ELECTRON_DISABLE_SANDBOX)
checked before calling app.commandLine.appendSwitch('no-sandbox'); also add a
concise runtime warning/log explaining the security risk when the env opt-in is
used so users are explicitly informed.
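The opt-in pattern described in the prompt can be sketched as a pure helper that decides which switches to append. `AUTOMAKER_DISABLE_SANDBOX` and `sandboxSwitches` are illustrative names, not existing symbols in the project:

```typescript
// Decide which Chromium switches to append, given the environment.
// AUTOMAKER_DISABLE_SANDBOX is an assumed variable name for this sketch.
function sandboxSwitches(env: Record<string, string | undefined>): string[] {
  const switches = ['disable-gpu-sandbox']; // appended exactly once
  if (env.AUTOMAKER_DISABLE_SANDBOX === '1') {
    console.warn(
      '[security] Chromium sandbox disabled: a compromised renderer can access system resources.'
    );
    switches.push('no-sandbox');
  }
  return switches;
}

// In main.ts:
//   for (const s of sandboxSwitches(process.env)) app.commandLine.appendSwitch(s);
```

Keeping the decision in one function also prevents the duplicate-append bug, since the default list contains each switch exactly once.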
 autoModeByProject: {},
 autoModeActivityLog: [],
-maxConcurrency: 3, // Default to 3 concurrent agents
+maxConcurrency: 24, // Performance Workstation: Match 24-Core CPU
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
🏁 Script executed:
# Examine the file context around line 1181
head -1200 apps/ui/src/store/app-store.ts | tail -50
Length of output: 2086
🏁 Script executed:
# Search for any API rate limit handling or configuration in the codebase
rg -i "rate.*limit|ratelimit" --type ts --type tsx -A 3 -B 3
Length of output: 94
🌐 Web query:
Anthropic Claude API rate limits concurrent requests
💡 Result:
Short answer: Anthropic's Claude API does not publish a single "concurrent requests" cap — it enforces rate limits by requests-per-minute (RPM), input‑tokens/min (ITPM) and output‑tokens/min (OTPM) per model and per organization tier. Standard RPM for Claude models is typically 50 RPM (per model class), and the Message Batches API also imposes limits on batch requests in the processing queue (e.g., up to 100,000 batch requests in the queue). You can view your org's exact limits in the Claude/Anthropic Console or request higher/custom limits from sales. (docs.anthropic.com)
🏁 Script executed:
# Look for other concurrency-related settings or safeguards
rg -i "concurrency|threadpool|uv_threadpool" --type ts --type tsx -A 2 -B 2
Length of output: 94
🏁 Script executed:
# Retry with correct file types - ts and js only
rg -i "rate.*limit|ratelimit" --type ts --type js -A 3 -B 3
Length of output: 50379
🏁 Script executed:
# Search for concurrency-related settings or safeguards
rg -i "concurrency|threadpool|uv_threadpool" --type ts --type js -A 2 -B 2
Length of output: 50379
🏁 Script executed:
# Look for any adaptive concurrency or backoff mechanisms
rg -i "backoff|throttle|queue|concurrent" --type ts --type js | head -50
Length of output: 4535
🏁 Script executed:
# Check if there's documentation or comments explaining the 24 choice
rg -i "24.*core|24.*cpu|workstation" --type ts --type js -A 3 -B 3
Length of output: 632
Concurrency default increase is partially mitigated by UI constraints, but consider the 24→10 mismatch.
The concern about overwhelming lower-spec systems has merit, but several safeguards limit real-world impact:
- The UI slider enforces a maximum of 10 concurrent tasks (users cannot exceed this via UI)
- Rate limit handling is already implemented (detects 429 errors, pauses auto-mode, extracts retry delays)
- Backend includes built-in delays between API calls (100ms minimum for Z.AI)
However, note a mismatch: the store default is 24 but the slider's max is 10. Users will experience the slider constraint in practice, so the 24 default is largely a store-level artifact rather than a user-facing default.
The API rate limit concern is slightly overstated—Anthropic and other providers rate-limit by requests-per-minute (RPM) and token throughput, not by concurrent requests. Higher concurrency does increase burst rates, so the concern is valid but not as straightforward as presented.
🤖 Prompt for AI Agents
In @apps/ui/src/store/app-store.ts at line 1181, The store sets maxConcurrency:
24 while the UI enforces a 10-task ceiling, causing a mismatch; update the store
default to match the UI by changing the maxConcurrency value used in the app
store (symbol: maxConcurrency in app-store.ts / the app-store default config)
from 24 to 10, or alternatively adjust the UI slider max to 24—pick one
consistent change and ensure any related docs or comments are updated to reflect
the chosen default so store and UI limits remain aligned.
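One way to keep the store default and the slider ceiling aligned is to clamp every persisted or default value against a single shared constant. `MAX_CONCURRENCY_UI` and `clampConcurrency` are assumed names for this sketch, not the project's actual symbols:

```typescript
// Single source of truth for the concurrency ceiling (assumed constant name).
// Both the store default and the slider's max should reference this value.
const MAX_CONCURRENCY_UI = 10;

// Clamp any persisted or default value into the range the slider allows,
// so a stale store value of 24 can never exceed what the UI permits.
function clampConcurrency(value: number): number {
  return Math.min(Math.max(1, Math.floor(value)), MAX_CONCURRENCY_UI);
}
```

Running the stored default through the clamp on hydration would make the 24-vs-10 mismatch impossible regardless of which default is chosen.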
enhancementModel: 'GLM-4.7', // Default to GLM-4.7 for feature enhancement
validationModel: 'GLM-4.7', // Default to GLM-4.7 for GitHub issue validation
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Verify 'GLM-4.7' is compatible with ModelAlias type for enhancement/validation models.
Both enhancementModel and validationModel are typed as ModelAlias (lines 556-559). If ModelAlias doesn't include 'GLM-4.7', this will cause TypeScript errors or runtime issues when these values are passed to functions expecting standard aliases.
Additionally, switching both defaults to Z.AI means users without a Z.AI API key configured will encounter errors when using enhancement or validation features out-of-box.
Consider keeping Claude model as fallback default
- enhancementModel: 'GLM-4.7', // Default to GLM-4.7 for feature enhancement
- validationModel: 'GLM-4.7', // Default to GLM-4.7 for GitHub issue validation
+ enhancementModel: 'sonnet', // Default to Claude Sonnet for feature enhancement
+ validationModel: 'sonnet', // Default to Claude Sonnet for GitHub issue validation

Or ensure proper fallback logic exists when Z.AI API key is not configured.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

enhancementModel: 'GLM-4.7', // Default to GLM-4.7 for feature enhancement
validationModel: 'GLM-4.7', // Default to GLM-4.7 for GitHub issue validation

After:

enhancementModel: 'sonnet', // Default to Claude Sonnet for feature enhancement
validationModel: 'sonnet', // Default to Claude Sonnet for GitHub issue validation
🤖 Prompt for AI Agents
In @apps/ui/src/store/app-store.ts around lines 1193 - 1194, The defaults
enhancementModel and validationModel are set to 'GLM-4.7' but may not be a valid
ModelAlias and will break for users without a Z.AI key; either add 'GLM-4.7' to
the ModelAlias union/type definition or change the defaults to a known
ModelAlias (e.g., a Claude alias) and implement runtime fallback logic in the
code paths that use enhancementModel/validationModel (check for a configured ZAI
API key before selecting a ZAI model and fall back to the Claude alias) so
TypeScript and runtime behavior remain safe; update references to
enhancementModel, validationModel, and the ModelAlias type accordingly.
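The runtime fallback described in the prompt can be sketched as a small selector. The `ModelAlias` union here is an assumption about the project's type, and `pickEnhancementModel` is an illustrative helper name:

```typescript
// Assumed alias union; the project's actual ModelAlias type may differ.
type ModelAlias = 'haiku' | 'sonnet' | 'opus' | 'GLM-4.7';

// Fall back to a Claude alias when a Z.AI model is preferred but no key is set,
// so enhancement/validation features work out of the box.
function pickEnhancementModel(preferred: ModelAlias, zaiApiKey?: string): ModelAlias {
  const isZaiModel = preferred.toLowerCase().startsWith('glm');
  if (isZaiModel && !zaiApiKey) {
    return 'sonnet'; // safe default without a Z.AI key
  }
  return preferred;
}
```

The same check would run in every code path that reads `enhancementModel` or `validationModel`, rather than only at store-default time.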
private resolveModel(model?: string): string {
  if (!model) {
    return this.defaultModel;
  }

  // If it's already a Z.AI model, use it
  if (model in ZAI_MODELS) {
    return model;
  }

  // Removed legacy mapping (sonnet -> plus) to respect Provider separation.
  // The backend/UI should send the correct Z.AI model ID (e.g. glm-4.7)
  // or rely on env defaults.

  // Default to defaultModel
  return this.defaultModel;
}
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion | 🟠 Major
Use resolveModelString() from @automaker/model-resolver for model alias resolution.
Per coding guidelines, model aliases (haiku, sonnet, opus) should be converted using the shared resolveModelString() utility. The current implementation silently falls back to the default model for unrecognized aliases instead of resolving them properly.
♻️ Suggested approach
+import { resolveModelString } from '@automaker/model-resolver';
+
// ...
private resolveModel(model?: string): string {
if (!model) {
return this.defaultModel;
}
// If it's already a Z.AI model, use it
if (model in ZAI_MODELS) {
return model;
}
- // Removed legacy mapping (sonnet -> plus) to respect Provider separation.
- // The backend/UI should send the correct Z.AI model ID (e.g. glm-4.7)
- // or rely on env defaults.
-
- // Default to defaultModel
- return this.defaultModel;
+ // Resolve model aliases using the shared resolver
+ const resolved = resolveModelString(model);
+ if (resolved in ZAI_MODELS) {
+ return resolved;
+ }
+
+ // Fall back to default if resolution doesn't yield a valid Z.AI model
+ return this.defaultModel;
}

As per coding guidelines, use resolveModelString() from @automaker/model-resolver.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

private resolveModel(model?: string): string {
  if (!model) {
    return this.defaultModel;
  }
  // If it's already a Z.AI model, use it
  if (model in ZAI_MODELS) {
    return model;
  }
  // Removed legacy mapping (sonnet -> plus) to respect Provider separation.
  // The backend/UI should send the correct Z.AI model ID (e.g. glm-4.7)
  // or rely on env defaults.
  // Default to defaultModel
  return this.defaultModel;
}

After:

import { resolveModelString } from '@automaker/model-resolver';

// ... existing code ...
private resolveModel(model?: string): string {
  if (!model) {
    return this.defaultModel;
  }
  // If it's already a Z.AI model, use it
  if (model in ZAI_MODELS) {
    return model;
  }
  // Resolve model aliases using the shared resolver
  const resolved = resolveModelString(model);
  if (resolved in ZAI_MODELS) {
    return resolved;
  }
  // Fall back to default if resolution doesn't yield a valid Z.AI model
  return this.defaultModel;
}
🤖 Prompt for AI Agents
In @libs/provider-zai/src/zai-query-service.ts around lines 209 - 225, The
resolveModel method currently ignores known aliases and falls back to
this.defaultModel; update it to import and call resolveModelString() from
@automaker/model-resolver to map aliases (e.g., haiku, sonnet, opus) to actual
Z.AI model IDs, then if the resolved string exists in ZAI_MODELS return it,
otherwise fall back to this.defaultModel; ensure the logic stays inside the
private resolveModel(model?: string): string function and replace the "model in
ZAI_MODELS" check with a check against the resolved value returned by
resolveModelString().
Unlocks concurrency limits (from 4 to 24+ via UV_THREADPOOL_SIZE fix) and enables 24GB VRAM usage via aggressive GPU flags. Also fixes a critical server crash in index.ts.
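Since libuv reads UV_THREADPOOL_SIZE only once, when it first creates its thread pool (i.e., on the first fs/dns/crypto call), the variable must be set at the very top of the server entry point. A hedged sketch of that fix, with `resolveThreadpoolSize` as an illustrative helper and the 24-core figure mirroring this PR's target machine:

```typescript
// Returns the thread-pool size to use: respect an explicit env override,
// otherwise scale up from libuv's default of 4 to the CPU count.
function resolveThreadpoolSize(
  env: Record<string, string | undefined>,
  cpuCount: number
): string {
  return env.UV_THREADPOOL_SIZE ?? String(Math.max(4, cpuCount));
}

// At the very top of the entry point, before any module triggers libuv work:
//   process.env.UV_THREADPOOL_SIZE = resolveThreadpoolSize(process.env, os.cpus().length);
```

Setting it later (or after importing modules that do I/O at load time) silently leaves the pool at the default of 4.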
Summary by CodeRabbit
Release Notes
New Features
Enhancements
Backend Architecture
✏️ Tip: You can customize this high-level summary in your review settings.