
Conversation


@NomadicDaddy NomadicDaddy commented Dec 23, 2025

Adds a configurable pipeline system that lets users define custom verification workflows which run automatically after feature completion. Users can create pipelines with multiple verification steps, configure each step with specific criteria, and manage execution order with dependencies; a sketch of what such a configuration might look like follows the feature list below.

Key Features

  • 5 Built-in Step Types: Review, Security, Performance, Test, and Custom
  • Iteration & Memory: Loops until success with memory to avoid repeating feedback
  • Flexible Model Selection: Per-step model choice (opus, sonnet, haiku)
  • Dependency Management: Steps can depend on other steps with circular dependency detection
  • Visual Pipeline Builder: Drag-and-drop reordering with flow visualization
  • Import/Export: Share pipeline configurations across projects
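
For illustration, a project-level configuration for such a pipeline might look roughly like the sketch below. The field names are inferred from the feature list above and the review discussion in this PR; the authoritative schema lives in libs/types/src/pipeline.ts and the examples under docs/pipeline/examples/.

// Illustrative sketch only: field names are inferred from this PR, not the authoritative schema.
const examplePipelineConfig = {
  version: '1.0',
  enabled: true,
  steps: [
    {
      id: 'code-review',
      type: 'review',              // review | security | performance | test | custom
      model: 'sonnet',             // per-step model choice: opus | sonnet | haiku
      required: true,
      dependencies: [],
      config: { maxIssues: 5, excludePatterns: ['*.spec.ts'] },
    },
    {
      id: 'security-scan',
      type: 'security',
      model: 'opus',
      required: false,
      dependencies: ['code-review'], // runs only after code-review completes
      config: { checklist: ['vulnerabilities', 'secrets'], minSeverity: 'high' },
    },
    {
      id: 'tests',
      type: 'test',
      model: 'haiku',
      required: true,
      dependencies: ['code-review'],
      config: { coverageThreshold: 80 },
    },
  ],
};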

Summary by CodeRabbit

  • New Features

    • Configurable automated pipelines (Review, Security, Performance, Test, Custom) with memory, looping, dependencies, retries, persistent step results and execution metrics.
  • Integrations / API

    • CodeRabbit integration with API-key verification and AI fallback; new pipeline HTTP & WebSocket endpoints for config, execution, skipping, resetting and validation.
  • UI

    • Visual pipeline builder and settings, Kanban pipeline columns, per-step skip/retry/clear actions, and CodeRabbit key management.
  • Documentation

    • Full pipeline docs, examples, and troubleshooting.
  • Tests

    • Extensive unit, integration and end-to-end pipeline and UI tests.



coderabbitai bot commented Dec 23, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a full pipeline subsystem: types/schema, server services (config, storage, memory, prompt builder, executor, queue), CodeRabbit integration, REST/WebSocket routes, UI components/hooks for pipeline settings and Kanban integration, comprehensive tests, docs, and example configs.

Changes

  • Types & Public API
    Files: libs/types/src/pipeline.ts, libs/types/src/index.ts, libs/types/src/feature.ts, libs/types/src/settings.ts
    Summary: New pipeline types, JSON schema, DEFAULT_PIPELINE_CONFIG, validatePipelineConfig export; Feature extended with pipelineSteps and currentPipelineStep; settings add coderabbit API key and pipeline in ProjectSettings.
  • Server: Pipeline Core & Integrations
    Files: apps/server/src/services/pipeline-config-service.ts, apps/server/src/services/pipeline-storage.ts, apps/server/src/services/pipeline-memory.ts, apps/server/src/services/pipeline-prompt-builder.ts, apps/server/src/services/pipeline-step-executor.ts, apps/server/src/integrations/coderabbit.ts
    Summary: New services for loading/validating/saving pipeline config, persistent step results, in-memory iteration memory, centralized prompt building, step execution orchestration, and CodeRabbit integration with AI fallback.
  • Server: Queue, AutoMode & Feature Loader
    Files: apps/server/src/services/auto-mode-service.ts, apps/server/src/services/feature-loader.ts
    Summary: AutoModeService extended with pipeline queue/processing, AIStep orchestration, skip/clear APIs, metrics and per-project storage; FeatureLoader adds pipeline-config caching/validation and image handling helpers.
  • Server: Routes / API
    Files: apps/server/src/routes/pipeline/*, apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts, apps/server/src/index.ts
    Summary: New /api/pipeline route wiring and sub-routes (config, execute-step, skip-step, reset-pipeline, validate-config), plus CodeRabbit auth verification endpoint and route registration in server index.
  • Server Tests
    Files: apps/server/src/__tests__/pipeline/*
    Summary: Extensive unit, integration and E2E tests covering config loading/validation, executor behaviors, full pipeline flows, WebSocket event streaming and API endpoints.
  • UI: Settings, Builder & Step Configs
    Files: apps/ui/src/components/views/settings-view/pipeline-section.tsx, apps/ui/src/components/views/settings-view/pipeline-builder.tsx, apps/ui/src/components/views/settings-view/step-configs/*, apps/ui/src/components/views/settings-view/hooks/*
    Summary: PipelineSection and visual PipelineBuilder components, per-step configuration UIs (review/security/performance/test/custom), hooks for loading/saving pipeline config and skip/retry/clear actions; import/export and validation UI.
  • UI: Board/Kanban Integration
    Files: apps/ui/src/components/views/board-view/*, apps/ui/src/components/views/board-view/hooks/*, apps/ui/src/components/views/board-view/constants.ts
    Summary: Dynamic Kanban columns generated from pipelineConfig; KanbanBoard/KanbanCard/CardActions extended with pipeline callbacks (skip/retry/clear); hooks adapted to string column IDs and pipeline-aware grouping.
  • UI: Client API & Electron
    Files: apps/ui/src/lib/http-api-client.ts, apps/ui/src/lib/electron.ts
    Summary: HttpApiClient adds pipeline namespace and verifyCodeRabbitAuth; Electron API surface extended with pipeline methods and CodeRabbit auth verification (including mock).
  • UI: Components, Styles & Test infra
    Files: apps/ui/src/components/ui/alert.tsx, apps/ui/src/components/ui/separator.tsx, apps/ui/src/styles/global.css, apps/ui/package.json, apps/ui/vitest.config.ts, apps/ui/src/test/setup.ts
    Summary: New Alert and Separator UI components, pipeline color tokens and theme mappings, Vitest/testing-library setup and scripts, test setup file.
  • UI Tests & E2E
    Files: apps/ui/src/components/views/settings-view/*.{test,tsx}, apps/ui/tests/e2e/pipeline-builder.spec.ts
    Summary: Unit tests for PipelineSection/Builder and Playwright E2E for builder flows (drag/drop, import/export, step config).
  • Docs & Examples
    Files: docs/pipeline/**, docs/pipeline/examples/*
    Summary: Pipeline docs (readme, step-types, troubleshooting), adding-step guide, and many example JSON configs demonstrating workflows and CodeRabbit integrations.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    actor UI as Client / HttpApiClient
    participant Server as AutoModeService
    participant Executor as PipelineStepExecutor
    participant Storage as PipelineStorage
    participant CodeRabbit as CodeRabbit API

    UI->>Server: POST /api/pipeline/execute-step (projectPath, featureId, stepId)
    Server->>Executor: executeStep(feature, stepConfig, signal)
    activate Executor
    Executor->>Server: getStepModel(feature, stepConfig)
    Server-->>Executor: model
    alt coderabbitEnabled
        Executor->>CodeRabbit: submitReview(diff, feature, stepConfig)
        CodeRabbit-->>Executor: review response
    else aiPath
        Executor->>Server: executeAIStep(prompt, model)
        Server-->>Executor: ai output (streaming)
    end
    Executor->>Storage: saveStepResult(projectPath, featureId, stepId, result)
    Executor-->>Server: completion event / result
    deactivate Executor
    Server-->>UI: 200 { result }
sequenceDiagram
    autonumber
    participant Queue as PipelineQueue
    participant Server as AutoModeService
    participant FeatureStore as FeatureLoader
    participant Executor as PipelineStepExecutor

    Queue->>Server: processPipelineQueue()
    Server->>FeatureStore: loadFeature(featureId)
    FeatureStore-->>Server: feature
    loop for each step (ordered + deps)
        Server->>Executor: executeStep(feature, stepConfig, signal)
        Executor-->>Server: stepResult
        Server->>FeatureStore: updateFeatureStatus(..., pipelineState)
    end
    Server-->>Queue: processing complete / metrics emitted
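
Reading the first diagram as a plain HTTP interaction, a client-side call might look roughly like the following sketch; the endpoint path and request parameters are taken from the diagram, while the response typing beyond the 200 { result } shown is an assumption.

// Sketch only: endpoint and parameters from the sequence diagram above; response shape assumed.
async function executePipelineStep(
  baseUrl: string,
  projectPath: string,
  featureId: string,
  stepId: string
): Promise<{ success: boolean; result?: unknown; error?: string }> {
  const res = await fetch(`${baseUrl}/api/pipeline/execute-step`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, featureId, stepId }),
  });
  return res.json();
}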

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~150 minutes

Suggested labels

Tests

Poem

🐇 I hopped through steps in tidy rows,
New columns sprung where pipeline flows.
Reviews and tests with carrot-bright light,
Memory stores whispers through looping night.
CodeRabbit hums — the workflow grows.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 34.38%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title 'Configurable Pipelines' directly and specifically describes the main change: introducing a configurable pipeline system with custom verification workflows.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello @NomadicDaddy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a robust configurable pipeline system, empowering users to automate and customize their feature verification processes. It integrates various AI-driven analysis types—review, security, performance, and testing—alongside custom steps that can leverage external tools like CodeRabbit. The system is designed for flexibility, incorporating features like iterative execution, memory retention, and dependency management, all managed through a new set of API endpoints and an intuitive visual interface.

Highlights

  • Configurable Pipeline System: Introduces a new configurable pipeline system that allows users to define custom verification workflows to run automatically after feature completion.
  • Diverse Step Types: Includes 5 built-in step types: Review, Security, Performance, Test, and Custom, each with specific configurations to tailor analysis.
  • Advanced Step Features: Steps can now support iteration (looping until success), memory (to avoid repeating feedback), flexible AI model selection per step, and dependency management with circular dependency detection.
  • CodeRabbit Integration: Custom pipeline steps can integrate with CodeRabbit for automated code reviews, offering rules, severity filtering, and an AI fallback mechanism.
  • Comprehensive API & UI: New API endpoints are added for pipeline configuration, step execution, skipping, resetting, and validation. The UI features a visual drag-and-drop pipeline builder, dynamic Kanban board columns reflecting pipeline steps, and detailed configuration panels for each step.
  • Extensive Testing & Documentation: The new system is backed by thorough unit and end-to-end tests, and comprehensive documentation has been added covering configuration, step types, examples, and troubleshooting.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive pipeline system to AutoMaker, enabling configurable, automated verification workflows for features. Key changes include new API endpoints for managing pipeline configurations, executing, skipping, and resetting steps, along with E2E tests for these functionalities. The backend now features dedicated services for pipeline configuration, step execution, memory management, and prompt building, supporting various step types like review, security, performance, test, and custom steps. A new integration with CodeRabbit is added for automated code reviews, including a fallback to AI if CodeRabbit fails. The AutoModeService is enhanced to orchestrate pipeline execution and queue management, while the FeatureLoader is updated to handle pipeline step persistence.

On the UI side, the Kanban board dynamically generates columns based on pipeline configuration, displays step statuses, and provides actions to skip, retry, or clear pipeline steps. A new 'Pipeline' section is added to the settings, featuring a visual pipeline builder with drag-and-drop reordering, individual step configuration modals, and import/export capabilities. CodeRabbit API key management is also integrated into the settings.

Review comments highlight several areas for improvement: ensuring the CodeRabbitIntegration class uses the correct projectPath instead of process.cwd() and dynamically retrieves the Git repository URL; correcting a flawed circular dependency check in PipelineConfigService; implementing proper validation to prevent skipping required pipeline steps with a 400 Bad Request status; and fixing two emoji typos in AutoModeService from ?? to 📎 and 🔧 respectively.

Comment on lines 50 to 52
constructor(apiKey: string) {
  this.apiKey = apiKey;
}

critical

The CodeRabbitIntegration class should be aware of the project path it's operating on to avoid issues with process.cwd(). Pass the projectPath into the constructor and store it as a private member. This makes the integration more robust and testable, ensuring that operations like git diff are executed in the correct directory.

  private projectPath: string;

  constructor(apiKey: string, projectPath: string) {
    this.apiKey = apiKey;
    this.projectPath = projectPath;
  }

Comment on lines 274 to 277
private getProjectPath(feature: Feature): string {
  // This would extract the project path from feature context
  return process.cwd(); // Placeholder
}

critical

Using process.cwd() as a placeholder for the project path is dangerous and can lead to incorrect behavior, especially in a server environment where the current working directory is not guaranteed to be the project's root. This should use the projectPath stored in the class instance, which should be passed in via the constructor.

  private getProjectPath(feature: Feature): string {
    // This would extract the project path from feature context
    return this.projectPath;
  }

Comment on lines 262 to 269
private getGitRepoUrl(): string {
  try {
    // This would get the remote origin URL
    return 'https://github.com/example/repo'; // Placeholder
  } catch {
    return '';
  }
}

high

The git repository URL is hardcoded. This should be dynamically determined by executing git config --get remote.origin.url within the project directory. This will make the integration work for any git repository.
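
A minimal sketch of that dynamic lookup, assuming Node's child_process is acceptable in this context (the helper name and signature are illustrative, not the PR's actual code):

import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Hypothetical helper: resolve the remote origin URL for a given project directory.
async function getGitRepoUrl(projectPath: string): Promise<string> {
  try {
    const { stdout } = await execFileAsync('git', ['config', '--get', 'remote.origin.url'], {
      cwd: projectPath,
    });
    return stdout.trim();
  } catch {
    return ''; // no remote configured, or not a git repository
  }
}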

Comment on lines 153 to 178
const checkCircular = (
  stepId: string,
  deps: string[],
  visited: Set<string>,
  path: string[]
): boolean => {
  if (visited.has(stepId)) {
    errors.push(
      `step ${index}: circular dependency detected: ${path.join(' -> ')} -> ${stepId}`
    );
    return true;
  }
  visited.add(stepId);

  for (const dep of deps) {
    const depStep = config.steps.find((s: any) => s.id === dep);
    if (depStep && depStep.dependencies) {
      if (
        checkCircular(dep, depStep.dependencies, new Set(visited), [...path, stepId])
      ) {
        return true;
      }
    }
  }
  return false;
};

high

The circular dependency check is flawed. It re-initializes the visited set for each step in the main loop and creates a new Set in each recursive call, which can lead to incorrect cycle detection in complex dependency graphs. A more robust approach is to perform a single traversal of the dependency graph, keeping track of visited nodes and the current recursion stack to detect back edges, which indicate a cycle.
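
A minimal sketch of the single-traversal approach described above, tracking the current recursion stack to detect back edges (the step shape is simplified; the real types live in libs/types/src/pipeline.ts):

interface StepNode {
  id: string;
  dependencies?: string[];
}

// Returns the first cycle found as a path of step IDs, or null if the graph is acyclic.
function findCycle(steps: StepNode[]): string[] | null {
  const byId = new Map(steps.map((s) => [s.id, s]));
  const visited = new Set<string>(); // fully explored nodes
  const onStack = new Set<string>(); // nodes on the current DFS path

  const visit = (id: string, path: string[]): string[] | null => {
    if (onStack.has(id)) return [...path, id]; // back edge, so a cycle exists
    if (visited.has(id)) return null;
    visited.add(id);
    onStack.add(id);
    for (const dep of byId.get(id)?.dependencies ?? []) {
      const cycle = visit(dep, [...path, id]);
      if (cycle) return cycle;
    }
    onStack.delete(id);
    return null;
  };

  for (const step of steps) {
    const cycle = visit(step.id, []);
    if (cycle) return cycle;
  }
  return null;
}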

Comment on lines +1759 to +1762
const feature = await this.loadFeature(projectPath, featureId);
if (!feature || !feature.pipelineSteps) {
  return;
}

high

This function allows skipping any pipeline step. However, required steps should not be skippable. You should load the pipeline configuration, find the step, and throw an error if step.required is true. This will be caught by the route handler and return a 400 error as expected by the E2E test.

    const feature = await this.loadFeature(projectPath, featureId);
    if (!feature) {
      return;
    }

    const pipelineConfig = await this.loadPipelineConfig(projectPath);
    const stepConfig = pipelineConfig?.steps.find((s) => s.id === stepId);

    if (stepConfig?.required) {
      throw new Error('Cannot skip required step');
    }

    if (!feature.pipelineSteps) {
      feature.pipelineSteps = {};
    }

Comment on lines 58 to 62
console.error('[Pipeline Skip Step] Error:', error);
res.status(500).json({
  success: false,
  error: error instanceof Error ? error.message : 'Failed to skip pipeline step',
});

medium

The error handling for skipping a required step should return a 400 Bad Request status code, not a 500 Internal Server Error. The test 'should not skip required steps' in pipeline-api-e2e.test.ts correctly expects a 400 status. You should check for the specific error message and return the appropriate status code.

    if (error instanceof Error && error.message.includes('Cannot skip required step')) {
      return res.status(400).json({ success: false, error: error.message });
    }
    console.error('[Pipeline Skip Step] Error:', error);
    res.status(500).json({
      success: false,
      error: error instanceof Error ? error.message : 'Failed to skip pipeline step',
    });


  prompt += `
- **📎 Context Images Attached:**
+ **?? Context Images Attached:**

medium

This appears to be a typo. It should probably be 📎 (paperclip emoji) to indicate an attachment, not ??.

**📎 Context Images Attached:**

  responseText += '\n';
  }
- responseText += `\n🔧 Tool: ${block.name}\n`;
+ responseText += `\n?? Tool: ${block.name}\n`;

medium

This appears to be a typo. It should probably be 🔧 (wrench emoji) to indicate a tool, not ??.

            responseText += `\n🔧 Tool: ${block.name}\n`;

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
apps/ui/src/store/app-store.ts (1)

897-901: Missing coderabbit key initialization.

The ApiKeys interface was extended with coderabbit: string at line 56, but the initialState.apiKeys object doesn't include it. This means coderabbit will be undefined initially rather than an empty string, which could cause issues when checking for key presence.

🔎 Proposed fix
   apiKeys: {
     anthropic: '',
     google: '',
     openai: '',
+    coderabbit: '',
   },
apps/ui/src/components/views/board-view/hooks/use-board-column-features.ts (1)

77-80: Calling useAppStore.getState() inside useMemo is a React anti-pattern.

Accessing store state via getState() inside useMemo bypasses React's reactivity system. If isPrimaryWorktreeBranch logic depends on state that changes, the memoized value won't update. This could lead to stale data being displayed.

🔎 Suggested fix

Extract the function reference outside the memo, or pass it as a dependency:

+  const isPrimaryWorktreeBranch = useAppStore((state) => state.isPrimaryWorktreeBranch);
+
   const columnFeaturesMap = useMemo(() => {
     // ...
     filteredFeatures.forEach((f) => {
       // ...
       } else if (effectiveBranch === null) {
-        matchesWorktree = projectPath
-          ? useAppStore.getState().isPrimaryWorktreeBranch(projectPath, featureBranch)
-          : false;
+        matchesWorktree = projectPath
+          ? isPrimaryWorktreeBranch(projectPath, featureBranch)
+          : false;
       }
       // ...
     });
-  }, [features, runningAutoTasks, searchQuery, currentWorktreePath, currentWorktreeBranch, projectPath, pipelineConfig]);
+  }, [features, runningAutoTasks, searchQuery, currentWorktreePath, currentWorktreeBranch, projectPath, pipelineConfig, isPrimaryWorktreeBranch]);
🟠 Major comments (21)
apps/ui/src/components/ui/alert.tsx-41-47 (1)

41-47: Fix ref type to match the rendered element.

The ref is typed as HTMLParagraphElement, but the component renders a div element (which is HTMLDivElement). This type mismatch can cause TypeScript errors when consumers try to access div-specific properties or methods on the ref.

🔎 Proposed fix
 const AlertDescription = React.forwardRef<
-  HTMLParagraphElement,
+  HTMLDivElement,
   React.HTMLAttributes<HTMLParagraphElement>
 >(({ className, ...props }, ref) => (
   <div ref={ref} className={cn('text-sm [&_p]:leading-relaxed', className)} {...props} />
 ));
apps/ui/src/components/ui/alert.tsx-30-39 (1)

30-39: Fix ref type to match the rendered element.

The ref is typed as HTMLParagraphElement, but the component renders an h5 element (which is HTMLHeadingElement). This type mismatch can cause TypeScript errors when consumers try to access heading-specific properties or methods on the ref.

🔎 Proposed fix
-const AlertTitle = React.forwardRef<HTMLParagraphElement, React.HTMLAttributes<HTMLHeadingElement>>(
+const AlertTitle = React.forwardRef<HTMLHeadingElement, React.HTMLAttributes<HTMLHeadingElement>>(
   ({ className, ...props }, ref) => (
     <h5
       ref={ref}
       className={cn('mb-1 font-medium leading-none tracking-tight', className)}
       {...props}
     />
   )
 );
apps/ui/src/components/ui/separator.tsx-14-15 (1)

14-15: Remove aria-orientation when decorative=true.

The aria-orientation attribute should only be present when role="separator". When decorative is true and role="none", the element is removed from the accessibility tree, making aria-orientation both invalid and potentially confusing to assistive technologies.

🔎 Proposed fix
     <div
       ref={ref}
       role={decorative ? 'none' : 'separator'}
-      aria-orientation={orientation}
+      {...(!decorative && { 'aria-orientation': orientation })}
       className={cn(
         'shrink-0 bg-border',
         orientation === 'horizontal' ? 'h-[1px] w-full' : 'h-full w-[1px]',
         className
       )}
       {...props}
     />
docs/pipeline/examples/test-focused.json-13-16 (1)

13-16: Configuration fields don't match documented schemas.

Several configuration fields in this example don't align with the schemas documented in docs/pipeline/step-types.md:

  1. Line 14: Uses "coverage": 90 but TestStepConfig (lines 226-232) defines coverageThreshold (not coverage)
  2. Line 28: Uses "includePatterns" but ReviewStepConfig (lines 16-21) only defines excludePatterns (no include option)

Either update this example to match the documented schemas, or update the type definitions if these fields are intentional extensions.

🔎 Suggested fix to align with documented schemas
       "config": {
-        "coverage": 90,
+        "coverageThreshold": 90,
         "excludePatterns": ["*.spec.ts", "test/integration/"]
       }
       "config": {
         "focus": ["testing", "coverage"],
         "maxIssues": 3,
-        "includePatterns": ["*.test.ts", "*.spec.ts"]
+        "excludePatterns": ["!*.test.ts", "!*.spec.ts"]
       }

Note: If includePatterns is a valid feature, update the documentation in step-types.md to reflect it.

Also applies to: 25-29

docs/pipeline/examples/security-focused.json-13-17 (1)

13-17: Fix example configuration to match SecurityStepConfig schema.

The example uses incorrect field names. The SecurityStepConfig interface requires:

  • checklist (not focus)
  • minSeverity (not severity)
  • excludeTests (not excludePatterns, and optional)
  • checkDependencies (optional)

Update lines 13-17 in docs/pipeline/examples/security-focused.json to use the correct field names:

"config": {
  "checklist": ["vulnerabilities", "secrets", "authentication"],
  "minSeverity": "high",
  "excludeTests": true
}
apps/ui/src/components/views/board-view/hooks/use-board-actions.ts-862-965 (1)

862-965: Add pipeline property to the ElectronAPI interface.

The three handlers reference api.pipeline.skipStep, api.pipeline.executeStep, and api.pipeline.resetPipeline, but the pipeline property is not declared in the ElectronAPI interface at apps/ui/src/lib/electron.ts. The methods are implemented in HttpApiClient (lines 613-660 in http-api-client.ts), but the interface definition must be updated to include the pipeline property with its method signatures:

pipeline?: {
  getConfig: (projectPath: string) => Promise<{ success: boolean; config?: any; error?: string }>;
  updateConfig: (projectPath: string, config: any) => Promise<{ success: boolean; error?: string }>;
  executeStep: (projectPath: string, featureId: string, stepId: string) => Promise<{ success: boolean; result?: any; error?: string }>;
  skipStep: (projectPath: string, featureId: string, stepId: string) => Promise<{ success: boolean; error?: string }>;
  resetPipeline: (projectPath: string, featureId: string, stepId?: string) => Promise<{ success: boolean; error?: string }>;
  validateConfig: (config: any) => Promise<{ success: boolean; error?: string }>;
};
apps/ui/src/config/api-providers.ts-95-120 (1)

95-120: Wire the CodeRabbit test button implementation to the verification endpoint.

The backend verification endpoint (/verify-coderabbit-auth) is fully implemented and registered, but the test button in the UI configuration is not wired to use it. Currently it has an empty onClick handler and is unconditionally disabled.

To complete this feature (a rough sketch of the test handler follows this list):

  1. Add a verifyCodeRabbitAuth function to apps/ui/src/lib/http-api-client.ts that calls the /verify-coderabbit-auth endpoint (similar to how verifyClaudeAuth would be implemented for Anthropic)
  2. Update the CodeRabbit config in apps/ui/src/config/api-providers.ts to use coderabbit.onTest instead of the empty function (line 108)
  3. Change the disabled state from true to !coderabbit.value || coderabbit.testing to match the Anthropic pattern (line 109)
  4. Set loading: coderabbit.testing to properly indicate the test state (line 110)
  5. Use result: coderabbit.result to display verification results (line 113)
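
A rough sketch of what the test handler could look like once the client method exists; the verifyCodeRabbitAuth signature and response shape here are assumptions, not the actual HttpApiClient API.

// Sketch only: the verification function and its response shape are assumed.
type CodeRabbitAuthResult = { success: boolean; authenticated?: boolean; error?: string };

async function testCodeRabbitKey(
  verifyCodeRabbitAuth: (apiKey: string) => Promise<CodeRabbitAuthResult>,
  apiKey: string
): Promise<{ success: boolean; message: string }> {
  if (!apiKey) {
    return { success: false, message: 'Enter a CodeRabbit API key first.' };
  }
  const res = await verifyCodeRabbitAuth(apiKey);
  return res.authenticated
    ? { success: true, message: 'CodeRabbit API key verified.' }
    : { success: false, message: res.error ?? 'Invalid CodeRabbit API key.' };
}
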
apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts-34-60 (1)

34-60: Use the documented /api/v1/report.generate endpoint instead of /api/v1/reviews for API key validation.

The implementation currently sends a POST request to an undocumented /api/v1/reviews endpoint. CodeRabbit's public API reference does not list this endpoint; the documented endpoint for API validation is /api/v1/report.generate. Replace the request with:

const response = await fetch('https://api.coderabbit.ai/api/v1/report.generate', {
  method: 'POST',
  headers: {
    'x-coderabbitai-api-key': apiKey,
    'Content-Type': 'application/json',
  },
  signal: abortController.signal,
  body: JSON.stringify({
    from: '2024-01-01',
    to: '2024-01-02',
  }),
})

This returns 200 OK on valid keys and 401 UNAUTHORIZED on invalid/missing keys, making it a reliable validation approach without creating unintended side effects.

apps/ui/src/components/views/settings-view.tsx-90-91 (1)

90-91: Remove debug console.log before merging.

This debug statement should be removed for production code.

🔎 Proposed fix
-  // Debug currentProject
-  console.log('[SettingsView] currentProject:', currentProject);
-
apps/server/src/__tests__/pipeline/pipeline-config.test.ts-52-56 (1)

52-56: Ensure .automaker directory exists before writing.

The test writes to .automaker/pipeline.json but doesn't ensure the .automaker directory exists first. This could cause the test to fail.

🔎 Proposed fix
+      await secureFs.ensureDir(path.join(tempDir, '.automaker'));
       await secureFs.writeFile(
         path.join(tempDir, '.automaker', 'pipeline.json'),
         JSON.stringify(validConfig),
         'utf-8'
       );
apps/server/src/__tests__/pipeline/pipeline-config.test.ts-67-75 (1)

67-75: Fix directory creation and verify error handling.

Two issues:

  1. The test writes to .automaker/pipeline.json without ensuring the directory exists (same issue as the first test).
  2. According to the implementation in the relevant code snippets, loadPipelineConfig() catches JSON parsing errors and returns DEFAULT_PIPELINE_CONFIG rather than throwing. Verify that the test expectation matches the actual behavior.
🔎 Proposed fixes
+      await secureFs.ensureDir(path.join(tempDir, '.automaker'));
       await secureFs.writeFile(
         path.join(tempDir, '.automaker', 'pipeline.json'),
         '{ invalid json }',
         'utf-8'
       );

-      await expect(configService.loadPipelineConfig()).rejects.toThrow();
+      const config = await configService.loadPipelineConfig();
+      expect(config).toEqual(DEFAULT_PIPELINE_CONFIG);
apps/server/src/routes/pipeline/skip-step.ts-44-49 (1)

44-49: Inject singleton AutoModeService into pipeline routes instead of creating per-request instances.

The pipeline routes are creating new AutoModeService instances per request with incomplete event emitters, then bypassing type safety with as any casts. A properly configured singleton already exists at server startup (see index.ts line 117) and is passed to other route factories. The execute-step.ts route even includes a comment acknowledging this is incorrect: "// Note: In a real implementation, you'd get this from a service registry."

Required changes (a minimal sketch follows this list):

  • Modify createPipelineRoutes() to accept autoModeService as a parameter
  • Pass the singleton instance into pipeline route handlers (skip-step, execute-step, reset-pipeline)
  • Remove inline event emitter objects and as any casts
  • Reuse the existing event subscription system instead of console.log
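
A minimal sketch of that factory-injection shape, assuming an Express Router and that AutoModeService exposes a skip method with this signature (both names are assumptions about the surrounding code):

import { Router } from 'express';
import type { AutoModeService } from '../../services/auto-mode-service';

// Sketch only: accept the singleton configured at server startup instead of
// constructing a new AutoModeService (and stub emitter) per request.
export function createPipelineRoutes(autoModeService: AutoModeService): Router {
  const router = Router();

  router.post('/skip-step', async (req, res) => {
    const { projectPath, featureId, stepId } = req.body;
    try {
      await autoModeService.skipPipelineStep(projectPath, featureId, stepId);
      res.json({ success: true });
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Failed to skip pipeline step';
      const status = message.includes('Cannot skip required step') ? 400 : 500;
      res.status(status).json({ success: false, error: message });
    }
  });

  return router;
}
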
apps/server/src/routes/pipeline/validate-config.ts-153-181 (1)

153-181: Circular dependency detection has logic issues.

The algorithm has problems that may cause incorrect cycle detection:

  1. Line 159 checks if stepId (the current step being validated) is in visited, but at this point we're checking the step's dependencies, not the step itself being a dependency of itself.
  2. Line 171 creates new Set(visited) which breaks shared state needed for proper DFS cycle detection.
  3. The function is called with step.id as stepId but the cycle check doesn't properly trace back to the original step.
🔎 Proposed fix
-            // Check for circular dependencies
-            const checkCircular = (
-              stepId: string,
-              deps: string[],
-              visited: Set<string>,
-              path: string[]
-            ): boolean => {
-              if (visited.has(stepId)) {
-                errors.push(
-                  `step ${index}: circular dependency detected: ${path.join(' -> ')} -> ${stepId}`
-                );
-                return true;
-              }
-              visited.add(stepId);
-
-              for (const dep of deps) {
-                const depStep = config.steps.find((s: any) => s.id === dep);
-                if (depStep && depStep.dependencies) {
-                  if (
-                    checkCircular(dep, depStep.dependencies, new Set(visited), [...path, stepId])
-                  ) {
-                    return true;
-                  }
-                }
-              }
-              return false;
-            };
-
-            checkCircular(step.id, step.dependencies, new Set(), []);
+            // Check for circular dependencies using proper DFS
+            const checkCircular = (
+              startId: string,
+              currentId: string,
+              visiting: Set<string>,
+              path: string[]
+            ): boolean => {
+              if (visiting.has(currentId)) {
+                errors.push(
+                  `step ${index}: circular dependency detected: ${[...path, currentId].join(' -> ')}`
+                );
+                return true;
+              }
+
+              const currentStep = config.steps.find((s: any) => s.id === currentId);
+              if (!currentStep || !currentStep.dependencies) {
+                return false;
+              }
+
+              visiting.add(currentId);
+              for (const dep of currentStep.dependencies) {
+                if (checkCircular(startId, dep, visiting, [...path, currentId])) {
+                  return true;
+                }
+              }
+              visiting.delete(currentId);
+              return false;
+            };
+
+            for (const dep of step.dependencies) {
+              checkCircular(step.id, dep, new Set(), [step.id]);
+            }
apps/server/src/routes/pipeline/execute-step.ts-55-62 (1)

55-62: Creating new service instances per request loses shared state.

Creating a new AutoModeService per request with a mock emitter means:

  1. Pipeline metrics and queue state are lost between requests
  2. The emitter is a stub that only logs to console

Consider using dependency injection or a service registry as noted in the comment.

🔎 Suggested approach
// In a real implementation, inject the service:
router.post('/', async (req, res) => {
  const autoModeService = req.app.get('autoModeService') as AutoModeService;
  // ... rest of handler
});
apps/ui/src/components/views/settings-view/step-configs/custom-step-config.tsx-46-51 (1)

46-51: Variable format inconsistency between UI and custom step implementation.

The UI advertises variables like {{feature.title}} and {{feature.description}}, but the custom step implementation uses camelCase formats: {{featureTitle}} and {{featureDescription}}. Additionally, the custom step does not support {{feature.id}} or {{feature.category}} substitution shown in the UI. Update the UI variable helper to reflect the actual formats used in custom-step.ts (lines 191-194), or align the substitution logic in both implementations to use consistent variable naming.

apps/ui/src/components/views/settings-view/pipeline-builder.tsx-130-202 (1)

130-202: Move SortableStep outside the component to prevent re-creation on each render.

Defining SortableStep inside PipelineBuilder causes it to be recreated on every render, which can break drag-and-drop state and cause performance issues with @dnd-kit.

🔎 Proposed refactor

Move SortableStep outside as a separate component:

interface SortableStepProps {
  step: PipelineStepConfig;
  isLast: boolean;
  stepTypeIcons: Record<StepType, React.ReactNode>;
  stepTypeColors: Record<StepType, string>;
}

function SortableStep({ step, isLast, stepTypeIcons, stepTypeColors }: SortableStepProps) {
  const { attributes, listeners, setNodeRef, transform, transition, isDragging } = useSortable({
    id: step.id,
  });
  // ... rest of implementation
}

Then pass stepTypeIcons and stepTypeColors as props from PipelineBuilder.

apps/server/src/services/feature-loader.ts-375-394 (1)

375-394: Cache invalidation is not wired up—pipeline config updates via the API will return stale cached data.

The clearPipelineConfigCache() method exists but is never called. The POST endpoint at routes/pipeline/config.ts (line 87) saves pipeline configuration without invalidating the FeatureLoader's cache, causing subsequent loadPipelineConfig() calls to return stale cached data. Call clearPipelineConfigCache(projectPath) after savePipelineConfig() in the API route to maintain cache consistency.
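
A minimal sketch of where that invalidation call could live, with the dependencies passed in explicitly (the handler and method names are assumptions about the surrounding route code):

import type { Request, Response } from 'express';

// Sketch only: service method names are assumptions; the point is the cache clear after save.
interface ConfigRouteDeps {
  savePipelineConfig: (projectPath: string, config: unknown) => Promise<void>;
  clearPipelineConfigCache: (projectPath: string) => void;
}

export function makeSaveConfigHandler(deps: ConfigRouteDeps) {
  return async (req: Request, res: Response) => {
    const { projectPath, config } = req.body;
    await deps.savePipelineConfig(projectPath, config);
    // Invalidate the FeatureLoader cache so the next loadPipelineConfig() re-reads from disk.
    deps.clearPipelineConfigCache(projectPath);
    res.json({ success: true });
  };
}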

apps/server/src/pipeline-steps/custom-step.ts-182-187 (1)

182-187: Potential ReDoS vulnerability with user-controlled regex.

The variable key comes from user-provided config.variables and is interpolated directly into a RegExp. Malicious patterns could cause Regular Expression Denial of Service (ReDoS) attacks.

🔎 Proposed fix - escape special regex characters
+  // Escape special regex characters in the key
+  private escapeRegExp(str: string): string {
+    return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
+  }

   private buildPrompt(...) {
     // ...
     if (config.variables) {
       for (const [key, value] of Object.entries(config.variables)) {
-        prompt = prompt.replace(new RegExp(`\\{\\{${key}\\}\\}`, 'g'), value);
+        const escapedKey = this.escapeRegExp(key);
+        prompt = prompt.replace(new RegExp(`\\{\\{${escapedKey}\\}\\}`, 'g'), value);
       }
     }

Based on static analysis hint flagging CWE-1333.

apps/server/src/integrations/coderabbit.ts-262-277 (1)

262-277: Placeholder implementations need to be completed before production use.

Both getGitRepoUrl() and getProjectPath() return placeholder values. getGitRepoUrl returns a hardcoded URL, and getProjectPath returns process.cwd() which may not reflect the actual project path.

🔎 Suggested implementation
-  private getGitRepoUrl(): string {
+  private async getGitRepoUrl(): Promise<string> {
    try {
-     // This would get the remote origin URL
-     return 'https://github.com/example/repo'; // Placeholder
+     // Get remote origin URL from git
+     const { spawnProcess } = await import('@automaker/platform');
+     const result = await spawnProcess({
+       command: 'git',
+       args: ['config', '--get', 'remote.origin.url'],
+       cwd: process.cwd(),
+     });
+     return result.stdout.trim() || '';
    } catch {
      return '';
    }
  }

  private getProjectPath(feature: Feature): string {
-   // This would extract the project path from feature context
-   return process.cwd(); // Placeholder
+   // Extract project path from feature context or metadata
+   return (feature as Record<string, unknown>).projectPath as string || process.cwd();
  }

Committable suggestion skipped: line range outside the PR's diff.

apps/server/src/services/auto-mode-service.ts-2936-2938 (1)

2936-2938: AbortController created but never used for cancellation.

A new AbortController is created inline but there's no way to abort this operation from outside. If a feature needs to be stopped during pipeline processing, this won't respond to abort signals.

🔎 Suggested fix

Store the abort controller or use an existing one from the running feature context:

  for (const item of itemsToProcess) {
    try {
      const feature = await this.loadFeature(item.projectPath, item.featureId);
      if (!feature) {
        console.error(`[Pipeline] Feature ${item.featureId} not found`);
        continue;
      }

+     // Check if feature is still running and get its abort controller
+     const runningFeature = this.runningFeatures.get(item.featureId);
+     const signal = runningFeature?.abortController.signal ?? new AbortController().signal;
+
      const startTime = Date.now();
-     await this.executePipelineSteps(item.projectPath, feature, new AbortController().signal);
+     await this.executePipelineSteps(item.projectPath, feature, signal);
apps/server/src/integrations/coderabbit.ts-119-149 (1)

119-149: Add timeout to fetch calls to prevent hanging requests.

The fetch call to CodeRabbit API has no timeout configured. If the CodeRabbit service is slow or unresponsive, this could block pipeline execution indefinitely.

🔎 Suggested fix using AbortController
  private async callCodeRabbitAPI(
    diff: string,
    feature: Feature,
    stepConfig: PipelineStepConfig
  ): Promise<CodeRabbitReview> {
    const customConfig = stepConfig.config as CustomConfig;
+   const timeout = stepConfig.timeout || 60; // Default 60 seconds
+   const controller = new AbortController();
+   const timeoutId = setTimeout(() => controller.abort(), timeout * 1000);
+
+   try {
      const response = await fetch(`${this.baseUrl}/v1/reviews`, {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${this.apiKey}`,
          'Content-Type': 'application/json',
          'X-Source': 'automaker-pipeline',
          'User-Agent': 'AutoMaker/1.0',
        },
+       signal: controller.signal,
        body: JSON.stringify({
          // ...existing body
        }),
      });
+   } finally {
+     clearTimeout(timeoutId);
+   }

    // ... rest of method
  }
🟡 Minor comments (26)
apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts-65-111 (1)

65-111: Inconsistent response structure for error cases.

The error response structure is inconsistent:

  • Lines 67-77: Success responses return { success: true, authenticated: true/false, ... }
  • Lines 82-103: Network/API errors return { success: true, authenticated: false, error: ... }
  • Lines 108-111: Unexpected errors return { success: false, error: ... } (no authenticated field)

This inconsistency makes client-side error handling more complex. The client must check both success and authenticated fields to determine the outcome.

🔎 Proposed fix for consistent error responses
     } catch (error) {
       logError(error, 'Verify CodeRabbit auth failed');
       res.status(500).json({
-        success: false,
+        success: true,
+        authenticated: false,
         error: getErrorMessage(error),
       });
     }

This ensures all responses include the authenticated field, allowing clients to always check response.authenticated regardless of the error type.

apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts-34-41 (1)

34-41: Remove the redundant x-coderabbitai-api-key header; use only the Authorization: Bearer header.

CodeRabbit's API documentation specifies the standard HTTP Authorization: Bearer <ACCESS_TOKEN> header for authentication. The x-coderabbitai-api-key header on line 40 is non-standard and should be removed.

apps/ui/src/components/views/settings-view/pipeline-builder.test.tsx-67-68 (1)

67-68: Improve test query selector for better maintainability.

Using getAllByRole('generic')[1] is fragile and the index doesn't match the comment ("first step" vs index [1]). Consider using a more semantic role or adding a data-testid attribute to step containers.

🔎 Suggested improvement
-    const steps = screen.getAllByRole('generic')[1]; // Get the first step container
-    expect(steps).toHaveAttribute('draggable');
+    const stepContainers = screen.getAllByTestId('pipeline-step');
+    expect(stepContainers[0]).toHaveAttribute('draggable');

Note: This requires adding data-testid="pipeline-step" to the step containers in the PipelineBuilder component.

Committable suggestion skipped: line range outside the PR's diff.

apps/ui/src/components/views/settings-view/pipeline-builder.test.tsx-125-127 (1)

125-127: Make statistics assertions more specific.

The current assertions using screen.getByText('2') and screen.getByText('1') are ambiguous and could match unintended elements. Consider using more specific queries that include context or data-testid attributes for the statistics section.

🔎 Suggested improvement
-    expect(screen.getByText('2')).toBeInTheDocument(); // Total Steps
-    expect(screen.getByText('1')).toBeInTheDocument(); // Required
-    expect(screen.getByText('1')).toBeInTheDocument(); // Auto Trigger
+    const statsSection = screen.getByText('Pipeline Statistics').closest('div');
+    expect(within(statsSection).getByText('Total Steps').nextElementSibling).toHaveTextContent('2');
+    expect(within(statsSection).getByText('Required').nextElementSibling).toHaveTextContent('1');
+    expect(within(statsSection).getByText('Auto Trigger').nextElementSibling).toHaveTextContent('1');

Or add data-testid attributes to the stat values for more reliable queries.

docs/pipeline/examples/coderabbit-security.json-21-23 (1)

21-23: Inconsistent config field: excludeTests vs includeTests.

This step uses excludeTests: true while other example configs use includeTests: false. These are semantically opposite booleans. For consistency and to avoid confusion, consider using the same field name across all configs.

🔎 Proposed fix
-        "excludeTests": true
+        "includeTests": false
docs/pipeline/examples/comprehensive-pipeline.json-52-55 (1)

52-55: Config field name mismatch with codebase schema.

The test step config uses coverage: 80 but the codebase expects coverageThreshold. This field is not recognized and will be ignored during execution. The same issue exists in test-focused.json (line 14). Update both example files to use coverageThreshold instead of coverage.

apps/ui/src/components/views/settings-view/pipeline-section.test.tsx-25-38 (1)

25-38: Toggle test assertion may be incorrect.

DEFAULT_PIPELINE_CONFIG has enabled: false by default (per the relevant code snippets). Clicking the toggle should enable the pipeline, so the assertion should expect enabled: true, not enabled: false.

🔎 Proposed fix
-    expect(mockOnPipelineChange).toHaveBeenCalledWith(expect.objectContaining({ enabled: false }));
+    expect(mockOnPipelineChange).toHaveBeenCalledWith(expect.objectContaining({ enabled: true }));
apps/server/src/__tests__/pipeline/pipeline-api-e2e.test.ts-156-179 (1)

156-179: Mock is not effectively injected.

The mockExecuteModelPrompt on lines 158-160 is defined locally but never injected into the server's dependency chain. The comment acknowledges this: "In real tests, you would need to inject this mock properly." This test may produce unpredictable results depending on actual AI service behavior.

Consider either:

  1. Properly injecting the mock using dependency injection
  2. Adding a TODO comment and skipping this specific assertion
  3. Using a test-specific configuration that stubs external services
apps/server/src/__tests__/pipeline/pipeline-step-executor.test.ts-21-48 (1)

21-48: Reset mocks in beforeEach to ensure test isolation.

The mockAutoModeService.executeModelPrompt mock is defined at module level and not reset between tests. This causes call count assertions (lines 138, 166) to potentially fail or be flaky because they accumulate calls from previous tests.

Additionally, mockFeature is missing the required category field from the Feature type.

Proposed fix
 beforeEach(() => {
+  // Reset mocks to ensure test isolation
+  mockAutoModeService.executeModelPrompt.mockClear();
+  mockAutoModeService.emit.mockClear();
+  
+  // Reset default mock implementation
+  mockAutoModeService.executeModelPrompt.mockResolvedValue({ 
+    content: '[REVIEW_PASSED] No issues found' 
+  });
+
   executor = new PipelineStepExecutor(mockAutoModeService, '/test/project');

   mockFeature = {
     id: 'test-feature-1',
     title: 'Test Feature',
+    category: 'test',
     description: 'A test feature',
     status: 'in_progress',
-    createdAt: new Date().toISOString(),
-    updatedAt: new Date().toISOString(),
   };
apps/server/src/services/pipeline-prompt-builder.ts-314-319 (1)

314-319: Misleading comment - method only capitalizes first character.

The comment says "Capitalize first letter of each word" but the implementation only capitalizes the first character of the string.

🔎 Proposed fix - correct the comment
   /**
-   * Capitalize first letter of each word
+   * Capitalize first letter of string
    */
   private capitalizeFirst(str: string): string {
     return str.charAt(0).toUpperCase() + str.slice(1);
   }
apps/ui/src/components/views/settings-view/api-keys/hooks/use-api-key-management.ts-162-170 (1)

162-170: Empty test handler provides no user feedback.

The CodeRabbit provider has an empty onTest handler and result: null. Users clicking "Test Connection" will see no feedback.

🔎 Proposed fix - show placeholder message
     coderabbit: {
       value: coderabbitKey,
       setValue: setCoderabbitKey,
       show: showCoderabbitKey,
       setShow: setShowCoderabbitKey,
       testing: false,
-      onTest: async () => {},
-      result: null,
+      onTest: async () => {
+        // TODO: Implement CodeRabbit API key verification
+        console.log('CodeRabbit connection test not yet implemented');
+      },
+      result: { success: true, message: 'API key saved. Connection test not yet available.' },
     },
apps/server/src/__tests__/pipeline/pipeline-steps.test.ts-362-376 (1)

362-376: Test assertion appears incorrect - expects variables NOT to be substituted.

The test at lines 373-375 expects {{featureId}}, {{title}}, and {{description}} to remain in the output, but based on the substituteVariables test (lines 414-428) and the prompt builder implementation, these should be replaced with actual feature values.

🔎 Proposed fix
       const prompt = customStep['buildCustomPrompt'](feature, mockStepConfig as any);
 
       expect(prompt).toContain('Check if the code follows our custom standards');
-      expect(prompt).toContain('{{featureId}}'); // Should be substituted
-      expect(prompt).toContain('{{title}}');
-      expect(prompt).toContain('{{description}}');
+      expect(prompt).toContain('feature-123');
+      expect(prompt).toContain('My Feature');
+      expect(prompt).toContain('Does something cool');

Alternatively, if buildCustomPrompt intentionally doesn't substitute variables (leaving that to a separate step), the comment "// Should be substituted" is misleading and should be removed or clarified.

apps/server/src/services/pipeline-step-executor.ts-40-41 (1)

40-41: Two of the four identified items are not dead code.

The review contains inaccuracies:

  • parseStepResult (line 185) is used in the test suite at apps/server/src/__tests__/pipeline/pipeline-step-executor.test.ts (lines 173, 181, 189, 200), accessed via bracket notation for private method testing.
  • extractIssues (line 215) is called by parseStepResult at line 199 and is not dead code.

However, two items are confirmed as unused:

  • promptBuilder (line 40) — initialized in the constructor but never referenced anywhere in the codebase.
  • executeSingleStep (lines 130-156) — defined but never called; the executeStep method uses a switch statement to invoke individual step implementations directly instead.
apps/ui/src/components/views/settings-view/pipeline-section.tsx-278-324 (1)

278-324: validateConfig is defined but never called.

The validation function includes useful checks (empty steps, duplicate IDs, circular dependencies) but is never invoked. Consider calling it before saving or displaying validation errors in the UI.

🔎 Suggested integration
const handleStepUpdate = (updatedStep: PipelineStepConfig) => {
  const newConfig = {
    ...config,
    steps: config.steps.map((s) => (s.id === updatedStep.id ? updatedStep : s)),
  };
  
  // Validate before saving
  const validationErrors = validateConfig();
  if (validationErrors.length > 0) {
    setErrors(validationErrors);
    return;
  }
  
  setConfig(newConfig);
  onPipelineChange(newConfig);
  // ...
};

Note: validateConfig needs to be updated to accept a config parameter rather than using closure state for this to work correctly.

apps/server/src/pipeline-steps/performance-step.ts-36-39 (1)

36-39: Unused variable model.

The model variable is assigned but never used. Either remove it or use it in the result metadata.

🔎 Fix
-      // Get the model to use
-      const model = this.autoModeService.getStepModel(feature, stepConfig);
-
       // Execute the performance analysis using the AI model

Or if model info is needed in metadata:

       const model = this.autoModeService.getStepModel(feature, stepConfig);
       // ... later in return ...
       metadata: {
+        model,
         performanceIssues: parsedResult.issues,
apps/ui/src/components/views/settings-view/pipeline-builder.tsx-50-53 (1)

50-53: Remove debug console.log statements before merging.

Multiple debug logging statements are present throughout the component (lines 50-53, 100, 104, 109, 135, 183-184). These should be removed for production code.

🔎 Locations to clean up
-  useEffect(() => {
-    console.log('PipelineBuilder - localConfig.steps:', localConfig.steps);
-    console.log('PipelineBuilder - Number of steps:', localConfig.steps.length);
-  }, [localConfig.steps]);

Also remove console.log at lines 100, 104, 109, 135, 183, and 184.

apps/ui/src/components/views/settings-view/pipeline-section.tsx-132-132 (1)

132-132: isBuilderOpen state is never set to true.

The PipelineBuilder modal is conditionally rendered based on isBuilderOpen (line 448), but there's no UI element that sets this state to true. The visual pipeline builder feature appears incomplete.

🔎 Add a button to open the builder
               <Button variant="outline" size="sm" onClick={handleReset}>
                 <RotateCcw className="w-4 h-4 mr-2" />
                 Reset
               </Button>
+              <Button variant="outline" size="sm" onClick={() => setIsBuilderOpen(true)}>
+                <Settings2 className="w-4 h-4 mr-2" />
+                Visual Builder
+              </Button>
               <Button onClick={handleAddStep}>

Committable suggestion skipped: line range outside the PR's diff.

apps/server/src/pipeline-steps/review-step.ts-32-46 (1)

32-46: Unused model variable.

The result of getStepModel at line 34 is stored but never used. Either use it when calling executeAIStep or remove the call.

🔎 Proposed fix
   try {
-    // Get the model to use
-    const model = this.autoModeService.getStepModel(feature, stepConfig);

     // Execute the review using the AI model
     const result = await this.autoModeService.executeAIStep({
       feature,
       stepConfig,
       signal,
       prompt,
       projectPath,
       onProgress: (message) => {
         console.log(`[Review Step] ${message}`);
       },
     });
apps/server/src/services/pipeline-config-service.ts-98-110 (1)

98-110: Migration logic may incorrectly downgrade newer config versions.

The current check config.version !== '1.0' would trigger migration for any version that isn't exactly '1.0', including hypothetical future versions like '2.0'. This could inadvertently downgrade configs.

🔎 Proposed fix using numeric version comparison
 async migrateConfig(): Promise<void> {
   const config = await this.loadPipelineConfig();

-  // Check if migration is needed
-  if (config.version !== '1.0') {
-    // Add migration logic here for future versions
-    const migratedConfig = { ...config, version: '1.0' };
+  // Check if migration is needed (only migrate older versions)
+  const currentVersion = parseFloat(config.version) || 0;
+  const targetVersion = 1.0;
+  
+  if (currentVersion < targetVersion) {
+    // Add migration logic here for future versions
+    const migratedConfig = { ...config, version: targetVersion.toString() };
     await this.savePipelineConfig(migratedConfig);
   }
 }
apps/server/src/services/pipeline-storage.ts-73-84 (1)

73-84: compressedSize calculation measures base64 encoding, not actual compressed bytes.

At line 78, compressedSize is calculated as Buffer.byteLength(finalData, 'utf8') where finalData is the base64-encoded string. This measures the base64 string length (~33% larger than actual bytes), not the actual compressed buffer size.

🔎 Proposed fix
     if (size > this.options.compressionThreshold!) {
       try {
         const compressedBuffer = await gzipAsync(Buffer.from(serialized, 'utf8'));
         finalData = compressedBuffer.toString('base64');
         compressed = true;
+        // Store actual compressed size for accurate stats
+        const actualCompressedSize = compressedBuffer.length;
         console.log(
           `[Pipeline Storage] Compressed result for ${stepId}: ${size} -> ${compressedBuffer.length} bytes`
         );
       } catch (error) {
         console.warn(`[Pipeline Storage] Compression failed for ${stepId}:`, error);
       }
     }

     // Save with metadata
     const storageData = {
       version: '1.0',
       compressed,
       size,
-      compressedSize: compressed ? Buffer.byteLength(finalData, 'utf8') : null,
+      compressedSize: compressed ? actualCompressedSize : null,
       data: finalData,
       timestamp: new Date().toISOString(),
     };

Note: You'll need to capture actualCompressedSize in a variable accessible in the outer scope.

Committable suggestion skipped: line range outside the PR's diff.

apps/server/src/services/pipeline-storage.ts-20-32 (1)

20-32: Unused dataDir field in constructor.

The dataDir parameter is stored but never used. The getStepResultPath method constructs paths using projectPath instead. Either use dataDir as the base path or remove it.

🔎 Proposed fix - remove unused field or use it

Option 1: Remove if not needed:

 export class PipelineStorage {
   private options: PipelineStorageOptions;
-  private dataDir: string;

-  constructor(dataDir: string, options: PipelineStorageOptions = {}) {
-    this.dataDir = dataDir;
+  constructor(options: PipelineStorageOptions = {}) {
     this.options = {
       compressionThreshold: 1024, // 1KB
       maxResultSize: 1024 * 1024 * 10, // 10MB
       retentionDays: 30,
       ...options,
     };
   }

Option 2: Use dataDir as base path:

 private getStepResultPath(projectPath: string, featureId: string, stepId: string): string {
-  return path.join(projectPath, '.automaker', 'pipeline-results', featureId, `${stepId}.json`);
+  return path.join(this.dataDir, projectPath, 'pipeline-results', featureId, `${stepId}.json`);
 }
apps/server/src/integrations/coderabbit.ts-211-216 (1)

211-216: Severity mapping loses 'critical' granularity.

The CodeRabbitIssue interface supports 'critical' severity, but when mapping to PipelineStepResult.issues, the severity type only allows 'low' | 'medium' | 'high'. Critical issues will be downgraded to 'high'.

Consider documenting this limitation or updating the PipelineStepResult type to include 'critical' severity if it's important to preserve that distinction.
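
A minimal sketch of the two options, assuming the issues on PipelineStepResult currently use a 'low' | 'medium' | 'high' union (the exact shapes here are assumptions, not the library's actual definitions):

// Option A: widen the union so 'critical' survives the mapping.
type IssueSeverity = 'low' | 'medium' | 'high' | 'critical';

// Option B: keep the narrower union but make the downgrade explicit and easy to audit.
function toStepSeverity(s: 'low' | 'medium' | 'high' | 'critical'): 'low' | 'medium' | 'high' {
  // 'critical' is intentionally collapsed to 'high'; the original value could be preserved in metadata.
  return s === 'critical' ? 'high' : s;
}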

apps/server/src/services/auto-mode-service.ts-363-372 (1)

363-372: Potential memory leak - Maps grow unbounded without cleanup.

pipelineConfigs and pipelineExecutors Maps cache entries per project path but are never cleaned up. Over time with many projects, these could accumulate unbounded entries.

Consider adding a cleanup mechanism, such as:

  • LRU eviction when the cache exceeds a threshold
  • Cleanup when a project is closed/removed
  • TTL-based expiration
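
As a rough illustration of the first option, a size-capped cache with least-recently-used eviction could look like the sketch below; the class name and cap value are illustrative, not fields of the actual service:

// Minimal LRU-style cap: Map preserves insertion order, so the first key is the oldest.
class BoundedCache<V> {
  private map = new Map<string, V>();
  constructor(private maxEntries = 50) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Refresh recency by re-inserting the entry at the end.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
  }

  delete(key: string): void {
    this.map.delete(key); // call this when a project is closed/removed
  }
}
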
apps/server/src/services/auto-mode-service.ts-651-658 (1)

651-658: Redundant conditional - both branches produce the same result.

The if and else branches produce identical finalStatus values. This appears to be incomplete logic where pipeline-enabled features should have different status handling.

🔎 Suggested fix or clarification

Either remove the redundant conditional:

-     if (pipelineConfig?.enabled && pipelineConfig.steps.length > 0) {
-       // Pipeline is active, status is already set to the last step
-       // Move to waiting_approval or verified based on skipTests
-       finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
-     } else {
-       // No pipeline, use original logic
-       finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
-     }
+     // Determine final status based on skipTests setting
+     finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';

Or implement the intended differentiation for pipeline-enabled features:

      if (pipelineConfig?.enabled && pipelineConfig.steps.length > 0) {
-       finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
+       // Pipeline completed - feature stays at last step status until manual review
+       finalStatus = feature.status || 'verified';
      } else {
        finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
      }
apps/server/src/pipeline-steps/security-step.ts-185-206 (1)

185-206: JSON parsing is fragile and may extract incorrect data.

The regex /\{[\s\S]*\}/ will match from the first { to the last } in the output, which could include unrelated JSON or malformed content if the AI output contains multiple JSON-like structures.

🔎 Suggested improvement

Consider a more robust approach:

  private parseSecurityResult(output: string): {
    vulnerabilities: Array<Record<string, unknown>>;
    recommendations: Array<Record<string, unknown>>;
    securityScore: number;
  } {
    try {
-     // Try to extract JSON from the output
-     const jsonMatch = output.match(/\{[\s\S]*\}/);
-     if (!jsonMatch) {
-       throw new Error('No JSON found in output');
-     }
-
-     const parsed = JSON.parse(jsonMatch[0]);
+     // Try to find JSON code block first (more reliable)
+     const codeBlockMatch = output.match(/```(?:json)?\s*(\{[\s\S]*?\})\s*```/);
+     let jsonStr: string;
+     
+     if (codeBlockMatch) {
+       jsonStr = codeBlockMatch[1];
+     } else {
+       // Fallback to greedy match
+       const jsonMatch = output.match(/\{[\s\S]*\}/);
+       if (!jsonMatch) {
+         throw new Error('No JSON found in output');
+       }
+       jsonStr = jsonMatch[0];
+     }
+
+     const parsed = JSON.parse(jsonStr);
+     
+     // Validate expected structure
+     if (!Array.isArray(parsed.vulnerabilities)) {
+       parsed.vulnerabilities = [];
+     }
      return {
        vulnerabilities: parsed.vulnerabilities || [],
        recommendations: parsed.recommendations || [],
        securityScore: parsed.securityScore || 0,
      };
apps/server/src/integrations/coderabbit.ts-223-237 (1)

223-237: AI fallback returns 'passed' status, which may mask failures.

When CodeRabbit fails and fallbackToAI is enabled, the fallback returns status: 'passed' with a message saying the review was skipped. This could mask actual issues that would have been caught by a proper review.

Consider using a different status like 'skipped' (if supported) or adding metadata to clearly indicate the review was not actually performed:

  return {
-   status: 'passed',
+   status: 'passed', // Consider: should this be 'error' or add a 'skipped' flag?
    output: 'CodeRabbit unavailable - skipped automated review',
    metadata: {
      integration: 'coderabbit-fallback',
      error: 'CodeRabbit API unavailable',
+     reviewSkipped: true,
    },
  };

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +104 to +106
"@testing-library/jest-dom": "^6.1.0",
"@testing-library/react": "^16.0.0",
"@testing-library/user-event": "^14.5.0",

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

@testing-library/react version 16 React 19 compatibility

💡 Result:

Short answer: Not reliably — v16 of @testing-library/react can be installed with React 19, but there are known incompatibilities (especially around Suspense/async rendering and act) and active issue reports. Test suites that worked on React 18 may break on React 19 without changes. [1][2][3]

What to do

  • If you must upgrade to React 19, run your test suite and fix failing tests (you’ll likely need to use async act patterns / new async testing helpers). [1][4]
  • Watch the testing-library repo/issues for fixes or guidance (there are open issues describing Suspense regressions on React 19). [1]
  • If you want less risk, keep React 18 (or pin a known-good @testing-library/react + React combo) until upstream fixes/official guidance arrives. [1][3]

Sources

  • testing-library/react issue reporting Suspense failures when upgrading to React 19. [1]
  • testing-library/react README / release notes (v16 notes; peerDeps change). [2]
  • React 19 upgrade guide describing Suspense/async rendering changes (explains why tests may need updates). [3]
  • Example migration guidance (React Native testing library showing new async APIs pattern for React 19). [4]

References
[1] GitHub issue: "@testing-library/react behaves differently when dealing with Suspense in React 18 and React 19".
[2] testing-library/react repo — installation & release notes (v16).
[3] React 19 Upgrade Guide — react.dev.
[4] React Native Testing Library docs on React 19 & Suspense.

If you want, I can fetch exact error examples and suggested test changes for Suspense/act in your codebase — tell me your @testing-library/react exact version and a minimal failing test.


🌐 Web query:

latest stable version @testing-library/jest-dom @testing-library/user-event 2025

💡 Result:

As of December 23, 2025:

  • @testing-library/jest-dom — 6.8.0. [1]
  • @testing-library/user-event — 14.6.1. [2]

Sources:
[1] npm package page for @testing-library/jest-dom.
[2] @testing-library/user-event package info (jsDelivr / Snyk / Maven listings).


@testing-library/react v16 has known incompatibilities with React 19—tests may fail.

@testing-library/react v16 can be installed alongside React 19.2.3, but has documented compatibility issues, particularly with Suspense, async rendering, and act() patterns. Test suites that pass on React 18 may break without updates. Verify your test suite and apply async testing patterns if necessary.

Also update the other Testing Library packages to their latest stable versions:

  • @testing-library/jest-dom: upgrade from ^6.1.0 to ^6.8.0
  • @testing-library/user-event: upgrade from ^14.5.0 to ^14.6.1

Applies to lines 104–106, 114, 119, 126.

Comment on lines +28 to +30
onSkipPipelineStep?: (stepId: string) => void;
onRetryPipelineStep?: (stepId: string) => void;
onClearPipelineStep?: (stepId: string) => void;

⚠️ Potential issue | 🔴 Critical

Critical: New pipeline props are not passed to CardActions.

The three new callback props (onSkipPipelineStep, onRetryPipelineStep, onClearPipelineStep) are declared in the KanbanCardProps interface but are never passed to the CardActions component at lines 167-183. This means the pipeline step controls will not function.

🔎 Proposed fix

Add the missing props to the CardActions component:

 <CardActions
   feature={feature}
   isCurrentAutoTask={!!isCurrentAutoTask}
   hasContext={hasContext}
   shortcutKey={shortcutKey}
   onEdit={onEdit}
   onViewOutput={onViewOutput}
   onVerify={onVerify}
   onResume={onResume}
   onForceStop={onForceStop}
   onManualVerify={onManualVerify}
   onFollowUp={onFollowUp}
   onImplement={onImplement}
   onComplete={onComplete}
   onViewPlan={onViewPlan}
   onApprovePlan={onApprovePlan}
+  onSkipPipelineStep={onSkipPipelineStep}
+  onRetryPipelineStep={onRetryPipelineStep}
+  onClearPipelineStep={onClearPipelineStep}
 />

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
(props declared at lines 28-30, CardActions usage around lines 167-183): the
three new callback props onSkipPipelineStep, onRetryPipelineStep, and
onClearPipelineStep are declared on KanbanCardProps but not forwarded to the
CardActions component, preventing pipeline controls from working; update the
CardActions JSX invocation to pass these three props through (e.g., add
onSkipPipelineStep={onSkipPipelineStep}
onRetryPipelineStep={onRetryPipelineStep}
onClearPipelineStep={onClearPipelineStep}) so CardActions receives and can call
them.

Comment on lines +114 to +115
value={securityConfig.severity || 'medium'}
onValueChange={(value: any) => updateConfig({ severity: value })}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check server-side SecurityStepConfig for property names
echo "=== Server-side SecurityStepConfig ==="
rg -n -B5 -A10 "class SecurityStepConfig|interface SecurityStepConfig" apps/server/src/pipeline-steps/ 2>/dev/null || echo "File not found, searching broadly..."
rg -n "minSeverity|severity" apps/server/src/pipeline-steps/security-step.ts 2>/dev/null || rg -n "minSeverity|severity" apps/server/ 2>/dev/null | head -20

echo ""
echo "=== SecurityConfig type definition ==="
# Check the SecurityConfig type in the types file
rg -n -B2 -A8 "interface SecurityConfig|type SecurityConfig" libs/types/src/pipeline.ts 2>/dev/null || rg -n -B2 -A8 "interface SecurityConfig|type SecurityConfig" libs/types/src/ 2>/dev/null | head -30

Repository: AutoMaker-Org/automaker

Length of output: 2319


Property name mismatch with server-side config causes configuration to fail.

The server-side SecurityStepConfig (apps/server/src/pipeline-steps/security-step.ts, lines 9-14) uses minSeverity, but the UI is using severity. This mismatch will prevent the severity configuration from being applied correctly to the server.

           <Select
-            value={securityConfig.severity || 'medium'}
-            onValueChange={(value: any) => updateConfig({ severity: value })}
+            value={securityConfig.minSeverity || 'medium'}
+            onValueChange={(value: 'low' | 'medium' | 'high' | 'critical') => updateConfig({ minSeverity: value })}
           >

Update SecurityConfig in libs/types/src/pipeline.ts to align with the server interface: rename severity to minSeverity, change checklist from optional to required, and add the missing checkDependencies property.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-            value={securityConfig.severity || 'medium'}
-            onValueChange={(value: any) => updateConfig({ severity: value })}
+            value={securityConfig.minSeverity || 'medium'}
+            onValueChange={(value: 'low' | 'medium' | 'high' | 'critical') => updateConfig({ minSeverity: value })}
🤖 Prompt for AI Agents
In
apps/ui/src/components/views/settings-view/step-configs/security-step-config.tsx
around lines 114-115, the UI is using securityConfig.severity which doesn't
match the server's SecurityStepConfig; update the shared type in
libs/types/src/pipeline.ts: rename property severity to minSeverity, change
checklist from optional to required, and add a new boolean checkDependencies
property; then update usages in the UI and any other clients to read/write
minSeverity (and set a default value where previously 'medium' was used) and
ensure checklist is treated as required and checkDependencies is handled.

],
"memoryEnabled": true,
"loopUntilSuccess": true,
"maxLoops: 3,

⚠️ Potential issue | 🔴 Critical

Fix JSON syntax error.

Line 364 has a syntax error: "maxLoops: 3, is missing the closing quote. This should be "maxLoops": 3,

🔎 Proposed fix
-    "maxLoops: 3,
+    "maxLoops": 3,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"maxLoops: 3,
"maxLoops": 3,
🤖 Prompt for AI Agents
In docs/pipeline/step-types.md around line 364, the JSON snippet has a syntax
error: the property key is written as "maxLoops: 3, missing the closing quote
and colon placement; change it to a valid JSON key/value pair by using
"maxLoops": 3, so the key is enclosed in quotes and followed by a colon before
the value.

@Shironex added the Enhancement, Testers-Requested, and Do Not Merge labels on Dec 23, 2025
- Fixed all Critical, High, Medium, Major, and Low priority issues
- Added fetch timeouts to CodeRabbit integration
- Fixed service instance injection in pipeline routes
- Added cache clearing after pipeline config updates
- Fixed ReDoS vulnerability with regex escaping
- Wired AbortController for pipeline cancellation
- Updated example configs with correct field names
- Added comprehensive docstrings to pipeline feature
- Resolved 35 total issues from the punchlist
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 17

♻️ Duplicate comments (2)
apps/server/src/routes/pipeline/validate-config.ts (1)

153-183: Circular dependency detection algorithm is flawed.

The algorithm re-initializes the visited set with new Set() at line 182 for each dependency of each step. This loses traversal state and can miss cycles in complex graphs. The visiting set tracks the current recursion path correctly, but the issue is that independent traversals from different starting points don't share state.

Additionally, the function returns early on cycle detection but continues iterating through remaining dependencies at line 181-183, which may report duplicate cycles.

🔎 Proposed fix using proper DFS with shared state
       // Check for circular dependencies
-      const checkCircular = (
-        startId: string,
-        currentId: string,
-        visiting: Set<string>,
-        path: string[]
-      ): boolean => {
-        if (visiting.has(currentId)) {
-          errors.push(
-            `step ${index}: circular dependency detected: ${[...path, currentId].join(' -> ')}`
-          );
-          return true;
-        }
-
-        const currentStep = config.steps.find((s: any) => s.id === currentId);
-        if (!currentStep || !currentStep.dependencies) {
-          return false;
-        }
-
-        visiting.add(currentId);
-        for (const dep of currentStep.dependencies) {
-          if (checkCircular(startId, dep, visiting, [...path, currentId])) {
-            return true;
-          }
-        }
-        visiting.delete(currentId);
-        return false;
-      };
-
-      for (const dep of step.dependencies) {
-        checkCircular(step.id, dep, new Set(), [step.id]);
-      }
+      // Validate dependencies exist and check for cycles using DFS
+      const visited = new Set<string>();
+      const visiting = new Set<string>();
+      
+      const hasCycle = (stepId: string, path: string[]): boolean => {
+        if (visiting.has(stepId)) {
+          errors.push(
+            `step ${index}: circular dependency detected: ${[...path, stepId].join(' -> ')}`
+          );
+          return true;
+        }
+        if (visited.has(stepId)) {
+          return false;
+        }
+
+        const s = config.steps.find((st: any) => st.id === stepId);
+        if (!s?.dependencies) {
+          visited.add(stepId);
+          return false;
+        }
+
+        visiting.add(stepId);
+        for (const dep of s.dependencies) {
+          if (hasCycle(dep, [...path, stepId])) {
+            return true;
+          }
+        }
+        visiting.delete(stepId);
+        visited.add(stepId);
+        return false;
+      };
+
+      hasCycle(step.id, []);
apps/server/src/services/auto-mode-service.ts (1)

1717-1724: Malformed AbortController breaks abort functionality.

The object { signal } as AbortController only has the signal property. Calling abort() on this object won't actually abort the request since the abort method is missing. This issue was flagged in a previous review.

🔎 Proposed fix: Create proper AbortController
+      // Create a proper AbortController that wraps the parent signal
+      const controller = new AbortController();
+      if (signal) {
+        signal.addEventListener('abort', () => controller.abort(), { once: true });
+      }
+
       const options: ExecuteOptions = {
         prompt,
         model,
         maxTurns: 1, // Single turn for pipeline steps
         cwd: projectPath || '',
         allowedTools: ['Read', 'Glob', 'Grep'],
-        abortController: signal ? ({ signal } as AbortController) : undefined,
+        abortController: controller,
       };
🧹 Nitpick comments (17)
docs/pipeline/examples/test-focused.json (2)

14-15: Consider format consistency for excludePatterns.

The exclude patterns mix two different formats: a glob pattern ("*.spec.ts") and a directory path ("test/integration/"). Ensure the implementation handles both formats correctly, or standardize to one format for consistency. Also verify whether the trailing slash in "test/integration/" is required or optional.

🔎 Suggested format standardization

If the implementation supports glob patterns, consider standardizing both entries:

-        "excludePatterns": ["*.spec.ts", "test/integration/"]
+        "excludePatterns": ["*.spec.ts", "test/integration/**"]

Or if directory paths are preferred:

-        "excludePatterns": ["*.spec.ts", "test/integration/"]
+        "excludePatterns": ["**/*.spec.ts", "test/integration/"]

27-27: Consider if maxIssues: 3 is appropriate for an example configuration.

Setting maxIssues to 3 for a test review step might be overly restrictive, especially for larger codebases or during initial pipeline adoption. For a general example configuration, consider using a higher value or documenting that this strict limit is intentional for test-focused workflows.

apps/ui/src/components/ui/alert.tsx (1)

6-20: Consider documenting the expected icon structure.

The CSS selectors assume a specific DOM structure where an SVG icon is placed as a direct child of the Alert component. While this is a common pattern, a brief comment explaining the expected structure (e.g., <Alert><IconSvg />...content</Alert>) would improve maintainability.

apps/ui/src/components/views/settings-view/step-configs/performance-step-config.tsx (2)

67-90: Type assertion as any suggests type incompatibility.

Multiple as any casts (lines 70, 76, 83, 137, 189) indicate the metric string literals don't align with the expected type. This should be addressed by either:

  1. Updating the type definition in libs/types to include all valid metrics
  2. Using a union type that covers both UI and server metric options
🔎 Suggested approach

Define a shared metric type that both UI and server can use:

// In libs/types/src/index.ts or a shared location
export type PerformanceMetric = 
  | 'complexity' 
  | 'memory' 
  | 'cpu' 
  | 'network' 
  | 'bundle-size'
  | 'database'
  | 'bundle'
  | 'rendering';

Then update PerformanceConfig.metrics to use this type, eliminating the need for as any casts.


92-103: Edge case: empty string input is silently ignored.

When parseFloat(value) returns NaN (e.g., for empty string or non-numeric input), the threshold update is skipped. This is reasonable for preventing invalid data, but the user gets no feedback. Consider whether a visual indicator or fallback to 0 would improve UX.
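
A small sketch of the fallback-to-0 option; the handler and updateThreshold names are assumed for illustration:

const handleThresholdChange = (value: string) => {
  const parsed = parseFloat(value);
  // Coerce empty or non-numeric input to 0 instead of silently dropping the update.
  updateThreshold(Number.isNaN(parsed) ? 0 : parsed);
};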

apps/ui/src/components/views/settings-view/step-configs/review-step-config.tsx (1)

9-17: Remove unused import.

Textarea is imported but not used in this component.

🔎 Proposed fix
 import React from 'react';
 import { Label } from '@/components/ui/label';
 import { Input } from '@/components/ui/input';
-import { Textarea } from '@/components/ui/textarea';
 import { Switch } from '@/components/ui/switch';
 import { Button } from '@/components/ui/button';
 import { Badge } from '@/components/ui/badge';
 import { X, Plus } from 'lucide-react';
 import { PipelineStepConfig, ReviewConfig } from '@automaker/types';
apps/ui/src/lib/electron.ts (1)

484-506: Consider defining a proper type for pipeline metrics.

The getMetrics method returns metrics?: any, which loses type safety. Consider defining a PipelineMetrics interface to properly type the return value.

🔎 Suggested improvement
// Define a proper type for metrics
interface PipelineMetrics {
  totalExecutions: number;
  successRate: number;
  averageDuration: number;
  stepMetrics: Record<string, {
    executions: number;
    failures: number;
    avgDuration: number;
  }>;
  // ... other relevant metrics
}

// Then use it in the return type
getMetrics: (projectPath: string) => Promise<{
  success: boolean;
  metrics?: PipelineMetrics;
  error?: string;
}>;
apps/ui/src/components/views/settings-view.tsx (1)

90-96: Unused variables: isPipelineLoading and pipelineError.

The isLoading and error values are destructured from usePipelineConfig but never used. Consider either:

  1. Using them to show loading/error states in the pipeline section
  2. Removing them if PipelineSection handles these states internally
🔎 Option A: Use for conditional rendering
       case 'pipeline':
+        if (isPipelineLoading) {
+          return <div>Loading pipeline configuration...</div>;
+        }
+        if (pipelineError) {
+          return <div className="text-destructive">{pipelineError}</div>;
+        }
         return (
           <PipelineSection pipelineConfig={pipelineConfig} onPipelineChange={setPipelineConfig} />
         );
🔎 Option B: Remove if handled internally
   // Use pipeline configuration hook
   const {
     config: pipelineConfig,
     setConfig: setPipelineConfig,
-    isLoading: isPipelineLoading,
-    error: pipelineError,
   } = usePipelineConfig(currentProject?.path || null);
apps/ui/src/config/api-providers.ts (1)

64-68: Unused google parameter in function signature.

The function destructures anthropic and coderabbit from params but not google, even though google is part of ProviderConfigParams. While the Google provider is intentionally commented out, this creates an asymmetry. Consider either:

  1. Commenting out google from ProviderConfigParams
  2. Adding // @ts-expect-error or similar if intentionally unused
apps/server/src/routes/pipeline/validate-config.ts (1)

6-6: Unused import PIPELINE_CONFIG_SCHEMA.

The PIPELINE_CONFIG_SCHEMA is imported but never used in the validation logic. Either remove the import or leverage the schema for validation.

-import { PIPELINE_CONFIG_SCHEMA } from '@automaker/types';
apps/ui/src/components/views/settings-view/pipeline-builder.tsx (2)

143-146: Remove debug console.log statements before merging.

Multiple debug logging statements (lines 144-146, 193, 197, 202) should be removed or converted to a proper debug logger for production code.

Also applies to: 193-193, 197-197, 202-202


67-71: getStepStatus is a placeholder that always returns 'pending'.

This function is defined inside SortableStep but the status variable it produces is never used in the JSX. Consider removing it or implementing actual status tracking.

apps/server/src/services/pipeline-step-executor.ts (1)

146-172: Private methods executeSingleStep, parseStepResult, and extractIssues appear unused.

These methods are defined but never called in the public API flow. executeStep delegates directly to step implementations (ReviewStep, SecurityStep, etc.) which handle their own result parsing. Consider removing dead code or documenting intended usage.

Also applies to: 216-241, 252-288

apps/server/src/integrations/coderabbit.ts (1)

90-90: Unused variable branchName.

The branchName variable is assigned but never used. The git diff command uses hardcoded main...HEAD regardless of the feature's branch.

-    const branchName = feature.branchName || 'main';
apps/server/src/pipeline-steps/custom-step.ts (1)

227-253: checkSuccessCriteria relies on simple keyword matching.

The success criteria check uses hardcoded phrase matching which may not reliably detect success/failure for complex criteria. The comment acknowledges this limitation. Consider documenting expected phrases in the user-facing documentation or enhancing with more robust pattern matching.
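
One way to make the check more robust is to ask the model for an explicit verdict marker and fall back to keyword heuristics only when the marker is absent; this is a sketch of that idea, not the step's current behavior:

// Prefer an explicit machine-readable verdict, fall back to keyword heuristics.
function checkSuccess(output: string): boolean | undefined {
  // e.g. prompt the model to end its answer with: VERDICT: PASS or VERDICT: FAIL
  const verdict = output.match(/VERDICT:\s*(PASS|FAIL)/i);
  if (verdict) return verdict[1].toUpperCase() === 'PASS';

  // Fallback heuristics; undefined means "could not determine".
  if (/\b(all criteria met|no issues found)\b/i.test(output)) return true;
  if (/\b(criteria not met|issues found)\b/i.test(output)) return false;
  return undefined;
}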

apps/ui/src/components/views/board-view/hooks/use-board-column-features.ts (2)

94-123: Consider refactoring the status/column determination logic for clarity.

The logic correctly handles multiple cases (standard status, pipeline steps, unknown status) but is complex with nested conditions and multiple fallback paths. The interaction between f.status and f.currentPipelineStep adds cognitive overhead.

Consider extracting this logic into a helper function:

🔎 Suggested refactoring approach
function determineFeatureColumn(
  feature: Feature,
  columnMap: Record<string, Feature[]>,
  pipelineEnabled: boolean
): string {
  const status = feature.status as string;
  
  // Check standard status first
  if (columnMap[status] !== undefined) {
    return status;
  }
  
  // Check pipeline step
  if (pipelineEnabled) {
    const currentStep = feature.currentPipelineStep as string | undefined;
    if (currentStep && columnMap[currentStep] !== undefined) {
      return currentStep;
    }
  }
  
  // Fallback
  return status === 'backlog' ? 'backlog' : 
         pipelineEnabled ? 'in_progress' : 'backlog';
}

This would make the main loop clearer:

filteredFeatures.forEach((f) => {
  // ... isRunning and matchesWorktree logic ...
  
  if (isRunning && matchesWorktree) {
    map.in_progress.push(f);
  } else if (matchesWorktree) {
    const column = determineFeatureColumn(f, map, pipelineConfig?.enabled ?? false);
    map[column].push(f);
  }
});

131-165: Consider aligning Feature types to reduce casting.

The dependency-aware sorting logic requires extensive type casting (lines 133, 148, 151-152, 157, 164) to reconcile differences between the app-store Feature type and the dependency-resolver's expected Feature type.

While the casts work at runtime, they:

  • Add maintenance burden
  • Reduce type safety
  • Could hide real incompatibilities if types diverge

Consider one of these approaches:

  1. Align the Feature types: Make @automaker/dependency-resolver accept the same Feature type as the app-store, or vice versa
  2. Create an adapter: Define a lightweight adapter function that explicitly maps between types
  3. Use a shared Feature interface: Define a minimal FeatureWithDependencies interface in @automaker/types that both packages reference

This would eliminate the need for casts and make type relationships explicit.
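
A lightweight adapter along the lines of option 2 might look like the following; the field names are assumptions about the two Feature shapes rather than the actual definitions:

// Explicitly map the app-store Feature into the minimal shape the resolver needs,
// so any future divergence becomes a compile error here rather than a hidden cast.
interface ResolverFeature {
  id: string;
  dependencies?: string[];
}

function toResolverFeature(f: { id: string; dependencies?: string[] }): ResolverFeature {
  return { id: f.id, dependencies: f.dependencies ?? [] };
}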

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e177705 and f6d549d.

📒 Files selected for processing (29)
  • apps/server/src/__tests__/pipeline/pipeline-config.test.ts
  • apps/server/src/index.ts
  • apps/server/src/integrations/coderabbit.ts
  • apps/server/src/pipeline-steps/custom-step.ts
  • apps/server/src/routes/pipeline/config.ts
  • apps/server/src/routes/pipeline/create-routes.ts
  • apps/server/src/routes/pipeline/execute-step.ts
  • apps/server/src/routes/pipeline/skip-step.ts
  • apps/server/src/routes/pipeline/validate-config.ts
  • apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts
  • apps/server/src/services/auto-mode-service.ts
  • apps/server/src/services/pipeline-config-service.ts
  • apps/server/src/services/pipeline-step-executor.ts
  • apps/ui/src/components/ui/alert.tsx
  • apps/ui/src/components/ui/separator.tsx
  • apps/ui/src/components/views/board-view/hooks/use-board-column-features.ts
  • apps/ui/src/components/views/board-view/hooks/use-pipeline-config.ts
  • apps/ui/src/components/views/settings-view.tsx
  • apps/ui/src/components/views/settings-view/pipeline-builder.tsx
  • apps/ui/src/components/views/settings-view/step-configs/custom-step-config.tsx
  • apps/ui/src/components/views/settings-view/step-configs/performance-step-config.tsx
  • apps/ui/src/components/views/settings-view/step-configs/review-step-config.tsx
  • apps/ui/src/components/views/settings-view/step-configs/security-step-config.tsx
  • apps/ui/src/components/views/settings-view/step-configs/test-step-config.tsx
  • apps/ui/src/config/api-providers.ts
  • apps/ui/src/lib/electron.ts
  • apps/ui/src/store/app-store.ts
  • docs/pipeline/examples/security-focused.json
  • docs/pipeline/examples/test-focused.json
🚧 Files skipped from review as they are similar to previous changes (9)
  • apps/ui/src/components/ui/separator.tsx
  • apps/server/src/routes/pipeline/skip-step.ts
  • apps/server/src/routes/pipeline/config.ts
  • apps/ui/src/store/app-store.ts
  • apps/ui/src/components/views/settings-view/step-configs/security-step-config.tsx
  • apps/server/src/routes/pipeline/execute-step.ts
  • apps/server/src/__tests__/pipeline/pipeline-config.test.ts
  • docs/pipeline/examples/security-focused.json
  • apps/server/src/index.ts
🧰 Additional context used
🧬 Code graph analysis (10)
apps/ui/src/components/views/settings-view.tsx (2)
apps/ui/src/components/views/settings-view/hooks/use-pipeline-config.ts (1)
  • usePipelineConfig (11-76)
apps/ui/src/components/views/settings-view/pipeline-section.tsx (1)
  • PipelineSection (126-457)
apps/ui/src/components/views/settings-view/step-configs/performance-step-config.tsx (2)
libs/types/src/index.ts (2)
  • PipelineStepConfig (89-89)
  • PerformanceConfig (98-98)
apps/server/src/pipeline-steps/performance-step.ts (1)
  • PerformanceStepConfig (9-18)
apps/ui/src/components/views/settings-view/pipeline-builder.tsx (3)
libs/types/src/index.ts (3)
  • PipelineConfig (88-88)
  • PipelineStepConfig (89-89)
  • StepType (93-93)
libs/types/src/pipeline.ts (3)
  • PipelineConfig (13-20)
  • PipelineStepConfig (25-40)
  • StepType (45-50)
apps/ui/src/lib/utils.ts (1)
  • cn (5-7)
apps/ui/src/components/views/settings-view/step-configs/custom-step-config.tsx (2)
libs/types/src/index.ts (2)
  • PipelineStepConfig (89-89)
  • CustomConfig (100-100)
apps/server/src/pipeline-steps/custom-step.ts (1)
  • CustomStepConfig (11-33)
apps/server/src/services/pipeline-step-executor.ts (10)
libs/types/src/index.ts (5)
  • Feature (19-19)
  • PipelineStepConfig (89-89)
  • StepStatus (95-95)
  • PipelineStepResult (91-91)
  • AgentModel (36-36)
libs/types/src/pipeline.ts (4)
  • PipelineStepConfig (25-40)
  • StepStatus (132-132)
  • PipelineStepResult (137-149)
  • AgentModel (8-8)
apps/server/src/services/pipeline-memory.ts (1)
  • PipelineMemory (51-309)
apps/server/src/pipeline-steps/index.ts (5)
  • ReviewStep (5-5)
  • SecurityStep (6-6)
  • PerformanceStep (7-7)
  • TestStep (8-8)
  • CustomStep (9-9)
apps/server/src/pipeline-steps/review-step.ts (1)
  • ReviewStep (16-242)
apps/server/src/pipeline-steps/security-step.ts (1)
  • SecurityStep (16-258)
apps/server/src/pipeline-steps/performance-step.ts (1)
  • PerformanceStep (20-293)
apps/server/src/pipeline-steps/test-step.ts (1)
  • TestStep (17-263)
apps/server/src/pipeline-steps/custom-step.ts (1)
  • CustomStep (35-278)
apps/server/src/integrations/coderabbit.ts (1)
  • CodeRabbitIntegration (46-328)
apps/server/src/integrations/coderabbit.ts (3)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/index.ts (4)
  • Feature (19-19)
  • PipelineStepConfig (89-89)
  • PipelineStepResult (91-91)
  • CustomConfig (100-100)
libs/types/src/pipeline.ts (3)
  • PipelineStepConfig (25-40)
  • PipelineStepResult (137-149)
  • CustomConfig (104-114)
apps/ui/src/components/views/board-view/hooks/use-board-column-features.ts (3)
libs/types/src/index.ts (2)
  • PipelineConfig (88-88)
  • Feature (19-19)
libs/types/src/pipeline.ts (1)
  • PipelineConfig (13-20)
libs/types/src/feature.ts (1)
  • Feature (25-62)
apps/server/src/routes/pipeline/create-routes.ts (1)
apps/server/src/routes/pipeline/index.ts (5)
  • configRouter (12-12)
  • executeStepRouter (13-13)
  • skipStepRouter (14-14)
  • resetPipelineRouter (15-15)
  • validateConfigRouter (16-16)
apps/ui/src/components/ui/alert.tsx (1)
apps/ui/src/lib/utils.ts (1)
  • cn (5-7)
apps/server/src/services/pipeline-config-service.ts (2)
libs/types/src/pipeline.ts (3)
  • PipelineConfig (13-20)
  • DEFAULT_PIPELINE_CONFIG (164-169)
  • PipelineStepConfig (25-40)
apps/server/src/lib/secure-fs.ts (1)
  • secureFs (8-23)
🪛 ast-grep (0.40.3)
apps/server/src/pipeline-steps/custom-step.ts

[warning] 185-185: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(`\\{\\{${escapedKey}\\}\\}`, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html

(regexp-from-variable)

🔇 Additional comments (30)
docs/pipeline/examples/test-focused.json (3)

10-10: No action needed — "same" is a valid and documented model value.

The model value "same" is explicitly defined in the pipeline schema (libs/types/src/pipeline.ts) as a supported option alongside specific model names. When used, it inherits the model from the feature's configuration, with a fallback to the default model if not specified. This behavior is correctly implemented in the service layer and widely used across example configurations.


1-4: No action needed for onFailure field.

The onFailure value "stop" is valid per the pipeline schema. Remove the suggestion to add metadata fields (description, author, created) as these are not defined in the pipeline configuration schema.

Likely an incorrect or invalid review comment.


5-30: The dependencies field for pipeline steps is already documented and optional.

The field exists in the type definition (libs/types/src/pipeline.ts) and is documented in docs/pipeline/readme.md under "Step Dependencies." The test-focused.json example correctly uses sequential execution by array order, which is the default behavior. Explicit dependencies are only needed for more complex pipelines where steps don't follow sequential order.

Likely an incorrect or invalid review comment.

apps/ui/src/components/ui/alert.tsx (4)

1-4: LGTM!

Imports are clean and all dependencies are properly utilized throughout the component.


30-39: LGTM!

AlertTitle is properly implemented with appropriate heading semantics and styling. The use of <h5> is a reasonable default for alert titles.


41-46: LGTM!

AlertDescription is well-implemented with appropriate typography styling. The paragraph spacing rule enhances readability.


48-48: LGTM!

Named exports allow flexible component imports across the application.

apps/ui/src/components/views/settings-view/step-configs/test-step-config.tsx (1)

1-171: LGTM! Well-structured configuration component.

The component correctly implements immutable state updates and provides a clean interface for configuring test analysis steps. The default values are sensible and the UI is well-organized with helpful descriptions.

One minor note: using array index as key for exclude patterns (line 114) is acceptable here since patterns are simple strings and the list operations (add/remove/update) properly maintain array integrity.

apps/ui/src/components/views/settings-view/step-configs/review-step-config.tsx (1)

24-150: LGTM! Consistent implementation with sibling components.

The component follows the same well-structured pattern as the other step config components, with proper immutable updates and intuitive UI organization.

apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts (1)

28-61: Good implementation of timeout handling.

The 30-second timeout with AbortController is well-implemented, and the timeout is properly cleared in both success and error paths. The minimal test payload approach is sensible for validation.
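
For reference, the general shape of that pattern is sketched below; it is a generic illustration, not the route's actual code:

// Abort a fetch after a fixed timeout and always clear the timer.
async function fetchWithTimeout(url: string, init: RequestInit = {}, timeoutMs = 30_000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timeoutId);
  }
}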

apps/ui/src/lib/electron.ts (3)

467-471: LGTM! CodeRabbit auth verification API addition.

The new verifyCodeRabbitAuth method is properly typed and follows the existing pattern of setup API methods.


1063-1071: Consistent type definition in SetupAPI interface.

The verifyCodeRabbitAuth method signature correctly mirrors the ElectronAPI.setup definition, maintaining API consistency.


1163-1171: Mock implementation follows established patterns.

The mock correctly logs the action and returns a consistent response structure matching the interface contract.

apps/server/src/routes/pipeline/create-routes.ts (1)

17-32: Setter-based DI has timing sensitivity.

The current pattern relies on setAutoModeService being called before any route handlers execute. This is generally safe given the sequential setup, but consider documenting this requirement or using a more robust DI pattern if the codebase grows.
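
One more robust direction is constructor-style injection, where the router factory receives the service up front instead of relying on a later setter call. A rough sketch under assumed names (Express-style routing is assumed, and AutoModeServiceLike stands in for the real service type):

import { Router } from 'express';

// Minimal stand-in for the AutoModeService surface the routes need.
interface AutoModeServiceLike {
  executePipelineStep(featureId: string, stepId: string): Promise<unknown>;
}

// The dependency is bound at construction, so there is no window where a
// handler can run before the service has been injected.
function createPipelineRouterWithDeps(autoModeService: AutoModeServiceLike): Router {
  const router = Router();
  router.post('/execute-step', async (req, res) => {
    const { featureId, stepId } = req.body;
    const result = await autoModeService.executePipelineStep(featureId, stepId);
    res.json({ success: true, result });
  });
  return router;
}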

apps/ui/src/components/views/settings-view.tsx (1)

117-120: Clean integration of pipeline settings section.

The new pipeline case follows the established pattern of other settings sections, properly passing the config and change handler to the component.

apps/server/src/routes/pipeline/validate-config.ts (1)

14-201: LGTM on overall validation structure.

The route provides comprehensive validation for pipeline configurations with clear error messages. The step-by-step validation of required fields and type-specific configs is well-structured.

apps/ui/src/components/views/settings-view/step-configs/custom-step-config.tsx (1)

34-355: Well-structured component with comprehensive configuration options.

The component provides a clean UI for custom step configuration including variable insertion, loop settings, memory system, and CodeRabbit integration. The use of updateConfig helper for nested state updates is a good pattern.

apps/server/src/services/pipeline-step-executor.ts (1)

72-131: Step execution with proper status reporting.

The executeStep method correctly delegates to type-specific step implementations, handles status transitions, and provides progress callbacks. Error handling appropriately catches exceptions and returns structured failure results.

apps/server/src/services/pipeline-config-service.ts (2)

182-222: Topological sort with cycle detection is correctly implemented.

The sortStepsByDependencies method properly uses visiting (gray nodes in DFS) and visited (black nodes) sets to detect cycles and produce correct ordering. This is the correct algorithm for dependency resolution.


78-93: Atomic write pattern for configuration persistence.

Using temporary file + rename is a good practice for preventing corruption on write failures. This ensures the config file is either fully written or not modified.
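
The pattern being described, sketched generically (not the service's actual implementation):

import { promises as fs } from 'fs';

// Write to a temp file in the same directory, then rename over the target.
// rename() is atomic on the same filesystem, so readers never see a half-written file.
async function writeFileAtomic(targetPath: string, contents: string): Promise<void> {
  const tmpPath = `${targetPath}.tmp-${process.pid}-${Date.now()}`;
  await fs.writeFile(tmpPath, contents, 'utf8');
  await fs.rename(tmpPath, targetPath);
}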

apps/server/src/integrations/coderabbit.ts (1)

46-54: Past review concerns addressed: projectPath is properly handled.

The constructor now accepts and stores projectPath, and getProjectPath() returns the stored value instead of using process.cwd(). The getGitRepoUrl() method dynamically retrieves the repo URL using git commands in the correct directory.

Also applies to: 262-284

apps/server/src/services/auto-mode-service.ts (2)

1758-1788: skipPipelineStep correctly validates required steps.

The method now checks if the step is required before allowing skip (lines 1763-1765), addressing the previous review concern. This prevents users from skipping critical pipeline steps.


1534-1667: Pipeline step execution with proper status tracking.

The executePipelineSteps method correctly iterates through configured steps, respects autoTrigger settings, skips completed steps, and handles required step failures by stopping the pipeline. Status updates and event emissions provide good observability.

apps/server/src/pipeline-steps/custom-step.ts (2)

184-187: ReDoS mitigation via escapeRegExp is in place.

The static analysis tool flagged potential ReDoS at line 185, but the code uses escapeRegExp to sanitize the variable key before constructing the regex. This properly escapes special characters and mitigates the ReDoS risk for typical use cases.

For defense in depth, consider adding a length limit on keys:

if (key.length > 100) continue; // Skip abnormally long keys
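
For reference, a typical escapeRegExp helper looks like the following; the PR's actual helper may differ:

// Escape all regex metacharacters so user-supplied keys are matched literally.
function escapeRegExp(input: string): string {
  return input.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}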

48-172: Custom step execution flow is well-structured.

The execute method properly handles looping, memory context, CodeRabbit integration with AI fallback, and success criteria evaluation. Error handling wraps failures in a consistent result format with iteration tracking.

apps/ui/src/components/views/board-view/hooks/use-pipeline-config.ts (2)

24-48: LGTM! Proper error handling and fallback behavior.

The loadConfig function correctly handles all cases:

  • Early return when no project is loaded
  • Appropriate loading state management
  • Graceful fallback to DEFAULT_PIPELINE_CONFIG when no config exists or on error
  • Proper error logging for debugging

Note: The API dependency issue at line 21 affects this callback's memoization, but the logic itself is sound.


103-112: LGTM! Well-designed hook API.

The return object provides a clean, intuitive interface:

  • State management (config, isLoading, error)
  • Manual refresh capability (reload)
  • Pipeline actions (skipStep, retryStep, clearStep)

The API surface is well-structured for consumers.

apps/ui/src/components/views/board-view/hooks/use-board-column-features.ts (3)

4-4: LGTM! Clean PipelineConfig integration.

The PipelineConfig type is properly imported and integrated:

  • Correct import from @automaker/types
  • Appropriately typed as optional and nullable (pipelineConfig?: PipelineConfig | null)
  • Cleanly threaded through the interface and function parameters

Also applies to: 13-13, 23-23


25-26: LGTM! Good practice for Zustand store usage.

Extracting the function reference from the store hook prevents the anti-pattern of calling useAppStore.getState() during render. The explanatory comment aids maintainability.


179-184: LGTM! Proper support for dynamic column IDs.

The update to accept string column IDs (instead of a predefined enum) correctly supports dynamic pipeline step columns. The fallback to an empty array prevents errors when requesting non-existent columns.

Comment on lines +34 to +65
const metricOptions = [
  {
    id: 'complexity',
    label: 'Algorithm Complexity',
    description: 'Cyclomatic complexity of functions',
    defaultThreshold: 15,
  },
  {
    id: 'memory',
    label: 'Memory Usage',
    description: 'Peak memory consumption during execution',
    defaultThreshold: 512,
  },
  {
    id: 'cpu',
    label: 'CPU Performance',
    description: 'Execution time of critical paths',
    defaultThreshold: 1000,
  },
  {
    id: 'network',
    label: 'Network Requests',
    description: 'Number and efficiency of API calls',
    defaultThreshold: 10,
  },
  {
    id: 'bundle-size',
    label: 'Bundle Size',
    description: 'Size of compiled JavaScript bundles',
    defaultThreshold: 500,
  },
];

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's find and examine the UI file mentioned
echo "=== UI metric options ==="
cat -n apps/ui/src/components/views/settings-view/step-configs/performance-step-config.tsx | head -80 | tail -50

Repository: AutoMaker-Org/automaker

Length of output: 1820


🏁 Script executed:

#!/bin/bash
# Find the server-side performance-step.ts file and examine its metrics
echo "=== Server-side performance metrics ==="
find . -name "performance-step.ts" -type f | head -5

Repository: AutoMaker-Org/automaker

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Search for PerformanceConfig type definition in libs/types
echo "=== Looking for PerformanceConfig type definition ==="
find . -path ./node_modules -prune -o -type f -name "*.ts" -print | xargs grep -l "PerformanceConfig" 2>/dev/null | head -10

Repository: AutoMaker-Org/automaker

Length of output: 229


🏁 Script executed:

#!/bin/bash
# Examine the server-side performance-step.ts file
echo "=== Server performance-step.ts ==="
cat -n ./apps/server/src/pipeline-steps/performance-step.ts

Repository: AutoMaker-Org/automaker

Length of output: 11018


🏁 Script executed:

#!/bin/bash
# Examine libs/types for PerformanceConfig definition
echo "=== libs/types pipeline.ts - PerformanceConfig definition ==="
rg -n "PerformanceConfig" ./libs/types/src/pipeline.ts -A 15 -B 2

Repository: AutoMaker-Org/automaker

Length of output: 1018


🏁 Script executed:

#!/bin/bash
# Also check libs/types index.ts if it re-exports or defines it
echo "=== libs/types index.ts PerformanceConfig exports ==="
rg -n "PerformanceConfig|metrics.*:" ./libs/types/src/index.ts

Repository: AutoMaker-Org/automaker

Length of output: 144


Fix metric ID mismatches between UI and server implementations.

The UI defines metrics as complexity, memory, cpu, network, and bundle-size, but the server's PerformanceStepConfig interface expects complexity, memory, database, network, bundle, and rendering. This mismatch causes:

  • Metrics cpu and bundle-size selected in the UI won't trigger the corresponding server analysis (server checks for database and bundle instead)
  • Metrics database and rendering cannot be selected from the UI despite being supported by the server
  • The as any cast on line 70 masks this type incompatibility

Align metric IDs across libs/types/src/pipeline.ts (PerformanceConfig interface), the server's PerformanceStepConfig, and the UI component to ensure user selections correctly trigger server-side analysis.

- Fix AutoModeService import path in create-routes.ts
- Fix success response consistency in verify-coderabbit-auth.ts
- Replace hardcoded PipelineStorage path with lazy initialization
- Add retry tracking to PipelineQueueItem with max retries and drop handling
- Fix CodeRabbitIntegration constructor calls to include required projectPath
- Map 'critical' severity to 'high' in CodeRabbit integration
- Convert Alert component from forwardRef to plain function component
- Add validation to reject reserved base column names in pipeline step IDs
- Stabilize API client reference with useMemo in use-pipeline-config.ts
- Fix stepTypeIcons scope and remove duplicate stepTypeColors definitions
- Align coderabbitSeverity Select options with server-side types
- Add missing testButton property to CodeRabbit provider config
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (3)
apps/server/src/services/auto-mode-service.ts (1)

1718-1773: Malformed AbortController breaks cancellation for AI steps.

Line 1743 creates a malformed AbortController object with only the signal property cast to AbortController. This fake object doesn't have a functional abort() method, which means cancellation won't work correctly for pipeline steps.

🔎 Proposed fix
   async executeAIStep(params: {
     feature: Feature;
     stepConfig: PipelineStepConfig;
     signal: AbortSignal;
     prompt: string;
     onProgress?: (message: string) => void;
     projectPath?: string;
   }): Promise<{ status: 'passed' | 'failed'; output: string; error?: string }> {
     const { feature, stepConfig, signal, prompt, onProgress, projectPath } = params;

     try {
       // Get the model to use
       const model = this.getStepModel(feature, stepConfig);

       // Get provider for this model
       const provider = ProviderFactory.getProviderForModel(model);

       onProgress?.(`Executing ${stepConfig.name} with model ${model}...`);

+      // Create a proper AbortController that wraps the parent signal
+      const controller = new AbortController();
+      signal.addEventListener('abort', () => controller.abort(), { once: true });
+
       const options: ExecuteOptions = {
         prompt,
         model,
         maxTurns: 1, // Single turn for pipeline steps
         cwd: projectPath || '',
         allowedTools: ['Read', 'Glob', 'Grep'],
-        abortController: signal ? ({ signal } as AbortController) : undefined,
+        abortController: controller,
       };
apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts (1)

88-104: Inconsistent success field semantics in error responses.

The success field has inconsistent meanings across error cases:

  • Line 93: AbortError (timeout) → success: true, authenticated: false
  • Line 100: Network errors → success: false, authenticated: false
  • Line 83: Non-401 API errors → success: false, authenticated: false

This inconsistency could confuse API consumers. Consider using success: false for all error cases to consistently indicate "the verification could not be completed", while authenticated indicates the auth status when verification succeeds.

🔎 Proposed fix for consistent semantics
       } catch (error) {
         clearTimeout(timeoutId);

         if (error instanceof Error && error.name === 'AbortError') {
           res.json({
-            success: true,
+            success: false,
             authenticated: false,
             error: 'Connection timeout. Please check your network connection.',
           });
         } else {
           logger.error('CodeRabbit API verification failed:', error);
           res.json({
             success: false,
             authenticated: false,
             error: 'Failed to connect to CodeRabbit API. Please check your network connection.',
           });
         }
       }
apps/server/src/routes/pipeline/validate-config.ts (1)

175-201: Consider refactoring circular dependency detection.

As noted in the previous review, while this algorithm may work for simple cases, the repeated initialization of the visiting set for each dependency check (line 204: new Set()) is inefficient and less robust than a standard graph cycle detection approach.

Recommended approach:

  1. Build the complete dependency graph once
  2. Perform a single DFS traversal maintaining visited nodes and recursion stack
  3. Detect back edges (cycles) during the traversal

This would be more efficient (O(V + E) vs O(V * E)), clearer, and more maintainable.

🔎 Alternative implementation approach

Consider implementing a standard DFS-based cycle detection:

// Build adjacency list once
const graph = new Map<string, string[]>();
for (const step of config.steps) {
  graph.set(step.id, step.dependencies || []);
}

// Single DFS traversal
const visited = new Set<string>();
const recStack = new Set<string>();

function hasCycleDFS(nodeId: string, path: string[]): boolean {
  if (recStack.has(nodeId)) {
    errors.push(`Circular dependency detected: ${[...path, nodeId].join(' -> ')}`);
    return true;
  }
  if (visited.has(nodeId)) return false;

  visited.add(nodeId);
  recStack.add(nodeId);

  const neighbors = graph.get(nodeId) || [];
  for (const neighbor of neighbors) {
    if (hasCycleDFS(neighbor, [...path, nodeId])) {
      return true;
    }
  }

  recStack.delete(nodeId);
  return false;
}

// Check all nodes
for (const step of config.steps) {
  if (!visited.has(step.id)) {
    hasCycleDFS(step.id, []);
  }
}
🧹 Nitpick comments (3)
apps/server/src/services/pipeline-step-executor.ts (1)

82-105: Consider stronger typing for step configurations.

The as any casts when passing stepConfig to step implementations (lines 84, 87, 92, 98, 101) suppress TypeScript's type checking. While this works, it could hide type mismatches between the generic PipelineStepConfig and the specific config types expected by each step.

Consider using type guards or discriminated unions to maintain type safety:

🔎 Possible approach using type guards
switch (stepConfig.type) {
  case 'review':
    if (!isReviewStepConfig(stepConfig)) {
      throw new Error('Invalid review step configuration');
    }
    result = await this.reviewStep.execute(feature, stepConfig, signal, projectPath);
    break;
  // ... similar for other types
}

Or use discriminated unions in the type system to make the config type dependent on the step type.
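
For illustration, a minimal sketch of the discriminated-union approach; the interfaces and helper names below are hypothetical, not the PR's actual definitions:

// Hypothetical discriminated union: the `type` field narrows the config shape,
// so the switch needs no `as any` casts.
interface ReviewStepConfig {
  type: 'review';
  config: { criteria: string };
}

interface CustomStepConfig {
  type: 'custom';
  config: { prompt: string; variables?: Record<string, string> };
}

type StepConfigUnion = ReviewStepConfig | CustomStepConfig; // ...plus the other step types

function describeStep(step: StepConfigUnion): string {
  switch (step.type) {
    case 'review':
      // `step` is narrowed to ReviewStepConfig here.
      return `review with criteria: ${step.config.criteria}`;
    case 'custom':
      // `step` is narrowed to CustomStepConfig here.
      return `custom prompt: ${step.config.prompt}`;
  }
}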

apps/ui/src/components/views/settings-view/pipeline-builder.tsx (2)

151-154: Remove debug logging.

This useEffect hook appears to be debug code for development. Consider removing it before production to avoid cluttering console logs.

🔎 Proposed fix
-  useEffect(() => {
-    console.log('PipelineBuilder - localConfig.steps:', localConfig.steps);
-    console.log('PipelineBuilder - Number of steps:', localConfig.steps.length);
-  }, [localConfig.steps]);
-

184-202: Remove debug logging statements.

Multiple console.log statements (lines 185, 189, 194) appear to be debug code. Consider removing them for production.

🔎 Proposed fix
   const handleDragEnd = (event: DragEndEvent) => {
-    console.log('Drag end event:', event);
     const { active, over } = event;

     if (active.id !== over?.id) {
-      console.log('Reordering from', active.id, 'to', over?.id);
       setLocalConfig((current) => {
         const oldIndex = current.steps.findIndex((step) => step.id === active.id);
         const newIndex = current.steps.findIndex((step) => step.id === over?.id);

-        console.log('Old index:', oldIndex, 'New index:', newIndex);

         return {
           ...current,
           steps: arrayMove(current.steps, oldIndex, newIndex),
         };
       });
     }
   };
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f6d549d and e505b7f.

📒 Files selected for processing (13)
  • apps/server/src/integrations/coderabbit.ts
  • apps/server/src/pipeline-steps/custom-step.ts
  • apps/server/src/routes/pipeline/create-routes.ts
  • apps/server/src/routes/pipeline/validate-config.ts
  • apps/server/src/routes/setup/routes/verify-coderabbit-auth.ts
  • apps/server/src/services/auto-mode-service.ts
  • apps/server/src/services/pipeline-step-executor.ts
  • apps/ui/src/components/ui/alert.tsx
  • apps/ui/src/components/views/board-view/hooks/use-pipeline-config.ts
  • apps/ui/src/components/views/settings-view/pipeline-builder.tsx
  • apps/ui/src/components/views/settings-view/step-configs/custom-step-config.tsx
  • apps/ui/src/config/api-providers.ts
  • libs/types/src/settings.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/server/src/routes/pipeline/create-routes.ts
  • apps/ui/src/components/views/board-view/hooks/use-pipeline-config.ts
  • apps/ui/src/components/ui/alert.tsx
🧰 Additional context used
🧬 Code graph analysis (5)
apps/ui/src/components/views/settings-view/pipeline-builder.tsx (3)
libs/types/src/pipeline.ts (3)
  • PipelineConfig (13-20)
  • PipelineStepConfig (25-40)
  • StepType (45-50)
libs/types/src/index.ts (3)
  • PipelineConfig (88-88)
  • PipelineStepConfig (89-89)
  • StepType (93-93)
apps/ui/src/lib/utils.ts (1)
  • cn (5-7)
apps/server/src/services/pipeline-step-executor.ts (2)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/pipeline.ts (4)
  • PipelineStepConfig (25-40)
  • StepStatus (132-132)
  • PipelineStepResult (137-149)
  • AgentModel (8-8)
apps/server/src/integrations/coderabbit.ts (3)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/index.ts (4)
  • Feature (19-19)
  • PipelineStepConfig (89-89)
  • PipelineStepResult (91-91)
  • CustomConfig (100-100)
libs/types/src/pipeline.ts (3)
  • PipelineStepConfig (25-40)
  • PipelineStepResult (137-149)
  • CustomConfig (104-114)
libs/types/src/settings.ts (2)
libs/types/src/pipeline.ts (1)
  • PipelineConfig (13-20)
libs/types/src/index.ts (2)
  • PipelineConfig (88-88)
  • validatePipelineConfig (74-74)
apps/ui/src/components/views/settings-view/step-configs/custom-step-config.tsx (2)
libs/types/src/index.ts (2)
  • PipelineStepConfig (89-89)
  • CustomConfig (100-100)
apps/server/src/pipeline-steps/custom-step.ts (1)
  • CustomStepConfig (11-33)
🪛 ast-grep (0.40.3)
apps/server/src/pipeline-steps/custom-step.ts

[warning] 185-185: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(\\{\\{${escapedKey}\\}\\}, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html

(regexp-from-variable)

🔇 Additional comments (10)
libs/types/src/settings.ts (4)

10-13: LGTM!

The import and re-export of PipelineConfig provides convenient access through the settings module, maintaining good API design.


318-319: LGTM!

The coderabbit API key field follows the established pattern for credential management and is properly initialized in the defaults.

Also applies to: 466-466


393-395: LGTM!

Adding pipeline as an optional field in ProjectSettings aligns with the pattern of project-specific overrides and supports per-project pipeline customization.


482-521: Well-structured validation with appropriate scope.

The validatePipelineConfig function provides essential client-side validation:

  • Validates required fields and structure
  • Prevents reserved column name conflicts (backlog, in_progress, etc.)
  • Type-guards the config for TypeScript

The lightweight approach is appropriate since comprehensive validation (including dependencies, circular checks, and type-specific config validation) is handled server-side in the /api/pipeline/validate-config route.
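
As a rough illustration of this lightweight client-side check, a type-guard-style validator; the field and reserved column names are assumptions, not the PR's validatePipelineConfig:

// Hypothetical lightweight validator: structural checks plus reserved-ID screening.
const RESERVED_COLUMN_IDS = ['backlog', 'in_progress', 'waiting_approval', 'verified'];

function looksLikePipelineConfig(value: unknown): boolean {
  if (typeof value !== 'object' || value === null) return false;
  const config = value as { enabled?: unknown; steps?: unknown };
  if (typeof config.enabled !== 'boolean' || !Array.isArray(config.steps)) return false;
  return config.steps.every(
    (step: { id?: unknown; name?: unknown }) =>
      typeof step.id === 'string' &&
      typeof step.name === 'string' &&
      !RESERVED_COLUMN_IDS.includes(step.id)
  );
}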

apps/server/src/routes/pipeline/validate-config.ts (2)

56-64: LGTM!

The reserved names validation prevents step IDs from conflicting with base kanban column names, ensuring the pipeline system integrates cleanly with the existing board structure.


67-167: Comprehensive per-step validation.

The validation logic thoroughly checks all required fields and type-specific configuration requirements, providing clear error messages with step indices for easy debugging.

apps/ui/src/components/views/settings-view/pipeline-builder.tsx (4)

132-146: LGTM! Past issues resolved.

Both stepTypeColors and stepTypeIcons are now correctly defined at module scope, resolving the scoping and duplication issues flagged in previous reviews. The SortableStep component can now properly access these constants.


48-130: Well-structured sortable component.

The SortableStep component cleanly implements drag-and-drop functionality using DnD Kit patterns, with proper touch handling (touchAction: 'none') and visual feedback during dragging.
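
For reference, a minimal sortable-item sketch using the same DnD Kit hooks; this is simplified and is not the PR's SortableStep component:

import { useSortable } from '@dnd-kit/sortable';
import { CSS } from '@dnd-kit/utilities';

// Minimal sortable row: drag listeners attached to the row, touchAction disabled
// so touch devices don't scroll the page while dragging.
function SortableRow({ id, label }: { id: string; label: string }) {
  const { attributes, listeners, setNodeRef, transform, transition, isDragging } =
    useSortable({ id });

  return (
    <div
      ref={setNodeRef}
      style={{
        transform: CSS.Transform.toString(transform),
        transition,
        touchAction: 'none',
        opacity: isDragging ? 0.5 : 1,
      }}
      {...attributes}
      {...listeners}
    >
      {label}
    </div>
  );
}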


209-358: Clean UI implementation with good UX.

The component provides an intuitive visual pipeline builder with:

  • Proper DnD context configuration (pointer + keyboard sensors)
  • Clear visual feedback (empty state, statistics)
  • Accessible interactions

The implementation follows React 19 best practices.




- Improve checkSuccessCriteria in custom-step.ts to be more strict:
  - Default to false instead of true for unknown outputs
  - Add JSON parsing for structured success/failure indicators
  - Add AI-based evaluation with YES/NO prompt fallback
  - Include comprehensive logging for evaluation paths

- Update validate-config.ts:
  - Include 'skip-optional' as valid onFailure option
  - Update error message to list all three valid values

- Fix CodeRabbit provider in api-providers.ts:
  - Use actual test result (coderabbit.result) instead of null
  - Enable proper display of test connection outcomes
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e505b7f and f137b9e.

📒 Files selected for processing (3)
  • apps/server/src/pipeline-steps/custom-step.ts
  • apps/server/src/routes/pipeline/validate-config.ts
  • apps/ui/src/config/api-providers.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/server/src/routes/pipeline/validate-config.ts
  • apps/ui/src/config/api-providers.ts
🧰 Additional context used
🧬 Code graph analysis (1)
apps/server/src/pipeline-steps/custom-step.ts (6)
apps/server/src/pipeline-steps/index.ts (2)
  • CustomStepConfig (15-15)
  • CustomStep (9-9)
apps/server/src/services/pipeline-memory.ts (2)
  • PipelineMemory (51-309)
  • IterationMemory (40-49)
apps/server/src/integrations/coderabbit.ts (1)
  • CodeRabbitIntegration (46-337)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/pipeline.ts (2)
  • PipelineStepConfig (25-40)
  • PipelineStepResult (137-149)
apps/server/src/providers/provider-factory.ts (1)
  • ProviderFactory (13-108)
🪛 ast-grep (0.40.3)
apps/server/src/pipeline-steps/custom-step.ts

[warning] 185-185: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(\\{\\{${escapedKey}\\}\\}, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html

(regexp-from-variable)

🔇 Additional comments (2)
apps/server/src/pipeline-steps/custom-step.ts (2)

40-46: Constructor fix confirmed - CodeRabbit now receives projectPath.

The CodeRabbitIntegration initialization now correctly passes both codeRabbitApiKey and projectPath, addressing the previous review concern. The guard condition ensures both values are present before instantiation.


227-335: Excellent improvements to success criteria evaluation.

The revised implementation comprehensively addresses the previous concern about defaulting to success. The method now employs a robust multi-stage evaluation strategy:

  1. Explicit keyword detection for success/failure indicators
  2. JSON parsing to extract structured success/failure signals
  3. AI-based evaluation as a fallback for ambiguous cases
  4. Safe default to false (line 334) when no explicit success is found

The detailed logging at each stage aids debugging and transparency. This prevents false positives and ensures steps only pass when success is clearly indicated.
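
As a rough sketch of that layered strategy (illustrative only; the keyword lists and the shape of the JSON check are assumptions, and the real checkSuccessCriteria also adds an AI-based YES/NO fallback before defaulting):

// Hypothetical layered evaluation: explicit keywords, then structured JSON, then a safe default.
function evaluateOutputSuccess(output: string): boolean {
  const text = output.toLowerCase();

  // 1. Explicit failure/success keyword detection.
  if (/\b(failed|failure|criteria not met)\b/.test(text)) return false;
  if (/\b(passed|success|criteria met)\b/.test(text)) return true;

  // 2. Structured JSON output, e.g. {"success": true}.
  try {
    const parsed = JSON.parse(output) as { success?: unknown };
    if (typeof parsed.success === 'boolean') return parsed.success;
  } catch {
    // Not JSON; fall through.
  }

  // 3. Safe default: ambiguous output is treated as not successful.
  return false;
}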

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

🧹 Nitpick comments (3)
apps/server/src/pipeline-steps/custom-step.ts (3)

149-162: Past issue addressed: success criteria not met correctly results in 'failed' status.

The logic now correctly sets status: 'failed' when success criteria are defined (hasSuccessCriteria) but not met (!successMet), addressing the previous review concern mentioned in the commit message.

Optional: Consider refactoring the nested ternary for improved readability

The current nested ternary (lines 155-160) is logically correct but could be simplified:

-        status:
-          hasSuccessCriteria && !successMet
-            ? 'failed'
-            : result.status === 'failed'
-              ? 'failed'
-              : 'passed',
+        status: (hasSuccessCriteria && !successMet) || result.status === 'failed' 
+          ? 'failed' 
+          : 'passed',

Or using explicit if-else for maximum clarity:

let status: 'passed' | 'failed' | 'error' = 'passed';
if ((hasSuccessCriteria && !successMet) || result.status === 'failed') {
  status = 'failed';
}
finalResult = {
  ...result,
  status,
  iterations: loopCount,
};

165-165: Consider simplifying the abort signal check.

The condition signal.aborted === false could be simplified to !signal.aborted for improved readability.

-    } while (signal.aborted === false);
+    } while (!signal.aborted);

235-343: Past issue addressed: now defaults to false with robust evaluation.

The checkSuccessCriteria method now implements a multi-layered evaluation strategy and correctly defaults to false (line 342) when no explicit success is detected, addressing the previous review concern about false positives.

The evaluation flow is well-designed:

  1. Check for explicit success/failure indicators (lines 241-258)
  2. Parse structured JSON output (lines 260-285)
  3. Fallback to AI-based binary evaluation (lines 287-338)
  4. Default to false if all checks fail (lines 340-342)
Optional: Consider using AutoModeService for AI evaluation

The AI fallback evaluation (lines 300-326) directly imports and uses ProviderFactory, which bypasses the standard autoModeService execution path. While this works, consider delegating to autoModeService.executeAIStep for consistency:

  • Maintains consistent AI execution patterns across the codebase
  • Leverages existing progress tracking and error handling
  • Reduces direct provider coupling

This is a minor architectural suggestion and the current implementation is functional.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f137b9e and 9555324.

📒 Files selected for processing (1)
  • apps/server/src/pipeline-steps/custom-step.ts
🧰 Additional context used
🧬 Code graph analysis (1)
apps/server/src/pipeline-steps/custom-step.ts (5)
apps/server/src/services/auto-mode-service.ts (1)
  • AutoModeService (356-3090)
apps/server/src/services/pipeline-memory.ts (2)
  • PipelineMemory (51-309)
  • IterationMemory (40-49)
apps/server/src/integrations/coderabbit.ts (1)
  • CodeRabbitIntegration (46-337)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/pipeline.ts (2)
  • PipelineStepConfig (25-40)
  • PipelineStepResult (137-149)
🪛 ast-grep (0.40.3)
apps/server/src/pipeline-steps/custom-step.ts

[warning] 193-193: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(\\{\\{${escapedKey}\\}\\}, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html

(regexp-from-variable)

🔇 Additional comments (9)
apps/server/src/pipeline-steps/custom-step.ts (9)

1-9: LGTM!

The imports and file structure are well-organized, with appropriate use of type imports and clear documentation.


11-33: LGTM!

The CustomStepConfig interface is comprehensive and well-structured, with clear separation of concerns across looping, memory, and CodeRabbit configuration.


40-46: LGTM! Past issue addressed.

The constructor now correctly guards CodeRabbit instantiation with both codeRabbitApiKey && projectPath and passes both parameters to the constructor, addressing the previous review concern.


76-100: LGTM!

The CodeRabbit integration path correctly implements fallback behavior: it uses the CodeRabbit result if successful or when fallback is disabled, and gracefully falls back to AI execution when errors occur and fallbackToAI is enabled.


102-147: LGTM!

The AI execution path correctly delegates to autoModeService.executeAIStep, evaluates success criteria, and manages memory persistence for both successful and failed attempts. The loop continuation logic appropriately checks loopUntilSuccess and maxLoops.


172-179: LGTM!

The error handling appropriately returns a standardized PipelineStepResult with status 'failed', error details in both output and metadata, and preserves the iteration count.


182-233: LGTM! Static analysis warning is a false positive.

The buildPrompt method correctly sanitizes variable keys using escapeRegExp (line 193) before constructing regular expressions (line 194), which prevents ReDoS attacks. The static analysis warning on line 194 appears to be a false positive, as it does not detect that the input is escaped.

The method comprehensively builds prompts by:

  • Substituting user-defined variables with proper escaping
  • Injecting feature context (title, description, status)
  • Including memory/iteration context when available
  • Appending success criteria and loop guidance
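
As a rough sketch of this assembly pattern (not the PR's buildPrompt; the input shape is an assumption, and plain replaceAll is used here instead of the escaped-regex substitution described above):

// Hypothetical prompt assembly mirroring the described steps.
interface PromptInput {
  template: string;
  variables?: Record<string, string>;
  feature: { title: string; description: string; status: string };
  memory?: string[]; // feedback from earlier iterations
  successCriteria?: string;
}

function assemblePrompt(input: PromptInput): string {
  let prompt = input.template;

  // Plain string substitution; no regex is built from user input.
  for (const [key, value] of Object.entries(input.variables ?? {})) {
    prompt = prompt.replaceAll(`{{${key}}}`, value);
  }

  prompt += `\n\nFeature: ${input.feature.title} (${input.feature.status})\n${input.feature.description}`;

  if (input.memory?.length) {
    prompt += `\n\nFeedback from previous iterations:\n- ${input.memory.join('\n- ')}`;
  }
  if (input.successCriteria) {
    prompt += `\n\nSuccess criteria:\n${input.successCriteria}`;
  }
  return prompt;
}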

345-360: LGTM!

The executeCodeRabbit method appropriately delegates to CodeRabbitIntegration.submitReview with proper error handling and a clear error message when the integration is not configured.


365-367: LGTM!

The escapeRegExp utility correctly escapes all special regular expression metacharacters, providing robust protection against ReDoS attacks in variable substitution.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/server/src/pipeline-steps/custom-step.ts (2)

187-192: Consider simplifying variable substitution to address static analysis warning.

While the current regex-based approach with escapeRegExp should be safe, the static analysis tool flags line 189 for potential ReDoS. Consider using a simpler string replacement approach that doesn't require regex construction from variables.

🔎 Safer alternative using replaceAll
     // Substitute variables
     if (config.variables) {
       for (const [key, value] of Object.entries(config.variables)) {
-        const escapedKey = this.escapeRegExp(key);
-        prompt = prompt.replace(new RegExp(`\\{\\{${escapedKey}\\}\\}`, 'g'), value);
+        prompt = prompt.replaceAll(`{{${key}}}`, value);
       }
     }

This eliminates the regex construction entirely, addressing the static analysis concern while maintaining the same functionality.


297-310: Simplify evaluation Feature object to only include required fields.

The evaluation Feature object includes several fields (createdAt, updatedAt, branch, files, plan, implementation, testResults) that aren't defined in the Feature type and aren't needed for AI evaluation. These add unnecessary complexity.

🔎 Simplified Feature object
       const evaluationFeature: Feature = {
         id: 'evaluation',
         title: 'Success Criteria Evaluation',
         description: '',
-        status: 'in_progress',
         category: 'evaluation',
-        createdAt: new Date(),
-        updatedAt: new Date(),
-        branch: '',
-        files: [],
-        plan: '',
-        implementation: '',
-        testResults: '',
       };

This includes only the required fields (id, category, description) plus title for clarity, reducing unnecessary clutter.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9555324 and dda343f.

📒 Files selected for processing (1)
  • apps/server/src/pipeline-steps/custom-step.ts
🧰 Additional context used
🧬 Code graph analysis (1)
apps/server/src/pipeline-steps/custom-step.ts (4)
apps/server/src/services/pipeline-memory.ts (2)
  • PipelineMemory (51-309)
  • IterationMemory (40-49)
apps/server/src/integrations/coderabbit.ts (1)
  • CodeRabbitIntegration (46-337)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/pipeline.ts (2)
  • PipelineStepConfig (25-40)
  • PipelineStepResult (137-149)
🪛 ast-grep (0.40.3)
apps/server/src/pipeline-steps/custom-step.ts

[warning] 189-189: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(\\{\\{${escapedKey}\\}\\}, 'g')
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html

(regexp-from-variable)

🔇 Additional comments (5)
apps/server/src/pipeline-steps/custom-step.ts (5)

40-46: LGTM! Constructor properly addresses previous feedback.

The constructor now correctly checks for both codeRabbitApiKey and projectPath before instantiating CodeRabbitIntegration, ensuring the integration receives all required parameters.


48-176: LGTM! Execute method properly implements iteration and memory.

The execution flow correctly addresses previous feedback:

  • Success criteria validation now properly fails when criteria aren't met (lines 151-156)
  • Memory integration works correctly for both success and failure cases
  • CodeRabbit fallback logic is sound
  • Error handling returns appropriate failed results

The iteration logic with memory context and CodeRabbit integration is well-structured.


231-348: LGTM! Success criteria evaluation properly addresses previous feedback.

The method now correctly defaults to false (line 347) when no explicit success indicators are found, addressing the previous concern about false positives. The multi-layered evaluation approach (explicit indicators → JSON parsing → AI fallback) is thorough and robust.


350-365: LGTM! CodeRabbit integration is properly implemented.

The method correctly validates that CodeRabbit is configured before use and provides clear error messages. The integration with CodeRabbitIntegration.submitReview is straightforward and correct.


370-372: LGTM! Standard regex escape implementation.

The escapeRegExp helper correctly escapes all special regex characters, providing protection against regex injection.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

🧹 Nitpick comments (3)
apps/server/src/pipeline-steps/custom-step.ts (3)

64-161: Consider simplifying loop control flow.

The do-while (!signal.aborted) condition on line 161 is somewhat unusual since the loop is primarily controlled by break statements rather than the while condition. The loop will execute at least once before checking signal.aborted, and there's no explicit abort handling within the loop body.

Consider either:

  1. Explicitly checking signal.aborted before expensive operations within the loop body
  2. Using a simple while (true) with break statements if the abort signal isn't meant to interrupt mid-iteration
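
A small sketch of option 1, checking the signal before each expensive attempt (illustrative only; the names are hypothetical):

// Illustrative retry loop: bail out early whenever the caller has aborted.
async function runWithRetries(
  signal: AbortSignal,
  maxLoops: number,
  attempt: () => Promise<boolean>
): Promise<boolean> {
  for (let loopCount = 1; loopCount <= maxLoops; loopCount++) {
    if (signal.aborted) return false; // checked before each expensive attempt
    if (await attempt()) return true;
  }
  return false;
}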

303-322: Consider model and projectPath consistency for AI evaluation.

The AI evaluation hardcodes the model to 'sonnet' (line 307) which may differ from the user's configured model for the step. While using a lighter model for evaluation might be intentional for performance/cost reasons, consider documenting this behavior or making it configurable.

Additionally, the projectPath parameter available in the parent execute() method is not passed to the evaluation. Verify whether autoModeService.executeAIStep() requires this parameter for proper operation.
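
If the evaluation model were made configurable, the shape could be roughly as follows; the evaluationModel field is hypothetical:

// Hypothetical: let the step config override the model used for success-criteria
// evaluation, falling back to a lighter model for cost reasons.
interface EvaluationLoopConfig {
  loopUntilSuccess: boolean;
  maxLoops?: number;
  evaluationModel?: 'opus' | 'sonnet' | 'haiku'; // hypothetical field
}

function getEvaluationModel(config: EvaluationLoopConfig): 'opus' | 'sonnet' | 'haiku' {
  return config.evaluationModel ?? 'sonnet';
}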


341-356: LGTM with minor observation.

The CodeRabbit integration is implemented correctly. The _signal parameter is appropriately marked as unused with the underscore prefix.

One optional improvement: the error wrapping at line 354 uses ${error} which calls toString() on the error object. Consider preserving the original error as a cause for better debugging:

throw new Error(`CodeRabbit integration failed: ${error}`, { cause: error });
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dda343f and fd0ed0b.

📒 Files selected for processing (1)
  • apps/server/src/pipeline-steps/custom-step.ts
🔇 Additional comments (2)
apps/server/src/pipeline-steps/custom-step.ts (2)

40-46: Previous issue correctly addressed.

The constructor now properly checks for both codeRabbitApiKey and projectPath before initializing CodeRabbitIntegration, which resolves the previous critical issue.


144-147: Suspected off-by-one in loop iteration logic.

The initial concern was that loopCount < (config.loopConfig.maxLoops || 1) would allow one fewer attempt than intended. Tracing the flow shows otherwise: loopCount is incremented at the start of each iteration, so with maxLoops of 3 the retry check evaluates 1 < 3 and 2 < 3 (continue) and then 3 < 3 (stop), giving exactly three attempts; with maxLoops of 1, the check 1 < 1 stops after the single intended attempt.

Likely an incorrect or invalid review comment.

- Refactor nested ternary to simplified conditional for better readability
- Simplify abort signal check from 'signal.aborted === false' to '!signal.aborted'
- Improve checkSuccessCriteria with multi-layered evaluation:
  - Default to false instead of true to prevent false positives
  - Add JSON parsing for structured success/failure indicators
  - Add AI-based evaluation with YES/NO prompt fallback
  - Include comprehensive logging for evaluation paths
- Use autoModeService.executeAIStep for consistent AI execution patterns
- Simplify variable substitution using replaceAll() instead of regex to avoid ReDoS
- Remove unused escapeRegExp helper method
- Simplify evaluation Feature object to only required fields
- Propagate abort signal from execute() through checkSuccessCriteria to AI evaluation
- Ensure proper abort behavior throughout the execution chain
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 7

♻️ Duplicate comments (1)
apps/server/src/services/auto-mode-service.ts (1)

1782-1789: Critical: Malformed AbortController breaks cancellation.

Line 1788 creates a malformed AbortController object by casting { signal } as AbortController. This object is missing the abort() method, so calling abort() on it will have no effect, making it impossible to cancel the pipeline step.

This is a duplicate of a previous review comment that was marked as critical.

🔎 Recommended fix
+  // Create a proper AbortController that wraps the parent signal
+  const controller = new AbortController();
+  if (signal) {
+    signal.addEventListener('abort', () => controller.abort(), { once: true });
+  }
+
   const options: ExecuteOptions = {
     prompt,
     model,
     maxTurns: 1,
     cwd: projectPath || '',
     allowedTools: ['Read', 'Glob', 'Grep'],
-    abortController: signal ? ({ signal } as AbortController) : undefined,
+    abortController: controller,
   };
🧹 Nitpick comments (3)
apps/server/src/services/auto-mode-service.ts (3)

392-399: Consider using a Map for multi-project storage instances.

The current implementation only caches one PipelineStorage instance and recreates it when projectPath changes. This could cause issues if the service needs to handle multiple projects concurrently or switch between projects frequently.

🔎 Proposed refactor
-  private pipelineStorage: PipelineStorage | null = null;
-  private projectPath: string | null = null;
+  private pipelineStorages = new Map<string, PipelineStorage>();

   private getPipelineStorage(projectPath: string): PipelineStorage {
-    if (!this.pipelineStorage || this.projectPath !== projectPath) {
+    if (!this.pipelineStorages.has(projectPath)) {
       const storagePath = path.join(projectPath, '.automaker', 'pipeline-data');
-      this.pipelineStorage = new PipelineStorage(storagePath);
-      this.projectPath = projectPath;
+      this.pipelineStorages.set(projectPath, new PipelineStorage(storagePath));
     }
-    return this.pipelineStorage;
+    return this.pipelineStorages.get(projectPath)!;
   }

Note: This would also benefit from the LRU cache pattern mentioned in the previous comment to prevent unbounded growth.


677-693: Simplify duplicated status logic.

The final status determination duplicates the skipTests ternary in both branches of the pipeline check. This can be simplified:

🔎 Proposed refactor
   // Execute pipeline steps if configured
   await this.executePipelineSteps(projectPath, feature, abortController.signal);

-  // Determine final status based on testing mode and pipeline:
-  // - If pipeline is enabled and has steps, feature status will be the last step
-  // - If no pipeline or pipeline disabled, use original logic
-  const pipelineConfig = await this.loadPipelineConfig(projectPath);
-  let finalStatus: string;
-
-  if (pipelineConfig?.enabled && pipelineConfig.steps.length > 0) {
-    // Pipeline is active, status is already set to the last step
-    // Move to waiting_approval or verified based on skipTests
-    finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
-  } else {
-    // No pipeline, use original logic
-    finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
-  }
+  // Determine final status based on testing mode
+  const finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';

   await this.updateFeatureStatus(projectPath, featureId, finalStatus);

The pipeline presence doesn't affect the final status choice since both branches use the same logic.


3114-3128: Potential division by zero in metrics calculation.

Line 3127 calculates averageExecutionTime by dividing by this.pipelineMetrics.totalExecuted. If totalExecuted is somehow 0 (which shouldn't happen given line 3115 increments it first, but defensive coding is good), this would result in NaN.

While the current logic prevents this (increment happens first), consider making it more explicit:

if (this.pipelineMetrics.totalExecuted > 0) {
  const totalTime = 
    this.pipelineMetrics.averageExecutionTime * (this.pipelineMetrics.totalExecuted - 1) +
    executionTime;
  this.pipelineMetrics.averageExecutionTime = totalTime / this.pipelineMetrics.totalExecuted;
}
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fd0ed0b and c617eba.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
  • apps/server/src/index.ts
  • apps/server/src/pipeline-steps/custom-step.ts
  • apps/server/src/services/auto-mode-service.ts
  • apps/ui/package.json
  • apps/ui/src/components/views/board-view.tsx
  • apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
  • apps/ui/src/components/views/board-view/hooks/use-board-actions.ts
  • apps/ui/src/components/views/board-view/kanban-board.tsx
  • apps/ui/src/components/views/settings-view.tsx
  • apps/ui/src/lib/electron.ts
  • apps/ui/src/lib/http-api-client.ts
  • apps/ui/src/store/app-store.ts
  • libs/types/src/index.ts
  • libs/types/src/settings.ts
🚧 Files skipped from review as they are similar to previous changes (10)
  • apps/ui/src/store/app-store.ts
  • apps/ui/src/components/views/board-view/components/kanban-card/kanban-card.tsx
  • libs/types/src/index.ts
  • apps/ui/package.json
  • apps/ui/src/components/views/settings-view.tsx
  • libs/types/src/settings.ts
  • apps/ui/src/components/views/board-view/hooks/use-board-actions.ts
  • apps/ui/src/lib/http-api-client.ts
  • apps/server/src/pipeline-steps/custom-step.ts
  • apps/ui/src/lib/electron.ts
🧰 Additional context used
🧬 Code graph analysis (1)
apps/ui/src/components/views/board-view/kanban-board.tsx (6)
libs/types/src/index.ts (2)
  • Feature (19-19)
  • PipelineConfig (88-88)
libs/types/src/settings.ts (1)
  • PipelineConfig (13-13)
libs/types/src/pipeline.ts (1)
  • PipelineConfig (13-20)
apps/ui/src/components/views/board-view/constants.ts (1)
  • generateColumns (70-84)
apps/ui/src/hooks/use-responsive-kanban.ts (1)
  • useResponsiveKanban (48-194)
apps/ui/src/components/views/board-view/components/kanban-column.tsx (1)
  • KanbanColumn (24-98)
🔇 Additional comments (13)
apps/ui/src/components/views/board-view/kanban-board.tsx (5)

11-12: LGTM!

Imports are correctly added for dynamic column generation and the PipelineConfig type needed for the new pipeline feature.


14-57: LGTM!

The interface changes are well-structured:

  • getColumnFeatures type changed from ColumnId to string to accommodate dynamic pipeline step IDs.
  • pipelineConfig prop correctly typed as optional and nullable.
  • Pipeline step callbacks follow a consistent pattern with (featureId: string, stepId: string) signature.

95-100: LGTM!

Good approach:

  • generateColumns(pipelineConfig) is called directly since memoization happens at the parent level (board-view.tsx line 298).
  • Responsive hook correctly uses columns.length instead of a hardcoded value, adapting to dynamic pipeline column counts.

111-121: LGTM!

Column rendering correctly:

  • Iterates over dynamically generated columns.
  • Passes new description and required props to KanbanColumn for pipeline step metadata display.

203-217: LGTM!

Pipeline step handlers are correctly wired to each card:

  • Conditionally passed only when the parent provides them.
  • Each handler closes over feature.id and accepts stepId from the card, matching the expected (featureId, stepId) signature in the parent.
apps/ui/src/components/views/board-view.tsx (6)

46-59: LGTM!

Import changes correctly bring in generateColumns (replacing static COLUMNS) and the new usePipelineConfig hook for pipeline configuration management.


297-298: LGTM!

Column generation is correctly memoized with pipelineConfig as the dependency, ensuring columns are recalculated only when the pipeline configuration changes.


300-318: LGTM!

Collision detection strategy correctly updated:

  • Uses dynamic columns array for column detection.
  • Falls back to rectIntersection for card-level collisions.
  • Properly memoized with columns dependency.

804-812: LGTM!

The pipelineConfig is correctly passed to useBoardColumnFeatures, enabling the hook to compute column-feature mappings based on the dynamic pipeline configuration.


1059-1098: LGTM!

KanbanBoard correctly receives all pipeline-related props:

  • pipelineConfig for dynamic column generation.
  • onSkipPipelineStep, onRetryPipelineStep, and onClearPipelineStep handlers for per-feature step control.

The wiring is consistent with the interface defined in kanban-board.tsx.


213-219: Hook null handling is properly implemented.

The usePipelineConfig hook correctly handles null project paths: the loadConfig function checks for falsy projectPath and sets config to null (lines 25-27), while all action handlers (skipStep, retryStep, clearStep) return false when no project path is present. Error cases fall back to DEFAULT_PIPELINE_CONFIG, ensuring safe defaults are always available.

apps/server/src/services/auto-mode-service.ts (2)

14-20: LGTM! Pipeline types and service imports are correct.

The import additions properly bring in the pipeline-related types from @automaker/types and the new service classes that support pipeline functionality.

Also applies to: 41-43


1703-1708: Verify: Setting feature.status to step ID may be incorrect.

Line 1704 sets feature.status = stepConfig.id, which assigns the step's ID (e.g., "review", "security") as the feature's status. This seems unusual—typically a status would be a standardized value like 'in_progress', 'review', 'testing', etc.

If the intention is to track which pipeline step is currently active, consider:

  1. Using feature.currentPipelineStep (which is already set on line 1634), or
  2. Using a more descriptive status like feature.status = 'pipeline:' + stepConfig.id

Please verify this is the intended behavior.

Comment on lines +371 to +375
private pipelineConfigs = new Map<string, PipelineConfig>();
private pipelineExecutors = new Map<string, PipelineStepExecutor>();
private pipelineQueue: PipelineQueueItem[] = [];
private pipelineStorage: PipelineStorage | null = null;
private projectPath: string | null = null;
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Potential memory leak: Maps grow unbounded.

The pipelineConfigs and pipelineExecutors Maps use projectPath as keys and accumulate entries over time without any cleanup mechanism. If AutoModeService processes many different projects over its lifetime, these Maps will grow indefinitely.

Consider adding a cleanup method or implementing an LRU cache pattern:

private readonly MAX_CACHED_PROJECTS = 10;

private evictOldestProject(): void {
  if (this.pipelineConfigs.size > this.MAX_CACHED_PROJECTS) {
    const oldestKey = this.pipelineConfigs.keys().next().value;
    this.pipelineConfigs.delete(oldestKey);
    this.pipelineExecutors.get(oldestKey)?.cleanup?.();
    this.pipelineExecutors.delete(oldestKey);
  }
}
🤖 Prompt for AI Agents
In apps/server/src/services/auto-mode-service.ts around lines 371 to 375, the
pipelineConfigs and pipelineExecutors Maps are unbounded and risk a memory leak;
add a bounded cache/eviction strategy (e.g. a MAX_CACHED_PROJECTS constant) and
implement an evictOldestProject() helper that removes the oldest map key when
the size limit is exceeded, invoking any cleanup method on the corresponding
PipelineStepExecutor before deleting it; call this eviction helper whenever you
insert a new project entry (or update projectPath) so the Maps never grow beyond
the configured limit.

Comment on lines +1832 to +1835
const feature = await this.loadFeature(projectPath, featureId);
if (!feature || !feature.pipelineSteps) {
return;
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Silent return may hide errors.

Lines 1832-1835 silently return if the feature doesn't exist or has no pipelineSteps. This could hide programming errors where the method is called with an invalid featureId or before the pipeline is initialized.

Consider throwing an error instead:

const feature = await this.loadFeature(projectPath, featureId);
if (!feature) {
  throw new Error(`Feature ${featureId} not found in project ${projectPath}`);
}
if (!feature.pipelineSteps) {
  throw new Error(`Feature ${featureId} has no pipeline steps to skip`);
}

This provides clearer feedback to API callers and helps catch bugs during development.

🤖 Prompt for AI Agents
In apps/server/src/services/auto-mode-service.ts around lines 1832 to 1835, the
current code silently returns when a feature is missing or has no pipelineSteps
which can hide errors; replace the silent return with explicit thrown errors:
after loading the feature, if feature is falsy throw an Error indicating the
featureId and projectPath were not found, and if feature.pipelineSteps is falsy
throw an Error indicating the feature has no pipeline steps to skip; ensure the
thrown messages include featureId and projectPath for clear diagnostics so
callers receive immediate feedback instead of a no-op.

Comment on lines +2996 to +3000
// Get executor
const executor = this.pipelineExecutors.get(this.config?.projectPath || '');
if (!executor) {
return;
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Executor lookup uses wrong projectPath.

Line 2997 gets the executor using this.config?.projectPath, but the queue items being processed have their own item.projectPath which may differ. This means pipeline steps could be executed in the wrong project context or fail to find the executor.

🔎 Proposed fix
-  // Get executor
-  const executor = this.pipelineExecutors.get(this.config?.projectPath || '');
-  if (!executor) {
-    return;
-  }
-
   // Process items in order
   const itemsToProcess = this.pipelineQueue.filter((item) => {
     // ... dependency logic
   });

   for (const item of itemsToProcess) {
+    // Get executor for this specific item's project
+    let executor = this.pipelineExecutors.get(item.projectPath);
+    if (!executor) {
+      executor = new PipelineStepExecutor(this, item.projectPath);
+      this.pipelineExecutors.set(item.projectPath, executor);
+    }
+
     try {
       // ... rest of processing

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/server/src/services/auto-mode-service.ts around lines 2996 to 3000, the
executor lookup uses this.config?.projectPath but the queue item being processed
may have its own item.projectPath; update the lookup to prefer item.projectPath
and fall back to the service config (e.g. const executor =
this.pipelineExecutors.get(item.projectPath || this.config?.projectPath || ''))
so the correct executor for the item's project is used, and keep the existing
null check/return if no executor is found.

Comment on lines +3003 to +3011
const itemsToProcess = this.pipelineQueue.filter((item) => {
// Check dependencies
if (item.dependencies.length > 0) {
return item.dependencies.every(
(dep) => this.pipelineQueue.findIndex((q) => q.featureId === dep) === -1
);
}
return true;
});
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Dependency logic may be incorrect.

Lines 3005-3008 filter items where all dependencies are not in the queue:

return item.dependencies.every(
  (dep) => this.pipelineQueue.findIndex((q) => q.featureId === dep) === -1
);

This logic says "process item if all its dependencies are NOT in the queue anymore." However, this doesn't verify that those dependencies actually completed successfully—they might have failed or been dropped.

Consider tracking completed/failed features separately:

private completedFeatures = new Set<string>();
private failedFeatures = new Set<string>();

// In dependency check:
return item.dependencies.every((dep) => 
  this.completedFeatures.has(dep) && 
  !this.failedFeatures.has(dep)
);

Then update these sets when items complete or fail.

🤖 Prompt for AI Agents
In apps/server/src/services/auto-mode-service.ts around lines 3003 to 3011, the
dependency check currently treats an item as ready if all its dependencies are
absent from pipelineQueue, which can wrongly allow processing when dependencies
were dropped or failed; change the logic to track completed and failed features
(e.g., add private completedFeatures and failedFeatures sets) and update those
sets when an item finishes or fails, then replace the filter to only allow an
item when every dep is present in completedFeatures and not present in
failedFeatures; ensure updates to these sets occur in the existing
success/failure handlers and remove reliance on pipelineQueue absence to
determine readiness.

Comment on lines +3022 to +3036
// Create and store AbortController for this feature
const abortController = new AbortController();
this.runningFeatures.set(item.featureId, {
featureId: item.featureId,
projectPath: item.projectPath,
worktreePath: null,
branchName: null,
abortController,
isAutoMode: true,
startTime: Date.now(),
});

// Execute steps with the controller signal
const startTime = Date.now();
await this.executePipelineSteps(item.projectPath, feature, abortController.signal);
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Consider duplicate runningFeatures entry risk.

Lines 3022-3032 add an entry to runningFeatures for each queued pipeline item. However, if the same feature is already running (from a manual execution or auto-loop), this could create conflicts or duplicate entries.

Add a check before creating the running feature entry:

// Check if already running
if (this.runningFeatures.has(item.featureId)) {
  console.warn(`[Pipeline] Feature ${item.featureId} is already running, skipping`);
  continue;
}

// Create and store AbortController for this feature
const abortController = new AbortController();
this.runningFeatures.set(item.featureId, {
  // ...
});
🤖 Prompt for AI Agents
In apps/server/src/services/auto-mode-service.ts around lines 3022 to 3036,
before creating and storing the AbortController and adding an entry to
this.runningFeatures, check if this.runningFeatures already contains
item.featureId; if it does, log or warn and skip processing this queued item
(e.g., continue the loop) to avoid duplicate/conflicting entries, otherwise
create the AbortController and set the runningFeatures entry as currently
implemented; ensure the check happens atomically before any side effects so no
duplicate entries are added.

@webdevcody
Copy link
Collaborator

When I check out this branch, enable the pipeline in settings, and add a code review step, I don't see any extra column when I go to my kanban, and when I go back to settings the pipeline is disabled again. I've tried multiple times to enable it, but it never seems to persist.

Shironex and others added 2 commits December 27, 2025 04:07
The pipeline routes were imported but never mounted to the Express app,
causing 404 errors when the UI tried to save/load pipeline configuration.
This resulted in "SyntaxError: Unexpected token '<'" errors as the server
returned HTML error pages instead of JSON responses.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@webdevcody
Copy link
Collaborator

I tried your pipelines PR again. The other issue is fixed, but now when I drag an item into In Progress, I never see it automatically move to the Code Review or Security Review column; it moves directly into Manual Review.
Does it automatically move from In Progress -> next pipeline step -> another pipeline step -> Manual Review for you?

@NomadicDaddy
Copy link
Author

I tried your pipelines PR again. The other issue is fixed, but now when I drag an item into In Progress, I never see it automatically move to the Code Review or Security Review column; it moves directly into Manual Review. Does it automatically move from In Progress -> next pipeline step -> another pipeline step -> Manual Review for you?

I retried several other step combinations, was able to reproduce the failures, and fixed those issues. Should be good now!

Summary of the test, based on console output:

Summary of Successful Pipeline Execution
1. Feature Start
   - Feature "expense-schema" moved to In Progress
   - Agent started and completed the expense schema implementation
2. Pipeline Steps Executed in Correct Order
   - Code Review ✅: started ("Starting Code Review...") and completed ("Code Review completed successfully")
   - Smoke:QC ✅: started ("Starting Smoke:QC...") and completed ("Smoke:QC completed successfully")
3. Feature Completion
   - Both pipeline steps passed
   - Feature completed successfully in 518 seconds
   - Auto-verified and moved to the final lane

Key Fixes Confirmed Working
- Pipeline Reset Logic: when a feature moves to In Progress, its pipeline steps are properly reset
- Code Review Step: no longer incorrectly skipped; executes first as intended
- Smoke:QC Step: fixed the toLowerCase() error; runs without issues
- Sequential Execution: steps run in the correct order defined in pipeline.json

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 4

♻️ Duplicate comments (3)
apps/server/src/services/auto-mode-service.ts (3)

3068-3078: Risk of duplicate runningFeatures entries.

Lines 3070-3078 add an entry to runningFeatures without checking if the feature is already running (from manual execution or another queue). This could create conflicts or duplicate entries.

🔎 Proposed fix
+    // Check if already running
+    if (this.runningFeatures.has(item.featureId)) {
+      console.warn(`[Pipeline] Feature ${item.featureId} is already running, skipping queue item`);
+      continue;
+    }
+
     // Create and store AbortController for this feature
     const abortController = new AbortController();
     this.runningFeatures.set(item.featureId, {

3048-3057: Dependency logic may allow processing despite failures.

Lines 3052-3054 check if dependencies are absent from the queue to determine readiness. However, absence doesn't verify successful completion—dependencies might have failed or been dropped. This could allow dependent pipeline items to execute prematurely.

🔎 Proposed fix

Track completed and failed features separately:

+ private completedPipelineFeatures = new Set<string>();
+ private failedPipelineFeatures = new Set<string>();

  // In dependency check:
  const itemsToProcess = this.pipelineQueue.filter((item) => {
    if (item.dependencies.length > 0) {
-     return item.dependencies.every(
-       (dep) => this.pipelineQueue.findIndex((q) => q.featureId === dep) === -1
-     );
+     return item.dependencies.every(
+       (dep) => this.completedPipelineFeatures.has(dep) && !this.failedPipelineFeatures.has(dep)
+     );
    }
    return true;
  });

  // In success handler (around line 3092):
+ this.completedPipelineFeatures.add(item.featureId);

  // In failure handler (around line 3113):
+ this.failedPipelineFeatures.add(item.featureId);

3042-3046: Incorrect executor lookup for pipeline queue items.

Line 3043 retrieves the executor using this.config?.projectPath, but queue items have their own item.projectPath which may differ. This means pipeline steps could execute in the wrong project context.

🔎 Proposed fix

Move the executor lookup inside the loop and use item.projectPath:

-  // Get executor
-  const executor = this.pipelineExecutors.get(this.config?.projectPath || '');
-  if (!executor) {
-    return;
-  }
-
   // Process items in order
   const itemsToProcess = this.pipelineQueue.filter((item) => {
     // Check dependencies
     ...
   });

   for (const item of itemsToProcess) {
+    // Get executor for this item's project
+    let executor = this.pipelineExecutors.get(item.projectPath);
+    if (!executor) {
+      executor = new PipelineStepExecutor(this, item.projectPath);
+      this.pipelineExecutors.set(item.projectPath, executor);
+    }
+
     try {
       // Load feature
🧹 Nitpick comments (3)
libs/utils/src/error-handler.ts (1)

148-150: Consider aligning behavior with classifyError.

getErrorMessage returns 'Unknown error' for non-Error objects, while classifyError uses String(error || 'Unknown error'), preserving the actual error value. This inconsistency could be confusing.

🔎 Optional: align with classifyError
 export function getErrorMessage(error: unknown): string {
-  return error instanceof Error ? error.message : 'Unknown error';
+  return error instanceof Error ? error.message : String(error || 'Unknown error');
 }
libs/types/src/settings.ts (2)

531-532: Consider per-step-type config validation.

The validation only checks that step.config is a non-null object but doesn't validate its contents. Different step types (review, security, performance, test, custom) likely require different configuration fields. Consider implementing type-specific validation to catch misconfigurations early, such as:

  • review steps might require criteria or reviewers
  • custom steps might require command or script
  • performance steps might require thresholds or metrics

Shallow validation could allow invalid configurations to pass, leading to runtime failures during pipeline execution.

Example: Add per-type config validation
function validateStepConfig(step: any): boolean {
  // Common validation
  if (typeof step.config !== 'object' || step.config === null) {
    return false;
  }

  // Type-specific validation
  switch (step.type) {
    case 'review':
      return typeof step.config.criteria === 'string';
    case 'custom':
      return typeof step.config.command === 'string' || 
             typeof step.config.script === 'string';
    case 'performance':
      return typeof step.config.thresholds === 'object';
    // Add other type-specific validations
    default:
      return true;
  }
}

// Then use it in the main validation:
config.steps.every((step: any) => {
  return (
    // ... existing validations ...
    validateStepConfig(step)
  );
})

513-513: Consider validating the version field format.

The validation checks that version is a string but doesn't validate its format. If the pipeline config uses semantic versioning or another format convention, consider adding format validation to catch malformed versions early.

Example:

typeof config.version === 'string' &&
/^\d+\.\d+\.\d+$/.test(config.version) && // Semantic versioning
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1bd84f9 and fee79db.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (10)
  • apps/server/src/index.ts
  • apps/server/src/pipeline-steps/custom-step.ts
  • apps/server/src/providers/provider-factory.ts
  • apps/server/src/services/auto-mode-service.ts
  • apps/server/src/services/pipeline-step-executor.ts
  • apps/ui/src/components/views/board-view.tsx
  • apps/ui/src/components/views/settings-view.tsx
  • apps/ui/src/store/app-store.ts
  • libs/types/src/settings.ts
  • libs/utils/src/error-handler.ts
🚧 Files skipped from review as they are similar to previous changes (5)
  • apps/ui/src/components/views/settings-view.tsx
  • apps/ui/src/store/app-store.ts
  • apps/server/src/services/pipeline-step-executor.ts
  • apps/ui/src/components/views/board-view.tsx
  • apps/server/src/index.ts
🧰 Additional context used
🧬 Code graph analysis (1)
apps/server/src/pipeline-steps/custom-step.ts (5)
apps/server/src/services/pipeline-memory.ts (1)
  • IterationMemory (40-49)
apps/server/src/integrations/coderabbit.ts (1)
  • CodeRabbitIntegration (46-337)
libs/types/src/index.ts (3)
  • Feature (19-19)
  • PipelineStepConfig (89-89)
  • PipelineStepResult (91-91)
libs/types/src/feature.ts (1)
  • Feature (25-62)
libs/types/src/pipeline.ts (2)
  • PipelineStepConfig (25-40)
  • PipelineStepResult (137-149)
🔇 Additional comments (27)
libs/utils/src/error-handler.ts (2)

30-32: Good defensive input validation.

These guards handle edge cases gracefully when the function is called with unexpected types at runtime.


49-51: Consistent defensive validation.

Matches the pattern in isCancellationError for runtime safety.

libs/types/src/settings.ts (4)

10-10: LGTM: Clean type imports and exports.

The import and re-export of PipelineConfig follows the existing pattern for AgentModel and maintains consistency with ESM conventions.

Also applies to: 12-13


326-327: LGTM: CodeRabbit API key integration follows established pattern.

The new coderabbit API key field is consistent with existing credential fields, includes appropriate documentation, and uses the standard empty string default.

Also applies to: 481-481


401-403: LGTM: Pipeline configuration properly integrated into project settings.

The optional pipeline field follows the established pattern for project-specific overrides and is correctly typed.


528-528: These special model selection values are properly documented and implemented.

The 'different' and 'same' model values are valid and well-defined:

  1. Documented in type definition: Explicitly defined in PipelineStepConfig (libs/types/src/pipeline.ts:30) as model: AgentModel | 'different' | 'same', including JSON schema validation.

  2. Behavior is clear: JSDoc comments in getModelForStep() (apps/server/src/services/pipeline-step-executor.ts:333-335) explain both modes:

    • 'same': Reuses the feature's configured model (or defaults to 'opus')
    • 'different': Selects an alternate model (opus→sonnet, sonnet→opus, haiku→sonnet)
  3. Correctly handled by executor: Implemented in getModelForStep() (lines 345-360) with proper logic for each mode, covered by unit tests and actively used in integration tests and pipeline examples.
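
To make the two modes concrete, here is a minimal TypeScript sketch of the resolution described above; resolveStepModel and the explicit AgentModel union are illustrative stand-ins for the actual getModelForStep() implementation, and the fallback to 'opus' mirrors the behavior stated in point 2:

type AgentModel = 'opus' | 'sonnet' | 'haiku';

// Sketch of 'same' / 'different' resolution, assuming the alternate-model
// mapping opus→sonnet, sonnet→opus, haiku→sonnet described above.
function resolveStepModel(
  stepModel: AgentModel | 'same' | 'different',
  featureModel?: AgentModel
): AgentModel {
  const base: AgentModel = featureModel ?? 'opus';
  if (stepModel === 'same') return base;
  if (stepModel === 'different') {
    const alternate: Record<AgentModel, AgentModel> = {
      opus: 'sonnet',
      sonnet: 'opus',
      haiku: 'sonnet',
    };
    return alternate[base];
  }
  return stepModel; // an explicitly configured model wins
}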

apps/server/src/providers/provider-factory.ts (2)

21-23: LGTM!

The input validation prevents potential issues downstream by ensuring modelId is always provided. Throwing an error is appropriate for a required parameter.


79-81: LGTM!

Returning null for empty provider names is appropriate for a lookup method and aligns with the method's return type signature.

apps/server/src/pipeline-steps/custom-step.ts (6)

40-46: LGTM!

The constructor properly initializes CodeRabbitIntegration with both codeRabbitApiKey and projectPath when both are provided, addressing the previous review feedback.


64-171: LGTM!

The execute method correctly orchestrates:

  • Optional CodeRabbit execution with AI fallback
  • Loop-until-success with configurable max loops
  • Memory-based feedback integration
  • Proper status assignment (lines 165-166 correctly set 'failed' when success criteria are not met)

All previous review concerns have been addressed.
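
As a rough illustration of the loop-until-success pattern with memory described above (the names runWithRetries, runStepOnce, and StepMemory are assumptions for this sketch, not the custom step's real API):

interface StepMemory {
  previousFeedback: string[];
}

// Run a step up to maxLoops times, feeding earlier feedback back into each attempt.
async function runWithRetries(
  maxLoops: number,
  memory: StepMemory,
  runStepOnce: (feedback: string[]) => Promise<{ output: string; passed: boolean }>
): Promise<{ passed: boolean; attempts: number }> {
  for (let attempt = 1; attempt <= maxLoops; attempt++) {
    // Prior feedback is folded into the next prompt so the agent does not repeat itself.
    const result = await runStepOnce(memory.previousFeedback);
    if (result.passed) {
      return { passed: true, attempts: attempt };
    }
    memory.previousFeedback.push(result.output);
  }
  return { passed: false, attempts: maxLoops };
}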


178-197: LGTM!

The error handling is thorough and returns a well-structured PipelineStepResult. The defensive error message extraction ensures the step always returns a meaningful error status even if error processing itself fails.


200-250: LGTM!

The buildPrompt method correctly uses replaceAll() consistently throughout (lines 211, 216-219, 226-228), addressing the previous review concern about regex interpretation issues with special characters.


252-381: LGTM!

The checkSuccessCriteria method correctly implements:

  • Explicit success/failure indicator checking
  • JSON-based structured output parsing
  • AI-based evaluation fallback with proper signal propagation (line 359)
  • Safe default to false (line 380) to prevent false positives

All previous review concerns have been addressed.
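
A simplified sketch of that evaluation order follows; the SUCCESS/FAILURE markers and the passed JSON field are illustrative assumptions, and the undefined result stands for the point where the real method would fall back to AI evaluation before defaulting to false:

// Evaluate step output: explicit indicators first, then structured JSON,
// otherwise undefined so the caller can escalate (and ultimately default to false).
function evaluateStepOutput(output: string): boolean | undefined {
  if (/\bSUCCESS\b/i.test(output)) return true;
  if (/\bFAILURE\b/i.test(output)) return false;

  const jsonMatch = output.match(/\{[\s\S]*\}/);
  if (jsonMatch) {
    try {
      const parsed = JSON.parse(jsonMatch[0]) as { passed?: boolean };
      if (typeof parsed.passed === 'boolean') return parsed.passed;
    } catch {
      // Malformed JSON: fall through to the inconclusive case.
    }
  }
  return undefined; // inconclusive; caller falls back to AI evaluation
}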


383-398: LGTM!

The executeCodeRabbit method properly validates the integration is configured before calling submitReview, and appropriately marks the unused _signal parameter with an underscore prefix.

apps/server/src/services/auto-mode-service.ts (13)

357-387: LGTM!

The PipelineQueueItem interface correctly includes retryCount and maxRetries fields (lines 364-365) to prevent infinite retry loops, addressing previous review feedback. The pipeline-related fields in the AutoModeService class are well-organized for managing pipeline state.


395-405: LGTM!

The getPipelineStorage method correctly uses a project-relative path (line 400) instead of a hardcoded relative path, addressing the previous review concern about storage location resolution.


683-700: LGTM!

The pipeline integration in executeFeature correctly:

  • Executes pipeline steps after feature implementation (line 684)
  • Derives final status based on whether pipeline is configured (lines 692-699)
  • Maintains backward compatibility when no pipeline is configured

786-791: LGTM!

Resetting pipeline steps when resuming a feature without context ensures a clean state for re-execution.


907-918: LGTM!

The early return for completed features (lines 908-911) prevents unnecessary re-execution, and resetting pipeline steps (lines 915-918) ensures a clean state for follow-up work.


1550-1594: LGTM!

The updateFeatureStatus method correctly extends the signature with an optional pipelineState parameter (lines 1554-1557) while maintaining backward compatibility. The persistence logic properly handles both setting and explicitly clearing pipeline state (lines 1577-1588).


1596-1613: LGTM!

The loadPipelineConfig method implements a sensible caching strategy and gracefully handles errors by returning null, allowing callers to detect the absence of pipeline configuration.
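
A minimal sketch of that cache-then-load pattern, assuming a Map keyed by project path and a loader callback (PipelineConfigLike stands in for the real PipelineConfig type):

type PipelineConfigLike = { enabled: boolean; steps: unknown[] }; // stand-in for PipelineConfig

const pipelineConfigCache = new Map<string, PipelineConfigLike>();

// Return a cached config when available; otherwise load it, cache it,
// and signal "no pipeline configured" with null on failure.
async function loadPipelineConfigCached(
  projectPath: string,
  load: (path: string) => Promise<PipelineConfigLike>
): Promise<PipelineConfigLike | null> {
  const cached = pipelineConfigCache.get(projectPath);
  if (cached) return cached;
  try {
    const config = await load(projectPath);
    pipelineConfigCache.set(projectPath, config);
    return config;
  } catch {
    return null;
  }
}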


1615-1768: LGTM!

The executePipelineSteps method correctly orchestrates pipeline execution:

  • Early returns when pipeline is disabled
  • Skips non-auto-triggered and already-passed steps
  • Properly propagates signals and handles errors
  • Updates feature status and stores results to storage
  • Fails fast on required step failures

Previous concerns about non-null assertions have been addressed.


1770-1799: LGTM!

The getStepModel method correctly implements model resolution logic for pipeline steps, including special handling for 'same', 'different', and explicit model specifications.


1861-1894: LGTM!

The skipPipelineStep method correctly prevents skipping required steps (lines 1869-1871) as suggested in previous reviews, throwing an error that will be caught by route handlers.


1896-1920: LGTM!

The clearPipelineStepResults method correctly removes step data from both the executor and feature state.


2987-3032: LGTM!

The addToPipelineQueue method correctly implements:

  • Deduplication with config updates
  • Retry counter initialization and reset
  • Priority-based queue sorting

3142-3190: LGTM!

The pipeline queue management helper methods are well-implemented:

  • sortPipelineQueue correctly prioritizes by priority then time
  • updatePipelineMetrics properly calculates running averages
  • getPipelineMetrics returns a defensive copy
  • clearPipelineQueue provides a clean reset operation
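
To make the sorting and running-average behavior listed above concrete, a small sketch follows; the item shape and function names are assumptions, not the service's actual fields:

interface PipelineQueueItemLike {
  featureId: string;
  priority: number; // higher runs first
  addedAt: number;  // epoch milliseconds
}

// Priority descending, then time added ascending.
function sortQueue(queue: PipelineQueueItemLike[]): PipelineQueueItemLike[] {
  return [...queue].sort((a, b) => b.priority - a.priority || a.addedAt - b.addedAt);
}

// Incremental running average of step execution time.
function nextAverage(prevAvg: number, completedCount: number, newDurationMs: number): number {
  return (prevAvg * completedCount + newDurationMs) / (completedCount + 1);
}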

Comment on lines 516 to 534
config.steps.every((step: any) => {
// Normalize step ID for comparison (trim and lowercase)
const normalizedId = step.id?.toString().trim().toLowerCase();

return (
typeof step.id === 'string' &&
step.id.trim().length > 0 && // Ensure non-empty after trimming
!reservedNames.has(normalizedId) && // Check against reserved names
typeof step.type === 'string' &&
['review', 'security', 'performance', 'test', 'custom'].includes(step.type) &&
typeof step.name === 'string' &&
typeof step.model === 'string' &&
['opus', 'sonnet', 'haiku', 'different', 'same'].includes(step.model) &&
typeof step.required === 'boolean' &&
typeof step.autoTrigger === 'boolean' &&
typeof step.config === 'object' &&
step.config !== null
);
})
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== Checking StepConfig interface ==="
if [ -f "libs/types/src/pipeline.ts" ]; then
  rg -n --type=ts "interface StepConfig|type StepConfig" libs/types/src/pipeline.ts -A 20
else
  echo "File not found at libs/types/src/pipeline.ts"
  echo "Searching for StepConfig definition across repository:"
  rg -n --type=ts "interface StepConfig|type StepConfig" -A 15
fi

echo -e "\n=== Searching for dependency-related code ==="
rg -n --type=ts "dependenc|Dependenc" libs/types/src/ -C 2

echo -e "\n=== Checking the validation function context in settings.ts ==="
sed -n '500,550p' libs/types/src/settings.ts

Repository: AutoMaker-Org/automaker

Length of output: 4907


Add dependency validation to the step validation logic.

The dependencies field exists in the Step configuration (as dependencies?: string[] in pipeline.ts), but the validation function doesn't validate dependency relationships. Add checks to:

  1. Ensure dependency arrays contain valid step IDs (references to other steps in the array)
  2. Detect circular dependencies at validation time
  3. Verify that dependency IDs exist and are not self-referential

This aligns with the PR's "Dependency management with circular dependency detection" feature and prevents configuration errors at runtime.


⚠️ Potential issue | 🟠 Major

Add step ID uniqueness validation.

The validation function does not check whether step IDs are unique within the pipeline configuration. Duplicate step IDs could cause conflicts in the UI (column generation) or execution logic, leading to unexpected behavior.

🔎 Proposed fix to enforce unique step IDs
 config.steps.every((step: any) => {
   // Normalize step ID for comparison (trim and lowercase)
   const normalizedId = step.id?.toString().trim().toLowerCase();

   return (
     typeof step.id === 'string' &&
     step.id.trim().length > 0 && // Ensure non-empty after trimming
     !reservedNames.has(normalizedId) && // Check against reserved names
     typeof step.type === 'string' &&
     ['review', 'security', 'performance', 'test', 'custom'].includes(step.type) &&
     typeof step.name === 'string' &&
     typeof step.model === 'string' &&
     ['opus', 'sonnet', 'haiku', 'different', 'same'].includes(step.model) &&
     typeof step.required === 'boolean' &&
     typeof step.autoTrigger === 'boolean' &&
     typeof step.config === 'object' &&
     step.config !== null
   );
-})
+}) &&
+// Ensure step IDs are unique
+config.steps.length === new Set(config.steps.map((s: any) => s.id)).size
🤖 Prompt for AI Agents
In libs/types/src/settings.ts around lines 516 to 534, the validator currently
checks format and reserved names but not uniqueness of step IDs; update the
validation to ensure step IDs are unique by collecting each step's normalized ID
(trimmed and lowercased) and verifying no duplicates exist (e.g., build a Set of
normalized IDs and confirm its size equals the number of steps or check during
iteration that a normalizedId hasn't been seen before); keep the existing
reserved-name check and other validations, and fail validation if any duplicate
normalized ID is found.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (6)
libs/types/src/settings.ts (1)

500-555: Add dependency validation to prevent runtime errors.

The uniqueness validation has been successfully added (lines 547-551), but dependency validation is still missing as noted in previous reviews. The function doesn't validate the dependencies field in step configurations, which could lead to runtime errors.

Add validation to ensure:

  1. Dependency IDs reference existing steps in the configuration
  2. No circular dependencies exist
  3. Steps don't reference themselves
🔎 Proposed implementation
   // Check for duplicate normalized IDs
   if (normalizedIds.has(normalizedId)) {
     return false; // Duplicate ID found
   }
   normalizedIds.add(normalizedId);
 }

+// Validate dependencies
+for (const step of config.steps) {
+  if (step.dependencies && Array.isArray(step.dependencies)) {
+    for (const depId of step.dependencies) {
+      // Check dependency exists
+      if (!normalizedIds.has(depId.toString().trim().toLowerCase())) {
+        return false; // Dependency references non-existent step
+      }
+      // Check not self-referential
+      if (depId.toString().trim().toLowerCase() === step.id.toString().trim().toLowerCase()) {
+        return false; // Self-referential dependency
+      }
+    }
+  }
+}
+
+// Check for circular dependencies using DFS
+const visitedSteps = new Set<string>();
+const recursionStack = new Set<string>();
+
+function hasCycle(stepId: string): boolean {
+  visitedSteps.add(stepId);
+  recursionStack.add(stepId);
+
+  const step = config.steps.find((s: any) => s.id.toString().trim().toLowerCase() === stepId);
+  if (step?.dependencies && Array.isArray(step.dependencies)) {
+    for (const depId of step.dependencies) {
+      const normalizedDepId = depId.toString().trim().toLowerCase();
+      if (!visitedSteps.has(normalizedDepId)) {
+        if (hasCycle(normalizedDepId)) {
+          return true;
+        }
+      } else if (recursionStack.has(normalizedDepId)) {
+        return true; // Circular dependency detected
+      }
+    }
+  }
+
+  recursionStack.delete(stepId);
+  return false;
+}
+
+for (const id of normalizedIds) {
+  if (!visitedSteps.has(id) && hasCycle(id)) {
+    return false; // Circular dependency found
+  }
+}
+
 return true;
apps/server/src/services/auto-mode-service.ts (5)

377-381: Potential memory leak: Maps grow unbounded.

The pipelineConfigs and pipelineExecutors Maps use projectPath as keys and accumulate entries without any cleanup mechanism. If the service processes many different projects over its lifetime, these Maps will grow indefinitely.

Consider implementing an LRU cache or size-limited eviction strategy to prevent memory leaks in long-running server instances.
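
One possible shape for such an eviction strategy is sketched below; this is a generic size-bounded, insertion-order LRU and not the service's actual code, and the maximum of 50 entries is an arbitrary assumption:

// Simple LRU-style cache built on Map insertion order.
class BoundedCache<V> {
  private entries = new Map<string, V>();
  constructor(private maxEntries = 50) {}

  get(key: string): V | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Refresh recency by re-inserting the entry.
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }
}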


1882-1894: Silent return may hide programming errors.

The method now correctly prevents skipping required steps (lines 1887-1889), which is good. However, lines 1891-1894 still silently return when the feature doesn't exist or has no pipeline steps, which could hide programming errors where the method is called with invalid parameters.

Consider throwing explicit errors for better diagnostics:

const feature = await this.loadFeature(projectPath, featureId);
if (!feature) {
  throw new Error(`Feature ${featureId} not found in project ${projectPath}`);
}
if (!feature.pipelineSteps) {
  throw new Error(`Feature ${featureId} has no pipeline steps`);
}

3060-3064: Executor lookup uses wrong projectPath.

Line 3061 gets the executor using this.config?.projectPath, but queue items have their own item.projectPath which may differ. This means pipeline steps could be executed in the wrong project context.

Use the item's project path instead:

-  // Get executor
-  const executor = this.pipelineExecutors.get(this.config?.projectPath || '');
-  if (!executor) {
-    return;
-  }
+  // Process items in order
+  const itemsToProcess = this.pipelineQueue.filter((item) => {

Then inside the loop, get the executor for each specific item:

for (const item of itemsToProcess) {
  let executor = this.pipelineExecutors.get(item.projectPath);
  if (!executor) {
    executor = new PipelineStepExecutor(this, item.projectPath);
    this.pipelineExecutors.set(item.projectPath, executor);
  }
  // ... rest of processing
}

3067-3075: Dependency logic may allow execution of features with failed dependencies.

Lines 3069-3072 check if all dependencies are absent from the queue, but this doesn't verify that those dependencies completed successfully—they might have failed or been dropped.

Track completed and failed features separately:

private completedFeatures = new Set<string>();
private failedFeatures = new Set<string>();

// In dependency check:
return item.dependencies.every((dep) => 
  this.completedFeatures.has(dep) && 
  !this.failedFeatures.has(dep)
);

Update these sets in the success handler (line 3110) and failure handler (lines 3130-3143).


3086-3096: Risk of duplicate runningFeatures entries.

Lines 3088-3096 add an entry to runningFeatures without checking if the feature is already running. If the same feature is queued multiple times or is already running from a manual execution, this could create conflicts.

Add a check before creating the entry:

// Check if already running
if (this.runningFeatures.has(item.featureId)) {
  console.warn(`[Pipeline] Feature ${item.featureId} is already running, skipping`);
  continue;
}

// Create and store AbortController for this feature
const abortController = new AbortController();
this.runningFeatures.set(item.featureId, {
  // ...
});
🧹 Nitpick comments (3)
libs/utils/src/error-handler.ts (1)

70-71: Consider removing unused catch parameter.

The catch parameter e is declared but never used. You can simplify this to catch { without the parameter binding.

🔎 Proposed fix
-  } catch (e) {
+  } catch {
     message = 'Error converting error to string';
   }
apps/server/src/services/auto-mode-service.ts (2)

686-699: Simplify redundant conditional logic.

Both branches of the conditional (lines 692-698) set finalStatus to the same value based on feature.skipTests. The pipeline configuration check doesn't affect the outcome, making this essentially dead code.

🔎 Proposed simplification
-  // Determine final status based on testing mode and pipeline:
-  // - If pipeline is enabled and has steps, feature status will be the last step
-  // - If no pipeline or pipeline disabled, use original logic
-  const pipelineConfig = await this.loadPipelineConfig(projectPath);
-  let finalStatus: string;
-
-  if (pipelineConfig?.enabled && pipelineConfig.steps.length > 0) {
-    // Pipeline is active, status is already set to the last step
-    // Move to waiting_approval or verified based on skipTests
-    finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
-  } else {
-    // No pipeline, use original logic
-    finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
-  }
+  // Determine final status based on testing mode
+  const finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';

Note: If the pipeline configuration is intended to affect the final status differently, this logic needs to be updated to reflect that intent.


1615-1768: Consider extracting step execution into a helper method.

The executePipelineSteps method is comprehensive and handles errors appropriately. However, the step execution logic (lines 1671-1760) could be extracted into a separate helper method to improve readability and testability.

Example:

private async executeStep(
  executor: PipelineStepExecutor,
  stepConfig: PipelineStepConfig,
  feature: Feature,
  projectPath: string,
  signal: AbortSignal
): Promise<PipelineStepResult> {
  // Lines 1674-1709 logic here
}

This would make executePipelineSteps more focused on orchestration while keeping execution details separate.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fee79db and 7311880.

📒 Files selected for processing (3)
  • apps/server/src/services/auto-mode-service.ts
  • libs/types/src/settings.ts
  • libs/utils/src/error-handler.ts
🧰 Additional context used
🧬 Code graph analysis (2)
libs/types/src/settings.ts (2)
libs/types/src/index.ts (2)
  • PipelineConfig (88-88)
  • validatePipelineConfig (74-74)
libs/types/src/pipeline.ts (1)
  • PipelineConfig (13-20)
apps/server/src/services/auto-mode-service.ts (2)
libs/types/src/pipeline.ts (5)
  • PipelineStepConfig (25-40)
  • PipelineConfig (13-20)
  • PipelineStep (119-127)
  • PipelineStepResult (137-149)
  • StepStatus (132-132)
libs/types/src/feature.ts (1)
  • Feature (25-62)
🔇 Additional comments (19)
libs/utils/src/error-handler.ts (1)

67-72: Debug logging successfully removed.

The console.log statements flagged in the previous review have been removed. The try/catch now provides safe message extraction without log pollution or sensitive data exposure.

libs/types/src/settings.ts (4)

10-10: LGTM!

The import and re-export of PipelineConfig are correctly structured.

Also applies to: 12-13


326-327: LGTM!

The CodeRabbit API key field is properly added to the credentials structure with appropriate documentation.


401-403: LGTM!

The optional pipeline configuration field is correctly added to ProjectSettings, following the pattern of other optional project-level overrides.


481-481: LGTM!

The default empty string for the CodeRabbit API key is consistent with the pattern used for other API key providers.

apps/server/src/services/auto-mode-service.ts (14)

357-366: LGTM!

The PipelineQueueItem interface is well-structured and includes retry tracking (retryCount, maxRetries) to prevent infinite retry loops as requested in previous reviews.


395-405: LGTM!

The getPipelineStorage method correctly addresses the previous concern about hardcoded storage paths. It now uses project-relative paths and initializes storage lazily with the correct location.


481-482: LGTM!

The pipeline queue processing is appropriately integrated into the auto loop iteration.


786-790: LGTM!

Resetting pipeline steps when resuming a feature fresh is appropriate and ensures clean state.


684-684: LGTM!

The pipeline steps are correctly executed after feature implementation, with proper abort signal propagation.


907-918: LGTM!

The guard against re-executing completed features and the pipeline reset logic are both appropriate for the follow-up flow.


1550-1594: LGTM!

The extended updateFeatureStatus method properly handles optional pipeline state updates with explicit clearing of currentPipelineStep when undefined is passed.


1596-1613: LGTM!

The pipeline configuration loading and caching logic is well-implemented. The Map growth concern is already flagged separately.


1770-1799: LGTM!

The getStepModel method provides a clean abstraction for model resolution with sensible defaults and handling of special values ('same', 'different').


1801-1877: LGTM!

The executeAIStep method now properly handles AbortController creation and signal forwarding (lines 1824-1839), addressing the previous review concern. The implementation is robust with proper error handling.

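For reference, a minimal sketch of that forwarding pattern (createStepController is a hypothetical helper, not the method's actual structure):

// Create a controller for one step and cancel it when the outer signal aborts.
function createStepController(outerSignal?: AbortSignal): AbortController {
  const controller = new AbortController();
  if (outerSignal) {
    if (outerSignal.aborted) {
      controller.abort();
    } else {
      outerSignal.addEventListener('abort', () => controller.abort(), { once: true });
    }
  }
  return controller;
}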

3005-3050: LGTM!

The addToPipelineQueue method properly handles deduplication, initializes retry tracking, and maintains queue ordering.


3160-3173: LGTM!

The queue sorting logic correctly prioritizes items by priority level and then by time added.


3175-3192: LGTM!

The pipeline metrics tracking is correctly implemented with proper average execution time calculation.


3194-3208: LGTM!

Both getPipelineMetrics and clearPipelineQueue are correctly implemented. The use of spread operator in getPipelineMetrics provides proper encapsulation.

@NomadicDaddy NomadicDaddy deleted the feature/configurable-pipelines branch December 28, 2025 15:32

Labels

  • Do Not Merge: Use this label if something should not be merged.
  • Enhancement: Improvements to existing functionality or UI.
  • Testers-Requested: Request for others to test an enhancement or bug fix/etc.

3 participants