14 changes: 14 additions & 0 deletions .agent/skills/git-worktrees/SKILL.md
@@ -0,0 +1,14 @@
# Git Worktrees Skill

# IDENTIFIER: automaker-worktree-skill

## Description

This skill enables the agent to manage isolated development environments using Git worktrees, the preferred way to work in the Automaker repository.

## Instructions

- Always run `git worktree list` before creating a new worktree to avoid path conflicts.
- Prefer naming worktrees with the prefix `../automaker-worktree-`.
- Ensure you are in the root of the repository before running worktree commands.
- Use the `git-worktree` workflow for step-by-step execution.
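
For agents implemented in the repo's own TypeScript, the rules above could be wrapped in a small helper. A minimal sketch: `createFeatureWorktree` is a hypothetical name, and only the `git worktree` commands and the `../automaker-worktree-` prefix come from the skill text itself:

```ts
import { execSync } from 'node:child_process';

// Hypothetical helper; only the git commands and the ../automaker-worktree-
// prefix come from the skill text above.
export function createFeatureWorktree(name: string): string {
  const worktreePath = `../automaker-worktree-${name}`;

  // Rule: list existing worktrees first to avoid path conflicts
  const existing = execSync('git worktree list --porcelain', { encoding: 'utf8' });
  if (existing.includes(`automaker-worktree-${name}`)) {
    throw new Error(`Worktree path already in use: ${worktreePath}`);
  }

  // Create the worktree on a fresh feature branch
  execSync(`git worktree add ${worktreePath} -b feature/${name}`, { stdio: 'inherit' });
  return worktreePath;
}
```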
32 changes: 32 additions & 0 deletions .agent/workflows/git-worktree.md
@@ -0,0 +1,32 @@
---
description: How to manage git worktrees in Automaker (IDENTIFIER: automaker-worktree-workflow)
---

# Git Worktree Workflow

This workflow describes how to safely use git worktrees for feature development in Automaker.

1. **Check existing worktrees**:

```bash
git worktree list
```

2. **Create a new worktree for a feature**:

```bash
git worktree add ../automaker-worktree-<name> -b feature/<name>
```

3. **Navigate to the worktree**:
- `cd ../automaker-worktree-<name>`

4. **Remove worktree after completion**:

```bash
git worktree remove ../automaker-worktree-<name>
git branch -d feature/<name>
```

5. **Verify current setup**:
git status
Comment on lines +31 to +32
⚠️ Potential issue | 🟡 Minor

Wrap the command in a code block for consistency.

The git status command should be formatted in a code block like all other commands in this workflow document.

✏️ Proposed fix
 5. **Verify current setup**:
-   git status
+   ```bash
+   git status
+   ```
🤖 Prompt for AI Agents
In @.agent/workflows/git-worktree.md around lines 28 - 29, In the "Verify
current setup" step the git status line is not in a code block; update that
occurrence of "git status" so it is wrapped in a fenced code block (use triple
backticks with language identifier like bash) to match the formatting of other
commands in the document.

24 changes: 24 additions & 0 deletions .github/workflows/pr_agent.yml
@@ -0,0 +1,24 @@
name: Qodo PR Agent

on:
  pull_request:
    types: [opened, reopened, ready_for_review]
  issue_comment:
    types: [created, edited]

permissions:
  pull-requests: write
  issues: write
  contents: read

jobs:
  pr_agent:
    runs-on: ubuntu-latest
    name: Run PR Agent
    if: ${{ github.event.sender.type != 'Bot' }}
    steps:
      - id: pr-agent
        uses: Codium-ai/pr-agent@main
Comment on lines +20 to +21
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

latest release tag Codium-ai/pr-agent GitHub Actions

💡 Result:

Latest release tag: v0.31 — docker/GitHub Actions tag codiumai/pr-agent:0.31-github_action (release listed "06 Nov 09:25"). [1]

Sources:
[1] qodo-ai/pr-agent releases (GitHub).


🌐 Web query:

Codium-ai/pr-agent GitHub releases latest tag GitHub Actions

💡 Result:

Latest published release for the repository (qodo-ai / formerly codium-ai) is v0.31 — Docker/GitHub Action image tag codiumai/pr-agent:0.31 (including codiumai/pr-agent:0.31-github_action). [1]

If you need this tag inside a GitHub Actions workflow, you can:

  • Read the repo’s latest release via the Releases API (GET /repos/{owner}/{repo}/releases/latest) and parse tag_name. [2]
  • Or use an existing action such as actions-ecosystem/action-get-latest-tag to obtain the latest Git tag in a workflow. [3]

Sources:
[1] Releases page showing v0.31 and Docker/GHA tags.
[2] GitHub Releases API (GET /repos/{owner}/{repo}/releases/latest).
[3] actions-ecosystem/action-get-latest-tag (GitHub Action to get latest tag).


🌐 Web query:

site:github.com Codium-ai/pr-agent releases v0.31 commit SHA

💡 Result:

v0.31 commit SHA: d36ad31. [1] (github.com) [2] (sourceforge.net)



🌐 Web query:

"qodo-ai/pr-agent" GitHub Actions marketplace action reference

💡 Result:

The GitHub Action for PR-Agent is provided in the qodo-ai/pr-agent repository — use it in workflows with uses: qodo-ai/pr-agent@<ref> (example: uses: qodo-ai/pr-agent@main). See the repo and the GitHub Action / GitHub integration docs for the official action reference and environment/config variables. [1][2][3]

Sources:

  • qodo-ai/pr-agent (GitHub repo — README, Action usage). [1]
  • Qodo documentation — GitHub Action installation/usage and env config. [2][3]

Update action reference and pin to a specific release.

The action uses the outdated organization reference Codium-ai/pr-agent (now qodo-ai/pr-agent) and a floating @main ref, creating supply-chain risk. Pin to a vetted release tag or commit SHA.

🔒 Suggested change
-        uses: Codium-ai/pr-agent@main
+        uses: qodo-ai/pr-agent@v0.31

or pin to commit:

-        uses: Codium-ai/pr-agent@main
+        uses: qodo-ai/pr-agent@d36ad31
🤖 Prompt for AI Agents
In @.github/workflows/pr_agent.yml around lines 20 - 21, Replace the floating,
outdated action reference used by the job with a pinned, trusted release: update
the `uses: Codium-ai/pr-agent@main` entry (job id `pr-agent`) to the new
organization `qodo-ai/pr-agent` and pin it to a specific release tag or commit
SHA (for example `qodo-ai/pr-agent@vX.Y.Z` or `qodo-ai/pr-agent@<commit-sha>`),
ensuring the workflow now references the new org and a fixed, auditable ref.

        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
12 changes: 12 additions & 0 deletions .pr_agent.toml
@@ -0,0 +1,12 @@
# Qodo configuration for Automaker
# IDENTIFIER: automaker-qodo-config

[pr_test]
testing_framework = "vitest"

[pr_reviewer]
require_score_review = true
enable_review_labels_security = true
enable_review_labels_effort = true
num_code_suggestions = 4
inline_code_comments = true
5 changes: 5 additions & 0 deletions .qodo.toml
@@ -0,0 +1,5 @@
# Qodo configuration for Automaker
# IDENTIFIER: automaker-qodo-config

[pr_test]
testing_framework = "vitest"
1 change: 1 addition & 0 deletions apps/server/package.json
@@ -34,6 +34,7 @@
    "@automaker/utils": "1.0.0",
    "@modelcontextprotocol/sdk": "1.25.2",
    "@openai/codex-sdk": "^0.77.0",
    "axios": "^1.13.2",
    "cookie-parser": "1.4.7",
    "cors": "2.8.5",
    "dotenv": "17.2.3",
13 changes: 13 additions & 0 deletions apps/server/src/config/sprites.ts
@@ -0,0 +1,13 @@
export const spritesConfig = {
  SPRITES_TOKEN: process.env.SPRITES_TOKEN,
  SPRITES_API_BASE: process.env.SPRITES_API_BASE || 'https://api.sprites.dev/v1',
  GITHUB_TOKEN: process.env.GITHUB_TOKEN,
  LINEAR_API_KEY: process.env.LINEAR_API_KEY,
  CLAUDE_OAUTH_TOKEN: process.env.CLAUDE_OAUTH_TOKEN,
  DEFAULT_REPO_URL: process.env.DEFAULT_REPO_URL,
  DEFAULT_BRANCH: process.env.DEFAULT_BRANCH || 'main',
  GIT_USER_NAME: process.env.GIT_USER_NAME || 'Automaker Agent',
  GIT_USER_EMAIL: process.env.GIT_USER_EMAIL || 'agent@automaker.dev',
  OTEL_RECEIVER_PORT: parseInt(process.env.OTEL_RECEIVER_PORT || '4317'),
  OTEL_RECEIVER_HOST: process.env.OTEL_RECEIVER_HOST || '0.0.0.0',
};
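
Several of these values have no default (e.g. `SPRITES_TOKEN`, `GITHUB_TOKEN`), so consumers will see `undefined` at call time if the env vars are unset. A minimal sketch of a startup guard that could sit alongside this config; the helper and its required-key list are assumptions, not part of the PR:

```ts
import { spritesConfig } from './sprites'; // hypothetical: same config directory

// Keys assumed required here; adjust to the deployment's actual needs.
const REQUIRED_KEYS = ['SPRITES_TOKEN', 'GITHUB_TOKEN'] as const;

export function assertSpritesConfig(): void {
  const missing = REQUIRED_KEYS.filter((key) => !spritesConfig[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required sprites env vars: ${missing.join(', ')}`);
  }
}
```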
13 changes: 13 additions & 0 deletions apps/server/src/lib/sdk-options.ts
@@ -215,6 +215,19 @@ export function getModelForUseCase(
    default: CLAUDE_MODEL_MAP['opus'],
  };

  // Fallback to OpenAI if Anthropic key is missing but OpenAI key is present
  if (!process.env.ANTHROPIC_API_KEY && process.env.OPENAI_API_KEY) {
    // Override defaults with OpenAI equivalents
    defaultModels.spec = 'codex-gpt-4o';
    defaultModels.features = 'codex-gpt-4o';
    defaultModels.suggestions = 'codex-gpt-4o';
    defaultModels.chat = 'codex-gpt-4o';
    defaultModels.auto = 'codex-gpt-4o';
    defaultModels.default = 'codex-gpt-4o';

    logger.info(`Using OpenAI (codex-gpt-4o) for ${useCase} due to missing Anthropic key`);
  }
Comment on lines +218 to +229
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "*model-resolver*" | head -20

Repository: AutoMaker-Org/automaker

Length of output: 118


🏁 Script executed:

fd -t f "model-resolver" --type f

Repository: AutoMaker-Org/automaker

Length of output: 116


🏁 Script executed:

find . -type d -name "*model-resolver*"

Repository: AutoMaker-Org/automaker

Length of output: 88


🏁 Script executed:

git ls-files | grep -i "model.*resolver"

Repository: AutoMaker-Org/automaker

Length of output: 362


🏁 Script executed:

cat -n libs/model-resolver/src/resolver.ts

Repository: AutoMaker-Org/automaker

Length of output: 8596


🏁 Script executed:

cat -n libs/model-resolver/tests/resolver.test.ts

Repository: AutoMaker-Org/automaker

Length of output: 22884


🏁 Script executed:

rg "PROVIDER_PREFIXES" -A 5 libs/types/src/

Repository: AutoMaker-Org/automaker

Length of output: 4587


🏁 Script executed:

cat -n apps/server/src/lib/sdk-options.ts | head -250

Repository: AutoMaker-Org/automaker

Length of output: 9758


🏁 Script executed:

cat -n libs/model-resolver/src/index.ts

Repository: AutoMaker-Org/automaker

Length of output: 618


🏁 Script executed:

rg "codex-gpt-4o" --type ts

Repository: AutoMaker-Org/automaker

Length of output: 899


🏁 Script executed:

rg "CODEX_MODEL_MAP" -A 20 libs/types/src/

Repository: AutoMaker-Org/automaker

Length of output: 14604


🏁 Script executed:

rg "class.*Provider|function.*Provider" --type ts apps/server/src/ | grep -i codex | head -20

Repository: AutoMaker-Org/automaker

Length of output: 160


🏁 Script executed:

cat -n apps/server/src/routes/features/routes/generate-title.ts | head -100

Repository: AutoMaker-Org/automaker

Length of output: 4131


🏁 Script executed:

cat -n apps/server/src/providers/codex-provider.ts | head -150

Repository: AutoMaker-Org/automaker

Length of output: 6623


🏁 Script executed:

rg "gpt-4o|codex-gpt-4o" --type ts libs/

Repository: AutoMaker-Org/automaker

Length of output: 1031


🏁 Script executed:

rg "validateBareModelId|validateModel" --type ts apps/server/src/providers/ -A 5

Repository: AutoMaker-Org/automaker

Length of output: 2827


🏁 Script executed:

rg "gpt-4o" --type ts libs/types/src/model.ts

Repository: AutoMaker-Org/automaker

Length of output: 49


🏁 Script executed:

cat -n libs/types/src/model.ts | grep -A 50 "CODEX_MODEL_MAP"

Repository: AutoMaker-Org/automaker

Length of output: 3153


🏁 Script executed:

rg "validateBareModelId" -A 15 libs/types/src/

Repository: AutoMaker-Org/automaker

Length of output: 2243


🏁 Script executed:

rg "stripProviderPrefix|getModelProvider" -B 2 -A 10 libs/types/src/provider-utils.ts | head -60

Repository: AutoMaker-Org/automaker

Length of output: 1364


Replace hardcoded codex-gpt-4o with a supported Codex model constant.

codex-gpt-4o is not a valid Codex model. The CODEX_MODEL_MAP only supports models like gpt-5.2-codex, gpt-5.1-codex-max, and gpt-5.2. The resolver will pass this invalid model through unchanged, causing a runtime failure when the Codex provider tries to execute it. Use CODEX_MODEL_MAP.gpt52Codex (or another supported entry) instead of the hardcoded string:

-    defaultModels.spec = 'codex-gpt-4o';
-    defaultModels.features = 'codex-gpt-4o';
-    defaultModels.suggestions = 'codex-gpt-4o';
-    defaultModels.chat = 'codex-gpt-4o';
-    defaultModels.auto = 'codex-gpt-4o';
-    defaultModels.default = 'codex-gpt-4o';
+    defaultModels.spec = CODEX_MODEL_MAP.gpt52Codex;
+    defaultModels.features = CODEX_MODEL_MAP.gpt52Codex;
+    defaultModels.suggestions = CODEX_MODEL_MAP.gpt52Codex;
+    defaultModels.chat = CODEX_MODEL_MAP.gpt52Codex;
+    defaultModels.auto = CODEX_MODEL_MAP.gpt52Codex;
+    defaultModels.default = CODEX_MODEL_MAP.gpt52Codex;

Apply the same fix to apps/server/src/routes/features/routes/generate-title.ts at line 68.

🤖 Prompt for AI Agents
In `@apps/server/src/lib/sdk-options.ts` around lines 218 - 229, The code sets
defaults to the invalid literal 'codex-gpt-4o' (e.g., defaultModels.spec,
.features, .suggestions, .chat, .auto, .default) which will break the Codex
resolver — replace those hardcoded strings with the supported constant
CODEX_MODEL_MAP.gpt52Codex (or another supported entry from CODEX_MODEL_MAP) and
update any similar occurrence in the generate-title handler; also ensure
CODEX_MODEL_MAP is imported where you make the change so the resolver receives a
valid Codex model constant.


  return resolveModelString(defaultModels[useCase] || DEFAULT_MODELS.claude);
}

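To make the reviewer's proposed fix concrete, here is a self-contained sketch of the fallback rewritten around a supported constant. It assumes `CODEX_MODEL_MAP` is exported from the shared types package with a `gpt52Codex` entry, as the review's analysis chain indicates, and the `applyCodexFallback` wrapper is purely illustrative:

```ts
import { CODEX_MODEL_MAP } from '@automaker/types'; // assumed export location

type UseCase = 'spec' | 'features' | 'suggestions' | 'chat' | 'auto' | 'default';

// Illustrative wrapper: swap every default to one supported Codex model
// instead of repeating an invalid hardcoded string.
function applyCodexFallback(defaultModels: Record<UseCase, string>): void {
  if (!process.env.ANTHROPIC_API_KEY && process.env.OPENAI_API_KEY) {
    const fallback = CODEX_MODEL_MAP.gpt52Codex;
    for (const useCase of Object.keys(defaultModels) as UseCase[]) {
      defaultModels[useCase] = fallback;
    }
  }
}
```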
13 changes: 12 additions & 1 deletion apps/server/src/routes/features/routes/generate-title.ts
@@ -60,12 +60,23 @@ export function createGenerateTitleHandler(
    const prompts = await getPromptCustomization(settingsService, '[GenerateTitle]');
    const systemPrompt = prompts.titleGeneration.systemPrompt;

    let model = CLAUDE_MODEL_MAP.haiku;

    // Fallback to OpenAI if Anthropic key is missing but OpenAI key is present
    if (!process.env.ANTHROPIC_API_KEY && process.env.OPENAI_API_KEY) {
      // Use gpt-4o which is comparable/better than haiku
      model = 'codex-gpt-4o';
      logger.info(
        'Using OpenAI (codex-gpt-4o) for title generation due to missing Anthropic key'
      );
    }
Comment on lines +65 to +72
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd apps/server/src/routes/features/routes && cat -n generate-title.ts

Repository: AutoMaker-Org/automaker

Length of output: 190


🌐 Web query:

OpenAI Codex SDK model identifiers gpt-4o codex-gpt-4o

💡 Result:

Short answer:

  • "gpt-4o" — yes, this is a valid OpenAI model identifier. [1][2]
  • "codex-gpt-4o" — not a standard model id in OpenAI’s docs; Codex-branded model ids use names like "gpt-5-codex" / "gpt-5.1-codex" (and the Codex CLI/config accepts generic model strings such as "gpt-4o"). Use "gpt-4o" (or a documented codex model like "gpt-5-codex") in the Codex SDK. [3][4]

Sources:
[1] OpenAI platform docs — GPT‑4o model page.
[2] OpenAI cookbook / examples referencing model "gpt-4o".
[3] OpenAI Codex SDK / Codex docs (model selection in config).
[4] OpenAI Codex models listing (shows "gpt-5-codex" family).


Fix invalid model identifier codex-gpt-4o—use gpt-4o instead.

The model string 'codex-gpt-4o' on line 68 is not a recognized OpenAI model identifier. According to OpenAI documentation, valid options are 'gpt-4o' (or Codex-branded models like 'gpt-5-codex'). Change this to 'gpt-4o' to match the comment and ensure the fallback works correctly.

🤖 Prompt for AI Agents
In `@apps/server/src/routes/features/routes/generate-title.ts` around lines 65 -
72, The fallback model identifier is incorrect: replace the string
'codex-gpt-4o' with the valid OpenAI model id 'gpt-4o' where the code sets model
(the assignment that currently reads model = 'codex-gpt-4o') in the
title-generation route; update the related logger message only if you want to
keep it consistent (logger.info about using OpenAI codex/gpt-4o) so the fallback
uses a recognized model identifier.


    const userPrompt = `Generate a concise title for this feature:\n\n${trimmedDescription}`;

    // Use simpleQuery - provider abstraction handles all the streaming/extraction
    const result = await simpleQuery({
      prompt: `${systemPrompt}\n\n${userPrompt}`,
-     model: CLAUDE_MODEL_MAP.haiku,
+     model,
      cwd: process.cwd(),
      maxTurns: 1,
      allowedTools: [],
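A sketch of the same block with this reviewer's fix applied; note it sits in tension with the earlier sdk-options.ts comment, which argues for a `CODEX_MODEL_MAP` constant rather than a bare OpenAI id, so the two suggestions still need reconciling. `CLAUDE_MODEL_MAP` and `logger` are the names already in scope in this file:

```ts
let model: string = CLAUDE_MODEL_MAP.haiku;

// Reviewer's fix: 'gpt-4o' is a documented OpenAI model id, unlike the
// invalid 'codex-gpt-4o' in the diff above.
if (!process.env.ANTHROPIC_API_KEY && process.env.OPENAI_API_KEY) {
  model = 'gpt-4o';
  logger.info('Using OpenAI (gpt-4o) for title generation due to missing Anthropic key');
}
```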
31 changes: 31 additions & 0 deletions apps/server/src/services/codex-model-cache-service.ts
@@ -149,6 +149,37 @@ export class CodexModelCacheService
    try {
      // Check if app-server is available
      const isAvailable = await this.appServerService.isAvailable();

      // FALLBACK: If CLI is not available but we have an OpenAI API Key (e.g. in CI or limited env),
      // provide default models so the UI doesn't block.
      if (!isAvailable && process.env.OPENAI_API_KEY) {
        logger.info('[doRefresh] CLI unavailable but OPENAI_API_KEY found. Using fallback models.');
        const fallbackModels: CodexModel[] = [
          {
            id: 'codex-gpt-4o',
            label: 'GPT-4o (Fallback)',
            description: 'Fallback model using direct OpenAI API',
            hasThinking: false,
            supportsVision: true,
            tier: 'premium',
            isDefault: true,
          },
          {
            id: 'codex-gpt-4o-mini',
            label: 'GPT-4o Mini (Fallback)',
            description: 'Fast fallback model using direct OpenAI API',
            hasThinking: false,
            supportsVision: true,
            tier: 'basic',
            isDefault: false,
          },
        ];

        // Save to cache so subsequent requests are fast
        await this.saveToCache(fallbackModels);
        return fallbackModels;
      }
Comment on lines +152 to +181
⚠️ Potential issue | 🟡 Minor

Avoid caching fallback for the full TTL.

With the current logic, once fallback models are cached, getModels will keep serving them until the 1‑hour TTL expires—even if the app-server becomes available shortly after. That can unnecessarily hide real models for up to an hour. Consider saving fallback with a much shorter TTL (or marking it as fallback and forcing a refresh when the app-server is available).

💡 Suggested change (shorter TTL for fallback)
-        await this.saveToCache(fallbackModels);
+        await this.saveToCache(fallbackModels, 5 * 60 * 1000); // 5 min TTL for fallback
         return fallbackModels;
-  private async saveToCache(models: CodexModel[]): Promise<void> {
+  private async saveToCache(models: CodexModel[], ttlOverride?: number): Promise<void> {
     const cache: CodexModelCache = {
       models,
       cachedAt: Date.now(),
-      ttl: this.ttl,
+      ttl: ttlOverride ?? this.ttl,
     };
🤖 Prompt for AI Agents
In `@apps/server/src/services/codex-model-cache-service.ts` around lines 152 -
181, The fallback branch in doRefresh caches fallbackModels via saveToCache so
getModels will serve them for the full TTL; change this to cache fallbacks with
a much shorter TTL or mark them as fallback so they’re refreshed as soon as the
CLI becomes available. Concretely: when saving fallbackModels in doRefresh, call
a variant of saveToCache that accepts an options object (e.g., ttlSeconds or
isFallback) or add an overload to saveToCache to pass ttlSeconds (e.g., 30–120s)
and/or a isFallback flag; additionally update getModels (or the cache retrieval
logic) to treat isFallback entries as stale when the CLI/app-server reports
available so a fresh refresh is triggered. Ensure references to doRefresh,
fallbackModels, saveToCache, and getModels are updated accordingly.


      if (!isAvailable) {
        return [];
      }
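
The reviewer's suggested change covers the write side; here is a companion sketch of the read side, where an `isFallback` flag on the cache entry forces a refresh once the app-server is reachable again. The flag and helper are assumptions, not existing API in this service:

```ts
interface CachedModels {
  models: unknown[];
  cachedAt: number;
  ttl: number;
  isFallback?: boolean; // assumed new field, not in the current cache shape
}

function isCacheFresh(cache: CachedModels, appServerAvailable: boolean): boolean {
  if (cache.isFallback && appServerAvailable) {
    return false; // real models are reachable again; treat fallback as stale
  }
  return Date.now() - cache.cachedAt < cache.ttl;
}
```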