Agent docs examples #1706
Conversation
- If there’s a heartbeat error and no attempts we put it back in the queue to try again
- When nacking, return whether it was put back in the queue or not
- Try and nack; if it fails then fail the run
- Consolidated switch statement
- Fail executing/retrying runs
- OOM retrying on larger machines
- Create forty-windows-shop.md
- Update forty-windows-shop.md
- Only retry again if the machine is different from the original
…ing these as OOMs
This reverts commit 5f652c6.
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Detect ffmpeg OOM errors, added manual OutOfMemoryError * Create eighty-spies-knock.md
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
…new runs are created (#1696)
- Create new partitioned TaskEvent table, and switch to it gradually as new runs are created
- Add env var for partition window in seconds
- Make startCreatedAt required in task event store
…continue fix (#1698)
- WIP fix for ResumeAttemptService selecting the wrong attempt (which has no error or output)
- Don’t create an attempt if the run is already in a final status
- Don’t get all the columns for the query. Improved the logging.
- Added a log to the batch example
- Filter out the undefined values
- add env var for additional pull secrets
- make static images configurable
- optional image prefixes
- optional labels with sample rates
- add missing core paths
- remove excessive logs
- remove unused imports
- tell run to exit before force requeue
- handle exit for case where we already retried after oom
- improve retry span and add machine props
- don't try to exit run in dev
Walkthrough

This pull request introduces a broad set of updates across the codebase. Legacy patch note files and deprecated span link components have been removed, while Kubernetes provider modules now use dynamic image references and a new custom label helper. Environment schemas and presenter methods have been extended with additional contextual fields. Multiple service methods have been refactored to integrate a new task event store and improved error handling, including Out Of Memory detection, and more granular task run and cancellation logic. Documentation, database migrations, and package versions have also been updated.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant S as CompleteAttemptService
    participant E as EventRepository
    participant T as TaskEventStore
    participant DB as Database
    S->>E: completeEvent(storeTable, spanId, startCreatedAt, endCreatedAt)
    E->>T: Process task events with partitioning
    T->>DB: Execute database operations (inserts/queries)
    DB-->>T: Return operation result
    T-->>E: Return processed event data
    E-->>S: Complete task run or trigger retry (if OOM detected)
```
Warning: There were issues while running some tools. Please review the errors and either fix the tool’s configuration or disable the tool if it’s a critical failure.

🔧 ESLint

The same failure was reported for each of these files:

- apps/kubernetes-provider/src/index.ts
- apps/webapp/app/env.server.ts
- apps/webapp/app/components/runs/v3/SpanInspector.tsx

ESLint 8.45.0: ESLint couldn't find the config "custom" to extend from. Please check that the name of the config is correct. The config "custom" was referenced from the config file in "/.eslintrc.js". If you still have problems, please stop by https://eslint.org/chat/help to chat with the team.
Actionable comments posted: 8
🔭 Outside diff range comments (3)
docs/guides/examples/scrape-hacker-news.mdx (3)
**193-193**: ⚠️ Potential issue: Fix typo in OpenAI model name.

There appears to be a typo in the model name: `gpt-4o` should be `gpt-4`.

```diff
- model: "gpt-4o",
+ model: "gpt-4",
```
**99-127**: 🛠️ Refactor suggestion: Add error handling for browser operations.

The browser operations should be wrapped in a try-catch block to ensure proper cleanup of resources in case of errors.

```diff
 run: async () => {
+  let browser;
+  try {
   // Connect to BrowserBase to proxy the scraping of the Hacker News articles
-  const browser = await puppeteer.connect({
+  browser = await puppeteer.connect({
     browserWSEndpoint: `wss://connect.browserbase.com?apiKey=${process.env.BROWSERBASE_API_KEY}`,
   });
   logger.info("Connected to Browserbase");

   const page = await browser.newPage();

   // ... rest of the browser operations ...

   await browser.close();
+  } catch (error) {
+    logger.error("Browser operation failed", { error });
+    if (browser) await browser.close();
+    throw error;
+  }

   await wait.for({ seconds: 5 });
```
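The cleanup this suggestion enforces is the general acquire/try/finally pattern. A minimal sketch with a hypothetical stand-in resource (not the Puppeteer API), showing that the resource is released on both the success and the error path:

```typescript
// Hypothetical stand-in for a remote resource such as a browser connection.
interface Resource {
  closed: boolean;
  close(): void;
}

function makeResource(): Resource {
  return {
    closed: false,
    close() {
      this.closed = true;
    },
  };
}

// Runs `work` against the resource; the finally block guarantees close()
// runs whether `work` returns normally or throws.
function withResource<T>(resource: Resource, work: (r: Resource) => T): T {
  try {
    return work(resource);
  } finally {
    resource.close();
  }
}
```

A real task would additionally log and rethrow, as the suggested diff does, but the release guarantee comes from this shape.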
**163-188**: 🛠️ Refactor suggestion: Ensure browser cleanup in child task.

Similar to the parent task, the child task should also handle browser cleanup properly in case of errors.

```diff
+ let browser;
+ try {
- const browser = await puppeteer.connect({
+ browser = await puppeteer.connect({
    browserWSEndpoint: `wss://connect.browserbase.com?apiKey=${process.env.BROWSERBASE_API_KEY}`,
  });

  // ... rest of the browser operations ...

  await browser.close();
+ } catch (error) {
+   logger.error("Browser operation failed in child task", { error });
+   if (browser) await browser.close();
+   throw error;
+ }
```
🧹 Nitpick comments (37)
docs/guides/frameworks/supabase-edge-functions-basic.mdx (2)
**32-33**: Add link to create a new project.

For better user experience, consider adding a link to guide users on how to create a new Trigger.dev project, similar to the account creation link above it.

```diff
 - [Create a Trigger.dev account](https://cloud.trigger.dev)
-- Create a new Trigger.dev project
+- [Create a new Trigger.dev project](https://cloud.trigger.dev/new)
```
**79-79**: Replace SDK version placeholder with actual version.

The comment mentions replacing `<your-sdk-version>` but the code already shows `3.0.0`. This could be confusing. Consider updating the comment to be clearer.

```diff
- // Import the Trigger.dev SDK - replace "<your-sdk-version>" with the version of the SDK you are using, e.g. "3.0.0". You can find this in your package.json file.
+ // Import the Trigger.dev SDK v3.0.0
```

apps/webapp/app/components/runs/v3/SpanInspector.tsx (1)
**233-233**: Update Tailwind class for consistent sizing.

The `size-4` class might not be supported in all Tailwind versions. Consider using the standard width and height classes instead.

```diff
-<Spinner className="size-4" />
+<Spinner className="h-4 w-4" />
```

apps/webapp/app/v3/services/resumeAttempt.server.ts (2)
**41-70**: Performance optimization: using `select` instead of `include`.

The change from `include` to `select` in dependencies and batchDependencies follows Prisma's best practices. This optimization explicitly specifies which fields to retrieve, potentially reducing the amount of data transferred from the database. Consider adding a comment explaining why `select` is preferred over `include` here, to help future maintainers understand the performance implications.

**135-157**: Enhance error message for better debugging.

The new batch dependency handling logic is robust, filtering for final state attempts and picking the most recent ones. However, the error message could be more descriptive. Consider enhancing the error message to include more context:

```diff
- this._logger.error("[ResumeAttemptService] not all batch items have attempts", {
+ this._logger.error("[ResumeAttemptService] Some batch items are missing final state attempts", {
+   totalBatchItems: dependentBatchItems.length,
+   completedBatchItems: completedAttemptIds.length,
    runId: attempt.taskRunId,
    completedAttemptIds,
    finalAttempts,
    dependentBatchItems,
  });
```

docs/introduction.mdx (1)
**25-25**: Consider hyphenating "open source" when used as a compound adjective.

```diff
-Trigger.dev is an open source background jobs framework that lets you write reliable workflows in plain async code.
+Trigger.dev is an open-source background jobs framework that lets you write reliable workflows in plain async code.
```

🧰 Tools
🪛 LanguageTool
[uncategorized] ~25-~25: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...What is Trigger.dev? Trigger.dev is an open source background jobs framework that lets you...(EN_COMPOUND_ADJECTIVE_INTERNAL)
docs/guides/introduction.mdx (1)
**38-38**: Fix typo in "encorporate".

```diff
-Example projects are full projects with example repos you can fork and use. These are a great way of learning how to encorporate Trigger.dev into your project.
+Example projects are full projects with example repos you can fork and use. These are a great way of learning how to incorporate Trigger.dev into your project.
```
**42-78**: Consider adding error handling for Fal.ai API failures.

While the code example is well-structured with proper type safety using Zod schemas, it could benefit from explicit error handling for potential Fal.ai API failures. Consider adding a try-catch block:

```diff
 export const realtimeImageGeneration = schemaTask({
   id: "realtime-image-generation",
   schema: payloadSchema,
   run: async (payload) => {
+    try {
     const result = await fal.subscribe("fal-ai/flux/dev/image-to-image", {
       input: {
         image_url: payload.imageUrl,
         prompt: payload.prompt,
       },
       onQueueUpdate: (update) => {
         logger.info("Fal.ai processing update", { update });
       },
     });

     const $result = FalResult.parse(result);
     const [{ url: cartoonUrl }] = $result.images;

     return {
       imageUrl: cartoonUrl,
     };
+    } catch (error) {
+      logger.error("Failed to generate image", { error });
+      throw error;
+    }
   },
 });
```

references/hello-world/src/trigger/oom.ts (3)
**7-12**: Consider externalizing retry configuration.

Currently, the retry logic and machine fallback are hardcoded. For greater flexibility and maintainability, you might want to move these values to a configuration file or environment variables.

```diff
 retry: {
-  outOfMemory: {
-    machine: "small-1x",
-  },
+  outOfMemory: {
+    machine: process.env.OOM_FALLBACK_MACHINE ?? "small-1x",
+  },
 },
```
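The suggested env-var fallback is plain nullish coalescing over the environment. A minimal sketch; note that `OOM_FALLBACK_MACHINE` is the hypothetical variable name from the suggestion, not an existing Trigger.dev setting:

```typescript
// Returns the machine preset to retry on after an OOM, preferring the
// environment override when it is set. Pass `process.env` in real code.
function getOomFallbackMachine(env: Record<string, string | undefined>): string {
  // OOM_FALLBACK_MACHINE is the hypothetical variable from the review suggestion.
  return env.OOM_FALLBACK_MACHINE ?? "small-1x";
}
```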
**34-36**: Add context for manual OOM errors.

Throwing an `OutOfMemoryError` when `manual` is true can be clearer if you log an explanatory message before the throw, so future maintainers know why this is triggered.

```diff
 if (manual) {
+  logger.info("Manual OOM error triggered");
   throw new OutOfMemoryError();
 }
```
**44-55**: Clarify intentional infinite loops.

These infinite loops appear to be deliberately causing high memory usage. Consider adding comments to explain that this is an intentional stress test or demonstration of retry fallback.

```diff
 try {
   while (true) {
     a += a;
   }
 } catch (error) {
+  // At this point, an OutOfMemoryError may occur, or we proceed to the catch block intentionally
   logger.error(error instanceof Error ? error.message : "Unknown error", { error });

   let b = [];
   while (true) {
+    // Intentionally creating memory pressure to simulate OOM
     b.push(a.replace(/a/g, "b"));
   }
 }
```

apps/kubernetes-provider/src/labelHelper.ts (1)
**145-149**: Remove unnecessary continue statement.

Static analysis flagged the `continue` as unnecessary. Removing it can streamline the code without changing functionality.

```diff
 if (Math.random() <= sampleRate) {
   additionalLabels[key] = value;
-  continue;
 }
```
🧰 Tools
🪛 Biome (1.9.4)
[error] 147-147: Unnecessary continue statement
Unsafe fix: Delete the unnecessary continue statement
(lint/correctness/noUnnecessaryContinue)
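The sampling check flagged above (`Math.random() <= sampleRate`) generalizes to a small helper. This is an illustrative sketch, not the provider's actual label API; the random source is injected so the behavior can be tested deterministically:

```typescript
type Labels = Record<string, string>;

// Returns the subset of candidate labels whose sample rate passes the draw.
function sampleLabels(
  candidates: Array<{ key: string; value: string; sampleRate: number }>,
  random: () => number = Math.random
): Labels {
  const labels: Labels = {};
  for (const { key, value, sampleRate } of candidates) {
    // Mirrors the `Math.random() <= sampleRate` check from the snippet;
    // sampleRate 1 always includes the label, 0 never does.
    if (random() <= sampleRate) {
      labels[key] = value;
    }
  }
  return labels;
}
```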
apps/webapp/app/v3/taskEventStore.server.ts (1)
**123-174**: Verify selection of the `id` field in partitioned queries.

In the partitioned table query, the `id` column is not selected, whereas in the non-partitioned branch it is. This could be intentional for partial anonymization or an oversight.

```diff
 SELECT
+  id,
   "spanId",
   "parentId",
   ...
 FROM "TaskEventPartitioned"
```
column is not selected, whereas in the non-partitioned branch it is. This could be intentional for partial anonymization or an oversight.SELECT + id, "spanId", "parentId", ... FROM "TaskEventPartitioned"
apps/webapp/app/v3/failedTaskRun.server.ts (1)
**183-220**: Function `getExecutionRetry` is well-structured.

The try/catch block gracefully handles missing or invalid retry configurations. Consider ensuring that negative or zero delays are handled in unit tests, as a defensive measure against unexpected calculations.
apps/kubernetes-provider/src/index.ts (1)
**419-432**: Extended pull secrets.

Building `pullSecrets` from environment-provided values enables flexible private registry configurations. Consider trimming or validating the split strings if there's any chance of formatting issues.
**741-779**: Refining OOM detection in `isOOMError`.

This logic accounts for different OOM signals, including manual OOM triggers. The static analysis suggests using optional chaining at line 766 for readability:

```diff
- if (error.message && error.message.includes("ffmpeg was killed with signal SIGKILL")) {
+ if (error.message?.includes("ffmpeg was killed with signal SIGKILL")) {
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 766-766: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
apps/webapp/app/v3/eventRepository.server.ts (2)
375-421
:queryIncompleteEvents
approach is functional, though could be optimized.
Currently, it fetches all partial events, filters them in-memory, and calls#queryEvents
again. If performance becomes a concern, consider a single query with more sophisticated filtering.
648-685
: Consider documenting the recursion limit in#createSpanFromEvent
.
The checkif (level >= 8)
halts ancestor traversal. If deeper ancestry is possible, make it adjustable or clarify why 8 is sufficient.apps/webapp/app/routes/resources.runs.$runParam.logs.download.ts (1)
**43-52**: Improve stream error handling.

The current implementation silently catches errors during event formatting. Consider logging these errors or handling them more explicitly.

```diff
 const readable = new Readable({
   read() {
     runEvents.forEach((event) => {
       try {
         this.push(formatRunEvent(event) + "\n");
-      } catch {}
+      } catch (error) {
+        logger.warn("Failed to format run event", { error, event });
+      }
     });
     this.push(null); // End of stream
   },
 });
```

apps/webapp/app/v3/services/executeTasksWaitingForDeploy.ts (1)
**81-95**: Add error handling for message enqueueing.

The current implementation lacks error handling for the `marqs?.enqueueMessage` call. Consider adding try-catch blocks to handle potential failures gracefully.

```diff
 for (const run of runsWaitingForDeploy) {
+  try {
   await marqs?.enqueueMessage(
     backgroundWorker.runtimeEnvironment,
     run.queue,
     run.id,
     {
       type: "EXECUTE",
       taskIdentifier: run.taskIdentifier,
       projectId: backgroundWorker.runtimeEnvironment.projectId,
       environmentId: backgroundWorker.runtimeEnvironment.id,
       environmentType: backgroundWorker.runtimeEnvironment.type,
     },
     run.concurrencyKey ?? undefined
   );
+  } catch (error) {
+    logger.error("Failed to enqueue message for task run", {
+      runId: run.id,
+      error,
+    });
+  }
 }
```

apps/webapp/test/realtimeClient.test.ts (1)
**13-17**: Consider extracting Redis configuration to a shared constant.

The Redis configuration is duplicated across three test cases. Consider extracting it to a shared constant to improve maintainability and reduce duplication. Create a shared constant at the top of the file:

```diff
+const TEST_REDIS_CONFIG = (redis: any) => ({
+  host: redis.options.host,
+  port: redis.options.port,
+  tlsDisabled: true,
+});

 describe.skipIf(process.env.GITHUB_ACTIONS)("RealtimeClient", () => {
   containerWithElectricAndRedisTest(
     "Should only track concurrency for live requests",
     { timeout: 30_000 },
     async ({ redis, electricOrigin, prisma }) => {
       const client = new RealtimeClient({
         electricOrigin,
         keyPrefix: "test:realtime",
-        redis: {
-          host: redis.options.host,
-          port: redis.options.port,
-          tlsDisabled: true,
-        },
+        redis: TEST_REDIS_CONFIG(redis),
```

Also applies to: 153-157, 236-240
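Returning to the stream-formatting suggestion for the logs download route above: the "log rather than silently swallow" pattern can be sketched without the route code. `formatRunEvent` and the event shape here are stand-ins, not the webapp's actual helpers:

```javascript
// Stand-in formatter: the real code formats task run events for download.
function formatRunEvent(event) {
  if (typeof event.message !== "string") {
    throw new Error("unformattable event");
  }
  return `[${event.level}] ${event.message}`;
}

// Formats events into lines; per-event failures are reported through the
// injected `warn` callback instead of being silently discarded.
function formatRunEvents(events, warn) {
  const lines = [];
  for (const event of events) {
    try {
      lines.push(formatRunEvent(event) + "\n");
    } catch (error) {
      warn("Failed to format run event", { error, event });
    }
  }
  return lines;
}
```

In the real route each line is pushed into a `Readable`; the array here just stands in for the stream.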
internal-packages/database/prisma/migrations/20250212053026_create_task_event_partitioned_table/migration.sql (1)
**2-54**: Consider additional indexes for common query patterns.

The table schema is well-designed with comprehensive fields. However, consider adding indexes for:

- `(environmentId, createdAt)` for querying events by environment over time
- `(organizationId, createdAt)` for organization-wide event analysis
- `(taskSlug, createdAt)` for task-specific event analysis

These composite indexes would optimize common query patterns while respecting the partition key.
docs/machines.mdx (3)
**53-53**: Improve clarity in customer reference.

Consider rephrasing to "...when you know it's a larger file or when you have a customer who has a lot of data."
🧰 Tools
🪛 LanguageTool
[style] ~53-~53: Consider using “who” when you are referring to a person instead of an object.
Context: ...u know it's a larger file or a customer that has a lot of data. ## Out Of Memory (O...(THAT_WHO)
**59-59**: Add comma for better readability.

Consider adding a comma: "If this doesn't fix it, there might be a memory leak."
🧰 Tools
🪛 LanguageTool
[typographical] ~59-~59: Consider adding a comma.
Context: ...he machine specs. If this doesn't fix it there might be a memory leak. We automatical...(IF_THERE_COMMA)
**80-105**: Add memory monitoring examples.

Consider adding examples of how to monitor memory usage in tasks, such as:

- Using Node.js's `process.memoryUsage()`
- Implementing memory usage logging
- Setting up alerts for high memory usage
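For the monitoring examples suggested above, `process.memoryUsage()` is a standard Node.js API, so a task running on Node can log a snapshot like this (the helper name and MiB rounding are illustrative):

```javascript
// Returns the current process memory usage in whole MiB, suitable for
// structured logging from inside a task.
function memorySnapshot() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const toMiB = (bytes) => Math.round(bytes / 1024 / 1024);
  return {
    rssMiB: toMiB(rss),
    heapUsedMiB: toMiB(heapUsed),
    heapTotalMiB: toMiB(heapTotal),
  };
}
```

Logging this at the start and end of memory-heavy steps makes it much easier to see which step is approaching the machine's limit before an OOM kill happens.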
docs/guides/ai-agents/route-question.mdx (1)
**82-85**: Add timeout handling for model calls.

Consider adding timeout handling for the model calls to prevent long-running requests:

```diff
 const answerResult = await generateText({
   model: openai(routingResult.model),
   messages: [{ role: "user", content: payload.question }],
+  timeout: 30000, // 30 seconds timeout
 });
```
docs/guides/ai-agents/generate-translate-copy.mdx (1)
**73-90**: Add retry logic for translation failures.

Consider adding retry logic for the translation step, as it's less critical than the initial generation:

```diff
 // Step 2: Translate to target language
+const MAX_RETRIES = 3;
+let retryCount = 0;
+let translatedCopy;
+
+while (retryCount < MAX_RETRIES) {
+  try {
   const translatedCopy = await generateText({
     model: openai("o1-mini"),
     messages: [
       {
         role: "system",
         content: `You are an expert translator specializing in marketing content translation into ${payload.targetLanguage}.`,
       },
       {
         role: "user",
         content: `Translate the following marketing copy to ${payload.targetLanguage}, maintaining the same tone and marketing impact:\n\n${generatedCopy}`,
       },
     ],
     experimental_telemetry: {
       isEnabled: true,
       functionId: "generate-and-translate-copy",
     },
   });
+    break;
+  } catch (error) {
+    retryCount++;
+    if (retryCount === MAX_RETRIES) {
+      throw new Error(`Translation failed after ${MAX_RETRIES} attempts: ${error.message}`);
+    }
+    await new Promise(resolve => setTimeout(resolve, 1000 * retryCount));
+  }
+}
```
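Stripped of the model call, the retry shape in this suggestion (bounded attempts, rethrow on exhaustion) is generic. A minimal sketch; the inter-attempt backoff sleep is omitted so it stays synchronous and testable:

```typescript
// Calls `operation` up to `maxRetries` times, returning the first success
// and throwing once the attempts are exhausted. A real version would sleep
// between attempts (e.g. 1000 * attempt milliseconds, as in the suggestion).
function withRetries<T>(operation: (attempt: number) => T, maxRetries: number): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return operation(attempt);
    } catch (error) {
      lastError = error;
    }
  }
  throw new Error(`Failed after ${maxRetries} attempts: ${String(lastError)}`);
}
```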
34-39
: Consider using a more advanced model and enhancing the system prompt.The current implementation uses
o1-mini
with a basic system prompt. For customer service, consider:
- Using a more capable model for better response quality
- Enhancing the system prompt with specific guidelines, tone requirements, and response format
- model: openai("o1-mini"), + model: openai("gpt-4"), messages: [ { role: "system", - content: "You are a helpful customer service representative.", + content: `You are a helpful customer service representative. + Guidelines: + - Maintain a professional and friendly tone + - Be concise but thorough + - Format responses with clear sections + - Include relevant disclaimers when necessary + Response format: + 1. Greeting + 2. Direct answer + 3. Additional context (if needed) + 4. Next steps (if applicable) + 5. Closing`, },
53-73
: Enhance content moderation with more detailed analysis.The current implementation uses a simple true/false response. Consider returning a structured response with:
- Specific categories of inappropriate content
- Confidence scores per category
- Specific phrases or sections that triggered the flags
- return response.text.toLowerCase().includes("true"); + const analysis = JSON.parse(response.text); + return { + isInappropriate: analysis.isInappropriate, + categories: analysis.categories, + confidence: analysis.confidence, + flaggedContent: analysis.flaggedContent, + explanation: analysis.explanation + };Update the system prompt accordingly:
- "You are a content moderator. Respond with 'true' if the content is inappropriate or contains harmful, threatening, offensive, or explicit content, 'false' otherwise.", + `You are a content moderator. Analyze the content and respond with a JSON object containing: + { + "isInappropriate": boolean, + "categories": string[], + "confidence": number, + "flaggedContent": string[], + "explanation": string + } + + Categories to check: + - Hate speech + - Violence + - Adult content + - Harassment + - Personal information + - Malicious content`,
93-112
: Improve error handling with specific error types.The current error handling is basic. Consider adding specific error handling for different scenarios.
// Check moderation result first - if (moderationRun.ok && moderationRun.output === true) { + if (moderationRun.ok && moderationRun.output.isInappropriate) { return { response: - "I apologize, but I cannot process this request as it contains inappropriate content.", + `I apologize, but I cannot process this request as it contains ${moderationRun.output.categories.join(", ")}. ${moderationRun.output.explanation}`, wasInappropriate: true, + moderationDetails: moderationRun.output }; } // Return the generated response if everything is ok if (responseRun.ok) { return { response: responseRun.output, wasInappropriate: false, }; } // Handle any errors - throw new Error("Failed to process customer question"); + const error = new Error("Failed to process customer question"); + error.cause = { + moderationError: !moderationRun.ok ? moderationRun.error : undefined, + responseError: !responseRun.ok ? responseRun.error : undefined + }; + throw error;docs/guides/ai-agents/translate-and-refine.mdx (3)
**44-50**: Consider reducing the maximum iterations.

10 iterations seems excessive and could lead to high API costs. Most translations should converge within 3-5 iterations.

```diff
- if (rejectionCount >= 10) {
+ if (rejectionCount >= 5) {
   return {
     finalTranslation: payload.previousTranslation,
     iterations: rejectionCount,
     status: "MAX_ITERATIONS_REACHED",
+    reason: "Maximum iterations reached without achieving desired quality."
   };
 }
```
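The cap being tightened here is the usual escape hatch of an evaluator-optimizer loop: stop when the evaluator approves or the attempt budget runs out. A minimal sketch with stand-in approve/revise functions (in the real doc both are model calls):

```typescript
type LoopResult = {
  value: string;
  iterations: number;
  status: "APPROVED" | "MAX_ITERATIONS_REACHED";
};

// Repeatedly revises `value` until `approve` accepts it or `maxIterations`
// is reached, mirroring the rejectionCount guard in the docs example.
function refineLoop(
  initial: string,
  approve: (value: string) => boolean,
  revise: (value: string) => string,
  maxIterations: number
): LoopResult {
  let value = initial;
  for (let i = 0; i < maxIterations; i++) {
    if (approve(value)) {
      return { value, iterations: i, status: "APPROVED" };
    }
    value = revise(value);
  }
  return { value, iterations: maxIterations, status: "MAX_ITERATIONS_REACHED" };
}
```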
**81-104**: Structure the evaluation criteria more rigorously.

The current evaluation prompt could be more structured to ensure consistent quality assessment.

```diff
 content: `You are an expert literary critic and translator focused on practical, high-quality translations. Your goal is to ensure translations are accurate and natural, but not necessarily perfect. This is iteration ${
   rejectionCount + 1
 } of a maximum 5 iterations.

 RESPONSE FORMAT:
- - If the translation meets 90%+ quality: Respond with exactly "APPROVED" (nothing else)
- - If improvements are needed: Provide only the specific issues that must be fixed
+ {
+   "approved": boolean,
+   "qualityScore": number,
+   "issues": {
+     "accuracy": string[],
+     "naturalness": string[],
+     "style": string[]
+   }
+ }

 Evaluation criteria:
- - Accuracy of meaning (primary importance)
- - Natural flow in the target language
- - Preservation of key style elements
+ Accuracy (weight: 0.5):
+ - Core meaning preserved
+ - No omissions or additions
+ - Technical terms correctly translated
+
+ Natural flow (weight: 0.3):
+ - Idiomatic expressions
+ - Correct grammar and syntax
+ - Appropriate register
+
+ Style (weight: 0.2):
+ - Tone matches original
+ - Literary devices preserved
+ - Cultural nuances adapted
```
**52-55**: Enhance the translation prompt with more context.

The current translation prompt could benefit from more context about the text's style and purpose.

```diff
 const translationPrompt = payload.feedback
   ? `Previous translation: "${payload.previousTranslation}"\n\nFeedback received: "${payload.feedback}"\n\nPlease provide an improved translation addressing this feedback.`
-  : `Translate this text into ${payload.targetLanguage}, preserving style and meaning: "${payload.text}"`;
+  : `Translate this text into ${payload.targetLanguage}:
+
+    Text: "${payload.text}"
+
+    Context:
+    - Text type: ${detectTextType(payload.text)}
+    - Register: ${detectRegister(payload.text)}
+    - Target audience: General
+
+    Requirements:
+    - Preserve the original style and tone
+    - Maintain any technical terminology
+    - Adapt cultural references appropriately
+    - Ensure natural flow in ${payload.targetLanguage}`;
```

docs/guides/ai-agents/verify-news-article.mdx (1)
**147-152**: Structure historical analysis more comprehensively.

The current historical analysis is basic and could benefit from a more detailed structure.

```diff
 return {
   claimId: claim.id,
-  feasibility: 0.8,
-  historicalContext: response.text,
+  analysis: {
+    feasibility: response.feasibility,
+    timeline: response.timeline,
+    precedents: response.precedents,
+    contradictions: response.contradictions,
+    trendAnalysis: response.trendAnalysis
+  },
+  sources: response.sources,
+  confidence: response.confidence
 };
```

Update the system prompt accordingly:

```diff
- "Analyze this claim in historical context, considering past announcements and technological feasibility.",
+ `Analyze this claim in historical context. Provide a JSON response with:
+ {
+   "feasibility": number,
+   "timeline": [{ "date": string, "event": string, "significance": string }],
+   "precedents": [{ "description": string, "outcome": string, "relevance": string }],
+   "contradictions": [{ "claim": string, "source": string, "explanation": string }],
+   "trendAnalysis": { "pattern": string, "reliability": number, "factors": string[] },
+   "sources": [{ "url": string, "title": string, "date": string }],
+   "confidence": number
+ }`,
```

internal-packages/database/prisma/schema.prisma (3)
**1732-1732**: Document the purpose of the taskEventStore field.

The `taskEventStore` field appears to be a configuration field for specifying the event store implementation. Consider adding a comment to explain its purpose, valid values, and when to use different event stores.

```diff
- taskEventStore String @default("taskEvent")
+ /// Specifies which event store implementation to use for task events.
+ /// Valid values:
+ /// - "taskEvent": Uses the default TaskEvent model
+ /// - "taskEventPartitioned": Uses the partitioned TaskEventPartitioned model
+ taskEventStore String @default("taskEvent")
```
**2710-2711**: Document partitioning strategy and migration plan.

The comment indicates this is a temporary solution until replaced by clickhouse. Consider adding more details about:

- The partitioning strategy
- Performance implications
- Migration timeline to clickhouse

```diff
-/// This is the unified otel span/log model that will eventually be replaced by clickhouse
+/// This is the unified otel span/log model that uses time-based partitioning for improved query performance.
+/// Note: This is a temporary solution and will be replaced by clickhouse in the future.
+/// Partitioning strategy:
+/// - Events are partitioned by createdAt using a composite primary key
+/// - Indexes are maintained for efficient querying of trace, span, and run data
+/// - Performance considerations: Monitor partition sizes and query patterns
```
**2712-2805**: LGTM! Well-designed partitioning strategy.

The model effectively enables time-based partitioning while maintaining compatibility with the original TaskEvent model:

- Uses composite primary key `[id, createdAt]` for time-based partitioning
- Preserves all necessary fields and indexes
- Maintains query capabilities for tracing and logging

This change should help manage large volumes of task events more efficiently. Consider implementing a cleanup strategy for old partitions to manage data retention and storage costs effectively.
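For readers unfamiliar with the mechanism the model relies on, range partitioning with the partition column folded into the primary key typically looks like this in PostgreSQL. This is an illustrative sketch with a reduced column set and a daily window, not the actual migration (the PR configures the partition window in seconds via an env var):

```sql
-- Illustrative only: a range-partitioned events table keyed on createdAt,
-- with the partition column included in the primary key as Postgres requires.
CREATE TABLE "TaskEventPartitionedExample" (
    "id" TEXT NOT NULL,
    "spanId" TEXT NOT NULL,
    "createdAt" TIMESTAMP(3) NOT NULL,
    PRIMARY KEY ("id", "createdAt")
) PARTITION BY RANGE ("createdAt");

-- One child table per time window; queries that filter on createdAt only
-- touch the matching partitions.
CREATE TABLE "TaskEventPartitionedExample_20250212"
    PARTITION OF "TaskEventPartitionedExample"
    FOR VALUES FROM ('2025-02-12') TO ('2025-02-13');

-- Retention cleanup is usually just dropping expired partitions, which is
-- far cheaper than DELETEs against one large table.
DROP TABLE "TaskEventPartitionedExample_20250212";
```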
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (32)
docs/guides/ai-agents/evaluator-optimizer.png
is excluded by!**/*.png
docs/guides/ai-agents/orchestrator-workers.png
is excluded by!**/*.png
docs/guides/ai-agents/parallelization.png
is excluded by!**/*.png
docs/guides/ai-agents/prompt-chaining.png
is excluded by!**/*.png
docs/guides/ai-agents/routing.png
is excluded by!**/*.png
docs/images/creating-a-project/creating-a-project-1.png
is excluded by!**/*.png
docs/images/creating-a-project/creating-a-project-2.png
is excluded by!**/*.png
docs/images/creating-a-project/creating-a-project-3.png
is excluded by!**/*.png
docs/images/intro-browserbase.jpg
is excluded by!**/*.jpg
docs/images/intro-deepgram.jpg
is excluded by!**/*.jpg
docs/images/intro-examples.jpg
is excluded by!**/*.jpg
docs/images/intro-fal.jpg
is excluded by!**/*.jpg
docs/images/intro-ffmpeg.jpg
is excluded by!**/*.jpg
docs/images/intro-firecrawl.jpg
is excluded by!**/*.jpg
docs/images/intro-frameworks.jpg
is excluded by!**/*.jpg
docs/images/intro-libreoffice.jpg
is excluded by!**/*.jpg
docs/images/intro-openai.jpg
is excluded by!**/*.jpg
docs/images/intro-puppeteer.jpg
is excluded by!**/*.jpg
docs/images/intro-quickstart.jpg
is excluded by!**/*.jpg
docs/images/intro-resend.jpg
is excluded by!**/*.jpg
docs/images/intro-sentry.jpg
is excluded by!**/*.jpg
docs/images/intro-sharp.jpg
is excluded by!**/*.jpg
docs/images/intro-supabase.jpg
is excluded by!**/*.jpg
docs/images/intro-vercel.jpg
is excluded by!**/*.jpg
docs/images/intro-video.jpg
is excluded by!**/*.jpg
docs/images/logo-bun.png
is excluded by!**/*.png
docs/images/logo-nextjs.png
is excluded by!**/*.png
docs/images/logo-nodejs-1.png
is excluded by!**/*.png
docs/images/logo-nodejs.png
is excluded by!**/*.png
docs/images/logo-remix.png
is excluded by!**/*.png
docs/pnpm-lock.yaml
is excluded by!**/pnpm-lock.yaml
pnpm-lock.yaml
is excluded by!**/pnpm-lock.yaml
📒 Files selected for processing (81)
- `.changeset/lemon-fireants-repair.md` (0 hunks)
- `.changeset/lovely-toys-obey.md` (0 hunks)
- `apps/kubernetes-provider/src/index.ts` (13 hunks)
- `apps/kubernetes-provider/src/labelHelper.ts` (1 hunks)
- `apps/kubernetes-provider/tsconfig.json` (1 hunks)
- `apps/webapp/app/components/runs/v3/RunInspector.tsx` (0 hunks)
- `apps/webapp/app/components/runs/v3/SpanInspector.tsx` (2 hunks)
- `apps/webapp/app/env.server.ts` (2 hunks)
- `apps/webapp/app/presenters/v3/RunPresenter.server.ts` (4 hunks)
- `apps/webapp/app/presenters/v3/SpanPresenter.server.ts` (6 hunks)
- `apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.traces.$traceId.spans.$spanId/route.tsx` (0 hunks)
- `apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.v3.$projectParam.runs.$runParam.spans.$spanParam/route.tsx` (0 hunks)
- `apps/webapp/app/routes/resources.runs.$runParam.logs.download.ts` (2 hunks)
- `apps/webapp/app/utils/pathBuilder.ts` (0 hunks)
- `apps/webapp/app/utils/taskEvent.ts` (0 hunks)
- `apps/webapp/app/v3/eventRepository.server.ts` (17 hunks)
- `apps/webapp/app/v3/failedTaskRun.server.ts` (3 hunks)
- `apps/webapp/app/v3/marqs/index.server.ts` (4 hunks)
- `apps/webapp/app/v3/services/cancelAttempt.server.ts` (2 hunks)
- `apps/webapp/app/v3/services/cancelTaskRun.server.ts` (2 hunks)
- `apps/webapp/app/v3/services/completeAttempt.server.ts` (13 hunks)
- `apps/webapp/app/v3/services/crashTaskRun.server.ts` (2 hunks)
- `apps/webapp/app/v3/services/createTaskRunAttempt.server.ts` (2 hunks)
- `apps/webapp/app/v3/services/executeTasksWaitingForDeploy.ts` (5 hunks)
- `apps/webapp/app/v3/services/expireEnqueuedRun.server.ts` (2 hunks)
- `apps/webapp/app/v3/services/resumeAttempt.server.ts` (4 hunks)
- `apps/webapp/app/v3/services/triggerTask.server.ts` (2 hunks)
- `apps/webapp/app/v3/taskEventStore.server.ts` (1 hunks)
- `apps/webapp/app/v3/taskRunHeartbeatFailed.server.ts` (5 hunks)
- `apps/webapp/test/realtimeClient.test.ts` (3 hunks)
- `docker/docker-compose.yml` (1 hunks)
- `docs/docs.json` (1 hunks)
- `docs/guides/ai-agents/generate-translate-copy.mdx` (1 hunks)
- `docs/guides/ai-agents/overview.mdx` (1 hunks)
- `docs/guides/ai-agents/respond-and-check-content.mdx` (1 hunks)
- `docs/guides/ai-agents/route-question.mdx` (1 hunks)
- `docs/guides/ai-agents/translate-and-refine.mdx` (1 hunks)
- `docs/guides/ai-agents/verify-news-article.mdx` (1 hunks)
- `docs/guides/dashboard/creating-a-project.mdx` (0 hunks)
- `docs/guides/example-projects/realtime-fal-ai.mdx` (1 hunks)
- `docs/guides/examples/fal-ai-image-to-cartoon.mdx` (1 hunks)
- `docs/guides/examples/fal-ai-realtime.mdx` (1 hunks)
- `docs/guides/examples/scrape-hacker-news.mdx` (1 hunks)
- `docs/guides/frameworks/supabase-edge-functions-basic.mdx` (1 hunks)
- `docs/guides/frameworks/supabase-edge-functions-database-webhooks.mdx` (1 hunks)
- `docs/guides/introduction.mdx` (1 hunks)
- `docs/introduction.mdx` (4 hunks)
- `docs/machines.mdx` (2 hunks)
- `docs/mint.json` (0 hunks)
- `docs/package.json` (1 hunks)
- `docs/realtime/overview.mdx` (1 hunks)
- `docs/snippets/card-bun.mdx` (0 hunks)
- `docs/snippets/card-nextjs.mdx` (0 hunks)
- `docs/snippets/card-nodejs.mdx` (0 hunks)
- `docs/snippets/card-remix.mdx` (0 hunks)
- `docs/snippets/card-supabase.mdx` (0 hunks)
- `docs/snippets/framework-prerequisites.mdx` (1 hunks)
- `docs/video-walkthrough.mdx` (1 hunks)
- `internal-packages/database/prisma/migrations/20250212053026_create_task_event_partitioned_table/migration.sql` (1 hunks)
- `internal-packages/database/prisma/migrations/20250212075957_add_task_event_store_to_task_run/migration.sql` (1 hunks)
- `internal-packages/database/prisma/schema.prisma` (2 hunks)
- `internal-packages/testcontainers/src/utils.ts` (1 hunks)
- `packages/build/CHANGELOG.md` (1 hunks)
- `packages/build/package.json` (2 hunks)
- `packages/cli-v3/CHANGELOG.md` (1 hunks)
- `packages/cli-v3/package.json` (2 hunks)
- `packages/core/CHANGELOG.md` (1 hunks)
- `packages/core/package.json` (1 hunks)
- `packages/core/src/v3/errors.ts` (2 hunks)
- `packages/core/src/v3/schemas/schemas.ts` (2 hunks)
- `packages/core/src/v3/types/tasks.ts` (1 hunks)
- `packages/react-hooks/CHANGELOG.md` (1 hunks)
- `packages/react-hooks/package.json` (2 hunks)
- `packages/rsc/CHANGELOG.md` (1 hunks)
- `packages/rsc/package.json` (2 hunks)
- `packages/trigger-sdk/CHANGELOG.md` (1 hunks)
- `packages/trigger-sdk/package.json` (2 hunks)
- `packages/trigger-sdk/src/v3/index.ts` (1 hunks)
- `references/hello-world/src/trigger/example.ts` (1 hunks)
- `references/hello-world/src/trigger/oom.ts` (1 hunks)
- `references/nextjs-realtime/src/app/actions.ts` (1 hunks)
💤 Files with no reviewable changes (14)
- .changeset/lovely-toys-obey.md
- docs/snippets/card-bun.mdx
- .changeset/lemon-fireants-repair.md
- docs/snippets/card-remix.mdx
- apps/webapp/app/utils/pathBuilder.ts
- docs/snippets/card-supabase.mdx
- docs/snippets/card-nextjs.mdx
- docs/snippets/card-nodejs.mdx
- apps/webapp/app/utils/taskEvent.ts
- docs/guides/dashboard/creating-a-project.mdx
- apps/webapp/app/components/runs/v3/RunInspector.tsx
- apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.v3.$projectParam.traces.$traceId.spans.$spanId/route.tsx
- docs/mint.json
- apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.v3.$projectParam.runs.$runParam.spans.$spanParam/route.tsx
✅ Files skipped from review due to trivial changes (18)
- docs/snippets/framework-prerequisites.mdx
- docs/guides/example-projects/realtime-fal-ai.mdx
- docs/guides/examples/fal-ai-image-to-cartoon.mdx
- docs/guides/frameworks/supabase-edge-functions-database-webhooks.mdx
- packages/trigger-sdk/package.json
- docs/realtime/overview.mdx
- packages/react-hooks/CHANGELOG.md
- packages/rsc/CHANGELOG.md
- packages/build/CHANGELOG.md
- packages/build/package.json
- packages/cli-v3/package.json
- packages/cli-v3/CHANGELOG.md
- packages/core/package.json
- docs/docs.json
- docs/video-walkthrough.mdx
- packages/core/CHANGELOG.md
- packages/react-hooks/package.json
- docs/guides/ai-agents/overview.mdx
🧰 Additional context used
🪛 LanguageTool
docs/guides/ai-agents/translate-and-refine.mdx
[uncategorized] ~9-~9: Possible missing comma found.
Context: ...fective for tasks with clear evaluation criteria where iterative refinement provides bet...
(AI_HYDRA_LEO_MISSING_COMMA)
docs/machines.mdx
[style] ~53-~53: Consider using “who” when you are referring to a person instead of an object.
Context: ...u know it's a larger file or a customer that has a lot of data. ## Out Of Memory (O...
(THAT_WHO)
[typographical] ~59-~59: Consider adding a comma.
Context: ...he machine specs. If this doesn't fix it there might be a memory leak. We automatical...
(IF_THERE_COMMA)
docs/introduction.mdx
[uncategorized] ~25-~25: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...What is Trigger.dev? Trigger.dev is an open source background jobs framework that lets you...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🪛 Biome (1.9.4)
apps/kubernetes-provider/src/labelHelper.ts
[error] 147-147: Unnecessary continue statement
Unsafe fix: Delete the unnecessary continue statement
(lint/correctness/noUnnecessaryContinue)
apps/webapp/app/v3/services/completeAttempt.server.ts
[error] 766-766: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (95)
docs/guides/frameworks/supabase-edge-functions-basic.mdx (2)
81-81: Clarify the import path structure. The relative import path `../../../src/trigger/example.ts` might not match users' project structure. Consider adding a note about the expected project structure or using a more generic path in the example. Add a note explaining the expected project structure and where users should place their task files.
1-194: Well-structured documentation with clear instructions! The documentation is well-organized with:
- Clear step-by-step instructions
- Helpful code examples
- Effective use of notes and images
- Good use of MDX components for better readability
🧰 Tools
🪛 LanguageTool
[style] ~22-~22: In American English, abbreviations like “etc.” require a period.
Context: ... when a new user signs up to a service, etc), or when there are any changes or upda...(ETC_PERIOD)
[uncategorized] ~97-~97: Use a comma before ‘or’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...der use Node, so they must stay in there or they will not run, especially if you ...(COMMA_COMPOUND_SENTENCE)
[uncategorized] ~98-~98: Possible missing comma found.
Context: ...re using a different runtime like Deno. Also do not add "npm:
" to imports inside...(AI_HYDRA_LEO_MISSING_COMMA)
[style] ~134-~134: Consider a more expressive alternative.
Context: ...cret key in the Supabase dashboard. To do this, first go to your Trigger.dev [pro...(DO_ACHIEVE)
[duplication] ~138-~138: Possible typo: you repeated a word.
Context: ...project, navigate to 'Project settings' <Icon icon="circle-1" iconType="solid" size={20} c...(ENGLISH_WORD_REPEAT_RULE)
[duplication] ~138-~138: Possible typo: you repeated a word.
Context: ...lor="A8FF53" />, click 'Edge functions' <Icon icon="circle-2" iconType="solid" size={20} c...(ENGLISH_WORD_REPEAT_RULE)
[duplication] ~138-~138: Possible typo: you repeated a word.
Context: ...nu, and then click the 'Add new secret' <Icon icon="circle-3" iconType="solid" size={20} c...(ENGLISH_WORD_REPEAT_RULE)
[duplication] ~140-~140: Possible typo: you repeated a word.
Context: ...3" /> button. AddTRIGGER_SECRET_KEY
<Icon icon="circle-4" iconType="solid" size={20} c...(ENGLISH_WORD_REPEAT_RULE)
[style] ~172-~172: Consider a more expressive alternative.
Context: ... from your deployed edge function"> To do this all you need to do is simply open ...(DO_ACHIEVE)
[uncategorized] ~172-~172: Possible missing comma found.
Context: ...om your deployed edge function"> To do this all you need to do is simply open the `...(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~186-~186: Use a comma before ‘and’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...dev](http://cloud.trigger.dev) dashboard and you should see a succesful `hello-world...(COMMA_COMPOUND_SENTENCE)
apps/webapp/app/components/runs/v3/SpanInspector.tsx (3)
1-1: Note: AI summary inconsistency regarding imports. The AI summary incorrectly states that `formatDuration` and `nanosecondsToMilliseconds` imports were removed. These imports are still present and are actively used in the SpanTimeline component. Also applies to: 8-8, 20-20
Likely an incorrect or invalid review comment.
25-210: LGTM! Well-structured component with good practices. The SpanInspector component demonstrates:
- Clear prop types and error handling
- Effective use of context and hooks
- Clean conditional rendering with tabs
- Well-organized layout structure
219-264: LGTM! Clean timeline implementation. The SpanTimeline component demonstrates:
- Clear type definitions
- Effective state handling
- Proper date calculations
- Good use of utility functions
apps/webapp/app/v3/services/resumeAttempt.server.ts (1)
29-32: LGTM! Addition of status field enhances attempt filtering capabilities. The inclusion of the `status` field in `latestAttemptSelect` enables proper filtering of attempts based on their status, which is crucial for the batch dependency handling logic.

docs/introduction.mdx (4)
2-5: LGTM! The metadata changes improve clarity and organization. The updated title, sidebar title, and description provide better context, and the centered mode enhances readability.
8-21: LGTM! Well-structured quick access cards. The card group provides an excellent overview of key resources with clear titles, images, and descriptions.
31-44: LGTM! Well-organized feature and example sections. The card groups effectively categorize and present different aspects of the framework:
- Core concepts with clear icons and descriptions
- Feature overview with relevant icons
- Rich set of examples with visual thumbnails
Also applies to: 48-67, 71-86
93-132: LGTM! Clear and inviting help section. The help section effectively presents multiple channels for user support with appropriate icons and descriptions.
docs/guides/introduction.mdx (3)
3-3: LGTM! Clearer sidebar navigation. The sidebar title change from "Introduction" to "Overview" better reflects the content of this page.
9-14: LGTM! Well-structured framework cards. The two-column layout with framework logos provides clear navigation to framework-specific guides.
23-27: LGTM! Comprehensive AI agent guides. The new AI agent guides cover a good range of practical use cases:
- Content generation and translation
- Question routing
- Content moderation
- News verification
- Translation refinement
docs/guides/examples/fal-ai-realtime.mdx (1)
22-30: LGTM! Video embedding improvements. The iframe implementation has been simplified by removing unnecessary wrapper divs and using a fixed height, which provides better consistency across different viewports.
docs/guides/examples/scrape-hacker-news.mdx (1)
10-18
: LGTM! Video embedding improvements. The iframe implementation has been simplified and standardized with a fixed height of 315 pixels.
internal-packages/testcontainers/src/utils.ts (1)
58-60
:❓ Verification inconclusive
Verify compatibility with ElectricSQL beta.15.
The ElectricSQL version has been updated from beta.1 to beta.15, which is a significant version jump. Let's verify the compatibility and check for any breaking changes.
🌐 Web query:
What are the breaking changes in ElectricSQL between version 1.0.0-beta.1 and 1.0.0-beta.15?
💡 Result:
ElectricSQL introduced several notable changes between versions 1.0.0-beta.1 and subsequent beta releases up to 1.0.0-beta.15:
Wire Protocol Changes
- Removed the `offset` parameter and replaced it with an explicit `lsn` header for replication streams[5][7]. This affects clients relying on offset-based synchronization:

```ts
// Old protocol (offset)
const stream = new ReplicationStream({ offset: "xyz" })

// New protocol (lsn header)
const stream = new ReplicationStream({ headers: { lsn: "abc" } })
```

Query Handling Changes
- Fixed WHERE clause parsing for logical operators (AND/OR) with ≥3 chained conditions[7]. Queries like `WHERE (a = 1 AND b = 2) OR (c = 3 AND d = 4) OR e = 5` now parse correctly but would have failed/misbehaved in beta ≤4.

Health Check Behavior
- Changed HTTP status codes for health endpoints[7]:
- Returns `202 Accepted` instead of `200 OK` when services are in `waiting`/`starting` states
- This affects monitoring systems expecting specific status codes
Memory Management
- Fixed arithmetic bugs in system memory calculations[7]. Applications relying on memory stats via `Electric.Stats.get_memory_usage()` would see corrected values post-beta3.

While no explicit schema-breaking changes were documented between these betas[9], developers should test replication streams and query handling when upgrading past beta4 due to protocol and parser fixes[5][7].
Citations:
- 1: https://electric-sql.com/blog/2024/04/10/electricsql-v0.10-released
- 2: https://ivy.fm/tag/gleam
- 3: Hooks electric-sql/electric#565
- 4: https://github.com/coursetable/coursetable/blob/master/bun.lock
- 5: https://electric-sql.com/changelog
- 6: https://cve.mitre.org/cgi-bin/cvekey.cgi
- 7: https://github.com/electric-sql/electric/releases
- 8: https://classic.yarnpkg.com/en/package/unstorage
- 9: https://electric-sql.com/blog/2024/12/10/electric-beta-release
- 10: https://github.com/nuxt/nuxt.new/blob/main/pnpm-lock.yaml
Action Required: Verify ElectricSQL Beta.15 Compatibility
The update to ElectricSQL from 1.0.0-beta.1 to 1.0.0-beta.15 introduces several changes that may affect downstream functionality:
- Replication Streams: The removal of the `offset` parameter in favor of an explicit `lsn` header means that if any part of our system (or tests) relies on legacy offset-based configuration, adjustments will be required.
- Query Parsing: Improvements in the handling of complex WHERE clause conditions could affect queries that previously operated under beta.1 behavior.
- Health Checks: Health endpoint responses now return `202 Accepted` (instead of `200 OK`) during transitional states, so ensure that any monitoring or status-checking mechanisms accommodate this change.
- Memory Management: Fixes in memory calculation may impact any code relying on precise memory usage data.
While our current container instantiation in `internal-packages/testcontainers/src/utils.ts` (lines 58–60) appears unaffected directly, please run comprehensive integration tests to ensure compatibility with these protocol and behavior changes.

docker/docker-compose.yml (1)
142-144
: LGTM! Version consistency maintained. The ElectricSQL version has been updated to match the version in the test containers configuration, maintaining consistency across the development and deployment environments.
packages/rsc/package.json (1)
3-3
: LGTM! Version bump and dependency updates look consistent. The version bump to 3.3.15 and corresponding workspace dependency updates maintain consistency across the monorepo packages.
Also applies to: 40-40, 47-47
apps/kubernetes-provider/src/labelHelper.ts (2)
20-23: Ensure case-insensitive usage is intentional. By converting the prefix to lowercase, there's a potential mismatch with uppercase environment variable names. Confirm this is the intended behavior to avoid confusion or inconsistent labeling.
138-144: Sampling logic looks good. The random sampling approach for selectively applying labels appears reasonable for dynamic label usage. No major issues detected here.

apps/webapp/app/v3/taskEventStore.server.ts (1)
82-99: Confirm partitioned vs. non-partitioned date filtering consistency. When partitioning is enabled, a date range with a buffer is added. This logic is skipped for the non-partitioned table. Verify that ignoring the date range in the non-partitioned flow is intentional and doesn't lead to unintended query results.

apps/webapp/app/v3/failedTaskRun.server.ts (2)
8-15
: New imports look good. The additional imports for `Prisma`, `TaskRun`, `semver`, `sharedQueueTasks`, and the new task status functions appear correct. These dependencies align well with the revised logic in this file.
222-299: Method `getRetryConfig` is consistent with semver checks. The fallback behavior for older SDK versions and the safe parsing of `retryConfig` help prevent runtime errors. Make sure to maintain coverage for edge cases (e.g., invalid or empty `retryConfig`) in your tests.

apps/kubernetes-provider/src/index.ts (14)
20-21: Imports updated correctly. Adding `assertExhaustive` and `CustomLabelHelper` supports better labeling and exhaustive switch handling. No issues found.
42-42: No meaningful change.
44-49: Environment-based image references look good. Hardcoding fallback images while allowing overrides via environment variables is a clean solution for ensuring consistent deployments.
77-77: Introduced `#labelHelper`. This approach centralizes label management, improving maintainability and readability.
117-117: Using `getImageRef("deployment", opts.imageRef)`. Switching to a dynamic image reference helps maintain consistency across deployments.
171-171: Additional labels for creation. Incorporating `this.#labelHelper.getAdditionalLabels("create")` enriches metadata for K8s resources.
185-185: Deploy image reference updated. `getImageRef("deployment", opts.image)` is a neat way to unify references under a single helper.
233-233: Restoration labels improved. `this.#labelHelper.getAdditionalLabels("restore")` ensures each operation has its own labeling context.
247-247: Deployment image reference for init container. Reusing `getImageRef("deployment", opts.imageRef)` ensures consistent image resolution.
252-252: Switching to utility image reference for BusyBox. `getImageRef("utility", BUSYBOX_IMAGE)` supports flexible busybox image overrides.
268-268: Checkpoint restore image. Using `getImageRef("restore", opts.checkpointRef)` clearly distinguishes restore images from deployments.
374-374: Deployment image reference under prePull. This continues the same consistent pattern for retrieving the correct image.
388-388: Utility image reference for pause container. `getImageRef("utility", PAUSE_IMAGE)` unifies how images are resolved within the codebase.
692-710: Introduction of `ImageType` and `getImageRef` functions. Defining a dedicated type and prefixing helper functions for images fosters a cleaner, more extensible architecture. Adding a small test for each image type scenario may help prevent future regressions.
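A helper along these lines can be sketched as follows. This is only an illustration of the env-overridable image-resolution pattern the review describes; the override environment-variable names here are assumptions, not the provider's actual configuration keys.

```typescript
// Sketch of an env-overridable image resolver in the spirit of the
// ImageType/getImageRef pattern reviewed above. The override env var
// names below are hypothetical.
type ImageType = "deployment" | "restore" | "utility";

const imageOverrides: Record<ImageType, string | undefined> = {
  deployment: process.env.DEPLOYMENT_IMAGE_OVERRIDE,
  restore: process.env.RESTORE_IMAGE_OVERRIDE,
  utility: process.env.UTILITY_IMAGE_OVERRIDE,
};

function getImageRef(type: ImageType, defaultRef: string): string {
  // Use the configured override when present, otherwise the caller's ref.
  return imageOverrides[type] ?? defaultRef;
}
```

Keeping all image resolution behind a single function like this is what makes per-type overrides (and per-type tests) cheap to add later.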
apps/webapp/app/v3/services/completeAttempt.server.ts (11)
3-5: New imports for advanced error handling and store resolution. Imports like `MachinePresetName`, `TaskRunError`, `isManualOutOfMemoryError`, and `getTaskEventStoreTableForRun` integrate well with the updated OOM logic. Also applies to: 13-13, 34-35
168-186: Marking task run span as complete upon success. The new usage of `eventRepository.completeEvent()` with `getTaskEventStoreTableForRun` enhances traceability of completed runs.
247-253: Falling back to `getExecutionRetry`. When `executionRetry` is absent and a crash/system failure is suspected, inferring a retry ensures robust fault tolerance.
256-261: Setting up OOM variables. The new flags for `isOOMAttempt` and related conditions are clear and pave the way for specialized OOM handling logic.
264-299: Handling out-of-memory scenarios. This block handles OOM gracefully by adjusting machine presets and forcing retries when necessary.
313-314: Forcing re-queue after OOM detection. Tying `forceRequeue` to `isOOMRetry` covers an edge case where the run must exit and re-enter the queue for a bigger machine.
320-324: Exiting run if OOM recurs on the maximum machine. Calling `exitRun()` prevents endless attempts on the same run when it's already at the largest allowable machine preset.
326-346: Completing failed run event context. Invoking `eventRepository.completeEvent()` for finalizing the event records ensures that partial or conflicting states don't remain.
388-395: Query and handle incomplete events. Adding `getTaskEventStoreTableForRun(taskRunAttempt.taskRun)` ensures the correct store is used for crashing or failing in-progress events.
423-446: Handling in-progress events during CRASHED or SYSTEM_FAILURE states. Completing or crashing these events avoids leaving them hanging indefinitely when the run transitions to a final error state.
780-785: `exitRun` implementation. Emitting `REQUEST_RUN_CANCELLATION` via socket ensures a clean approach to halting further work on the run.

apps/webapp/app/v3/eventRepository.server.ts (15)
37-37: No issues with the new import. The import from `./taskEventStore.server` is properly used later in the file.
105-105: Good addition of the `partitioningEnabled` config property. This optional toggle cleanly extends the repository's configurability.
191-191: Introducing the `taskEventStore` field looks fine. Ensures the class can coordinate event storage through the new store object.
197-199: Constructor parameter expansion is valid. Providing default values for `db` and `readReplica` is a clear approach.
211-212: Optional check: pass the partition flag if needed. You're instantiating `TaskEventStore` without explicitly passing `partitioningEnabled`. If `TaskEventStore` internally handles partitioning logic via `this.taskEventStoreTable`, that's fine. Otherwise, consider verifying the constructor usage.
231-243: Accepting a `storeTable` parameter in `completeEvent` is aligned with partition usage. Querying incomplete events via the table reference is a sensible design.
361-372: Refactoring to use `#queryEvents` with `taskEventStore` is clean. Centralizing event retrieval logic in `taskEventStore` improves maintainability.
423-435: Integration with `findTraceEvents` for `getTraceSummary` is consistent. Passing `storeTable` and date bounds ensures consistent partition usage and time-window filtering.
515-547: The `getRunEvents` method uses the store effectively. Relying on `this.taskEventStore.findMany` while allowing partial data selection is a good pattern.
561-569: The `getSpan` method signature changes look correct. Providing `storeTable` and matching partial events fosters better partition handling.
577-582: Delegating to `#createSpanFromEvent` is appropriate. Ensures consistent usage of the new store-based queries for building span data.
725-750: `#walkSpanAncestors` usage is correct. Consistently pulling `parentEvent` from `#getSpanEvent` and stopping when no more ancestors remain helps avoid infinite recursion.
757-773: `#getSpanEvent` properly leverages the store. Sorting by `startTime` ascending and returning the final event (full or partial) is a neat approach.
1078-1080: `taskEventStoreTable` getter usage is clear. Returns the table name based on `partitioningEnabled`, ensuring a single source of truth.
1093-1096: Switching to `taskEventStore.createMany` is an excellent abstraction. It unifies batch insertion logic and aligns with the partition table logic.

packages/trigger-sdk/src/v3/index.ts (1)
34-34
: Exporting `OutOfMemoryError` expands error-handling capabilities. This addition is consistent with other custom errors for specialized exception flows.

references/hello-world/src/trigger/example.ts (1)
83-83: Great addition of completion logging! The added logging statement improves observability by providing feedback on batch task completion and its results.
apps/webapp/app/v3/services/expireEnqueuedRun.server.ts (1)
81-103
: Enhanced event completion with proper context and timestamps! The updated event completion logic now correctly:
- Uses the appropriate event store table context
- Handles timestamps properly with created and completed times
- Maintains consistent error reporting structure
apps/webapp/app/routes/resources.runs.$runParam.logs.download.ts (1)
35-40
: Enhanced event retrieval with proper context and timestamps! The updated event retrieval logic now correctly uses the appropriate event store table context and handles timestamps properly.
apps/webapp/app/v3/services/executeTasksWaitingForDeploy.ts (1)
21-25
: Great optimizations for task execution handling! The changes improve the service by:
- Optimizing task selection to fetch only necessary fields
- Adding batch size limits for better resource management
- Using createdAt for more reliable ordering
Also applies to: 34-35, 45-56
apps/webapp/app/v3/services/cancelAttempt.server.ts (1)
76-83
: LGTM! Enhanced event querying with task-specific context. The changes improve the granularity of event querying by incorporating the task-specific event store table and temporal bounds.
apps/webapp/app/presenters/v3/RunPresenter.server.ts (2)
36-37
: LGTM! Enhanced data model with temporal context. Added `createdAt` and `taskEventStore` fields to improve data completeness. Also applies to: 50-50
112-117
: LGTM! Improved trace summary retrieval. Enhanced trace summary retrieval with task-specific event store and temporal bounds.
apps/webapp/app/v3/services/cancelTaskRun.server.ts (1)
87-94
: LGTM! Enhanced event querying with task-specific context. The changes improve the granularity of event querying by incorporating the task-specific event store table and temporal bounds.
apps/webapp/app/v3/services/crashTaskRun.server.ts (1)
10-10
: LGTM! Enhanced event querying with task run context. The changes improve event querying by providing context-specific information about task runs, including their creation and completion timestamps.
Also applies to: 123-130
packages/core/src/v3/schemas/schemas.ts (1)
3-3
: LGTM! Added Out Of Memory (OOM) error handling support. The changes enhance the retry options by allowing specific machine configurations for OOM errors. The implementation is well-documented with clear comments explaining the purpose and usage.
Also applies to: 98-107
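Per the related changeset note ("only retry again if the machine is different from the original"), the decision this option feeds into can be sketched as a small guard. This is an illustration only, not the webapp's actual implementation, and the preset names are examples:

```typescript
// Sketch: retry an OOM-killed attempt on the configured machine only when it
// differs from the preset that just ran out of memory — retrying on the same
// preset would most likely just OOM again.
function shouldRetryOnLargerMachine(
  currentPreset: string,
  oomMachinePreset?: string
): boolean {
  if (!oomMachinePreset) return false; // no OOM-specific machine configured
  return oomMachinePreset !== currentPreset;
}
```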
apps/webapp/app/v3/services/createTaskRunAttempt.server.ts (1)
14-14
: LGTM! Enhanced task run status validation. The changes improve error handling by:
- Adding task run status to span attributes for better tracing.
- Preventing the creation of new attempts for finalized task runs.
Also applies to: 95-95, 101-104
apps/webapp/app/presenters/v3/SpanPresenter.server.ts (4)
13-13
: LGTM! The import of `getTaskEventStoreTableForRun` is correctly added to support the enhanced span retrieval functionality.
75-75: LGTM! The additional fields `taskEventStore` and `createdAt` are correctly added to the select clause to support the enhanced span retrieval. Also applies to: 138-138
211-217: LGTM! The `getSpan` call is correctly updated with the new parameters:
- taskEventStore table from the run
- traceId for correlation
- timestamps for temporal context
354-356: LGTM! The `getSpan` method is consistently updated with the same pattern of additional fields and parameters as the `getRun` method. Also applies to: 367-373
apps/webapp/app/env.server.ts (2)
445-446
: LGTM! The legacy run engine batch configuration is well-defined with sensible defaults:
- Batch size of 100 items
- Stagger interval of 1 second between batches
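Those defaults imply a processing loop shaped roughly like the following sketch. The helper name and signature are assumptions for illustration, not the webapp's actual code:

```typescript
// Sketch: process items in fixed-size batches, waiting a stagger interval
// between batches (defaults mirror the reviewed config: 100 items, 1s).
async function processInBatches<T>(
  items: T[],
  handle: (batch: T[]) => Promise<void>,
  batchSize = 100,
  staggerMs = 1000
): Promise<void> {
  for (let i = 0; i < items.length; i += batchSize) {
    await handle(items.slice(i, i + batchSize));
    // Stagger only between batches, not after the last one.
    if (i + batchSize < items.length) {
      await new Promise((resolve) => setTimeout(resolve, staggerMs));
    }
  }
}
```

The stagger keeps a large backlog from hammering downstream services all at once.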
485-486
: LGTM! The task event partitioning configuration is properly structured:
- Feature flag for enabling/disabling partitioning
- Configurable window size with a reasonable default of 60 seconds
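With a 60-second window, the buffered date range used for partitioned queries (described in the `taskEventStore.server.ts` comment above) might be computed like this sketch; the exact semantics are an assumption:

```typescript
// Sketch: widen the [createdAt, completedAt] interval by the partition window
// so partition-pruned queries still catch events written slightly outside the
// run's lifetime. 60s mirrors the default mentioned above.
const PARTITION_WINDOW_MS = 60 * 1000;

function bufferedQueryRange(
  createdAt: Date,
  completedAt: Date | undefined,
  now: Date = new Date()
): { from: Date; to: Date } {
  return {
    from: new Date(createdAt.getTime() - PARTITION_WINDOW_MS),
    // Open runs fall back to "now" as the upper bound.
    to: new Date((completedAt ?? now).getTime() + PARTITION_WINDOW_MS),
  };
}
```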
packages/core/src/v3/errors.ts (3)
57-68
: LGTM! The `OutOfMemoryError` class is well-implemented:
- Clear error message constant
- Proper class extension and name setting
- Helpful documentation explaining its purpose
70-77
: LGTM! The `isManualOutOfMemoryError` function provides a clean way to detect manual OOM errors by checking both the error type and message.
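The names match the reviewed API, but the internals below are a sketch based on the description (an error class plus a detector that checks both type and message), not the actual source:

```typescript
// Sketch of a manually-throwable OOM error and its detector. The message
// constant and name check are assumptions about the implementation.
const OOM_MESSAGE = "Process ran out of memory";

class OutOfMemoryError extends Error {
  constructor() {
    super(OOM_MESSAGE);
    this.name = "OutOfMemoryError";
  }
}

function isManualOutOfMemoryError(error: unknown): boolean {
  // Require both the error type (via its name) and the message to match,
  // so arbitrary errors with a similar message aren't misclassified.
  return (
    error instanceof Error &&
    error.name === "OutOfMemoryError" &&
    error.message === OOM_MESSAGE
  );
}
```

Checking both fields is what lets the enhancer safely rewrite only deliberate OOM throws into `TASK_PROCESS_OOM_KILLED`.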
588-592
: LGTM! The `taskRunErrorEnhancer` is correctly updated to handle manual OOM errors, ensuring they are represented consistently as `TASK_PROCESS_OOM_KILLED` errors.

packages/core/src/v3/types/tasks.ts (2)
205-219
: LGTM! The machine configuration documentation is improved with:
- Clear link to the machines documentation
- Updated example using the new preset-based configuration
220-248
: LGTM! The machine configuration type is well-structured:
- Maintains backward compatibility with CPU/memory configuration
- Properly marks old configuration as deprecated
- Adds support for preset-based configuration
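That shape can be sketched as a type like the following. The preset names are illustrative placeholders in the style of the machines docs, and the legacy-detection helper is an assumption added for the example:

```typescript
// Sketch of the reviewed machine config shape: a preferred `preset` field
// next to the deprecated raw cpu/memory pair kept for backward compatibility.
type MachinePresetName = "small-1x" | "medium-1x" | "large-1x";

interface MachineConfig {
  /** Preferred: pick a preset from the machines documentation. */
  preset?: MachinePresetName;
  /** @deprecated Use `preset` instead. */
  cpu?: number;
  /** @deprecated Use `preset` instead. */
  memory?: number;
}

function isLegacyConfig(config: MachineConfig): boolean {
  // A config is "legacy" when it relies on raw cpu/memory with no preset.
  return !config.preset && (config.cpu !== undefined || config.memory !== undefined);
}
```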
apps/webapp/app/v3/services/triggerTask.server.ts (1)
32-32
: LGTM! Integration of task event store looks good. The changes correctly integrate the task event store functionality by:
- Adding the required import
- Setting the taskEventStore property in the task run creation data
Also applies to: 398-398
apps/webapp/app/v3/marqs/index.server.ts (1)
641-643
: LGTM! Improved method return values for better error handling. The changes enhance the `nackMessage` method by:
- Clearly documenting the return value's meaning
- Adding explicit boolean returns for all code paths
- Providing better feedback about the message requeuing status
Also applies to: 661-662, 680-681, 709-710
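The contract being reviewed — nack returns whether the message went back in the queue, so the caller can fail the run when it did not — can be sketched as follows. The queue internals here are simplified assumptions, not MARQS itself:

```typescript
// Sketch: a nack that reports requeue status. Returning false signals that
// attempts are exhausted and the caller should fail the run instead.
interface QueuedMessage {
  id: string;
  attempts: number;
  maxAttempts: number;
}

function nackMessage(message: QueuedMessage, queue: QueuedMessage[]): boolean {
  message.attempts += 1;
  if (message.attempts >= message.maxAttempts) {
    return false; // not requeued — out of attempts
  }
  queue.push(message); // put it back to try again
  return true;
}
```

Making the boolean explicit on every code path is what lets the heartbeat handler choose between "try again" and "fail the run" without inspecting queue state.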
internal-packages/database/prisma/migrations/20250212075957_add_task_event_store_to_task_run/migration.sql (1)
1-5
: LGTM! Well-structured database migration. The migration correctly:
- Adds the new column with appropriate type
- Includes NOT NULL constraint
- Sets a sensible default value
apps/kubernetes-provider/tsconfig.json (1)
11-12
: LGTM! Path mappings are correctly configured. The new path mappings for `@trigger.dev/core` are properly aligned with the existing v3 mappings.

internal-packages/database/prisma/migrations/20250212053026_create_task_event_partitioned_table/migration.sql (1)
56-63
: LGTM! Essential indexes are in place. The indexes on `traceId`, `spanId`, and `runId` are well-chosen for tracing and debugging purposes.

packages/trigger-sdk/CHANGELOG.md (1)
1-1164
: LGTM! The changelog is well-maintained. The changelog follows good practices with clear version numbering, detailed descriptions, and links to relevant PRs.
🧰 Tools
🪛 LanguageTool
- [formatting] ~179: If the 'because' clause is essential to the meaning, do not use a comma before the clause. Context: "...`triggerAndWait` and `batchTriggerAndWait`, because it can lead to permanently frozen runs..." (COMMA_BEFORE_BECAUSE)
- [grammar] ~183: Possible subject-verb agreement error detected. Context: "...`trigger` and `batch.triggerByTask` methods that allows triggering multiple different tasks in..." (PLURAL_THAT_AGREEMENT)
- [style] ~204: This phrase is redundant. Consider using "inside". Context: "...Or if you are inside of a task, you can use `triggerByTask`..." (OUTSIDE_OF)
- [uncategorized] ~272: Use a comma before 'so' if it connects two independent clauses (unless they are closely connected and short). Context: "...will accept accessToken and baseURL options so the use of the Provider is no longer ne..." (COMMA_COMPOUND_SENTENCE_2)
- [duplication] ~368: Possible typo: you repeated a word. Context: "...runs (root, parent, and children) as well how how the runs were triggered and if they are..." (ENGLISH_WORD_REPEAT_RULE)
- [grammar] ~423: In this context, 'type' should agree in number with the noun after 'of'. Context: "...Fix return type of runs.retrieve, and allow passing the type of..." (TYPE_OF_PLURAL)
- [uncategorized] ~424: 'In' is unnecessary in most cases before the expression 'Next.js'. Context: "...(for now) to fix Module Not Found error in Next.js. It turns out that webpack will..." (CONFUSION_OF_NN_IN_NEXT_NNP)
- [uncategorized] ~435: If this is a compound adjective that modifies the following noun, use a hyphen. Context: "...v3: recover from server rate limiting errors in a more reliable way..." (EN_COMPOUND_ADJECTIVE_INTERNAL)
- [grammar] ~650: In this context, 'type' should agree in number with the noun after 'of'. Context: "...Fix return type of runs.retrieve, and allow passing the type of..." (TYPE_OF_PLURAL)
- [uncategorized] ~688: If this is a compound adjective that modifies the following noun, use a hyphen. Context: "...v3: recover from server rate limiting errors in a more reliable way..." (EN_COMPOUND_ADJECTIVE_INTERNAL)
- [uncategorized] ~945: 'In' is unnecessary in most cases before the expression 'Next.js'. Context: "...(for now) to fix Module Not Found error in Next.js. It turns out that webpack will..." (CONFUSION_OF_NN_IN_NEXT_NNP)
🪛 markdownlint-cli2 (0.17.2)
- 428: Bare URL used (MD034, no-bare-urls)
- 949: Bare URL used (MD034, no-bare-urls)
apps/webapp/app/v3/taskRunHeartbeatFailed.server.ts (3)

52-82: LGTM! Well-structured status handling. The addition of `WAITING_TO_RESUME` and `PAUSED` cases is well-implemented with appropriate error handling and logging.
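As an illustrative sketch only (the real implementation lives in `apps/webapp/app/v3/taskRunHeartbeatFailed.server.ts`; the status names come from this review, while the surrounding control flow and return values here are invented), the status handling being praised is essentially an exhaustive switch over run states:

```typescript
// Hypothetical sketch of status handling on heartbeat failure.
// Only WAITING_TO_RESUME and PAUSED are confirmed by the review;
// the other names and the returned action strings are assumptions.
type TaskRunStatus =
  | "EXECUTING"
  | "RETRYING_AFTER_FAILURE"
  | "WAITING_TO_RESUME"
  | "PAUSED";

function handleHeartbeatFailure(status: TaskRunStatus): string {
  switch (status) {
    case "EXECUTING":
    case "RETRYING_AFTER_FAILURE":
      return "fail-run"; // active runs are failed outright
    case "WAITING_TO_RESUME":
    case "PAUSED":
      return "fail-run-logged"; // the newly added cases: fail with extra logging
    default: {
      // Exhaustiveness check: compiling fails if a status goes unhandled.
      const _exhaustive: never = status;
      return _exhaustive;
    }
  }
}
```

The `never`-typed default arm is a common TypeScript idiom for keeping such switches in sync with the status union as new states are added.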
49-49: LGTM! Improved service instantiation. Moving the `FailedTaskRunService` instantiation before the switch statement is more efficient and follows DRY principles.
33-37: ❓ Verification inconclusive
Verify the usage of the attempts count. The `_count.attempts` field is selected in the query but never used in the code, which could indicate dead code or missing functionality.

🏁 Script executed:

```shell
#!/bin/bash
# Search for usage of attempts count in the codebase
rg -A 5 '_count.*attempts' apps/
```

Length of output: 34

Verify the usage of `_count.attempts` in the task run query. The search found no uses of the attempts count. Since the field is selected but never referenced, please verify manually whether this selection was meant to drive additional functionality or whether it can be safely removed.
packages/rsc/package.json (outdated)

```json
"react": "19.0.0-rc.1",
"react-dom": "19.0.0-rc.1"
```
💡 Verification agent
🧩 Analysis chain
Verify if React RC version is still appropriate.
The package is using React 19.0.0-rc.1 which is a release candidate version. Let's verify if this is still the appropriate version to use.
🌐 Web query:
What is the latest stable version of React? Is React 19.0.0-rc.1 still the latest release candidate?
💡 Result:
The latest stable version of React is 19.0.0, released on December 5, 2024[1][5]. This version includes major features like Actions API improvements with async transitions (`startTransition`), new hooks (`useActionState`, `useOptimistic`), and breaking changes such as removing deprecated APIs like string refs and `propTypes`[1][5].
React 19.0.0-rc.1 is not the latest release candidate anymore – it has been superseded by the final stable release (v19)[1][9]. The RC phase ended with the official v19 launch in December 2024[5]. Key additions during the RC-to-stable period included:
- Suspense sibling pre-warming for smoother fallback rendering
- New React DOM static APIs[5]
For projects still using RC versions, upgrading to react@19.0.0 via npm/yarn/pnpm is recommended[9]. Next.js 15+ fully supports React 19[2][3], though some third-party libraries may require updates to list react@^19
in their peer dependencies[2].
Citations:
- 1: https://github.com/facebook/react/releases
- 2: https://ui.shadcn.com/docs/react-19
- 3: https://bizcoder.com/next-js-15-rc/
- 4: https://react.dev/versions
- 5: https://react.dev/blog/2024/12/05/react-19
- 6: https://www.wisp.blog/blog/nextjs-15-is-out-whats-new-and-what-broke
- 7: https://legacy.reactjs.org/versions/
- 8: https://react.dev/blog/2024/04/25/react-19-upgrade-guide
- 9: https://www.npmjs.com/package/react?activeTab=versions
- 10: https://dev.to/holasoymalva/why-i-decided-to-stop-working-with-reactjs-in-2025-4d1l
Action: Update React versions to the latest stable release.
After verification, the latest stable version of React is 19.0.0. The current dependency versions in the file use the outdated release candidate (`19.0.0-rc.1`), which has been superseded by the final stable release. Please update the dependency versions accordingly.

- Location: packages/rsc/package.json (lines 42-43)
- Recommended change, from:

```json
"react": "19.0.0-rc.1",
"react-dom": "19.0.0-rc.1"
```

to:

```json
"react": "19.0.0",
"react-dom": "19.0.0"
```

This update ensures the project uses the stable release with the latest features and improvements.
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test the code to ensure it meets the requirements.

```diff
-"react": "19.0.0-rc.1",
-"react-dom": "19.0.0-rc.1"
+"react": "19.0.0",
+"react-dom": "19.0.0"
```
```diff
 // Set JWT in a secure, HTTP-only cookie
-cookies().set("run_token", publicToken);
+cookies().set("run_token", handle.publicAccessToken);
```
🛠️ Refactor suggestion
Ensure secure cookie settings.
Storing `handle.publicAccessToken` in a cookie is convenient, but consider adding security flags (e.g., `httpOnly`, `secure`, `sameSite`) to mitigate potential security risks.
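As a sketch of the flags in question, the snippet below serializes a `Set-Cookie` header by hand so it runs standalone; in Next.js the same flags would instead be passed as an options object to `cookies().set(...)`. The attribute values (max age, cookie name) are illustrative, not the project's actual settings.

```typescript
// Dependency-free sketch: serialize a cookie with the security attributes
// the review suggests (HttpOnly, Secure, SameSite).
function buildSetCookie(
  name: string,
  value: string,
  opts: {
    httpOnly?: boolean;
    secure?: boolean;
    sameSite?: "Strict" | "Lax" | "None";
    maxAge?: number;
  } = {}
): string {
  const parts = [`${name}=${encodeURIComponent(value)}`, "Path=/"];
  if (opts.maxAge !== undefined) parts.push(`Max-Age=${opts.maxAge}`);
  if (opts.httpOnly) parts.push("HttpOnly"); // inaccessible to client-side JS
  if (opts.secure) parts.push("Secure"); // sent over HTTPS only
  if (opts.sameSite) parts.push(`SameSite=${opts.sameSite}`); // CSRF mitigation
  return parts.join("; ");
}

const header = buildSetCookie("run_token", "jwt-value-here", {
  httpOnly: true,
  secure: true,
  sameSite: "Strict",
  maxAge: 60 * 15, // short-lived token
});
// header: "run_token=jwt-value-here; Path=/; Max-Age=900; HttpOnly; Secure; SameSite=Strict"
```

`HttpOnly` is the key flag for an access token: it keeps the value out of reach of any injected script, while `SameSite=Strict` limits cross-site request reuse.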
docs/package.json (outdated)

```json
"dependencies": {
  "mintlify": "^4.0.382"
}
```
💡 Verification agent
🧩 Analysis chain
Verify the mintlify version for latest features and security updates.
The dependency addition looks good, but let's verify we're using the latest stable version of mintlify.
🌐 Web query:
What is the latest version of mintlify?
💡 Result:
The latest version of the Mintlify CLI as of February 13, 2025 is 4.0.386, published on npm 3 hours ago (relative to the current date)[5]. It can be installed globally via:

```shell
npm install -g mintlify@4.0.386
```

Key details:
- Requires Node.js v18 or higher[5]
- Includes recent improvements like custom port configuration (the `--port` flag)[5]
- Features enhanced link validation with the `mintlify broken-links` command[5]
For local development workflows, this CLI version supports syncing with Mintlify's web editor enhancements announced in their changelogs[1], including WYSIWYG markdown editing and AI-assisted documentation updates[1].
Citations:
- 1: https://mintlify.com/docs/changelog/overview
- 2: https://mintlify.com/docs/content/components/update
- 3: https://github.com/mintlify/mintlify.com/releases
- 4: https://mintlify.com/docs/development
- 5: https://www.npmjs.com/package/mintlify
- 6: https://marketplace.visualstudio.com/items?itemName=mintlify.document
- 7: https://github.com/mintlify
- 8: https://mintlify.com
- 9: https://mintlify.com/blog/february-2024
- 10: https://plugins.jetbrains.com/plugin/18606-mintlify-doc-writer/versions/stable/166718
Update the Mintlify dependency version for the latest security and feature enhancements.
- docs/package.json currently specifies `"mintlify": "^4.0.382"`, but recent checks indicate the latest stable release is 4.0.386.
- To leverage the most recent improvements (e.g., custom port configuration and enhanced link validation) and ensure optimal security updates, please update the version to `"mintlify": "^4.0.386"`.
````diff
 // Add error handling and cleanup
 let jsonText = routingResponse.text.trim();
 if (jsonText.startsWith("```")) {
   jsonText = jsonText.replace(/```json\n|\n```/g, "");
 }

 const routingResult = routingSchema.parse(JSON.parse(jsonText));
````
🛠️ Refactor suggestion
Enhance JSON parsing error handling.
Consider adding more robust error handling for JSON parsing:
````diff
 let jsonText = routingResponse.text.trim();
 if (jsonText.startsWith("```")) {
   jsonText = jsonText.replace(/```json\n|\n```/g, "");
 }
+try {
+  const parsed = JSON.parse(jsonText);
+  const routingResult = routingSchema.parse(parsed);
+} catch (error) {
+  if (error instanceof SyntaxError) {
+    throw new Error(`Invalid JSON response from routing model: ${error.message}`);
+  }
+  throw error;
+}
-const routingResult = routingSchema.parse(JSON.parse(jsonText));
````
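A standalone sketch of the same idea follows; the helper name and error message are illustrative, not part of the codebase. It strips any surrounding Markdown fence the model may have added, then fails loudly on malformed JSON before the result would be handed to a schema validator:

```typescript
// Hypothetical helper: strip surrounding Markdown code fences from a model
// response, then parse the remaining JSON, surfacing a readable error on
// malformed input.
function parseModelJson(raw: string): unknown {
  // Remove a leading fence (optionally tagged "json") and a trailing fence.
  const text = raw
    .trim()
    .replace(/^`{3}(?:json)?\s*\n?/, "")
    .replace(/\n?`{3}\s*$/, "");
  try {
    return JSON.parse(text);
  } catch (error) {
    if (error instanceof SyntaxError) {
      throw new Error(`Invalid JSON from model: ${error.message}`);
    }
    throw error;
  }
}
```

The parsed value can then be passed to a schema's `parse(...)`, so a syntax error in the model output is reported distinctly from a schema-validation failure.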
```typescript
export interface TranslatePayload {
  marketingSubject: string;
  targetLanguage: string;
  targetWordCount: number;
}
```
🛠️ Refactor suggestion
Add input validation for the payload fields.
Consider validating them with a schema:
```diff
+import { z } from "zod";
+
+const TranslatePayloadSchema = z.object({
+  marketingSubject: z.string().min(1),
+  targetLanguage: z.string().min(2),
+  targetWordCount: z.number().int().positive().max(1000),
+});
+
 export interface TranslatePayload {
   marketingSubject: string;
   targetLanguage: string;
   targetWordCount: number;
 }
```
📝 Committable suggestion

```typescript
import { z } from "zod";

const TranslatePayloadSchema = z.object({
  marketingSubject: z.string().min(1),
  targetLanguage: z.string().min(2),
  targetWordCount: z.number().int().positive().max(1000),
});

export interface TranslatePayload {
  marketingSubject: string;
  targetLanguage: string;
  targetWordCount: number;
}
```
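If pulling in zod is undesirable, the same checks can be expressed as a plain type guard. This is a dependency-free sketch; the field limits (minimum lengths, the 1000-word cap) mirror the proposed schema above and are illustrative, not project requirements:

```typescript
// Hand-rolled runtime validation mirroring the proposed schema.
interface TranslatePayload {
  marketingSubject: string;
  targetLanguage: string;
  targetWordCount: number;
}

function isTranslatePayload(value: unknown): value is TranslatePayload {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.marketingSubject === "string" &&
    v.marketingSubject.length >= 1 && // non-empty subject
    typeof v.targetLanguage === "string" &&
    v.targetLanguage.length >= 2 && // at least a 2-letter language code
    typeof v.targetWordCount === "number" &&
    Number.isInteger(v.targetWordCount) &&
    v.targetWordCount > 0 &&
    v.targetWordCount <= 1000 // illustrative upper bound
  );
}
```

A schema library still has the edge for error reporting (it tells you which field failed and why), which is why the review reaches for zod.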
```typescript
return {
  claimId: claim.id,
  isVerified: false,
  confidence: 0.7,
  explanation: response.text,
};
},
```
🛠️ Refactor suggestion
Enhance source verification with actual source checking.
The current implementation doesn't actually verify against real sources and always returns a hardcoded confidence of 0.7.
```diff
 return {
   claimId: claim.id,
-  isVerified: false,
-  confidence: 0.7,
-  explanation: response.text,
+  isVerified: response.verified,
+  confidence: response.confidence,
+  sources: response.sources,
+  explanation: response.explanation,
+  lastChecked: new Date().toISOString(),
+  factCheckUrls: response.factCheckUrls
 };
```

Update the system prompt to include source checking:

```diff
-  "Verify this claim by considering recent news sources and official statements. Assess reliability.",
+  `Verify this claim using the following steps:
+  1. Search recent news sources (last 30 days)
+  2. Check official statements and press releases
+  3. Cross-reference with fact-checking websites
+  4. Evaluate source credibility
+
+  Respond with a JSON object:
+  {
+    "verified": boolean,
+    "confidence": number,
+    "sources": [
+      {
+        "url": string,
+        "title": string,
+        "publisher": string,
+        "publishDate": string,
+        "relevance": number
+      }
+    ],
+    "explanation": string,
+    "factCheckUrls": string[]
+  }`,
```
📝 Committable suggestion

```typescript
return {
  claimId: claim.id,
  isVerified: response.verified,
  confidence: response.confidence,
  sources: response.sources,
  explanation: response.explanation,
  lastChecked: new Date().toISOString(),
  factCheckUrls: response.factCheckUrls
};
},
```

Suggested system prompt:

```typescript
`Verify this claim using the following steps:
1. Search recent news sources (last 30 days)
2. Check official statements and press releases
3. Cross-reference with fact-checking websites
4. Evaluate source credibility

Respond with a JSON object:
{
  "verified": boolean,
  "confidence": number,
  "sources": [
    {
      "url": string,
      "title": string,
      "publisher": string,
      "publishDate": string,
      "relevance": number
    }
  ],
  "explanation": string,
  "factCheckUrls": string[]
}`,
```
```typescript
const claims = response.text
  .split("\n")
  .filter((line: string) => line.trim())
  .map((claim: string, index: number) => ({
    id: index + 1,
    text: claim.replace(/^\d+\.\s*/, ""),
  }));
```
🛠️ Refactor suggestion
Improve claim extraction robustness.
The current claim extraction is overly simplistic and might miss complex claims or misinterpret context.
```diff
 const claims = response.text
   .split("\n")
   .filter((line: string) => line.trim())
-  .map((claim: string, index: number) => ({
-    id: index + 1,
-    text: claim.replace(/^\d+\.\s*/, ""),
-  }));
+  .map((claim: string, index: number) => {
+    const cleanedClaim = claim.replace(/^\d+\.\s*/, "");
+    return {
+      id: index + 1,
+      text: cleanedClaim,
+      entities: extractEntities(cleanedClaim),
+      type: classifyClaim(cleanedClaim),
+      context: {
+        precedingText: getPrecedingContext(article, cleanedClaim),
+        followingText: getFollowingContext(article, cleanedClaim)
+      }
+    };
+  });
```
Committable suggestion skipped: line range outside the PR's diff.
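For context, the existing extraction logic can be exercised standalone; the sample response text below is invented, and `response.text` is simulated:

```typescript
// Standalone sketch of the current claim-extraction logic against a
// simulated model response (a numbered list with a stray blank line).
const responseText = [
  "1. The company was founded in 2010.",
  "",
  "2. Revenue grew 40% last year.",
  "3. The CEO announced a new product.",
].join("\n");

const claims = responseText
  .split("\n")
  .filter((line: string) => line.trim()) // drop blank lines
  .map((claim: string, index: number) => ({
    id: index + 1, // ids are re-numbered after filtering
    text: claim.replace(/^\d+\.\s*/, ""), // strip the "N. " prefix
  }));
// claims[0] → { id: 1, text: "The company was founded in 2010." }
```

This shows the fragility the review points at: any claim spanning multiple lines, or a list using other numbering, would be split or mislabeled by the line-based approach.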
```diff
-logger.debug("[RequeueTaskRunService] Failing task run", { taskRun });
-
 const service = new FailedTaskRunService();
+logger.debug(`[RequeueTaskRunService] ${taskRun.status} failing task run`, { taskRun });
```
🛠️ Refactor suggestion
Fix inconsistent service name in logging.
The logging statement uses `RequeueTaskRunService` while other logs use `TaskRunHeartbeatFailedService`. This inconsistency could make log tracing difficult.
```diff
-logger.debug(`[RequeueTaskRunService] ${taskRun.status} failing task run`, { taskRun });
+logger.debug(`[TaskRunHeartbeatFailedService] ${taskRun.status} failing task run`, { taskRun });
```
📝 Committable suggestion

```typescript
logger.debug(`[TaskRunHeartbeatFailedService] ${taskRun.status} failing task run`, { taskRun });
```
# Conflicts:
#   apps/webapp/app/v3/services/completeAttempt.server.ts
#   docs/machines.mdx
Summary by CodeRabbit