fix: removing schedule instance no longer crashes #1430
Conversation
Walkthrough

This pull request introduces several changes across multiple files, primarily focusing on enhancing error handling and logging mechanisms.

Changes
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (1)
internal-packages/database/README.md (1)

Line range hint `1-100`: Consider adding timing expectations for index creation.

Given that this PR addresses index creation on the TaskRun table, it would be helpful to enhance the documentation with specific timing expectations. While the doc mentions "2 mins maybe" for JobRun, different tables might have different creation times based on their size.

Consider adding:

```diff
 -- creates an index safely, this can take a long time (2 mins maybe)
 CREATE INDEX CONCURRENTLY IF NOT EXISTS "JobRun_eventId_idx" ON "JobRun" ("eventId");
+-- Note: Creation time varies by table size. For reference:
+-- JobRun table (~100k rows): ~2 minutes
+-- TaskRun table: [add your observed timing here]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.schedules.new/route.tsx (2 hunks)
- apps/webapp/app/v3/services/upsertTaskSchedule.server.ts (5 hunks)
- internal-packages/database/README.md (1 hunks)
- internal-packages/database/prisma/migrations/20241023154826_add_schedule_instance_id_index_to_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/schema.prisma (1 hunks)
- references/v3-catalog/src/management.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- internal-packages/database/prisma/migrations/20241023154826_add_schedule_instance_id_index_to_task_run/migration.sql
🧰 Additional context used
🔇 Additional comments (5)
internal-packages/database/README.md (1)

`8-8`: LGTM! Directory path updated correctly.

The path update from `packages/database` to `internal-packages/database` accurately reflects the current project structure.

references/v3-catalog/src/management.ts (1)

`259-261`: LGTM! Changes align with PR objectives.

The modification to activate `doSchedules()` while commenting out other test functions is appropriate, as it allows for focused testing of the schedule-related functionality that this PR addresses (specifically the schedule instance deletion fixes).

Let's verify the schedule deletion functionality being tested:
✅ Verification successful
Let me gather more specific information about schedule deletion functionality.
The previous search was too broad. Let's focus specifically on the schedule-related code and error handling to verify the changes align with fixing crashes during schedule deletion.
Let's search more specifically for the schedule-related code and error handling.
I want to see the complete context of the management file and schedule-related operations to understand how deletion is handled and verify if the changes address potential crashes.
Based on the analysis of the code, I can now provide a final response:
LGTM - Schedule deletion is properly handled with error catching

Looking at the code in `management.ts`, the schedule deletion functionality is properly implemented:

- The `doSchedules()` function includes a complete test cycle, including deletion via `schedules.del()`
- All operations are wrapped with `.catch(console.error)` for error handling
- The SDK operations are properly chained: create → retrieve → update → deactivate → activate → delete

The changes to activate `doSchedules()` while commenting out other test functions are appropriate as they:

- Test the complete lifecycle of schedule operations
- Include proper error handling
- Follow the SDK's recommended usage pattern
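The chained lifecycle above can be sketched with a self-contained stand-in. Note this is hypothetical scaffolding: the method names (`create`/`retrieve`/`update`/`deactivate`/`activate`/`del`) mirror the SDK surface the review describes, but `fakeSchedules` and its stub bodies are invented here purely so the snippet runs without the real client.

```typescript
// Hypothetical stand-in for the schedules client; stubs only record call order.
type Schedule = { id: string; active: boolean };

const calls: string[] = [];

const fakeSchedules = {
  async create(): Promise<Schedule> {
    calls.push("create");
    return { id: "sched_1", active: true };
  },
  async retrieve(id: string): Promise<Schedule> {
    calls.push("retrieve");
    return { id, active: true };
  },
  async update(id: string): Promise<Schedule> {
    calls.push("update");
    return { id, active: true };
  },
  async deactivate(id: string): Promise<Schedule> {
    calls.push("deactivate");
    return { id, active: false };
  },
  async activate(id: string): Promise<Schedule> {
    calls.push("activate");
    return { id, active: true };
  },
  async del(_id: string): Promise<void> {
    calls.push("del");
  },
};

async function runLifecycle(): Promise<void> {
  const s = await fakeSchedules.create();
  await fakeSchedules.retrieve(s.id);
  await fakeSchedules.update(s.id);
  await fakeSchedules.deactivate(s.id);
  await fakeSchedules.activate(s.id);
  await fakeSchedules.del(s.id);
}

// the real file wraps each top-level call with .catch(console.error)
await runLifecycle().catch(console.error);
console.log(calls.join(" → "));
```

Errors from any step surface through the `.catch`, which is why a crash inside deletion (the bug this PR fixes) was visible from this test file.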
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the schedule deletion implementation
# Look for related prisma transaction timeout configurations and schedule deletion handling

# Check for transaction timeout settings
rg -A 5 "prisma.*transaction"

# Look for schedule deletion handling
ast-grep --pattern 'async function $_(scheduleId$_) {
  $$$
  delete$_
  $$$
}'
```

Length of output: 9785
Script:
```bash
#!/bin/bash
# Search for schedule deletion related code
rg -A 10 "TaskScheduleInstance.*delete"

# Look for schedule-related functions in management.ts
rg -A 10 "function doSchedules" references/v3-catalog/src/management.ts

# Search for any schedule deletion error handling
rg -B 2 -A 5 "schedule.*delete.*catch"
```

Length of output: 482
Script:
```bash
#!/bin/bash
# Search for schedule-related code in the management file
cat references/v3-catalog/src/management.ts

# Search for schedule-related operations and error handling
rg -A 10 "schedules\." references/v3-catalog/src/management.ts

# Look for any schedule-related types or interfaces
rg -A 5 "interface.*Schedule|type.*Schedule"
```

Length of output: 31467
apps/webapp/app/routes/resources.orgs.$organizationSlug.projects.$projectParam.schedules.new/route.tsx (2)

`47-47`: LGTM: Good addition of centralized logging.

Adding the logger import supports better error tracking and debugging capabilities.

`98-100`: Verify error handling during schedule deletion.

Since this PR aims to fix crashes during TaskScheduleInstance deletion, let's verify the error handling for deletion scenarios.
✅ Verification successful
Let me gather more specific information about the error handling patterns in the schedule-related files we found.
Let me gather one more piece of information about the error handling specifically in the schedule deletion flow.
Based on the gathered information, I can now provide a final response regarding the error handling during schedule deletion.

Error handling for schedule deletion appears to be properly implemented

The codebase shows that TaskScheduleInstance deletions are handled within transactions and have proper error boundaries:

- In `upsertTaskSchedule.server.ts`, deletions are performed within a transaction context (`tx.taskScheduleInstance.delete`)
- In `createBackgroundWorker.server.ts`, bulk deletions use `deleteMany` with specific where clauses to prevent unintended deletions
- The API client (`packages/core/src/v3/apiClient/index.ts`) implements schedule deletion with proper response type validation using `DeletedScheduleObject`

The error logging enhancement in the reviewed file complements the existing error handling infrastructure and aligns with good practices by:

- Logging errors for debugging while keeping user-facing messages generic
- Not exposing internal error details to the client
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for schedule deletion handling
rg -l "deleteSchedule|removeSchedule|destroySchedule" --type ts

# Search for TaskScheduleInstance deletion patterns
ast-grep --pattern 'await prisma.taskScheduleInstance.delete'

# Look for transaction timeout configurations
rg "prisma.*transaction.*timeout" --type ts
```

Length of output: 262
Script:
```bash
#!/bin/bash
# Check the content of the identified schedule files with context
rg -A 10 "deleteSchedule|removeSchedule|destroySchedule" packages/trigger-sdk/src/v3/schedules/index.ts packages/core/src/v3/apiClient/index.ts

# Look for schedule-related error handling patterns
rg -A 5 "catch.*schedule" --type ts

# Search for TaskScheduleInstance related operations
rg -A 5 "TaskScheduleInstance" --type ts

# Look for schedule deletion related tests
rg -l "test.*delete.*schedule|describe.*delete.*schedule" --type ts
```

Length of output: 7801
Script:
```bash
#!/bin/bash
# Look for schedule deletion error handling in the API routes and services
rg -A 10 "delete.*schedule.*catch|schedule.*delete.*catch" --type ts

# Check for any prisma delete operations on TaskScheduleInstance
rg -A 5 "taskScheduleInstance.*delete" --type ts

# Look for transaction handling in schedule operations
rg -A 10 "prisma.\$transaction.*schedule" --type ts
```

Length of output: 1196
internal-packages/database/prisma/schema.prisma (1)

`1796-1796`: LGTM

Adding an index on `scheduleInstanceId` in the `TaskRun` model should improve performance and resolve the transaction timeout issue when deleting `TaskScheduleInstance`.
```ts
logger.error("Failed to create schedule", error);

const errorMessage = `Something went wrong. Please try again.`;
```
🛠️ Refactor suggestion
Improve error handling with error categorization.
While the current changes improve security by not exposing internal errors to users, consider categorizing known errors to provide more specific user feedback when appropriate.
Consider implementing error categorization like this:
```diff
- logger.error("Failed to create schedule", error);
+ logger.error("Failed to create schedule", { error, projectId: project.id });
+
+ let errorMessage = "Something went wrong. Please try again.";
+ if (error instanceof ServiceValidationError) {
+   errorMessage = error.message; // Safe to show validation errors
+ } else if (error instanceof PrismaClientKnownRequestError && error.code === "P2002") {
+   errorMessage = "A schedule with this deduplication key already exists.";
+ }
```
This approach:
- Adds context (projectId) to error logs
- Shows friendly messages for known error cases
- Maintains generic message for unexpected errors
Committable suggestion was skipped due to low confidence.
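The categorization idea can be exercised standalone. `ServiceValidationError` exists in the codebase under review; the local re-declaration below is only so the sketch runs by itself, and the Prisma branch is omitted to keep it dependency-free.

```typescript
// Minimal sketch: map known, user-safe error types to their message;
// everything else falls back to a generic message (nothing internal leaks).
class ServiceValidationError extends Error {}

function userFacingMessage(error: unknown): string {
  if (error instanceof ServiceValidationError) {
    return error.message; // validation messages are safe to surface
  }
  return "Something went wrong. Please try again.";
}

console.log(userFacingMessage(new ServiceValidationError("Cannot update a declarative schedule")));
console.log(userFacingMessage(new Error("connection reset")));
```

The unknown-error branch is deliberately the default, so forgetting to classify a new error type fails safe rather than leaking details.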
```ts
const instancesToDeleted = existingInstances.filter(
  (i) => !options.environments.includes(i.environmentId)
);
```
Correct variable name from `instancesToDeleted` to `instancesToDelete`

There's a typo in the variable name `instancesToDeleted`. It should be `instancesToDelete` for clarity and consistency.
Apply this diff to fix the typo:
```diff
   // find the instances that need to be removed
-  const instancesToDeleted = existingInstances.filter(
+  const instancesToDelete = existingInstances.filter(
     (i) => !options.environments.includes(i.environmentId)
   );

   // delete the instances no longer selected
-  for (const instance of instancesToDeleted) {
+  for (const instance of instancesToDelete) {
     await tx.taskScheduleInstance.delete({
       where: {
         id: instance.id,
       },
     });
   }
```
Also applies to: 224-230
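The pruning logic the comment refers to can be illustrated standalone; the plain object shapes and IDs below are hypothetical stand-ins for the Prisma models.

```typescript
// An instance is slated for deletion when its environment is no longer
// among the selected environments.
type Instance = { id: string; environmentId: string };

function instancesToDelete(existing: Instance[], selectedEnvironments: string[]): Instance[] {
  return existing.filter((i) => !selectedEnvironments.includes(i.environmentId));
}

const existing: Instance[] = [
  { id: "inst_dev", environmentId: "env_dev" },
  { id: "inst_prod", environmentId: "env_prod" },
];

// only env_prod remains selected, so the dev instance is pruned
console.log(instancesToDelete(existing, ["env_prod"]).map((i) => i.id)); // → ["inst_dev"]
```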
```diff
 const result = await (async (tx) => {
   if (existingSchedule) {
     if (existingSchedule.type === "DECLARATIVE") {
       throw new ServiceValidationError("Cannot update a declarative schedule");
     }

-    return await this.#updateExistingSchedule(tx, existingSchedule, schedule, projectId);
+    return await this.#updateExistingSchedule(existingSchedule, schedule);
   } else {
-    return await this.#createNewSchedule(tx, schedule, projectId, deduplicationKey);
+    return await this.#createNewSchedule(schedule, projectId, deduplicationKey);
   }
-});
+})();
```
Remove unused parameter `tx` from the async function

The parameter `tx` in the async function is not used and can be removed to simplify the code.
Apply this diff to remove the unused parameter:
```diff
-const result = await (async (tx) => {
+const result = await (async () => {
   if (existingSchedule) {
     if (existingSchedule.type === "DECLARATIVE") {
       throw new ServiceValidationError("Cannot update a declarative schedule");
     }
     return await this.#updateExistingSchedule(existingSchedule, schedule);
   } else {
     return await this.#createNewSchedule(schedule, projectId, deduplicationKey);
   }
 })();
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const result = await (async () => {
  if (existingSchedule) {
    if (existingSchedule.type === "DECLARATIVE") {
      throw new ServiceValidationError("Cannot update a declarative schedule");
    }

    return await this.#updateExistingSchedule(existingSchedule, schedule);
  } else {
    return await this.#createNewSchedule(schedule, projectId, deduplicationKey);
  }
})();
```
@trigger.dev/build
trigger.dev
@trigger.dev/core
@trigger.dev/sdk
@trigger.dev/react-hooks
commit: `c66aeb1` → `521a057`
Deleting a `TaskScheduleInstance` would cause the `UpsertTaskSchedule` service to throw a Prisma transaction timeout error, because of the cascading updates to the `TaskRun` table (setting the `scheduleInstanceId` to `null`).

I've added an index on the `TaskRun.scheduleInstanceId` column. I've also restructured the `UpsertTaskSchedule` service to do less work inside of the prisma transaction, and also increased the transaction timeout to 10s in the "update" path (where the deleting happens).

Summary by CodeRabbit
Release Notes

New Features

- `BackgroundWorker`, `TaskSchedule`, and `BulkActionGroup`.

Bug Fixes

Documentation

Database Changes

- New index on the `TaskRun` table.

These updates aim to enhance user experience and streamline task management within the application.