Add unique indexes for the oneTimeUseToken columns #1517
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

This pull request introduces multiple SQL migration scripts and updates to the Prisma schema. It adds unique constraints to the oneTimeUseToken columns in the TaskRun and BatchTaskRun models.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1)

Lines 7-8: LGTM! Good use of CONCURRENTLY for a zero-downtime migration.

The implementation is solid:
- Uses CONCURRENTLY to avoid table locks during index creation
- Includes IF NOT EXISTS for idempotency
- Follows the proper naming convention for the index

While this implementation is correct, consider these operational aspects (see the sketch below):
- CONCURRENTLY takes longer but is safer for production
- Monitor the index creation progress in production, as it might take time on large tables
- Have a rollback plan ready in case of failure
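For reference, a concurrent unique index migration of the kind described above typically looks like the following. This is a sketch, not a quote from the actual migration file, and the index name is an assumption based on Prisma's default Table_column_key convention:

-- Sketch of the pattern the review describes (not the literal migration file).
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS "TaskRun_oneTimeUseToken_key"
  ON "TaskRun" ("oneTimeUseToken");

-- Progress can be watched while the build runs (PostgreSQL 12+):
SELECT phase, blocks_done, blocks_total
FROM pg_stat_progress_create_index;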
internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1)

Lines 7-8: LGTM! Well-implemented index creation.

The implementation is solid with good practices:
- Using CONCURRENTLY prevents table locks during index creation
- IF NOT EXISTS makes the migration idempotent

Consider implementing these operational safeguards (a sketch follows this list):
- Add monitoring for failed token uniqueness violations
- Set up alerts for high token collision rates
- Implement a token rotation policy if tokens are stored long-term
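One way to back the first two suggestions, assuming a PostgreSQL database: a failed CREATE INDEX CONCURRENTLY leaves an invalid index behind that must be dropped before retrying, and collisions against the live index surface as SQLSTATE 23505 errors that can drive alerts.

-- Detect invalid indexes left behind by a failed CONCURRENTLY build;
-- these must be removed (DROP INDEX CONCURRENTLY) before retrying.
SELECT c.relname AS index_name
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE NOT i.indisvalid;

-- Collision attempts after the index is live fail with SQLSTATE 23505
-- (unique_violation); counting those errors in application logs gives a
-- simple token-collision-rate signal.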
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1 hunks)
- internal-packages/database/prisma/schema.prisma (2 hunks)
- references/nextjs-realtime/src/app/page.tsx (1 hunks)
🔇 Additional comments (4)
internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1)

Lines 1-6: Verify existing data before running the migration.
The warning correctly indicates the risk of failure due to duplicate tokens. Before applying this migration in production, it's crucial to verify the uniqueness of existing tokens.
Run this script to check for potential duplicate tokens:
✅ Verification successful
The warning is appropriate, but no data validation is needed
The migration is safe to proceed without data validation because (see the sketch after this list):
- The oneTimeUseToken column was just added as a nullable column in the previous migration
- Being a newly added nullable column, it can't hold duplicate values in existing records
- The CONCURRENTLY creation of the index ensures no table locks during the operation
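A minimal sketch of why the nullable column makes this safe. The ScratchRun table is hypothetical, used only to demonstrate PostgreSQL's default behaviour of treating NULLs as distinct under a unique index:

CREATE TABLE "ScratchRun" ("id" serial PRIMARY KEY, "oneTimeUseToken" text);
CREATE UNIQUE INDEX "ScratchRun_oneTimeUseToken_key" ON "ScratchRun" ("oneTimeUseToken");

-- Multiple NULLs never conflict under a unique index, so pre-existing rows
-- that have no token cannot block the migration.
INSERT INTO "ScratchRun" ("oneTimeUseToken") VALUES (NULL), (NULL), (NULL);  -- succeeds

-- Only non-NULL duplicates are rejected.
INSERT INTO "ScratchRun" ("oneTimeUseToken") VALUES ('tok_1'), ('tok_1');    -- fails: unique_violation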
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for any SQL queries that might help understand the current data state
# and potential duplicates in the oneTimeUseToken column
# Look for existing queries or tests that interact with oneTimeUseToken
rg -l "oneTimeUseToken|TaskRun" --type sql
# Look for any existing unique constraints or indexes on TaskRun
rg -l "CREATE.*INDEX.*TaskRun|CREATE.*CONSTRAINT.*TaskRun" --type sql
Length of output: 13415
Script:
#!/bin/bash
# Let's examine the migration that added the oneTimeUseToken column to understand its nullability
# and check if there's any existing data cleanup or validation
cat internal-packages/database/prisma/migrations/20241129141824_add_one_time_use_tokens/migration.sql
# Also check the actual index creation in the current migration
cat internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql
Length of output: 676
internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1)

Lines 1-6: Verify existing data before applying the migration.
The warning indicates potential failure if duplicate tokens exist. It's crucial to verify and clean up any duplicate data before applying this migration in production.
Run this script to check for potential duplicates:
✅ Verification successful
Let me check if there's any additional context about the BatchTaskRun table and oneTimeUseToken column that could help us understand the data model and potential impact.
Let me check the related migrations to understand the context of the oneTimeUseToken field and its introduction.
Safe to proceed with the migration
Based on the migration history, this is a safe operation because:
- The oneTimeUseToken column was just added in the previous migration (20241129141824) as a nullable column
- This is a new column that was empty (NULL) when created
- The unique index is being created with CONCURRENTLY, which is safe for production
- Since the column was just introduced as nullable and empty, there cannot be any duplicate values
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Generate SQL to identify duplicate oneTimeUseTokens in BatchTaskRun table
echo "-- SQL to identify duplicates:"
echo "SELECT oneTimeUseToken, COUNT(*)"
echo "FROM \"BatchTaskRun\""
echo "WHERE oneTimeUseToken IS NOT NULL"
echo "GROUP BY oneTimeUseToken"
echo "HAVING COUNT(*) > 1;"
Length of output: 381
Script:
#!/bin/bash
# Check schema definition and related files
fd BatchTaskRun --exec grep -l "BatchTaskRun" {} \;
# Look for schema definition
fd "schema.prisma" --exec grep -A 10 -B 10 "BatchTaskRun" {} \;
# Check for related migrations
fd --glob "*.sql" --exec grep -l "BatchTaskRun" {} \;
Length of output: 9786
Script:
#!/bin/bash
# Check the migration that introduced oneTimeUseToken
cat ./internal-packages/database/prisma/migrations/20241129141824_add_one_time_use_tokens/migration.sql
# Check if there are any existing constraints on oneTimeUseToken
cat ./internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql
Length of output: 707
references/nextjs-realtime/src/app/page.tsx (2)

Line 9: Verify the scope requirements for both trigger components.
The token generation has been simplified, which is good for maintainability. However, we should verify that the "openai-streaming" scope is sufficient for both the TriggerButton and TriggerButtonWithStreaming components.
Let's check the component implementations:
✅ Verification successful
The "openai-streaming" scope is correctly used in both components
Based on the code analysis, both components (TriggerButton
and TriggerButtonWithStreaming
) are using the token exclusively for OpenAI streaming operations:
TriggerButton
usesuseTaskTrigger<typeof openaiStreaming>("openai-streaming", ...)
for basic task triggeringTriggerButtonWithStreaming
usesuseRealtimeTaskTriggerWithStreams<typeof openaiStreaming>("openai-streaming", ...)
for streaming responses
Both components use identical OpenAI-related functionality (same model, similar prompts) and the simplified scope matches their requirements.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the scope requirements in both trigger components
# Expected: Components should use the token only for OpenAI streaming operations
# Check TriggerButton implementation
ast-grep --pattern 'function TriggerButton({ accessToken }: { accessToken: string }) {
$$$
}'
# Check TriggerButtonWithStreaming implementation
ast-grep --pattern 'function TriggerButtonWithStreaming({ accessToken }: { accessToken: string }) {
$$$
}'
Length of output: 11181
Line 9: Ensure token uniqueness is handled properly.
Since this PR adds unique constraints to the oneTimeUseToken columns, we should verify that createTriggerPublicToken generates unique tokens.
Let's check the token generation implementation:
✅ Verification successful
Let me search for the token generation implementation since it wasn't found in the previous results.
Let me search for the implementation of createTriggerPublicToken since we haven't found it yet.
Based on the search results, I can now generate the final response:
Token uniqueness is properly handled through database constraints and a JWT-based implementation.
The token generation is implemented using JWT (JSON Web Tokens) with proper security measures (a sketch of the collision behaviour follows this list):
- Tokens are generated with unique claims and scopes using the SignJWT class
- The database has unique constraints on the oneTimeUseToken columns
- The system includes proper error handling for token collisions through Prisma's unique constraints
- Tokens have proper expiration time management through JWT claims
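A conceptual sketch of how the unique index backstops token reuse at the database level. The run ids and token value are hypothetical, and the exact write path in the application is an assumption:

-- First write attaches the token to a run and succeeds.
UPDATE "TaskRun"
SET "oneTimeUseToken" = 'tok_abc123'   -- hypothetical token value
WHERE "id" = 'run_1';                  -- hypothetical run id

-- Any later write of the same token to a different run violates the unique
-- index: PostgreSQL raises SQLSTATE 23505, which Prisma surfaces as error P2002.
UPDATE "TaskRun"
SET "oneTimeUseToken" = 'tok_abc123'
WHERE "id" = 'run_2';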
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the token generation implementation
# Expected: Should find token generation logic that ensures uniqueness
# Search for createTriggerPublicToken implementation
ast-grep --pattern 'createTriggerPublicToken($$$) {
$$$
}'
# Search for any token collision handling
rg -A 5 'oneTimeUseToken'
Length of output: 10185
Script:
#!/bin/bash
# Search for getOneTimeUseToken implementation since it's used in the routes
ast-grep --pattern 'getOneTimeUseToken($$$) {
$$$
}'
# Also search for any files containing this function name
rg -A 5 'getOneTimeUseToken'
# Search for any crypto/random related token generation
rg -A 5 'crypto|randomBytes|randomUUID'
Length of output: 42102
Script:
#!/bin/bash
# Search for createTriggerPublicToken implementation
ast-grep --pattern 'createTriggerPublicToken($$$) {
$$$
}'
# Also search for any files containing this function name
rg -A 5 'createTriggerPublicToken'
# Search for any JWT or token generation related code
rg -A 5 'jwt|token.*generate|generate.*token'
Length of output: 37984
@@ -1791,8 +1790,7 @@ model TaskRun {

   maxDurationInSeconds Int?

-  // TODO: uncomment this
-  // @@unique([oneTimeUseToken])
+  @@unique([oneTimeUseToken])
Ensure existing data complies with the new unique constraint on oneTimeUseToken in TaskRun.
Adding a unique constraint to the oneTimeUseToken field enforces data integrity and uniqueness. However, if there are existing duplicate values in this column, the migration will fail. Please verify that the existing data does not contain duplicates before applying this change.
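A hedged version of that check, mirroring the duplicate query the bot generated for BatchTaskRun above:

-- Any rows returned here must be resolved before the unique index can be built.
SELECT "oneTimeUseToken", COUNT(*)
FROM "TaskRun"
WHERE "oneTimeUseToken" IS NOT NULL
GROUP BY "oneTimeUseToken"
HAVING COUNT(*) > 1;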
@@ -2180,8 +2177,7 @@ model BatchTaskRun {
   dependentTaskAttemptId String?
   runDependencies        TaskRunDependency[] @relation("dependentBatchRun")

-  // TODO: uncomment this
-  // @@unique([oneTimeUseToken])
+  @@unique([oneTimeUseToken])
Ensure existing data complies with the new unique constraint on oneTimeUseToken in BatchTaskRun.
Similar to the TaskRun model, adding a unique constraint to the oneTimeUseToken field in the BatchTaskRun model requires that all existing values are unique. Ensure there are no duplicates to prevent migration failures.
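If duplicates do turn up, one possible cleanup is to null out all but one row per token before migrating. This is only a sketch: it assumes BatchTaskRun has id and createdAt columns and that keeping the earliest row per token is acceptable.

-- Null out every duplicate except the earliest row for each token.
WITH ranked AS (
  SELECT "id",
         ROW_NUMBER() OVER (PARTITION BY "oneTimeUseToken" ORDER BY "createdAt") AS rn
  FROM "BatchTaskRun"
  WHERE "oneTimeUseToken" IS NOT NULL
)
UPDATE "BatchTaskRun" AS b
SET "oneTimeUseToken" = NULL
FROM ranked AS r
WHERE b."id" = r."id"
  AND r.rn > 1;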
@trigger.dev/react-hooks
@trigger.dev/rsc
@trigger.dev/sdk
@trigger.dev/build
@trigger.dev/core
trigger.dev
Force-pushed from 2b86a40 to dcb19be (Compare)
Summary by CodeRabbit
New Features
- Unique constraints on oneTimeUseToken in the TaskRun and BatchTaskRun models to enhance data integrity.
- Updates to the BatchTaskRun model for improved tracking and management of batch runs.

Improvements