Add unique indexes for the oneTimeUseToken columns #1517


Merged: 1 commit merged into main on Dec 2, 2024

Conversation

@ericallam ericallam (Member) commented Nov 29, 2024

Summary by CodeRabbit

  • New Features

    • Introduced unique constraints for oneTimeUseToken in TaskRun and BatchTaskRun models to enhance data integrity.
    • Added new fields in the BatchTaskRun model for improved tracking and management of batch runs.
  • Improvements

    • Streamlined token generation process in the Home function, simplifying the logic for creating public access tokens.


changeset-bot bot commented Nov 29, 2024

⚠️ No Changeset found

Latest commit: dcb19be

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


coderabbitai bot (Contributor) commented Nov 29, 2024

Caution

Review failed

The pull request is closed.

Walkthrough

This pull request introduces multiple SQL migration scripts and updates to the Prisma schema. It adds unique constraints to the oneTimeUseToken column in both the TaskRun and BatchTaskRun tables, ensuring that each token is unique. Additionally, new fields are introduced in the BatchTaskRun model to enhance its functionality. The changes also simplify the token generation process in the Home function of the Next.js application by consolidating multiple calls into a single method.

Changes

File Change Summary
internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql Added unique index on oneTimeUseToken for TaskRun table with a warning for existing duplicates.
internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql Added unique index on oneTimeUseToken for BatchTaskRun table with a warning for existing duplicates.
internal-packages/database/prisma/schema.prisma Updated TaskRun and BatchTaskRun models to enforce uniqueness on oneTimeUseToken and added new fields to BatchTaskRun.
references/nextjs-realtime/src/app/page.tsx Simplified token generation in Home function by consolidating token creation into a single method call.

Possibly related PRs

Suggested reviewers

  • matt-aitken

🐰 In the garden of code, I hop with glee,
Unique tokens bloom, as bright as can be!
With fields anew, our tasks take flight,
In the realm of data, everything feels right!
So let’s celebrate this change today,
A leap for our project, hip-hip-hooray! 🌼✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool’s configuration or disable the tool if it’s a critical failure.

🔧 eslint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

references/nextjs-realtime/src/app/page.tsx

Oops! Something went wrong! :(

ESLint: 8.45.0

ESLint couldn't find the config "custom" to extend from. Please check that the name of the config is correct.

The config "custom" was referenced from the config file in "/.eslintrc.js".

If you still have problems, please stop by https://eslint.org/chat/help to chat with the team.


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 2b86a40 and dcb19be.

📒 Files selected for processing (4)
  • internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1 hunks)
  • internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1 hunks)
  • internal-packages/database/prisma/schema.prisma (2 hunks)
  • references/nextjs-realtime/src/app/page.tsx (1 hunks)


@coderabbitai coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1)

7-8: LGTM! Good use of CONCURRENTLY for zero-downtime migration

The implementation is solid:

  • Uses CONCURRENTLY to avoid table locks during index creation
  • Includes IF NOT EXISTS for idempotency
  • Follows proper naming convention for the index

While this implementation is correct, consider these operational aspects:

  1. CONCURRENTLY takes longer but is safer for production
  2. Monitor the index creation progress in production as it might take time on large tables
  3. Have a rollback plan ready in case of failure
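
For reference, a zero-downtime migration of the shape described above would look roughly like the following sketch. The index name here follows Prisma's usual `Table_column_key` naming convention and is an assumption, not copied from the PR:

```sql
-- Sketch of the unique-index migration discussed above (index name assumed).
-- CONCURRENTLY builds the index without holding an exclusive lock on "TaskRun";
-- IF NOT EXISTS makes the statement safe to re-run.
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS "TaskRun_oneTimeUseToken_key"
    ON "TaskRun" ("oneTimeUseToken");
```

Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block in PostgreSQL, which is why migrations like this are typically executed outside the default transactional wrapper.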
internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1)

7-8: LGTM! Well-implemented index creation.

The implementation is solid with good practices:

  • Using CONCURRENTLY prevents table locks during index creation
  • IF NOT EXISTS makes the migration idempotent

Consider implementing these operational safeguards:

  1. Add monitoring for failed token uniqueness violations
  2. Set up alerts for high token collision rates
  3. Implement a token rotation policy if tokens are stored long-term
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 364ea56 and 2b86a40.

📒 Files selected for processing (4)
  • internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1 hunks)
  • internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1 hunks)
  • internal-packages/database/prisma/schema.prisma (2 hunks)
  • references/nextjs-realtime/src/app/page.tsx (1 hunks)
🔇 Additional comments (4)
internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql (1)

1-6: Verify existing data before running the migration

The warning correctly indicates the risk of failure due to duplicate tokens. Before applying this migration in production, it's crucial to verify the uniqueness of existing tokens.

Run this script to check for potential duplicate tokens:

✅ Verification successful

The warning is appropriate, but no data validation is needed

The migration is safe to proceed without data validation because:

  1. The oneTimeUseToken column was just added as a nullable column in the previous migration
  2. Being a newly added nullable column, there can't be any duplicate values in existing records
  3. The CONCURRENTLY creation of the index ensures no table locks during the operation
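
The reasoning above also rests on a PostgreSQL detail worth making explicit: by default, a unique index treats NULLs as distinct from one another, so any number of rows may carry a NULL oneTimeUseToken without violating the constraint. A minimal illustration, using a hypothetical table rather than anything from this PR:

```sql
-- Hypothetical table demonstrating NULL handling under a unique index.
CREATE TABLE demo (token TEXT);
CREATE UNIQUE INDEX demo_token_key ON demo (token);

INSERT INTO demo VALUES (NULL);
INSERT INTO demo VALUES (NULL);  -- succeeds: NULLs are distinct by default
INSERT INTO demo VALUES ('abc');
INSERT INTO demo VALUES ('abc'); -- fails: duplicate key violates unique constraint
```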
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Search for any SQL queries that might help understand the current data state
# and potential duplicates in the oneTimeUseToken column

# Look for existing queries or tests that interact with oneTimeUseToken
rg -l "oneTimeUseToken|TaskRun" --type sql

# Look for any existing unique constraints or indexes on TaskRun
rg -l "CREATE.*INDEX.*TaskRun|CREATE.*CONSTRAINT.*TaskRun" --type sql

Length of output: 13415


Script:

#!/bin/bash
# Let's examine the migration that added the oneTimeUseToken column to understand its nullability
# and check if there's any existing data cleanup or validation
cat internal-packages/database/prisma/migrations/20241129141824_add_one_time_use_tokens/migration.sql

# Also check the actual index creation in the current migration
cat internal-packages/database/prisma/migrations/20241129141900_add_one_time_use_index_task_run/migration.sql

Length of output: 676

internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql (1)

1-6: Verify existing data before applying the migration

The warning indicates potential failure if duplicate tokens exist. It's crucial to verify and clean up any duplicate data before applying this migration in production.

Run this script to check for potential duplicates:

✅ Verification successful

Let me check if there's any additional context about the BatchTaskRun table and oneTimeUseToken column that could help us understand the data model and potential impact.


Let me check the related migrations to understand the context of the oneTimeUseToken field and its introduction.


Safe to proceed with the migration

Based on the migration history, this is a safe operation because:

  1. The oneTimeUseToken column was just added in the previous migration (20241129141824) as a nullable column
  2. This is a new column that was empty (NULL) when created
  3. The unique index is being created with CONCURRENTLY which is safe for production
  4. Since the column was just introduced as nullable and empty, there cannot be any duplicate values
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Generate SQL to identify duplicate oneTimeUseTokens in BatchTaskRun table
echo "-- SQL to identify duplicates:"
echo "SELECT \"oneTimeUseToken\", COUNT(*)"
echo "FROM \"BatchTaskRun\""
echo "WHERE \"oneTimeUseToken\" IS NOT NULL"
echo "GROUP BY \"oneTimeUseToken\""
echo "HAVING COUNT(*) > 1;"

Length of output: 381


Script:

#!/bin/bash
# Check schema definition and related files
fd BatchTaskRun --exec grep -l "BatchTaskRun" {} \;

# Look for schema definition
fd "schema.prisma" --exec grep -A 10 -B 10 "BatchTaskRun" {} \;

# Check for related migrations
fd --glob "*.sql" --exec grep -l "BatchTaskRun" {} \;

Length of output: 9786


Script:

#!/bin/bash
# Check the migration that introduced oneTimeUseToken
cat ./internal-packages/database/prisma/migrations/20241129141824_add_one_time_use_tokens/migration.sql

# Check if there are any existing constraints on oneTimeUseToken
cat ./internal-packages/database/prisma/migrations/20241129142057_add_one_time_use_token_index_batch_task_run/migration.sql

Length of output: 707

references/nextjs-realtime/src/app/page.tsx (2)

9-9: Verify the scope requirements for both trigger components.

The token generation has been simplified, which is good for maintainability. However, we should verify that the "openai-streaming" scope is sufficient for both TriggerButton and TriggerButtonWithStreaming components.

Let's check the component implementations:

✅ Verification successful

The "openai-streaming" scope is correctly used in both components

Based on the code analysis, both components (TriggerButton and TriggerButtonWithStreaming) are using the token exclusively for OpenAI streaming operations:

  • TriggerButton uses useTaskTrigger<typeof openaiStreaming>("openai-streaming", ...) for basic task triggering
  • TriggerButtonWithStreaming uses useRealtimeTaskTriggerWithStreams<typeof openaiStreaming>("openai-streaming", ...) for streaming responses

Both components use identical OpenAI-related functionality (same model, similar prompts) and the simplified scope matches their requirements.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the scope requirements in both trigger components
# Expected: Components should use the token only for OpenAI streaming operations

# Check TriggerButton implementation
ast-grep --pattern 'function TriggerButton({ accessToken }: { accessToken: string }) {
  $$$
}'

# Check TriggerButtonWithStreaming implementation
ast-grep --pattern 'function TriggerButtonWithStreaming({ accessToken }: { accessToken: string }) {
  $$$
}'

Length of output: 11181


9-9: Ensure token uniqueness is handled properly.

Since this PR adds unique constraints to oneTimeUseToken columns, we should verify that createTriggerPublicToken generates unique tokens.

Let's check the token generation implementation:

✅ Verification successful

Let me search for the token generation implementation since it wasn't found in the previous results.


Let me search for the implementation of createTriggerPublicToken since we haven't found it yet.


Based on the search results, I can now generate the final response:

Token uniqueness is properly handled through database constraints and JWT-based implementation

The token generation is implemented using JWT (JSON Web Tokens) with proper security measures:

  • Tokens are generated with unique claims and scopes using the SignJWT class
  • Database has unique constraints on oneTimeUseToken columns
  • The system includes proper error handling for token collisions through Prisma's unique constraints
  • Tokens have proper expiration time management through JWT claims
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the token generation implementation
# Expected: Should find token generation logic that ensures uniqueness

# Search for createTriggerPublicToken implementation
ast-grep --pattern 'createTriggerPublicToken($$$) {
  $$$
}'

# Search for any token collision handling
rg -A 5 'oneTimeUseToken'

Length of output: 10185


Script:

#!/bin/bash
# Search for getOneTimeUseToken implementation since it's used in the routes
ast-grep --pattern 'getOneTimeUseToken($$$) {
  $$$
}'

# Also search for any files containing this function name
rg -A 5 'getOneTimeUseToken'

# Search for any crypto/random related token generation
rg -A 5 'crypto|randomBytes|randomUUID'

Length of output: 42102


Script:

#!/bin/bash
# Search for createTriggerPublicToken implementation
ast-grep --pattern 'createTriggerPublicToken($$$) {
  $$$
}'

# Also search for any files containing this function name
rg -A 5 'createTriggerPublicToken'

# Search for any JWT or token generation related code
rg -A 5 'jwt|token.*generate|generate.*token'

Length of output: 37984

@@ -1791,8 +1790,7 @@ model TaskRun {

   maxDurationInSeconds Int?

-  // TODO: uncomment this
-  // @@unique([oneTimeUseToken])
+  @@unique([oneTimeUseToken])

⚠️ Potential issue

Ensure existing data complies with the new unique constraint on oneTimeUseToken in TaskRun.

Adding a unique constraint to the oneTimeUseToken field enforces data integrity and uniqueness. However, if there are existing duplicate values in this column, the migration will fail. Please verify that the existing data does not contain duplicates before applying this change.
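
A duplicate check of the kind suggested here can be run directly against the database before applying the migration. This mirrors the query the bot generated for BatchTaskRun earlier in the review, adapted for TaskRun, with identifiers quoted because Postgres folds unquoted camelCase names to lowercase:

```sql
-- Find any oneTimeUseToken values appearing more than once in "TaskRun".
SELECT "oneTimeUseToken", COUNT(*) AS occurrences
FROM "TaskRun"
WHERE "oneTimeUseToken" IS NOT NULL
GROUP BY "oneTimeUseToken"
HAVING COUNT(*) > 1;
```

An empty result set means the unique index can be created safely.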

@@ -2180,8 +2177,7 @@ model BatchTaskRun {

   dependentTaskAttemptId String?
   runDependencies TaskRunDependency[] @relation("dependentBatchRun")

-  // TODO: uncomment this
-  // @@unique([oneTimeUseToken])
+  @@unique([oneTimeUseToken])

⚠️ Potential issue

Ensure existing data complies with the new unique constraint on oneTimeUseToken in BatchTaskRun.

Similar to the TaskRun model, adding a unique constraint to the oneTimeUseToken field in the BatchTaskRun model requires that all existing values are unique. Ensure there are no duplicates to prevent migration failures.


pkg-pr-new bot commented Dec 2, 2024

@trigger.dev/react-hooks

npm i https://pkg.pr.new/triggerdotdev/trigger.dev/@trigger.dev/react-hooks@1517

@trigger.dev/rsc

npm i https://pkg.pr.new/triggerdotdev/trigger.dev/@trigger.dev/rsc@1517

@trigger.dev/sdk

npm i https://pkg.pr.new/triggerdotdev/trigger.dev/@trigger.dev/sdk@1517

@trigger.dev/build

npm i https://pkg.pr.new/triggerdotdev/trigger.dev/@trigger.dev/build@1517

@trigger.dev/core

npm i https://pkg.pr.new/triggerdotdev/trigger.dev/@trigger.dev/core@1517

trigger.dev

npm i https://pkg.pr.new/triggerdotdev/trigger.dev@1517

commit: dcb19be

@ericallam ericallam force-pushed the one-time-use-token-indexes branch from 2b86a40 to dcb19be Compare December 2, 2024 13:45
@ericallam ericallam merged commit 3ab7eb9 into main Dec 2, 2024
@ericallam ericallam deleted the one-time-use-token-indexes branch December 2, 2024 13:45