Conversation

@yiweichi yiweichi commented Jun 10, 2025

Purpose or design rationale of this PR

Describe your change. Make sure to answer these three questions: What does this PR do? Why does it do it? How does it do it?
Add a batch_hash column to the blob_upload table, which helps handle batch reverts.

PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • build: Changes that affect the build system or external dependencies (example scopes: yarn, eslint, typescript)
  • ci: Changes to our CI configuration files and scripts (example scopes: vercel, github, cypress)
  • docs: Documentation-only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug, adds a feature, nor improves performance
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • test: Adding missing tests or correcting existing tests

Deployment tag versioning

Has tag in common/version.go been updated or have you added bump-version label to this PR?

  • No, this PR doesn't involve a new deployment, git tag, docker image tag
  • Yes

Breaking change label

Does this PR have the breaking-change label?

  • No, this PR is not a breaking change
  • Yes

Summary by CodeRabbit

  • New Features

    • Introduced a new batch identifier to enhance tracking and management of blob upload batches.
    • Added advanced filtering and indexing to improve data retrieval performance.
  • Bug Fixes

    • Improved accuracy and reliability in identifying and processing unuploaded batches across platforms.
  • Refactor

    • Optimized batch upload handling logic for better maintainability and scalability.


coderabbitai bot commented Jun 10, 2025

Walkthrough

A new batch_hash column was added to the blob_upload table, and related indexes and unique constraints were updated to include this column. The logic for determining the first unuploaded batch by platform was refactored from the ORM layer to the controller, and the ORM methods and struct were updated to support the new schema.

Changes

  • database/migrate/migrations/00027_ blob_upload.sql: Added batch_hash column to blob_upload; updated unique and composite indexes to include batch_hash.
  • rollup/internal/controller/blob_uploader/blob_uploader.go: Moved the logic for fetching the first unuploaded batch by platform into the controller; updated method calls to include batch_hash. Added new method GetFirstUnuploadedBatchByPlatform.
  • rollup/internal/orm/batch.go: Removed the GetFirstUnuploadedBatchByPlatform method from the Batch ORM.
  • rollup/internal/orm/blob_upload.go: Added batch_hash field; removed primary key tags from BatchIndex and Platform; added methods for querying the next batch index and blob uploads; updated the InsertOrUpdateBlobUpload signature to include batch_hash and replaced the upsert logic with an explicit query and conditional insert/update.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant BlobUploader
    participant BlobUploadORM
    participant BatchORM

    Client->>BlobUploader: UploadBlobToS3(platform)
    BlobUploader->>BlobUploader: GetFirstUnuploadedBatchByPlatform(startBatch, platform)
    BlobUploader->>BlobUploadORM: GetNextBatchIndexToUploadByPlatform(startBatch, platform)
    BlobUploadORM-->>BlobUploader: batchIndex
    BlobUploader->>BatchORM: GetBatchByIndex(batchIndex)
    BatchORM-->>BlobUploader: batch
    alt Parent batch not uploaded
        BlobUploader->>BlobUploadORM: GetBlobUploads(parentBatchIndex, platform)
        BlobUploadORM-->>BlobUploader: upload status
        BlobUploader->>BlobUploader: Decrement batchIndex and retry
    end
    alt Batch committed
        BlobUploader-->>Client: batch
    else Batch not committed or not found
        BlobUploader-->>Client: nil
    end
    BlobUploader->>BlobUploadORM: InsertOrUpdateBlobUpload(batchIndex, batchHash, platform, status)
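The decrement-and-retry flow in the diagram above can be sketched in plain Go. All types, names, and data here are hypothetical stand-ins for illustration, not the PR's actual ORM code:

```go
package main

import "fmt"

// batch is a toy stand-in for the Batch ORM record.
type batch struct {
	index     uint64
	hash      string
	committed bool
}

// getFirstUnuploadedBatch walks backwards from the next candidate index until
// it finds a batch whose parent has already been uploaded, mirroring the
// "decrement batchIndex and retry" loop in the sequence diagram.
func getFirstUnuploadedBatch(nextIndex uint64, batches map[uint64]batch, uploaded map[uint64]bool) *batch {
	idx := nextIndex
	for idx > 0 {
		if uploaded[idx-1] { // parent batch already uploaded: stop here
			break
		}
		idx-- // parent not uploaded yet: retry with the earlier batch
	}
	b, ok := batches[idx]
	if !ok || !b.committed {
		return nil // batch not found or not committed yet
	}
	return &b
}

func main() {
	batches := map[uint64]batch{
		3: {index: 3, hash: "0xabc", committed: true},
		4: {index: 4, hash: "0xdef", committed: true},
	}
	uploaded := map[uint64]bool{2: true} // batch 2 uploaded, batch 3 not yet
	if b := getFirstUnuploadedBatch(4, batches, uploaded); b != nil {
		fmt.Println(b.index, b.hash) // 3 0xabc
	}
}
```

The point of the sketch is only the control flow: the real controller issues database queries where this version consults maps.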

Poem

In the warren where hashes now hop,
A new column joins the blob upload crop.
Indices shift, constraints realign,
Batch hashes and logic now intertwine.
The ORM and controller each play their part—
A well-ordered waltz, database and code, smart!
🐇✨



codecov-commenter commented Jun 10, 2025

Codecov Report

Attention: Patch coverage is 0% with 102 lines in your changes missing coverage. Please review.

Project coverage is 40.16%. Comparing base (4ee459a) to head (90440f0).

Files with missing lines (patch % / lines missing):
  • rollup/internal/orm/blob_upload.go: 0.00%, 61 missing ⚠️
  • ...internal/controller/blob_uploader/blob_uploader.go: 0.00%, 41 missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1677      +/-   ##
===========================================
- Coverage    40.21%   40.16%   -0.05%     
===========================================
  Files          232      232              
  Lines        18391    18448      +57     
===========================================
+ Hits          7396     7410      +14     
- Misses       10275    10315      +40     
- Partials       720      723       +3     
Flag coverage:
  • coordinator: 34.55% <ø> (+0.42%) ⬆️
  • database: 42.05% <ø> (ø)
  • rollup: 46.30% <0.00%> (-0.55%) ⬇️

Flags with carried forward coverage won't be shown.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (1)
database/migrate/migrations/00027_ blob_upload.sql (1)

1-1: ⚠️ Potential issue

Fix the filename - remove extra space after migration number.

The filename contains an extra space: 00027_ blob_upload.sql. This could cause issues with migration tools. Rename it to 00027_blob_upload.sql.

🧹 Nitpick comments (1)
rollup/internal/orm/blob_upload.go (1)

73-81: Clarify the ordering behavior in GetBlobUploads.

The method always appends ORDER BY batch_index ASC after applying the provided orderByList. This could lead to unexpected results if orderByList already contains a batch_index ordering.

Consider documenting this behavior or making it conditional:

+// If batch_index is not in orderByList, append it for consistent ordering
+hasBatchIndexOrder := false
 for _, orderBy := range orderByList {
+    if strings.Contains(strings.ToLower(orderBy), "batch_index") {
+        hasBatchIndexOrder = true
+    }
     db = db.Order(orderBy)
 }

 if limit > 0 {
     db = db.Limit(limit)
 }

-db = db.Order("batch_index ASC")
+if !hasBatchIndexOrder {
+    db = db.Order("batch_index ASC")
+}

Note: You'll need to import the "strings" package if you implement this suggestion.
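The guard suggested above can also be pulled out and unit-tested independently of GORM. A minimal standalone sketch (the function name is hypothetical, not from this PR):

```go
package main

import (
	"fmt"
	"strings"
)

// hasBatchIndexOrder reports whether any entry in orderByList already
// references the batch_index column, so the default "batch_index ASC"
// ordering can be skipped instead of silently appended.
func hasBatchIndexOrder(orderByList []string) bool {
	for _, orderBy := range orderByList {
		if strings.Contains(strings.ToLower(orderBy), "batch_index") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasBatchIndexOrder([]string{"batch_index DESC"})) // true
	fmt.Println(hasBatchIndexOrder([]string{"status ASC"}))       // false
}
```

A substring check like this is deliberately loose (it would also match a hypothetical parent_batch_index column); an exact column match would be stricter if that ever matters.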

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4ee459a and b255336.

📒 Files selected for processing (4)
  • database/migrate/migrations/00027_ blob_upload.sql (2 hunks)
  • rollup/internal/controller/blob_uploader/blob_uploader.go (6 hunks)
  • rollup/internal/orm/batch.go (0 hunks)
  • rollup/internal/orm/blob_upload.go (3 hunks)
💤 Files with no reviewable changes (1)
  • rollup/internal/orm/batch.go
⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
  • GitHub Check: tests
  • GitHub Check: tests
  • GitHub Check: check
  • GitHub Check: check

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
rollup/internal/orm/blob_upload.go (1)

55-55: Wrong struct name in error message (previous feedback still unresolved)

The message still says Batch.GetFirstUnuploadedBatchIndexByPlatform.
Replace Batch with BlobUpload to keep stack traces accurate.

-			return 0, fmt.Errorf("Batch.GetFirstUnuploadedBatchIndexByPlatform error: %w", err)
+			return 0, fmt.Errorf("BlobUpload.GetFirstUnuploadedBatchIndexByPlatform error: %w", err)
🧹 Nitpick comments (1)
rollup/internal/orm/blob_upload.go (1)

104-108: Upsert should also refresh updated_at

During the conflict update you only set status, leaving updated_at stale.
Include it so dashboards and audits reflect the true modification time.

-		DoUpdates: clause.AssignmentColumns([]string{"status"}),
+		DoUpdates: clause.AssignmentColumns([]string{"status", "updated_at"}),

📥 Commits

Reviewing files that changed from the base of the PR and between b255336 and e6b84a2.

📒 Files selected for processing (2)
  • rollup/internal/controller/blob_uploader/blob_uploader.go (6 hunks)
  • rollup/internal/orm/blob_upload.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • rollup/internal/controller/blob_uploader/blob_uploader.go
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/orm/blob_upload.go (2)
common/types/db.go (3)
  • BlobStoragePlatform (358-358)
  • BlobUploadStatusUploaded (339-339)
  • BlobUploadStatus (331-331)
bridge-history-api/internal/orm/migrate/migrate.go (1)
  • Status (54-56)

@yiweichi yiweichi requested review from colinlyguo and georgehao June 11, 2025 07:21
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
rollup/internal/orm/blob_upload.go (2)

19-22: Declare composite primary key to restore ORM identity

The struct still lacks primaryKey tags on batch_index, batch_hash, and platform.
Without a primary key GORM falls back to “no primary key mode”, disabling important behaviours (updates via Save, optimistic locking, etc.) and emitting runtime warnings.

-	BatchIndex uint64 `json:"batch_index" gorm:"column:batch_index"`
-	BatchHash  string `json:"batch_hash" gorm:"column:batch_hash"`
-	Platform   int16  `json:"platform" gorm:"column:platform"`
+	BatchIndex uint64 `json:"batch_index" gorm:"primaryKey;column:batch_index"`
+	BatchHash  string `json:"batch_hash"  gorm:"primaryKey;column:batch_hash"`
+	Platform   int16  `json:"platform"    gorm:"primaryKey;column:platform"`

68-70: Use map binding instead of looping Where to avoid SQL-injection foot-guns

db.Where(key, value) requires key to contain its own placeholder which is brittle and error-prone. A single db.Where(fields) lets GORM build the placeholders safely and keeps the code shorter.

-for key, value := range fields {
-    db = db.Where(key, value)
-}
+db = db.Where(fields)
🧹 Nitpick comments (2)
rollup/internal/orm/blob_upload.go (2)

44-45: Redundant deleted_at IS NULL predicate

GORM automatically adds deleted_at IS NULL for soft-deleted models. Re-stating it is harmless but clutters the query builder chain and risks divergence if the column name ever changes.


72-81: Ordering clause can contradict caller’s intention

Appending db.Order("batch_index ASC") after iterating orderByList means:

  • the final SQL sorts by the caller-supplied order(s) first, with batch_index ASC applied only as a tie-breaker;
  • if the caller already ordered by batch_index DESC, the appended ASC clause never changes the result and only clutters the query.

Either prepend this default order before orderByList, or document the override clearly.


📥 Commits

Reviewing files that changed from the base of the PR and between fc2be98 and dabcfca.

📒 Files selected for processing (1)
  • rollup/internal/orm/blob_upload.go (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/orm/blob_upload.go (1)
common/types/db.go (3)
  • BlobStoragePlatform (358-358)
  • BlobUploadStatusUploaded (339-339)
  • BlobUploadStatus (331-331)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
rollup/internal/orm/blob_upload.go (3)

19-22: Declare the composite key explicitly to avoid GORM edge-cases

None of the fields is marked as a primary key.
GORM will therefore treat the struct as having no primary key, which disables a few ORM behaviours (e.g. Save, optimistic‐locking, cascading updates) and may even log warnings during auto-migration.
Since the table's uniqueness is (batch_index, batch_hash, platform), mark those three columns as a composite primary key so GORM can reason about identity correctly.

-	BatchIndex uint64 `json:"batch_index" gorm:"column:batch_index"`
-	BatchHash  string `json:"batch_hash" gorm:"column:batch_hash"`
-	Platform   int16  `json:"platform" gorm:"column:platform"`
+	BatchIndex uint64 `json:"batch_index" gorm:"primaryKey;column:batch_index"`
+	BatchHash  string `json:"batch_hash"  gorm:"primaryKey;column:batch_hash"`
+	Platform   int16  `json:"platform"    gorm:"primaryKey;column:platform"`

68-70: Where(key, value) is brittle and bypasses placeholders

db.Where(key, value) expects key to contain the full SQL with its own placeholders (e.g. "platform = ?") – otherwise the generated SQL becomes invalid and opens room for accidental SQL-injection if key is ever constructed dynamically.

Refactor to let GORM build the condition:

-for key, value := range fields {
-	db = db.Where(key, value)
-}
+db = db.Where(fields)

This accepts a map directly, is safer, and shorter.
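As a GORM-free illustration of why map binding is safer (the helper below is hypothetical, not part of this PR): building predicates from a map keeps every value bound as a placeholder argument, so values never get spliced into the SQL string:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildWhere turns a field map into a parameterized WHERE clause plus its
// bound arguments. The values only appear in args, never in the SQL text,
// which is the property that passing a map to db.Where gives you in GORM.
func buildWhere(fields map[string]interface{}) (string, []interface{}) {
	keys := make([]string, 0, len(fields))
	for k := range fields {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic clause order

	conds := make([]string, 0, len(keys))
	args := make([]interface{}, 0, len(keys))
	for _, k := range keys {
		conds = append(conds, k+" = ?")
		args = append(args, fields[k])
	}
	return strings.Join(conds, " AND "), args
}

func main() {
	clause, args := buildWhere(map[string]interface{}{
		"platform":    int16(1),
		"batch_index": uint64(42),
	})
	fmt.Println(clause) // batch_index = ? AND platform = ?
	fmt.Println(args)
}
```

By contrast, db.Where(key, value) in a loop trusts each key to carry its own placeholder; one malformed key breaks the query or, if keys are ever built dynamically, opens the injection path the review describes.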


98-121: Potential race condition – replace "select-then-insert" with atomic upsert

Between the SELECT and CREATE another goroutine/instance can insert the same (batch_index, batch_hash, platform) causing a duplicate-key error.
GORM supports atomic upsert:

-var existing BlobUpload
-err := db.Where("batch_index = ? AND batch_hash = ? AND platform = ? AND deleted_at IS NULL",
-	batchIndex, batchHash, int16(platform),
-).First(&existing).Error
-
-if errors.Is(err, gorm.ErrRecordNotFound) {
-	newRecord := BlobUpload{
-		BatchIndex: batchIndex,
-		BatchHash:  batchHash,
-		Platform:   int16(platform),
-		Status:     int16(status),
-	}
-	if err := db.Create(&newRecord).Error; err != nil {
-		return fmt.Errorf("BlobUpload.InsertOrUpdateBlobUpload insert error: %w, batch index: %v, batch_hash: %v, platform: %v", err, batchIndex, batchHash, platform)
-	}
-	return nil
-} else if err != nil {
-	return fmt.Errorf("BlobUpload.InsertOrUpdateBlobUpload query error: %w, batch index: %v, batch_hash: %v, platform: %v", err, batchIndex, batchHash, platform)
-}
-
-if err := db.Model(&existing).Update("status", int16(status)).Error; err != nil {
-	return fmt.Errorf("BlobUpload.InsertOrUpdateBlobUpload update error: %w, batch index: %v, batch_hash: %v, platform: %v", err, batchIndex, batchHash, platform)
-}
+import "gorm.io/gorm/clause"
+
+newRecord := BlobUpload{
+	BatchIndex: batchIndex,
+	BatchHash:  batchHash,
+	Platform:   int16(platform),
+	Status:     int16(status),
+}
+
+if err := db.Clauses(clause.OnConflict{
+	Columns:   []clause.Column{{Name: "batch_index"}, {Name: "batch_hash"}, {Name: "platform"}},
+	DoUpdates: clause.Assignments(map[string]interface{}{"status": int16(status)}),
+}).Create(&newRecord).Error; err != nil {
+	return fmt.Errorf("BlobUpload.InsertOrUpdateBlobUpload upsert error: %w, batch index: %v, batch_hash: %v, platform: %v", err, batchIndex, batchHash, platform)
+}

This removes the race and simplifies the flow.
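Independent of GORM, the hazard is that "check, then insert" is two separate steps with a window in between. A toy in-memory analogue (all types hypothetical) shows the single-critical-section shape that INSERT ... ON CONFLICT gives you at the SQL level:

```go
package main

import (
	"fmt"
	"sync"
)

// key mirrors the table's (batch_index, batch_hash, platform) identity.
type key struct {
	batchIndex uint64
	batchHash  string
	platform   int16
}

// store is a toy stand-in for the blob_upload table.
type store struct {
	mu   sync.Mutex
	rows map[key]int16 // key -> status
}

// upsert inserts or updates in one critical section, so two concurrent
// callers can never both take a "not found, insert" branch. This is the
// in-memory analogue of an atomic ON CONFLICT DO UPDATE statement.
func (s *store) upsert(k key, status int16) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.rows[k] = status
}

func main() {
	s := &store{rows: make(map[key]int16)}
	k := key{batchIndex: 7, batchHash: "0xabc", platform: 1}

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(status int16) {
			defer wg.Done()
			s.upsert(k, status)
		}(int16(i % 3))
	}
	wg.Wait()
	fmt.Println(len(s.rows)) // exactly one row regardless of interleaving
}
```

With select-then-insert, two goroutines can both observe "not found" before either writes, producing the duplicate-key error the review warns about; collapsing the check and the write into one atomic operation removes that window.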

🧹 Nitpick comments (1)
rollup/internal/orm/blob_upload.go (1)

72-80: Avoid redundant ordering by batch_index

The method always appends ORDER BY batch_index ASC at line 80, which could create redundant ordering if batch_index is already included in orderByList. Consider checking if batch_index is already in the order list before appending.

-for _, orderBy := range orderByList {
-	db = db.Order(orderBy)
-}
-
-if limit > 0 {
-	db = db.Limit(limit)
-}
-
-db = db.Order("batch_index ASC")
+for _, orderBy := range orderByList {
+	db = db.Order(orderBy)
+}
+
+// Only add batch_index ordering if not already present
+hasBatchIndexOrder := false
+for _, orderBy := range orderByList {
+	if strings.Contains(strings.ToLower(orderBy), "batch_index") {
+		hasBatchIndexOrder = true
+		break
+	}
+}
+if !hasBatchIndexOrder {
+	db = db.Order("batch_index ASC")
+}
+
+if limit > 0 {
+	db = db.Limit(limit)
+}

📥 Commits

Reviewing files that changed from the base of the PR and between dabcfca and 9b02689.

📒 Files selected for processing (2)
  • rollup/internal/controller/blob_uploader/blob_uploader.go (6 hunks)
  • rollup/internal/orm/blob_upload.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • rollup/internal/controller/blob_uploader/blob_uploader.go
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/orm/blob_upload.go (1)
common/types/db.go (3)
  • BlobStoragePlatform (358-358)
  • BlobUploadStatusUploaded (339-339)
  • BlobUploadStatus (331-331)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
database/migrate/migrations/00027_ blob_upload.sql (2)

6-6: Specify a maximum length for batch_hash.

Using an unbounded VARCHAR can lead to unpredictable storage and validation issues. Consider setting an appropriate length (e.g., VARCHAR(64)) for your hash values.


6-6: Add a primary key constraint.

The table currently has no primary key defined. You may enforce uniqueness and optimize lookups by making (batch_index, batch_hash, platform) a composite primary key or by introducing a dedicated id column.


📥 Commits

Reviewing files that changed from the base of the PR and between 9b02689 and 9db9604.

📒 Files selected for processing (1)
  • database/migrate/migrations/00027_ blob_upload.sql (2 hunks)
🔇 Additional comments (1)
database/migrate/migrations/00027_ blob_upload.sql (1)

17-18: Validate index strategy and naming conventions.

You’ve created both a unique index on (batch_index, batch_hash, platform) and a composite index on (batch_index, batch_hash, status, platform). Please confirm that both are required for your query patterns. Also, ensure index names follow the project’s conventions consistently (e.g., uindex_ vs. idx_ prefixes).

Also applies to: 24-25

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
rollup/internal/orm/blob_upload.go (3)

18-23: Composite primary key still missing – GORM can’t identify a row
The struct is still missing primaryKey tags on batch_index, batch_hash, and platform, exactly as flagged in a previous review. Without a PK, GORM-generated UPDATE/DELETE statements (see line 118) will fail or noop because it has no identifier column.


68-70: Unsafe Where(key, value) loop – still brittle
Iterating over the fields map and passing raw keys keeps the SQL-injection & syntax‐error foot-guns that were highlighted before. Let GORM build the predicate safely:

-for key, value := range fields {
-    db = db.Where(key, value)
-}
+db = db.Where(fields)

98-118: Race window remains – still using select-then-insert
The earlier advice to switch to an atomic ON CONFLICT upsert hasn’t been applied, so concurrent writers can still trigger duplicate-key errors. GORM’s Clauses(clause.OnConflict…) solves this cleanly.

🧹 Nitpick comments (2)
rollup/internal/orm/blob_upload.go (2)

43-61: Trim query payload – select only the column you need
GetNextBatchIndexToUploadByPlatform only uses blobUpload.BatchIndex, yet the query fetches every column. Narrow the projection to cut I/O and (de)serialization cost:

-db = db.Model(&BlobUpload{})
+db = db.Model(&BlobUpload{}).Select("batch_index")

or even:

var idx uint64
err := db.Pluck("batch_index", &idx).Error

98-101: Redundant deleted_at IS NULL clause
GORM already appends the soft-delete filter for models containing a DeletedAt field. Keeping it in the manual SQL is harmless but clutters the statement and risks diverging behaviour if Unscoped() is ever added.


📥 Commits

Reviewing files that changed from the base of the PR and between 9db9604 and 90440f0.

📒 Files selected for processing (2)
  • rollup/internal/controller/blob_uploader/blob_uploader.go (6 hunks)
  • rollup/internal/orm/blob_upload.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • rollup/internal/controller/blob_uploader/blob_uploader.go
🧰 Additional context used
🧬 Code Graph Analysis (1)
rollup/internal/orm/blob_upload.go (1)
common/types/db.go (3)
  • BlobStoragePlatform (358-358)
  • BlobUploadStatusUploaded (339-339)
  • BlobUploadStatus (331-331)

@yiweichi yiweichi merged commit 5d6b5a8 into develop Jun 11, 2025
12 checks passed
@yiweichi yiweichi deleted the feat-blob-upload-table-add-batch-hash branch June 11, 2025 10:21