refactor(notification): ensure new snapshot is only notified after new fragment_mapping #7042
Conversation
Codecov Report
```diff
@@           Coverage Diff            @@
##             main    #7042    +/-  ##
=======================================
  Coverage   73.15%   73.16%
=======================================
  Files        1052     1052
  Lines      167399   167413    +14
=======================================
+ Hits       122467   122488    +21
+ Misses      44932    44925     -7
```
Rest LGTM! Thanks for taking this up.
```diff
@@ -1714,11 +1708,11 @@ where
         {
             self.check_state_consistency().await;
         }
-        Ok(())
+        Ok(Some(snapshot))
```
We may return an error at L1704 and thus fail to notify the frontend. Can this be an issue? Recovery should be triggered in this case, so maybe missing one notification is fine?
Refactored `try_send_compaction_request` to make it infallible by handling errors internally. I think a compaction scheduling failure should not trigger recovery: when it fails, recovery doesn't help at all, and currently the only cause of failure is compaction scheduler shutdown.
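A minimal sketch of this "handle errors internally" pattern, under assumed names (`CompactionScheduler`, a `u64` compaction group id payload): the send over a channel can still fail when the receiver side has shut down, but the failure is logged and reported as a `bool` rather than propagated as an `Err` that could trip recovery.

```rust
use std::sync::mpsc;

/// Hypothetical sketch: a compaction-request sender that never returns an
/// error to its caller. A send failure (receiver dropped, i.e. the
/// scheduler has shut down) is handled internally, so a scheduling
/// failure can no longer bubble up and trigger recovery.
struct CompactionScheduler {
    tx: mpsc::Sender<u64>, // compaction group id (assumed payload)
}

impl CompactionScheduler {
    /// Returns whether the request was accepted; never an `Err`.
    fn try_send_compaction_request(&self, group_id: u64) -> bool {
        match self.tx.send(group_id) {
            Ok(()) => true,
            Err(e) => {
                // In the real code this would likely be a warning log; the
                // scheduler is shutting down, so retrying cannot help.
                eprintln!("failed to schedule compaction: {e}");
                false
            }
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let scheduler = CompactionScheduler { tx };
    assert!(scheduler.try_send_compaction_request(1));
    drop(rx); // simulate scheduler shutdown
    assert!(!scheduler.try_send_compaction_request(2));
}
```

The caller no longer sees a `Result`, which matches the rationale above: shutdown is the only failure mode, and it is not recoverable by the caller.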
LGTM. Thanks!
Scale test fails after merging main, investigating 🤔
Looks similar to #7103. 🤔
I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.
What's changed and what's your intention?
This PR addresses #5446 by ensuring the frontend is always notified of a new snapshot after the corresponding fragment_mapping.
Also a minor refactor that extracts `collect_synced_ssts` from `complete_barrier`, to make the latter cleaner. This PR does not (and need not) affect the relative order between snapshot and catalog notifications.
Checklist
- [ ] I have added necessary unit tests and integration tests
- [ ] All checks passed in `./risedev check` (or alias, `./risedev c`)

Documentation
If your pull request contains user-facing changes, please specify the types of the changes, and create a release note. Otherwise, please feel free to remove this section.
Types of user-facing changes
Please keep the types that apply to your changes, and remove those that do not apply.
Release note
Please create a release note for your changes. In the release note, focus on the impact on users, and mention the environment or conditions where the impact may occur.
Refer to a related PR or issue link (optional)
#5446