
feat(storage): pass epoch and table id before barrier #17635

Merged · 15 commits merged from yiming/pre-start-sync-epoch into main on Jul 13, 2024

Conversation

wenym1 (Contributor) commented on Jul 9, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Pass the table_ids that will start writing on an epoch at the beginning of inject_barrier. This helps the HummockUploader spill data correctly, so that data of two separate syncs won't be mixed in a single spill task.
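
As an illustration (a minimal sketch, not the PR's actual code), the bookkeeping amounts to recording, per epoch, the set of tables that declared a write start at barrier injection. The map type BTreeMap<HummockEpoch, HashSet<TableId>> matches the diff discussed below; EpochTableTracker and the type aliases are hypothetical names.

use std::collections::{BTreeMap, HashSet};

type HummockEpoch = u64;
type TableId = u32;

#[derive(Default)]
struct EpochTableTracker {
    // Per-epoch set of tables that will write on that epoch.
    epochs: BTreeMap<HummockEpoch, HashSet<TableId>>,
}

impl EpochTableTracker {
    // Called at barrier injection: declare the tables that will start
    // writing on `epoch`.
    fn start_epoch(&mut self, epoch: HummockEpoch, table_ids: HashSet<TableId>) {
        let prev = self.epochs.insert(epoch, table_ids);
        assert!(prev.is_none(), "epoch {epoch} declared twice");
    }
}

fn main() {
    let mut tracker = EpochTableTracker::default();
    tracker.start_epoch(2, HashSet::from([100, 101]));
    tracker.start_epoch(3, HashSet::from([100])); // table 101 stopped writing
    assert_eq!(tracker.epochs.len(), 2);
}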

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features; see "Sqlsmith: Sql feature generation" #7934.)
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.

let node_to_collect = match self.control_stream_manager.inject_barrier(
    command_ctx.clone(),
    self.state.inflight_actor_infos.existing_table_ids(),
Collaborator:

Just to confirm, self.state.inflight_actor_infos.existing_table_ids() contains both created MV's and creating MV's state table ids, right?

wenym1 (Contributor, author):

Yes. This is more like a refactoring PR, and the logic of partial checkpoint based backfill is not included yet.

@@ -727,7 +758,7 @@ struct UnsyncData {
     instance_table_id: HashMap<LocalInstanceId, TableId>,
     // TODO: this is only used in spill to get existing epochs and can be removed
     // when we support spill not based on epoch
-    epochs: BTreeMap<HummockEpoch, ()>,
+    epochs: BTreeMap<HummockEpoch, HashSet<TableId>>,
Collaborator:

Seems to be conflicting with #17539

wenym1 (Contributor, author):

Yes, it's expected. We'd better merge this PR before #17539, so that #17539 can spill data using the information provided in this PR.
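
To sketch how a spill task could use this information (illustrative names only, under the assumption that unsynced data is keyed by epoch and table id): a spill payload for an epoch is restricted to the tables that declared a write start on that epoch, so data of two separate syncs is never mixed.

use std::collections::{BTreeMap, HashMap, HashSet};

type HummockEpoch = u64;
type TableId = u32;
type Payload = Vec<u8>;

// Select spill data for `epoch`, keeping only tables that started the epoch.
fn pick_spill_payload(
    epochs: &BTreeMap<HummockEpoch, HashSet<TableId>>,
    unsynced: &HashMap<(HummockEpoch, TableId), Payload>,
    epoch: HummockEpoch,
) -> Vec<Payload> {
    let Some(tables) = epochs.get(&epoch) else {
        return Vec::new();
    };
    unsynced
        .iter()
        .filter(|((e, t), _)| *e == epoch && tables.contains(t))
        .map(|(_, payload)| payload.clone())
        .collect()
}

fn main() {
    let mut epochs = BTreeMap::new();
    epochs.insert(2, HashSet::from([100]));
    let mut unsynced = HashMap::new();
    unsynced.insert((2, 100), vec![1u8]); // table 100 started epoch 2
    unsynced.insert((2, 101), vec![2u8]); // table 101 did not; belongs elsewhere
    assert_eq!(pick_spill_payload(&epochs, &unsynced, 2).len(), 1);
}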

@@ -454,6 +454,15 @@ impl HummockStorage {
)
}

/// Declare the start of an epoch. This information is provided for spill so that the spill task won't
/// include data of two or more syncs.
// TODO: remote this method when we support spill task that can include data of more two or more syncs
Collaborator:

typo: remove

Comment on lines +829 to +831
// When drop/cancel a streaming job, for the barrier to stop actor, the
// local instance will call `local_seal_epoch`, but the `next_epoch` won't be
// called `start_epoch` because we have stopped writing on it.
Collaborator:

Does it mean that the actors for the dropping/cancelling streaming job will be included in actors_to_send but the table_ids for the streaming job will not be included in the barrier?

wenym1 (Contributor, author):

Yes. This is the key difference between table_ids_to_sync and actors_to_collect in the InjectBarrierRequest. actors_to_collect means the actors that this barrier will flow through, and a dropping streaming job will stop after the barrier flows through its actors.

On the other hand, table_ids_to_sync means the table ids that will start writing data on epoch.curr after the barrier, so it won't include the table_ids of a dropping streaming job. For this reason, in LocalBarrierWorker we store the table_ids_to_sync of the previous barrier (say epoch{curr=2, prev=1}). When a barrier (epoch{curr=3, prev=2}) flows through the streaming graph, it means the table ids in the table_ids_to_sync of the previous barrier (epoch{curr=2, prev=1}) have finished writing the data of epoch 2, and we then call sync with the table_ids_to_sync of the previous barrier.
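
A minimal sketch of the bookkeeping described above (names other than table_ids_to_sync and LocalBarrierWorker are hypothetical): the worker remembers the table_ids_to_sync of the previous barrier, and when the next barrier has flowed through, those tables are known to have finished writing the previous epoch and can be synced.

use std::collections::HashSet;

type Epoch = u64;
type TableId = u32;

struct LocalBarrierWorker {
    // table_ids_to_sync of the previous barrier, keyed by its epoch.curr.
    prev_table_ids_to_sync: Option<(Epoch, HashSet<TableId>)>,
}

impl LocalBarrierWorker {
    // Called when a barrier with `curr` as epoch.curr has flowed through the
    // graph; returns the previous barrier's tables, now ready to sync.
    fn on_barrier_collected(
        &mut self,
        curr: Epoch,
        table_ids_to_sync: HashSet<TableId>,
    ) -> Option<(Epoch, HashSet<TableId>)> {
        self.prev_table_ids_to_sync.replace((curr, table_ids_to_sync))
    }
}

fn main() {
    let mut worker = LocalBarrierWorker { prev_table_ids_to_sync: None };
    // Barrier epoch{curr=2, prev=1}: tables 100 and 101 will write epoch 2.
    assert!(worker.on_barrier_collected(2, HashSet::from([100, 101])).is_none());
    // Barrier epoch{curr=3, prev=2} flows through: epoch 2's tables can sync.
    let (epoch, tables) = worker.on_barrier_collected(3, HashSet::from([100])).unwrap();
    assert_eq!((epoch, tables), (2, HashSet::from([100, 101])));
}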

assert_gt!(next_epoch, max_epoch);
}
debug!(?table_id, epoch, next_epoch, "table data has stopped");
table_data.stopped_next_epoch = Some(next_epoch);
Collaborator:

So stopped_next_epoch is introduced only for the assertion check?

wenym1 (Contributor, author):

Yes.

// When drop/cancel a streaming job, for the barrier to stop actor, the
// local instance will call `local_seal_epoch`, but the `next_epoch` won't be
// called `start_epoch` because we have stopped writing on it.
if !table_data.unsync_epochs.contains_key(&next_epoch) {
Collaborator:

Is it possible that the table_data for the tables of the dropping/cancelling streaming job is absent, causing L823 to panic?

wenym1 (Contributor, author):

It's unlikely to happen.

Let's say the barrier that stops the actor has epoch{curr=3, prev=2}. Before calling sync on epoch.prev, we must have called start_epoch(epoch.prev). Until this local_seal_epoch(epoch.curr) is called, the barrier won't be collected, so sync(epoch.prev) cannot have been called yet. Before sync(epoch.prev), unsync_epochs must still contain epoch.prev, so the table_data is non-empty and won't be dropped, and therefore in L823 the table won't be absent.

Collaborator:

Got it. Thanks for the explanation.
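
The ordering argument above can be condensed into a runnable sketch (TableData, can_be_dropped, and the stub bodies are hypothetical; start_epoch, sync, and unsync_epochs follow the discussion): until sync(epoch.prev) runs, unsync_epochs still contains epoch.prev, so the table data cannot have been dropped when local_seal_epoch is called.

use std::collections::BTreeMap;

type Epoch = u64;

#[derive(Default)]
struct TableData {
    unsync_epochs: BTreeMap<Epoch, ()>,
}

impl TableData {
    fn start_epoch(&mut self, epoch: Epoch) {
        self.unsync_epochs.insert(epoch, ());
    }
    fn sync(&mut self, epoch: Epoch) {
        self.unsync_epochs
            .remove(&epoch)
            .expect("start_epoch must precede sync");
    }
    // The table data may only be dropped once no unsynced epochs remain.
    fn can_be_dropped(&self) -> bool {
        self.unsync_epochs.is_empty()
    }
}

fn main() {
    // The barrier that stops the actor has epoch{curr=3, prev=2}.
    let mut table_data = TableData::default();
    table_data.start_epoch(2); // start_epoch(epoch.prev) happened earlier
    // local_seal_epoch(3) runs here; the barrier is collected only afterwards,
    // so sync(2) cannot have run yet and the table data is still present.
    assert!(!table_data.can_be_dropped());
    table_data.sync(2); // after collection, sync(epoch.prev) proceeds
    assert!(table_data.can_be_dropped());
}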

hzxa21 (Collaborator) left a review:

LGTM


wenym1 added this pull request to the merge queue on Jul 13, 2024
The github-merge-queue bot removed this pull request from the merge queue due to failed status checks on Jul 13, 2024
wenym1 added this pull request to the merge queue on Jul 13, 2024
Merged via the queue into main with commit 4c4ada1 on Jul 13, 2024 (32 of 33 checks passed)
wenym1 deleted the yiming/pre-start-sync-epoch branch on July 13, 2024 at 11:50