
[FIXED] Don't InstallSnapshot during shutdown, would race with monitorStream/monitorConsumer #6153

Merged 1 commit into main on Nov 20, 2024

Conversation

MauriceVanVeen (Member) commented:

When stopping a stream or consumer, we would attempt to install a snapshot. However, this would race with what's happening in monitorStream/monitorConsumer at that time.

For example:

  1. In applyStreamEntries we call into mset.processJetStreamMsg to persist one or multiple messages.
  2. We call mset.stop(..) either before or during the above.
  3. In mset.stop(..) we'd wait for mset.processJetStreamMsg to release the lock so we can enter mset.stateSnapshotLocked(). We create a snapshot with new state here!
  4. Now we call into InstallSnapshot to persist the above snapshot, but n.applied does not yet contain the right value; it will be lower.
  5. Then applyStreamEntries finishes and ends by calling n.Applied(..).

Whether this race manifests depends on whether step 4 happens before or after step 5.
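The misalignment can be shown with a small, hypothetical Go model (the real mset and raft node are far richer; here a node just tracks a state counter and the last index reported via Applied, and installSnapshot stands in for InstallSnapshot):

```go
package main

import "fmt"

// Simplified, hypothetical model of the race described above.
type node struct {
	state   uint64 // mutated by processing entries (think mset.processJetStreamMsg)
	applied uint64 // last index acknowledged via n.Applied(..)
}

// snapshot pairs the captured state with the applied index at capture time.
type snapshot struct{ state, applied uint64 }

// installSnapshot records whatever n.applied happens to be when it runs.
func (n *node) installSnapshot() snapshot {
	return snapshot{state: n.state, applied: n.applied}
}

func main() {
	n := &node{}

	// Step 1: applyStreamEntries persists a message (new state).
	n.state++

	// Steps 2-4: stop(..) snapshots *before* Applied(..) has run,
	// so the snapshot carries new state but a stale applied index.
	s := n.installSnapshot()

	// Step 5: applyStreamEntries only now reports the entry as applied.
	n.applied = 1

	fmt.Printf("snapshot state=%d applied=%d misaligned=%v\n",
		s.state, s.applied, s.state != s.applied)
	// → snapshot state=1 applied=0 misaligned=true
}
```

If step 5 had run before step 4, the same code would print an aligned snapshot; that ordering dependence is the race.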

It's essential that the snapshot we make is aligned with the n.applied value. If it isn't, we'll replay entries and need to increase mset.clfs, which snowballs into stream desync due to this shift.

The only place where we can guarantee that the snapshot and applied are aligned is in doSnapshot of monitorStream and monitorConsumer (and monitorCluster), so we must not attempt to install snapshots outside of those.
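Continuing the same hypothetical model, the fix amounts to snapshotting only from the monitor goroutine, after every processed entry has been acknowledged via Applied (doSnapshot and applyEntries below are simplified stand-ins, not the real nats-server functions):

```go
package main

import "fmt"

// Hypothetical sketch of the fix: only the monitor goroutine snapshots,
// via doSnapshot, between fully completed apply iterations.
type node struct {
	state   uint64
	applied uint64
}

type snapshot struct{ state, applied uint64 }

// doSnapshot models doSnapshot in monitorStream/monitorConsumer: by the
// time it runs, Applied(..) has been called for every processed entry.
func (n *node) doSnapshot() snapshot {
	return snapshot{state: n.state, applied: n.applied}
}

// applyEntries models one applyStreamEntries loop: process, then ack.
func (n *node) applyEntries(count uint64) {
	for i := uint64(0); i < count; i++ {
		n.state++           // persist the message
		n.applied = n.state // n.Applied(..) for this entry
	}
}

func main() {
	n := &node{}
	n.applyEntries(3)
	s := n.doSnapshot() // safe: aligned by construction
	fmt.Printf("snapshot state=%d applied=%d aligned=%v\n",
		s.state, s.applied, s.state == s.applied)
	// → snapshot state=3 applied=3 aligned=true
}
```

Because the snapshot point sits outside the process-then-ack pair, no interleaving can capture new state with a stale applied index.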

Signed-off-by: Maurice van Veen github@mauricevanveen.com

@MauriceVanVeen MauriceVanVeen requested a review from a team as a code owner November 20, 2024 12:51
@derekcollison (Member) left a comment:

LGTM

…rStream/monitorConsumer

Signed-off-by: Maurice van Veen <github@mauricevanveen.com>
@MauriceVanVeen MauriceVanVeen force-pushed the maurice/install-snapshot-race branch from 1097171 to 6938f97 Compare November 20, 2024 15:09
@derekcollison derekcollison merged commit 71ba974 into main Nov 20, 2024
4 of 5 checks passed
@derekcollison derekcollison deleted the maurice/install-snapshot-race branch November 20, 2024 15:30
MauriceVanVeen pushed a commit that referenced this pull request Nov 21, 2024
…itorStream`/`monitorConsumer` (#6153)
neilalexander added a commit that referenced this pull request Nov 22, 2024
Includes:

- #6147
- #6150
- #6151
- #6153
- #6154
- #6146
- #6139
- #6152
- #6157
- #6161

Signed-off-by: Neil Twigg <neil@nats.io>