
[CORE-8485] Reset translation state on snapshot #24522

Conversation

@mmaslankaprv (Member) commented on Dec 11, 2024

When an STM receives a Raft snapshot, it indicates that the entire in-memory
state of that state machine should be replaced with the state from the
snapshot. The datalake translation state machine was handling the Raft
snapshot incorrectly, which led to its state being out of date after the
snapshot was applied. The Raft snapshot for translation_stm is empty, so the
correct action is to reset the state machine's state and wait for the next
update to be applied.

Fixes: CORE-8485
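
For illustration, a minimal, self-contained sketch of the fix described above. The kafka::offset and translation_stm definitions below are simplified stand-ins, not the real Redpanda types; only the reset inside apply_raft_snapshot mirrors the actual change.

#include <cstdint>

// Simplified stand-in for kafka::offset; a default-constructed offset
// means "nothing translated yet".
namespace kafka {
struct offset {
    int64_t value{-1};
};
} // namespace kafka

class translation_stm {
public:
    // Called when a Raft snapshot is installed. The snapshot for this STM
    // is intentionally empty, so the only correct action is to drop the
    // in-memory state and let a subsequent update (or the next
    // reconciliation with the datalake coordinator) repopulate it.
    void apply_raft_snapshot() {
        _highest_translated_offset = kafka::offset{};
    }

private:
    kafka::offset _highest_translated_offset;
};

int main() {
    translation_stm stm;
    stm.apply_raft_snapshot(); // state is reset, as in the patch
}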

Backports Required

  • none - not a bug fix
  • none - this is a backport
  • none - issue does not exist in previous branches
  • none - papercut/not impactful enough to backport
  • v24.3.x
  • v24.2.x
  • v24.1.x

Release Notes

  • none

Signed-off-by: Michał Maślanka <michal@redpanda.com>
// state machine will not hold any obsolete state that should be overridden
// with the snapshot.
vlog(_log.debug, "Applying raft snapshot, resetting state");
_highest_translated_offset = kafka::offset{};
A contributor commented:
It's a bit unclear to me why it is okay to throw out the offset like this: since the Raft snapshot is empty, STMs on different replicas will necessarily get out of sync. Is it because, if translation is to be continued from some later point, we expect to get another update in the log?

@mmaslankaprv replied:
Another update is one thing; the other is the reconciliation with the datalake coordinator, which happens before every translation. The empty snapshot indicates the snapshot is not required by this STM, hence resetting the state here is the only viable option we have.
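
A hypothetical sketch of that reconciliation argument. The coordinator_client type below is invented for illustration; only the idea that each translation round asks the coordinator where to start (via offset_to_translate_from, mentioned later in this thread) comes from the discussion.

#include <cstdint>

// Invented model of the datalake coordinator; the real API differs.
struct coordinator_client {
    int64_t next_offset{0};
    int64_t offset_to_translate_from() const { return next_offset; }
};

// Each translation round starts from the coordinator's answer rather than
// from the STM's cached offset, so resetting the local state is
// self-healing: at worst the next round re-fetches the authoritative
// starting point.
int64_t begin_translation_round(const coordinator_client& coordinator) {
    return coordinator.offset_to_translate_from();
}

int main() {
    coordinator_client coordinator{};
    (void)begin_translation_round(coordinator);
}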

The contributor replied:
The empty snapshot indicates the snapshot is not required by this STM, hence resetting the state here is the only viable option we have

Yes, but why is it okay to have an empty snapshot?

@mmaslankaprv:
I was wondering about that, and given that we always commit to the coordinator, I think it is safe. Am I right, @bharathv?

Another contributor replied:
I think it's ok to reset. Currently this offset is only used to enforce max_collectible_offset on the replica. It is ok to reset because lowering the max collectible offset only delays compaction and has no correctness implications until it catches up again. As Michal said, the leader reconciles with the coordinator every time to get the offset_to_translate_from.

@WillemKauf is planning to get rid of translation in this path for read replicas, so we could probably just store the log offset and avoid this kafka offset altogether. The kafka offset was added as an optimization so the coordinator could avoid reconciliation in every round of translation, but that optimization has since been removed to simplify the code. Alternatively, we could store a pair of <kafka_offset, log_offset>, since it is already in serde, and implement the optimization later.
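
To make the max_collectible_offset argument concrete, a hedged sketch (the function name and the -1 encoding are assumptions for illustration, not the real STM interface): compaction may only collect up to the minimum of all registered bounds, so a reset offset only lowers the bound and delays compaction; it can never permit collecting data that is still needed.

#include <algorithm>
#include <cstdint>

// A reset translated offset is modeled as -1 ("nothing translated").
int64_t max_collectible(
  int64_t log_committed_offset, int64_t highest_translated_offset) {
    return std::min(log_committed_offset, highest_translated_offset);
}

int main() {
    (void)max_collectible(100, -1); // -> -1: compaction waits after a reset
    (void)max_collectible(100, 42); // -> 42: recovers as translation catches up
}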

A member asked:
Currently this offset is only used to enforce max_collectible_offset on the replica

Is there a reason to be concerned that the scope could increase, this assumption would no longer hold, and then there would be a problem?

The contributor replied:
Hmm, I can't think of a scope increase for the offset in the near future. The only reason it exists is to enforce max_collectible_offset. Also, as noted, the plan is to get rid of offset translation in this path altogether and make this translation state self-contained, which automatically fixes this problem too.

The member replied:
Ahh right, makes sense. Thanks.

@vbotbuildovich (Collaborator) commented on Dec 11, 2024

@mmaslankaprv merged commit f4c472f into redpanda-data:dev on Dec 12, 2024
19 checks passed
@vbotbuildovich (Collaborator):
/backport v24.3.x

@vbotbuildovich (Collaborator):
Failed to create a backport PR to v24.3.x branch. I tried:

git remote add upstream https://github.com/redpanda-data/redpanda.git
git fetch --all
git checkout -b backport-pr-24522-v24.3.x-586 remotes/upstream/v24.3.x
git cherry-pick -x f34856c2b7 f454ef290e 8985f5762f

Workflow run logs.
