
Use 'RR' when a prunable vclock is replicated #1866

Merged: 3 commits into develop-3.0 on Jul 18, 2023

Conversation

@martinsumner (Contributor) commented on Jul 14, 2023

There are situations in which a vector clock grows beyond the prescribed limits on the source cluster, in particular following read repair.

In these cases the new object needs to be replicated with the same resulting vector clock (assuming no siblings). If the sink does not end up with the same vector clock, any full-sync operation may continuously detect the delta but never be able to resolve it, as the sink vnodes prune the clock each time.

The 'rr' option ensures that pruning is bypassed on riak_kv_vnode, so that we avoid pruning on the sink when we have not pruned on the source. The 'rr' option is only used when the clock is prunable, as otherwise the delta could occur in the reverse direction.

The 'rr' option also bypasses some sibling constraint checks (e.g. the maximum number of siblings). However, as the most likely cause of the option being applied is read repair on the source side, this is still generally a win for replication consistency.
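A hedged sketch of the decision described above (the function names and the simplified prune check are illustrative stand-ins, not the actual riak_kv code; the real limits live in the bucket props, and the real check in `vclock:prune/3` also considers entry age and the small/young/old limits):

```erlang
%% Illustrative sketch only - not the actual riak_kv implementation.
%% maybe_rr_option/3 and is_prunable/2 are hypothetical names.
maybe_rr_option(VClock, BucketProps, PutOptions) ->
    case is_prunable(VClock, BucketProps) of
        true ->
            %% The clock would be pruned on the sink vnode. Passing
            %% 'rr' bypasses pruning there, so source and sink end up
            %% with the same clock and full-sync sees no false delta.
            [rr | PutOptions];
        false ->
            %% The clock is within limits. Adding 'rr' here could
            %% create a delta in the reverse direction, so leave the
            %% options unchanged.
            PutOptions
    end.

%% Simplified prunability check: the clock exceeds the bucket's
%% big_vclock limit.
is_prunable(VClock, BucketProps) ->
    BigVClock = proplists:get_value(big_vclock, BucketProps, 50),
    length(VClock) > BigVClock.
```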

#1864

@martinsumner (Contributor, Author) commented:
basho/riak_test#1379

The bucket props are already known at this point. The case is only considered when `asis` is set, so this should also work for riak_repl AAE full-sync.
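A hedged sketch of that placement in the put FSM (the function name is hypothetical, and `maybe_rr_option/3` is the sketch from above; `asis` is the option already used to mark replicated puts):

```erlang
%% Sketch only: the check is confined to replicated ('asis') puts,
%% where the FSM already holds the bucket props.
maybe_flag_replicated_put(Object, BucketProps, PutOptions) ->
    case proplists:get_bool(asis, PutOptions) of
        true ->
            VClock = riak_object:vclock(Object),
            maybe_rr_option(VClock, BucketProps, PutOptions);
        false ->
            %% A coordinated (non-replicated) put: normal pruning and
            %% sibling checks apply, so no 'rr' is added.
            PutOptions
    end.
```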
@martinsumner merged commit ba58a97 into develop-3.0 on Jul 18, 2023
1 check passed
@martinsumner deleted the mas-i1864-vclockprune branch on July 18, 2023 09:17
martinsumner added a commit that referenced this pull request Aug 4, 2023
* Use 'RR' when a prunable vclock is replicated

* Switch logic to put_fsm

* Lose a line