
Allow nextgenrepl to real-time replicate reaps #6

Merged: 5 commits into nhse-develop-3.0 on Sep 14, 2023

Conversation

martinsumner

This is to address the issue of reaping across sync'd clusters. Without this feature it is necessary to disable full-sync whilst independently replicating on each cluster.

Now, if reaping via `riak_kv_reaper`, the reap will be replicated, assuming the `riak_kv.repl_reap` flag has been enabled. At the receiving cluster the reap will not be replicated any further.
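
For illustration only, here is a minimal sketch of how a sender might gate reap replication on that flag. The helper `push_reap_to_replrtq/1` and the assumption that the flag surfaces as the `repl_reap` application environment variable are both hypothetical, not the actual riak_kv implementation:

```erlang
%% Sketch only: gate real-time replication of a local reap on the
%% riak_kv.repl_reap flag (assumed to surface as the repl_reap app env).
maybe_replicate_reap(Bucket, Key, Clock) ->
    case app_helper:get_env(riak_kv, repl_reap, false) of
        true ->
            %% push_reap_to_replrtq/1 is a hypothetical stand-in for
            %% whatever enqueues the reap reference on the real-time
            %% replication source queue.
            push_reap_to_replrtq({Bucket, Key, Clock});
        false ->
            ok
    end.
```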

There are some API changes to support this. The `find_tombs` aae_fold will now return Keys/Clocks and not Keys/DeleteHash. The ReapReference for `riak_kv_reaper` will now expect a clock (version vector) not a DeleteHash, and will also now expect an additional boolean to indicate if this reap is a replication candidate (it will be false for all pushed reaps).
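
Purely as an illustration of the shape of that change (the type names and exact tuple layouts below are assumptions for this sketch, not the specs from the patch):

```erlang
%% Before (illustrative): reap references carried a hash of the delete.
-type reap_reference_v1() ::
    {riak_object:bucket(), riak_object:key(), DeleteHash :: non_neg_integer()}.

%% After (illustrative): the reference carries the tombstone's version vector,
%% plus a boolean saying whether this reap is itself a candidate for onward
%% replication. Reaps arriving via a replication push carry false, which is
%% what stops a reap bouncing between clusters indefinitely.
-type reap_reference_v2() ::
    {riak_object:bucket(), riak_object:key(), vclock:vclock(), ToReplicate :: boolean()}.

%% find_tombs aae_fold results (illustrative): Key/Clock pairs rather than
%% Key/DeleteHash pairs.
-type find_tombs_result() ::
    [{riak_object:bucket(), riak_object:key(), vclock:vclock()}].
```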

The object encoding for nextgenrepl now has a flag to indicate a reap, with a special encoding for reap references.
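
To illustrate the idea only (this is not the actual nextgenrepl wire format; the flag byte and the contents of the reap tuple are assumptions):

```erlang
%% Sketch of a flagged encoding: a leading byte distinguishes a full object
%% from a reap reference, so the sink can hand reaps to its reaper rather
%% than treating them as object puts.
encode_for_repl({object, Obj}) ->
    ObjBin = riak_object:to_binary(v1, Obj),
    <<0:8/integer, ObjBin/binary>>;
encode_for_repl({reap, Bucket, Key, Clock}) ->
    ReapBin = term_to_binary({Bucket, Key, Clock}),
    <<1:8/integer, ReapBin/binary>>.

decode_for_repl(<<0:8/integer, ObjBin/binary>>) ->
    {object, ObjBin};
decode_for_repl(<<1:8/integer, ReapBin/binary>>) ->
    {reap, binary_to_term(ReapBin)}.
```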

src/riak_object.erl: review thread resolved (outdated)
Clarify specs

@ThomasArts left a comment

Please have a look at the type conflict mentioned in the comment.
It might be me overlooking something, but it is worth a second pair of eyes.

src/riak_client.erl: review thread resolved
src/riak_client.erl: review thread resolved
src/riak_kv_replrtq_snk.erl: review thread resolved (outdated)
@martinsumner (Author)

OpenRiak/riak_test#4

martinsumner merged commit ca26ab2 into nhse-develop-3.0 on Sep 14, 2023
1 check passed
martinsumner added a commit that referenced this pull request Nov 14, 2023
* Allow nextgenrepl to real-time replicate reaps

This is to address the issue of reaping across sync'd clusters.  Without this feature it is necessary to disable full-sync whilst independently replicating on each cluster.

Now if reaping via riak_kv_reaper the reap will be replicated assuming the `riak_kv.repl_reap` flag has been enabled.  At the receiving cluster the reap will not be replicated any further.

There are some API changes to support this. The `find_tombs` aae_fold will now return Keys/Clocks and not Keys/DeleteHash. The ReapReference for `riak_kv_reaper` will now expect a clock (version vector) not a DeleteHash, and will also now expect an additional boolean to indicate if this reap is a replication candidate (it will be false for all pushed reaps).

The object encoding for nextgenrepl now has a flag to indicate a reap, with a special encoding for reap references.

* Update riak_object.erl

Clarify specs

* Take timestamp at correct point (after push)

* Updates following review

* Update rebar.config
martinsumner mentioned this pull request Nov 14, 2023
martinsumner added a commit that referenced this pull request Feb 13, 2024
* Merge pull request #1 from nhs-riak/nhse-contrib-kv1871

KV i1871 - Handle timeout on remote connection

* Trigger batch correctly at each size (#4)

* Force timeout to trigger (#3)

Previously, the inactivity timeout on handle_continue could be cancelled by a call to riak_kv_replrtq_snk (e.g. from riak_kv_replrtq_peer). This might lead to the log_stats loop never being triggered.

* Configurable %key query on leveled (#8)

Can be configured to ignore tombstone keys by default.

* Allow nextgenrepl to real-time replicate reaps (#6)

* Allow nextgenrepl to real-time replicate reaps

This is to address the issue of reaping across sync'd clusters.  Without this feature it is necessary to disable full-sync whilst independently replicating on each cluster.

Now if reaping via riak_kv_reaper the reap will be replicated assuming the `riak_kv.repl_reap` flag has been enabled.  At the receiving cluster the reap will not be replicated any further.

There are some API changes to support this. The `find_tombs` aae_fold will now return Keys/Clocks and not Keys/DeleteHash. The ReapReference for `riak_kv_reaper` will now expect a clock (version vector) not a DeleteHash, and will also now expect an additional boolean to indicate if this reap is a replication candidate (it will be false for all pushed reaps).

The object encoding for nextgenrepl now has a flag to indicate a reap, with a special encoding for reap references.

* Update riak_object.erl

Clarify specs

* Take timestamp at correct point (after push)

* Updates following review

* Update rebar.config

* Make current_peers empty when disabled (#10)

* Make current_peers empty when disabled

* Peer discovery to recognise suspend and disable of sink

* Update src/riak_kv_replrtq_peer.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Update src/riak_kv_replrtq_peer.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* De-lager

* Add support for v0 object in parallel-mode AAE (#11)

* Add support for v0 object in parallel-mode AAE

Cannot assume that v0 objects will not happen - capability negotiation may fall back to v0 on 3.0 Riak during failure scenarios.

* Update following review

As ?MAGIC is a distinctive constant, it should be the one used in the pattern match, with everything else assumed to be convertible by term_to_binary.

* Update src/riak_object.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Update riak_kv_ttaaefs_manager.erl (#13)

For bucket-based full-sync `{tree_compare, 0}` is the return on success.

* Correct log macro typo

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>