fix: split state validation status into kernel and rproof updates. #3096
Conversation
…d fix sync status for these two states
Notice that in the simple examples above we may have an MMR of size 4 (3 leaves) where we only have a single leaf position in the leaf_set.
Note: outputs can be removed but not yet pruned because we need to support "rewind" in a fork situation where we undo a block and reapply a potentially different set of transactions to the chain state.
Not necessarily, because we do not always want to know the number of leaves in the entire MMR. We have many instances where we need to know the number of leaves in a particular subtree or beneath a particular peak within an MMR. Hope this makes sense. The internals of the MMR data structure are pretty complex, but this complexity is worth it: the immutable, append-only semantics it gives us are very well aligned with the data we need to store on disk.
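A minimal sketch of the distinction, assuming a simplified model (illustrative only, not grin's actual implementation): `pmmr::n_leaves` is derived purely from the MMR size and counts every leaf ever appended, while the leaf_set tracks only the leaf positions still present, so its length drops as outputs are removed.

```rust
/// Total leaves ever appended to an MMR of the given size, derived purely
/// from the size by peeling off perfect peaks: a peak of height h holds
/// 2^(h+1) - 1 nodes and 2^h leaves. Sketch only, not grin's exact code.
fn n_leaves(mut size: u64) -> u64 {
	let mut leaves = 0;
	while size > 0 {
		let (mut peak_size, mut peak_leaves) = (1u64, 1u64);
		while peak_size * 2 + 1 <= size {
			peak_size = peak_size * 2 + 1;
			peak_leaves *= 2;
		}
		leaves += peak_leaves;
		size -= peak_size;
	}
	leaves
}

fn main() {
	// The example above: an MMR of size 4 has 3 leaves in total...
	assert_eq!(n_leaves(4), 3);
	// ...but once two of those outputs are spent and removed, the leaf_set
	// holds only 1 position, so leaf_set.len() == 1 while n_leaves(4) == 3.
}
```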
core/src/core/pmmr/backend.rs
@@ -55,6 +55,9 @@ pub trait Backend<T: PMMRable> {
 	/// Iterator over current (unpruned, unremoved) leaf positions.
 	fn leaf_pos_iter(&self) -> Box<dyn Iterator<Item = u64> + '_>;
+
+	/// Number of leaves
+	fn n_leafs(&self) -> u64;
We should think about a better name for this. It's not the number of leaves; it's the number of "not removed" leaf positions, but that's a pretty awkward name.
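One hypothetical direction (not settled in this thread) is to pick a shorter name and push the "not removed" semantics into the doc comment, for example:

```rust
/// Number of leaf positions not yet removed from the MMR.
/// Note: this is NOT the total number of leaves ever appended.
fn n_unpruned_leaves(&self) -> u64;
```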
@@ -192,7 +192,8 @@ impl SyncRunner {
 		match self.sync_state.status() {
 			SyncStatus::TxHashsetDownload { .. }
 			| SyncStatus::TxHashsetSetup
-			| SyncStatus::TxHashsetValidation { .. }
@quentinlesceller What implications are there for changing the set of sync states here? Is this something we can safely do or should we maintain the existing set of states?
We could add the TxHashsetValidation status back to maintain compatibility for any consumers? I was thinking we could then remove it in a major release, e.g. 3.0.0.
I think it's pretty safe to move forward and add these two new states. That could potentially break wallets using the status if they are doing strong type checks, but I presume it's okay for 3.0.0.
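For illustration, a sketch of the enum change under discussion, using the variant names from the PR description (the progress fields here are assumptions):

```rust
pub enum SyncStatus {
	// ... other variants unchanged ...
	TxHashsetDownload {},
	TxHashsetSetup,
	// Replaces the old TxHashsetValidation { .. } variant:
	TxHashsetRangeProofsValidation { rproofs: u64, rproofs_total: u64 },
	TxHashsetKernelsValidation { kernels: u64, kernels_total: u64 },
}
```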
See my comment for a (hopefully useful) explanation of n_leaves.
I like this - but there are a couple of things to resolve.
Just testing this locally and the TUI is just showing the following during rangeproof and signature validation; I don't see it progress.
I think this might be broken on master. Seems to be related to getting header stats while txhashset validation/rebuilding is going on. Looking into a fix.
I think it was broken by #3045. Not sure how to get the status now. There is quite a long-lived write lock on pmmr_header which prevents us from reading when updating the stats... will have a think.
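One possible way around the long-lived write lock, as a sketch under the assumption that the header stats sit behind a std::sync::RwLock (this is not necessarily the fix that landed here): have the status reader use try_read so it skips a cycle instead of blocking.

```rust
use std::sync::RwLock;

// Hypothetical stand-in for the header stats guarded by the lock.
struct HeaderStats {
	height: u64,
}

// Returns None instead of blocking if a writer currently holds the lock,
// so a TUI/status thread can show the last known value and retry later.
fn read_height_non_blocking(stats: &RwLock<HeaderStats>) -> Option<u64> {
	stats.try_read().ok().map(|s| s.height)
}
```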
(branch updated from b38b00b to 76f0601)
@antiochp this is working now. Another potential future improvement would be to have a separate state for catching up on the latest blocks, rather than flicking between sync state 1 and 7 (or 1 and 4 as it is now).
Just synced from scratch with it. Worked great. Awesome work @JosephGoulden. Minor comments below.
Fixed those, thanks @quentinlesceller.
Fixes #3084
I split the sync validation status into kernel and range proof validation and added these as extra sync steps. Maybe this is too much information, but I feel it's nice to see progress being made when syncing and it helps understand what's going on. Also, the percentage completion for range proofs and kernels is now accurate from 0-100%.
There are a couple of things I'm not sure about, though, and I could use some help:
The way I fixed the status was to add a function to the backend PMMR to get the leaf_set length, instead of using pmmr::n_leaves. Is this okay? I don't understand why the two functions give different results. Can't we just use the length of the leaf_set everywhere instead of calculating leaves from the MMR size in pmmr::n_leaves?
Breaking change in the server API: TxHashsetValidation is replaced by TxHashsetRangeProofsValidation and TxHashsetKernelsValidation. Is this okay for a major version increment like 3.0.0 or do we need to deprecate first?
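For API consumers, a hedged sketch of defensive handling (hypothetical wallet-side code, reusing the assumed variant shapes from the enum sketch above): a wildcard arm keeps a strict match from breaking when the set of sync states changes.

```rust
// Hypothetical consumer-side handling of the new variants; field names
// are assumptions for illustration.
fn describe(status: &SyncStatus) -> String {
	match status {
		SyncStatus::TxHashsetRangeProofsValidation { rproofs, rproofs_total } => {
			format!("validating rangeproofs {}/{}", rproofs, rproofs_total)
		}
		SyncStatus::TxHashsetKernelsValidation { kernels, kernels_total } => {
			format!("validating kernels {}/{}", kernels, kernels_total)
		}
		// Catch-all keeps older consumers working when sync states
		// are added or renamed in a future release.
		_ => "syncing".to_string(),
	}
}
```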