triedb/pathdb: track flat state changes in pathdb (snapshot integration pt 2) #30643
Conversation
triedb/pathdb/states.go
Outdated
// revert takes the original value of accounts and storages as input and reverts
// the latest state transition applied on the state set.
func (s *stateSet) revert(accountOrigin map[common.Hash][]byte, storageOrigin map[common.Hash]map[common.Hash][]byte) {
I don't think the name matches what I understand when I look at the code. Isn't it more like revertTo, where you give it a set of accounts/slots, and say "hey reset yourself to these values, ignore anything else"?
It doesn't revert the given values, it uses the given values to reset the internal state. (?)
I don't think the name matches what I understand when I look at the code. Isn't it more like revertTo, where you give it a set of accounts/slots, and say "hey reset yourself to these values, ignore anything else" ?
True, the supplied account and storage sets are the "original" values of these mutated states, and we want to reset them to the old values.
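For illustration, a minimal sketch of that reset semantic, assuming stateSet carries accountData/storageData maps as in the snippets of this thread (everything else is illustrative, not the PR's exact code):

// revert resets every account/slot mutated by the last transition back
// to its supplied original value. A nil origin means "did not exist
// before" and is retained as an explicit nil entry.
func (s *stateSet) revert(accountOrigin map[common.Hash][]byte, storageOrigin map[common.Hash]map[common.Hash][]byte) {
    for hash, origin := range accountOrigin {
        s.accountData[hash] = origin
    }
    for account, slots := range storageOrigin {
        storage, ok := s.storageData[account]
        if !ok {
            storage = make(map[common.Hash][]byte)
            s.storageData[account] = storage
        }
        for slot, origin := range slots {
            storage[slot] = origin
        }
    }
}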
}

// encode serializes the content of state set into the provided writer.
func (s *stateSet) encode(w io.Writer) error {
We typically do not use rlp like this, encoding four times with these kinds of manual steps. How come you don't use something more auto-generated?
Is there some optimization at play here?
It's not about optimization at all (performance is not important here either).
It's more like hand-written encoding rules: stateSet has several maps, and maps are not suitable for/supported by RLP encoding.
Therefore, this code tells the RLP encoder how to pack the data.
Wouldn't it make sense to define a stateSetMarshalling type then? And, in this encode method, convert the fields to the stateSetMarshalling and then just rlp-encode that struct?
Right now it seems you are encoding fields back to back, with no envelope between stateSet items?
Yes for sure, I can have a try!
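For the record, a minimal sketch of what that could look like, covering only the account map (stateSetMarshalling and its fields are hypothetical; since RLP cannot encode maps, the map is flattened into parallel, sorted slices so the whole struct can be RLP-encoded in one call):

package pathdb

import (
    "io"
    "slices"

    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/rlp"
)

// stateSetMarshalling is a hypothetical envelope type for RLP encoding.
type stateSetMarshalling struct {
    AccountHashes []common.Hash
    AccountData   [][]byte
}

func (s *stateSet) encode(w io.Writer) error {
    var enc stateSetMarshalling
    for hash := range s.accountData {
        enc.AccountHashes = append(enc.AccountHashes, hash)
    }
    // RLP has no map support, so encode keys in a deterministic order.
    slices.SortFunc(enc.AccountHashes, common.Hash.Cmp)
    for _, hash := range enc.AccountHashes {
        enc.AccountData = append(enc.AccountData, s.accountData[hash])
    }
    return rlp.Encode(w, &enc)
}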
    return b
}

// revert is the reverse operation of commit. It also merges the provided states
// and trie nodes into the buffer. The key difference is that the provided state
// set should reverse the changes made by the most recent state transition.
-func (b *buffer) revert(db ethdb.KeyValueReader, nodes map[common.Hash]map[string]*trienode.Node) error {
+func (b *buffer) revert(db ethdb.KeyValueReader, nodes map[common.Hash]map[string]*trienode.Node, accounts map[common.Hash][]byte, storages map[common.Hash]map[common.Hash][]byte) error {
I find the semantics of revert a bit confusing. The difference between merge and revert seems mostly to be an expectation on the direction of change. But for some reason, nodeset.revert and nodeset.merge are quite dissimilar....
So, essentially, the nodes here represent a bunch of trienodes, by path, which were removed at an earlier state progression. And now we want to put them back as they were?
The nodes here represent a bunch of trienodes, by path, with their original values, which were modified at an earlier state progression. At that earlier state progression:
- if the node was deleted, the value in nodes is the original value and it will be reset back here;
- if the node was created, the value in nodes is null and the node will be deleted here;
- if the node was modified, the value in nodes is the original value and it will be reset back here.
All in all, revert resets the mutated nodes back to their original values.

But for some reason, nodeset.revert and nodeset.merge are quite dissimilar....

Yeah, essentially: nodeset.merge aggregates two nodesets together, overwriting node values with the later ones; nodeset.revert undoes the last state mutation by resetting node values back.
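A minimal sketch of that case analysis, assuming a nodeSet that tracks trie nodes by owner and path (the names are illustrative, not the PR's exact code):

package pathdb

import (
    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/trie/trienode"
)

// nodeSet is a stand-in for the PR's internal node tracking structure.
type nodeSet struct {
    nodes map[common.Hash]map[string]*trienode.Node
}

// revert resets every mutated node back to its original value. The
// supplied map carries, per owner and path, the value each node had
// before the last state transition.
func (s *nodeSet) revert(nodes map[common.Hash]map[string]*trienode.Node) {
    for owner, subset := range nodes {
        current, ok := s.nodes[owner]
        if !ok {
            current = make(map[string]*trienode.Node)
            s.nodes[owner] = current
        }
        for path, n := range subset {
            if n.IsDeleted() {
                // The node was created by the last transition (no
                // previous value): mark it as deleted again.
                current[path] = trienode.NewDeleted()
            } else {
                // The node was modified or deleted by the last
                // transition: restore its original value.
                current[path] = n
            }
        }
    }
}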
}

// account returns the account data associated with the specified address hash.
func (s *stateSet) account(hash common.Hash) ([]byte, bool) {
I don't quite get this (same for storage below). You do not use the readlock in these methods. Do you assume that the caller handles the readlocking? If so, it seems pretty inconsistent.
The way you do it might be totally fine, but you should document somewhere what assumptions exist regarding the locking model. E.g. "It is assumed that calls to account/storage happen via disklayer/difflayer and are already readlocked. The mu is used only to prevent foo bar gazonk..."
I guess you're more or less doing the same locking as already existing code. E.g. merge happens when the disklayer does commit, and that only happens when the disklayer writelock is held. You can probably ignore my comments about locking.
Yes, the stateSet is an internal structure; it can only be held by a diffLayer, or by the buffer inside the diskLayer.
The stateSet itself doesn't need a lock (it's lock free). Whenever the stateSet is modified (the bottom-most diffLayer is merged into the buffer), the original diskLayer is marked as stale first, to prevent further access to the original stateSet. Then the new stateSet after mutation is linked with a new diskLayer.
In other words, the associated stateSet can be regarded as immutable (read-only) as long as the corresponding diffLayer or diskLayer is not stale.
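A sketch of that access pattern with minimal stand-in types (illustrative only; the real checks live in the layer implementations):

package pathdb

import (
    "errors"
    "sync"

    "github.com/ethereum/go-ethereum/common"
)

var errLayerStale = errors.New("layer stale")

type diskLayer struct {
    lock   sync.RWMutex
    stale  bool
    states *stateSet // effectively immutable while the layer is live
}

// account checks staleness under the layer's lock before touching the
// stateSet. Since a stateSet is only ever replaced (never mutated in
// place) after its owning layer is marked stale, the set itself needs
// no lock of its own.
func (dl *diskLayer) account(hash common.Hash) ([]byte, error) {
    dl.lock.RLock()
    defer dl.lock.RUnlock()

    if dl.stale {
        return nil, errLayerStale
    }
    data, _ := dl.states.account(hash)
    return data, nil
}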
@@ -244,7 +332,7 @@ func (dl *diskLayer) revert(h *history) (*diskLayer, error) {
    // needs to be reverted is not yet flushed and cached in node
    // buffer, otherwise, manipulate persistent state directly.
    if !dl.buffer.empty() {
-       err := dl.buffer.revert(dl.db.diskdb, nodes)
+       err := dl.buffer.revert(dl.db.diskdb, nodes, accounts, storages)
This:

// If the node
// buffer is not empty, it means that the state transition that
// needs to be reverted is not yet flushed and cached in node
// buffer, otherwise, manipulate persistent state directly.

So, if that happens... A few lines up, we did this:

// Apply the reverse state changes upon the current state. This must
// be done before holding the lock in order to access state in "this"
// layer.
nodes, err := apply(dl.db, h.meta.parent, h.meta.root, h.accounts, h.storages)

Using the dl.db as the nodereader. But some changes were not already flushed to that nodereader. Does that matter? Or are those two separate things?
Whenever a state transition occurs, the state mutation set will be added as a new layer in the pathdb. If there are too many in-memory layers, some of the bottom-most layers will be aggregated and may eventually be flushed to disk. The aggregated or flushed layers might no longer be visible if they are overwritten by other layers.
For example, if there is a layer x in the buffer (diskLayer) and layer y is aggregated into the buffer, then x is no longer visible. Similarly, if the current persistent state is x, this state becomes invisible once state y is flushed to disk.
In the revert function, we aim to revert the disk layer to the previous state. The situations can be categorized into two scenarios:
- If the buffer of the disk layer contains some layers, we simply undo the aggregation of the last layer in the buffer
- If the buffer is empty, we undo the persistent state flush of the last layer
The mutation set for the last layer can be constructed using the supplied history object. The construction takes the current disk layer and the history object as inputs, producing the mutation set of the last state transition as the output.
Specifically, dl.db and h.meta.root denote the current state of the disk layer, which is always available!
In summary, we can always construct a mutation set for revert by having the disk layer and a corresponding history object. The constructed mutation set can undo the last state change of the disk layer, no matter whether the state change is cached in the buffer or applied on disk.
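A compressed model of those two scenarios with stand-in types (illustrative; the real logic lives in diskLayer.revert and reconstructs the mutation set via apply() as shown above):

package pathdb

import "github.com/ethereum/go-ethereum/common"

// Stand-in types, just enough to model the two revert scenarios.
type history struct {
    parent, root common.Hash // states before/after the transition
}

type buffer struct {
    layers int // number of layers aggregated into the buffer
}

func (b *buffer) empty() bool { return b.layers == 0 }

type diskLayer struct {
    root   common.Hash
    buffer *buffer
}

// revert models rolling the disk layer back by one state transition.
// The reverse mutation set is always reconstructible from the current
// disk state plus the history object, and is applied wherever the last
// transition currently lives.
func (dl *diskLayer) revert(h *history) (*diskLayer, error) {
    // (reconstruct the reverse mutation set from dl and h here)
    if !dl.buffer.empty() {
        // Scenario 1: the transition is still cached in the buffer;
        // undo the aggregation of the last layer in memory.
        dl.buffer.layers--
    } else {
        // Scenario 2: the buffer is empty; the transition has been
        // flushed, so undo it against the persistent state directly.
    }
    return &diskLayer{root: h.parent, buffer: dl.buffer}, nil
}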
So either we revert the last thing on disk (easy path) or we need to revert part of the buffer before writing it on disk? Both will be accomplished via the reverse diffs? So the assumption is that the reverse diffs are always available and never merged, but written to disk one by one?
So either we revert the last thing on disk (easy path) or we need to revert part of the buffer before writing it on disk?

Yes.

Both will be accomplished via the reverse diffs?

Yes, no matter whether the state transition being reverted is on disk or in the buffer, the relevant state set is always constructed from the reverse diff.

So the assumption is that the reverse diffs are always available and never merged?

No and yes. The reverse diffs might be pruned if they're very old (we keep the latest 90K reverse diffs by default, and users can specify the number of reverse diffs to keep themselves). If the reverse diff is not available, the state revert will be rejected.
The reverse diffs are always stored individually, one by one.
// hash in the slim data format.
//
// Note the returned account is not a copy, please don't modify it.
func (dl *diffLayer) account(hash common.Hash, depth int) ([]byte, error) {
go-ethereum/core/state/snapshot/difflayer.go
Lines 277 to 284 in df182a7

    hit := dl.diffed.ContainsHash(accountBloomHash(hash))
    if !hit {
        hit = dl.diffed.ContainsHash(destructBloomHash(hash))
    }
    var origin *diskLayer
    if !hit {
        origin = dl.origin // extract origin while holding the lock
    }
Without the bloom filter, will there be a negative impact on performance?
)
    // Fill up a parent
    for i := 0; i < 100; i++ {
        h := randomHash()
        data := randomAccount()

        accounts[h] = data
        if rand.Intn(4) == 0 {
Something's a bit weird here: if I re-add this and we mark every ~4th element as destructed by setting it nil, only the difflayer.AccountIterator will spit out 100 elements, while the other two iterators will spit out ~75.
E.g. this test passes some of the time:
diff --git a/core/state/snapshot/iterator_test.go b/core/state/snapshot/iterator_test.go
index b9fe370b86..dff8c86ab6 100644
--- a/core/state/snapshot/iterator_test.go
+++ b/core/state/snapshot/iterator_test.go
@@ -41,6 +41,9 @@ func TestAccountIteratorBasics(t *testing.T) {
data := randomAccount()
accounts[h] = data
+ if rand.Intn(4) == 0 {
+ accounts[h] = nil
+ }
if rand.Intn(2) == 0 {
accStorage := make(map[common.Hash][]byte)
value := make([]byte, 32)
@@ -55,11 +58,11 @@ func TestAccountIteratorBasics(t *testing.T) {
verifyIterator(t, 100, it, verifyNothing) // Nil is allowed for single layer iterator
it = diffLayer.newBinaryAccountIterator(common.Hash{})
- verifyIterator(t, 100, it, verifyNothing) // Nil is allowed for single layer iterator
+ verifyIterator(t, 80, it, verifyNothing) // Nil is allowed for single layer iterator
diskLayer := diffToDisk(diffLayer)
it = diskLayer.AccountIterator(common.Hash{})
- verifyIterator(t, 100, it, verifyNothing) // Nil is allowed for single layer iterator
+ verifyIterator(t, 80, it, verifyNothing) // Nil is allowed for single layer iterator
}
An AccountIterator will yield every element in the layer.
A fast- or binary- iterator will only yield non-nil elements. Importantly: any nil accounts in the upper layers will mask the corresponding accounts in the lower layers.
Some background: a fast- or binary- iterator will be used to serve snap requests, where we want to ship a slice of existing accounts, and not deleted ones. If an account has been recently deleted, we want to just ignore it.
Further background:
A binary iterator is the simplest "layered" iterator. It has two notions: "this" layer and the "parent" layer. It will always "prefer" this layer. In order to model N layers, we simply make N binary iterators, where each parent is another binary iterator. This is very slow, naturally, and used only as a simple-but-correct reference implementation.
The fast iterator is much more advanced, and a lot faster.
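A toy model of the masking semantics (not the actual iterator code; layers are modelled as plain maps where a nil value marks a deleted account):

package main

import (
    "fmt"
    "sort"
)

// layer maps account hash -> data; nil data marks a deletion.
type layer map[string][]byte

// merged yields, in sorted order, the accounts visible when "this"
// layer sits on top of "parent": this layer always wins, and its nil
// entries mask the parent's accounts, so deleted accounts are skipped.
func merged(this, parent layer) []string {
    seen := make(map[string]bool)
    var out []string
    for h, data := range this {
        seen[h] = true
        if data != nil {
            out = append(out, h)
        }
    }
    for h, data := range parent {
        if !seen[h] && data != nil {
            out = append(out, h)
        }
    }
    sort.Strings(out)
    return out
}

func main() {
    parent := layer{"a1": {1}, "b2": {2}, "c3": {3}}
    child := layer{"b2": nil, "d4": {4}} // b2 deleted in the child
    fmt.Println(merged(child, parent))   // [a1 c3 d4]
}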
    snapshotFlushAccountItemMeter.Mark(1)
    snapshotFlushAccountSizeMeter.Mark(int64(len(data)))
Removing this means we don't flush the batches if they exceed 64k; we only flush at the end. Is this something that we don't need anymore?
Yes, we don't need this batched flushing anymore for mainnet and the live testnets, for these reasons:
Before Cancun, we had self-destruction, and a deleted account could have gigantic storage. If we wiped it in a single batch, an OOM could occur.
Self-destruction has been deprecated since the Cancun fork, and the largest storage wipe over the past decade (mainnet) is around 15 megabytes, so we can confidently remove this protection mechanism.
However, in another pull request, I restore the original behavior. The reason is that Geth-based fork chains (e.g. L2s) may or may not still have self-destruction, and they probably want to keep updating their codebase to align with geth. It would be nice to leave this protection mechanism for them.
However, the flat states will ultimately be moved into pathdb, in which the state (trie nodes and flat states) MUST be flushed in a single batch. At that point, we won't have this protection mechanism in pathdb, otherwise state corruption could occur if a panic happens between two batches. Namely, these fork chains will have to disable self-destruction in order to use pathdb.
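For reference, the removed protection followed the usual go-ethereum interim-flush pattern, roughly like this sketch (illustrative, using ethdb.IdealBatchSize as the threshold):

package main

import "github.com/ethereum/go-ethereum/ethdb"

// flushAccounts shows the interim-flush pattern under discussion: the
// batch is written out whenever it grows past the ideal size, instead
// of only once at the end. Bounded memory, but the write is no longer
// atomic, which is why pathdb cannot keep this pattern.
func flushAccounts(db ethdb.KeyValueStore, accounts map[string][]byte) error {
    batch := db.NewBatch()
    for key, data := range accounts {
        if err := batch.Put([]byte(key), data); err != nil {
            return err
        }
        if batch.ValueSize() > ethdb.IdealBatchSize {
            if err := batch.Write(); err != nil {
                return err
            }
            batch.Reset()
        }
    }
    return batch.Write()
}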
-   slots = make(map[common.Hash][]byte)
+   nodes = trienode.NewNodeSet(addrHash)
+   slots = make(map[common.Hash][]byte)
+   deletes = make(map[common.Hash][]byte)
I suspect that in one of the follow-up PRs you will fill this set with more interesting data, right? Otherwise we could use a struct{} map.
Please check the original pull request. I renamed these variables:
slots (has been renamed to storageOrigins) denotes the set in which the slot's original value is tracked;
deletes (has been renamed to storages) denotes the set in which the slot's current value (nil) is tracked.
The reason not to use a struct{} map is that the storage set will be aggregated later in newStateUpdate, where []byte values are expected.
    origin[key] = slot
} else {
    for key, slot := range op.storagesOrigin {
        if _, found := origin[key]; !found {
Why can we not use the maps.Copy trick here? Is it because we might overwrite existing values?
maps.Copy(dst, src):
// Copy copies all key/value pairs in src adding them to dst.
// When a key in src is already present in dst,
// the value in dst will be overwritten by the value associated
// with the key in src.
Here: if a key in src is already present in dst, we don't want to overwrite it with the value in src; rather, we want to just skip it.
The semantics are different.
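A small self-contained illustration of the difference (standard library maps package, Go 1.21+):

package main

import (
    "fmt"
    "maps"
)

func main() {
    dst := map[string]int{"a": 1}
    src := map[string]int{"a": 9, "b": 2}

    // maps.Copy overwrites existing keys: "a" becomes 9.
    d1 := maps.Clone(dst)
    maps.Copy(d1, src)
    fmt.Println(d1) // map[a:9 b:2]

    // Copy-if-absent keeps the first-seen value, which is what
    // tracking "origin" values requires: "a" stays 1.
    d2 := maps.Clone(dst)
    for k, v := range src {
        if _, found := d2[k]; !found {
            d2[k] = v
        }
    }
    fmt.Println(d2) // map[a:1 b:2]
}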
// storage directly retrieves the storage data associated with a particular hash,
// within a particular account.
//
// Note the returned account is not a copy, please don't modify it.
-// Note the returned account is not a copy, please don't modify it.
+// Note the returned storage slot is not a copy, please don't modify it.
if len(data) == 0 {
    panic(fmt.Sprintf("invalid account mutation (null to null), %x", addrHash))
}
s.accountData[addrHash] = nil
This is the case that you describe in the comment, right? Where the map after will retain an x=nil value that wasn't there previously.
Could you add a test for that? It looks like this edge case is not covered by the current tests.
This is the case that you describe in the comment, right? Where the map after will retain an x=nil value that wasn't there previously.
YES, and I will add a test for it! It's definitely a very important edge case.
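A hypothetical shape for such a test, reusing the stateSet fields and revert signature from the snippets above (the concrete values are made up):

package pathdb

import (
    "testing"

    "github.com/ethereum/go-ethereum/common"
)

// TestRevertNullToNil sketches the missing edge-case test: an account
// that did not exist before the transition (origin is nil) must be
// retained as an explicit nil entry after the revert, marking it as
// non-existent rather than unknown.
func TestRevertNullToNil(t *testing.T) {
    addr := common.HexToHash("0x01")
    s := &stateSet{
        accountData: map[common.Hash][]byte{
            addr: {0xde, 0xad}, // created by the last transition
        },
    }
    // The origin value is nil: the account did not exist before.
    s.revert(map[common.Hash][]byte{addr: nil}, nil)

    data, ok := s.accountData[addr]
    if !ok {
        t.Fatal("expected explicit nil entry to be retained")
    }
    if data != nil {
        t.Fatalf("expected nil account data, got %x", data)
    }
}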
This pull request ports some changes from the main state snapshot integration one, specifically introducing flat state tracking in pathdb.
Note, the tracked flat state changes are only held in memory and won't be persisted to disk. Meanwhile, the corresponding state retrieval from persistent state is also not supported yet. State management on disk is more complicated and will be implemented in a separate pull request.
Part 1: #30752