core: reduce peak memory usage during reorg #30600
// Insert an easy and a difficult chain afterwards
easyBlocks, _ := GenerateChain(params.TestChainConfig, blockchain.GetBlockByHash(blockchain.CurrentBlock().Hash()), ethash.NewFaker(), db, chainLength, genValueTx(50000))
diffBlocks, _ := GenerateChain(params.TestChainConfig, blockchain.GetBlockByHash(blockchain.CurrentBlock().Hash()), ethash.NewFaker(), db, chainLength, genValueTx(50000))
It's not clear to me what "easy" and "difficult" means here. These chains are of the same length and form.
The original meaning was probably that the 'difficult' chain was heavier, and should take precedence over the 'easy'?
Yes, it's copied from another test, will rename.
This PR is kinda blocked on https://github.com/ethereum/go-ethereum/pull/30601/files

Pls rebase
Force-pushed from 100ffee to c7b3922
if len(rebirthLogs) > 512 {
	bc.logsFeed.Send(rebirthLogs)
	rebirthLogs = nil
}
Please revert this block. The previous code sent the log removals first, and then the log additions. You changed it so it now sends events for new logs first and then removes old logs.
core/blockchain.go (outdated diff)
	for _, tx := range oldBlock.Transactions() {
		deletedTxs = append(deletedTxs, tx.Hash())
	}
}
} else {
	// New chain is longer, stash all blocks away for subsequent insertion
	for ; newBlock != nil && newBlock.NumberU64() != oldBlock.NumberU64(); newBlock = bc.GetBlock(newBlock.ParentHash(), newBlock.NumberU64()-1) {
-		newChain = append(newChain, newBlock)
+		newChain = append(newChain, newBlock.Header())
Actually, this part will not be correct. The headBlock is not part of the chain yet at this point, I think; it's written later.
Actually, the doc says the head block is not processed here.
In this codepath, it looks to me like it is written:
func (bc *BlockChain) writeBlockAndSetHead(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) {
if err := bc.writeBlockWithState(block, receipts, state); err != nil {
return NonStatTy, err
}
currentBlock := bc.CurrentBlock()
// Reorganise the chain if the parent is not the head block
if block.ParentHash() != currentBlock.Hash() {
if err := bc.reorg(currentBlock, block); err != nil {
return NonStatTy, err
}
}
// Set new head.
bc.writeHeadBlock(block)
bleh, my comments were left on pending
core/blockchain.go (outdated diff)
@@ -2278,14 +2278,14 @@ func (bc *BlockChain) reorg(oldHead *types.Header, newHead *types.Block) error {
	// as it will be handled separately outside of this function
	for i := len(newChain) - 1; i >= 1; i-- {
		// Insert the block in the canonical way, re-writing history
		bc.writeHeadBlock(newChain[i])
+		newBlock = bc.GetBlock(newChain[i].Hash(), newChain[i].Number.Uint64())
This begins at the latest block in the new chain.

Q1: Is the latest block in the new chain already retrievable via bc.GetBlock? I think the answer is Yes.

Q2: Isn't it the same as the incoming parameter newHead? If so, we can use that. Most calls to reorg will only be 1 block, right?
@karalabe I thought you were commenting on this comment, but it's not quite the same, is it?
I've reworked the entire code to operate on headers and not blocks.
The reason it did so many block loads is because the logic was originally implemented with blocks, and Marius only changed the accumulators but left the logic based on blocks. Rewriting it to headers cleaned up everything, with every block being loaded at most once. That said, my code also fixed the log ordering, which is a breaking change on the API, though this is how it's correct and previously it was borked, so unsure what to do here.
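A rough sketch of the header-based walk under those assumptions (hypothetical `Header` and `chain` types, not the actual go-ethereum code): both sides of the fork are traced back using headers only, so no full block ever needs to be held in memory during the walk:

```go
package main

import "fmt"

// Header stands in for types.Header, keyed by a string hash for simplicity.
type Header struct {
	Number     uint64
	Hash       string
	ParentHash string
}

// chain stands in for the header store backing HeaderChain.GetHeader.
type chain struct {
	headers map[string]Header // hash -> header
}

func (c *chain) getHeader(hash string) (Header, bool) {
	h, ok := c.headers[hash]
	return h, ok
}

// forkPoint walks the old and new heads back by headers until they meet,
// returning the divergent portion of each side (head first). Full blocks
// would only be fetched later, one at a time, when their data is needed.
func (c *chain) forkPoint(oldHead, newHead Header) (oldChain, newChain []Header) {
	for oldHead.Number > newHead.Number {
		oldChain = append(oldChain, oldHead)
		oldHead, _ = c.getHeader(oldHead.ParentHash)
	}
	for newHead.Number > oldHead.Number {
		newChain = append(newChain, newHead)
		newHead, _ = c.getHeader(newHead.ParentHash)
	}
	for oldHead.Hash != newHead.Hash {
		oldChain = append(oldChain, oldHead)
		newChain = append(newChain, newHead)
		oldHead, _ = c.getHeader(oldHead.ParentHash)
		newHead, _ = c.getHeader(newHead.ParentHash)
	}
	return oldChain, newChain
}

func main() {
	// Genesis g; old side g<-a1<-a2; new side g<-b1<-b2<-b3.
	c := &chain{headers: map[string]Header{
		"g":  {0, "g", ""},
		"a1": {1, "a1", "g"}, "a2": {2, "a2", "a1"},
		"b1": {1, "b1", "g"}, "b2": {2, "b2", "b1"}, "b3": {3, "b3", "b2"},
	}}
	oldC, newC := c.forkPoint(c.headers["a2"], c.headers["b3"])
	fmt.Println(len(oldC), len(newC)) // 2 3
}
```

Headers are small and cached, so walking them is cheap; the expensive full-block loads can then be bounded to at most one per block that actually gets reprocessed.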
The problem with the logs is: the original code from 2 years ago collected all the logs and emitted them in one RPC message. E.g. with 2048 logs across, say, 4 blocks (each 512 logs), it emitted:
2 years ago #25711 introduced batching and kept the emission order:
This is a problem, because anyone reacting to logs needs to revert them in reverse order: Log 2048 first, then 2047, etc., down to Log 1. In the original code from 2 years ago the user sub got one huge log list and it was up to them to iterate it in reverse. In the current code however, they get the batches in the old forward order. This PR changes emission to reverse order, so we emit:
This can be meaningfully handled client side again on the RPC subscription, because you can apply log reverts immediately as they arrive. Unfortunately, this breaks the API. The problem is that the current API is not usable, so I'm unsure what we're breaking here.
I've added shadow log filtering, so the events keep being emitted with the old faulty ordering for legacy APIs, and there's a second emission pathway (unused in this PR) that emits events in the correct order. The old API ordering can't really change as it would bork everything. All in all, the old API is foobar, so IMO we can leave it be and introduce an alternative with the correct ordering.
~~Opening this as a draft to have a discussion.~~ Pressed the wrong button

I had [a previous PR](#24616) a long time ago which reduced the peak memory used during reorgs by not accumulating all transactions and logs. This PR reduces the peak memory further by not storing the blocks in memory. However, this means we need to pull the blocks back up from storage multiple times during the reorg.

I collected the following numbers on peak memory usage:

// Master:          BenchmarkReorg-8   10000    899591 ns/op   820154 B/op   1440 allocs/op   1549443072 bytes of heap used
// WithoutOldChain: BenchmarkReorg-8   10000   1147281 ns/op   943163 B/op   1564 allocs/op   1163870208 bytes of heap used
// WithoutNewChain: BenchmarkReorg-8   10000   1018922 ns/op   943580 B/op   1564 allocs/op   1171890176 bytes of heap used

Each block contains a transaction with ~50k bytes and we're doing a 10k block reorg, so the chain should be ~500MB in size.

---------

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
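For what it's worth, the ~500MB figure is just the back-of-the-envelope product of block count and per-block transaction payload:

```go
package main

import "fmt"

func main() {
	const blocks = 10_000  // depth of the benchmarked reorg
	const txBytes = 50_000 // ~50k byte transaction per block
	// 10,000 blocks * ~50 KB each ≈ 500 MB of chain data held over the reorg.
	fmt.Printf("~%d MB\n", blocks*txBytes/1_000_000)
}
```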