feature(op-node): pre-fetch receipts concurrently #100
Conversation
    continue
}
s.log.Debug("pre-fetching receipts", "block", currentL1Block)

go func(ctx context.Context, blockInfo eth.L1BlockRef) {
Why not move the line 152 "L1BlockRefByNumber" call into this goroutine as well?
Because once we reach the latest block height, the blocking L1BlockRefByNumber call also serves to make the loop wait. If we parallelized L1BlockRefByNumber as well, the loop would never pause at the latest block height; it would keep launching new goroutines that try to process blocks that have not been produced yet.
Besides, the performance of the L1BlockRefByNumber interface is not that bad, and it has its own cache, so there is no need to parallelize it.
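A minimal sketch of the loop shape this describes, with assumed stand-in types (L1BlockRef, l1Source) rather than the actual op-node types: the header lookup stays sequential so the loop naturally blocks at the chain head, and only the receipts fetch runs in a goroutine.

```go
package prefetch

import (
	"context"
	"log"
	"time"
)

// L1BlockRef and l1Source are simplified, assumed stand-ins for the
// op-node types and client methods being discussed.
type L1BlockRef struct {
	Hash   [32]byte
	Number uint64
}

type l1Source interface {
	L1BlockRefByNumber(ctx context.Context, num uint64) (L1BlockRef, error)
	FetchReceipts(ctx context.Context, blockHash [32]byte) error
}

// preFetchLoop keeps the header lookup sequential so the loop naturally
// blocks at the chain head, and parallelizes only the receipts fetch.
func preFetchLoop(ctx context.Context, src l1Source, currentL1Block uint64) {
	for ctx.Err() == nil {
		blockInfo, err := src.L1BlockRefByNumber(ctx, currentL1Block)
		if err != nil {
			// At the chain head this fails until the next block exists,
			// which throttles the loop instead of letting it spawn
			// goroutines for blocks that have not been produced yet.
			time.Sleep(200 * time.Millisecond)
			continue
		}
		// Only the expensive receipts fetch runs concurrently.
		go func(ref L1BlockRef) {
			if err := src.FetchReceipts(ctx, ref.Hash); err != nil {
				log.Printf("failed to pre-fetch receipts: %v", err)
			}
		}(blockInfo)
		currentL1Block++
	}
}
```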
if err != nil {
    s.log.Warn("failed to pre-fetch receipts", "err", err)
    time.Sleep(200 * time.Millisecond)
}
waitErr := s.preFetchReceiptsRateLimiter.Wait(ctx)
Would it be better to put this rate-limiter Wait before the L1BlockRefByNumber call?
Once the rate-limiting threshold is triggered, that would avoid some unnecessary L1BlockRefByNumber calls.
The L1BlockRefByNumber requests are all necessary:
Before we reach the latest block height, each L1BlockRefByNumber call uses a different block number, so every result is useful.
Once we reach the latest block height, continuously requesting L1BlockRefByNumber lets us start the downstream processing as soon as a new block appears. If we put a limiter in front of that call, we would still have to wait out the limiter after a new block appears before moving on.
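To illustrate the placement being defended, here is a hedged sketch reusing the stand-in types from the sketch above (the limiter values are arbitrary, and golang.org/x/time/rate is assumed as the limiter in play): the header lookup polls unthrottled so a new head is picked up immediately, while only the receipt fetches consume limiter tokens.

```go
package prefetch

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// throttledPreFetch places the limiter after the header lookup: polling for
// a new head is never rate-limited, so downstream work can start as soon as
// a block appears, while receipt fetches are throttled.
func throttledPreFetch(ctx context.Context, src l1Source, currentL1Block uint64) error {
	limiter := rate.NewLimiter(rate.Limit(10), 10) // ~10 receipt fetches/sec, burst 10

	for {
		// Unthrottled: at the head this is effectively a poll for the next block.
		blockInfo, err := src.L1BlockRefByNumber(ctx, currentL1Block)
		if err != nil {
			time.Sleep(200 * time.Millisecond)
			continue
		}
		// Waiting before L1BlockRefByNumber instead would add up to one
		// token interval of delay after every new head.
		if err := limiter.Wait(ctx); err != nil {
			return err
		}
		go func(ref L1BlockRef) { _ = src.FetchReceipts(ctx, ref.Hash) }(blockInfo)
		currentL1Block++
	}
}
```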
/LGTM
I changed the value of MaxConcurrentRequests from 10 to 20 in ab8dcdf and modified the limiter in GoOrUpdatePreFetchReceipts to use half of MaxConcurrentRequests. This is because if GoOrUpdatePreFetchReceipts took up all of MaxConcurrentRequests, other call sites that need to make requests would be throttled. See the code at opbnb/op-node/sources/limit.go, line 19 (commit e62988a).
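For context, a simplified sketch of the concurrency-cap idea behind that limiter (an assumed shape, not the actual limit.go code): a buffered-channel semaphore bounds in-flight calls, so handing the pre-fetcher only MaxConcurrentRequests/2 leaves the other half of the capacity for other callers.

```go
package prefetch

import "context"

// RPC is a minimal stand-in for the wrapped client interface.
type RPC interface {
	CallContext(ctx context.Context, result any, method string, args ...any) error
}

// limitedRPC caps in-flight calls with a buffered-channel semaphore.
type limitedRPC struct {
	inner RPC
	sema  chan struct{} // buffered to the concurrency cap, e.g. MaxConcurrentRequests
}

func newLimitedRPC(inner RPC, maxConcurrent int) *limitedRPC {
	return &limitedRPC{inner: inner, sema: make(chan struct{}, maxConcurrent)}
}

func (l *limitedRPC) CallContext(ctx context.Context, result any, method string, args ...any) error {
	select {
	case l.sema <- struct{}{}: // acquire a concurrency slot...
	case <-ctx.Done(): // ...or bail out if the caller cancels first
		return ctx.Err()
	}
	defer func() { <-l.sema }() // release the slot when the call returns
	return l.inner.CallContext(ctx, result, method, args...)
}
```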
@krish-nr PTAL
* feature(op-node): concurrent pre-fetch receipts
* use background ctx in GoOrUpdatePreFetchReceipts
* change MaxConcurrentRequests from 10 to 20

Co-authored-by: Welkin <welkin.b@nodereal.com>
Description
I added the pre-fetch receipt logic in #57. However, if the L1 endpoint is not functioning well and its response time increases, the efficiency of pre-fetching drops significantly. To address this, we parallelize the pre-fetch across multiple future block heights, which preserves good performance even when the L1 endpoint is in poor condition.
Rationale
To mitigate low L1 endpoint performance, we added concurrency to improve the efficiency of pre-fetching receipts.
Example
none
Changes
Notable changes: