[Persistence] Don't persist ALL channel_monitors on every bitcoin block connection. #2647
Comments
Assigning to myself, will see if it is doable.
Adding more detail: This can be troublesome for large node operators with thousands of channels. It also causes a thundering herd problem (ref) and hammers the storage with many requests all at once.
Probably the easiest way to do this would be to have a config option and do the writes in batches.
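A rough sketch of what that batching could look like; `MonitorPersistConfig` and `persist_best_block_in_batches` are hypothetical names for illustration, not existing LDK APIs:

```rust
/// Hypothetical configuration knob (not an existing LDK field) for how
/// many monitors to persist per batch when a new best block connects.
struct MonitorPersistConfig {
    batch_size: usize,
}

/// Persist updated best_block state for all monitors in fixed-size
/// batches instead of firing one storage request per monitor at once.
fn persist_best_block_in_batches<M, F>(
    monitors: &[M],
    cfg: &MonitorPersistConfig,
    mut persist_batch: F,
) where
    F: FnMut(&[M]),
{
    for batch in monitors.chunks(cfg.batch_size.max(1)) {
        // Each chunk becomes a single storage round-trip; the caller can
        // also yield between chunks to smooth out the IO load.
        persist_batch(batch);
    }
}
```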
Approach: This will cut down IO by a factor of 10 to 50 but doesn't solve the thundering herd problem, since all monitors will still rush to get persisted after the same block. So the idea is to introduce a somewhat random yet deterministic distribution scheme for monitor persists. This partitioning strategy alleviates the thundering herd issue and the hot-partition problem for monitor persists, and lets us spread the IO load roughly evenly. For a node with 500 channels, this should cut down IO from 250k monitor persist calls to ~5-6k persists in an 8 hour interval. Note this also means that on node restart, a monitor will be at most 50 blocks out of date and we will need to sync it.
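A minimal sketch of one possible "random yet deterministic" scheme, assuming monitors are keyed by their funding outpoint and spread over a 50-block window; the names (`MonitorKey`, `PERSIST_INTERVAL`, `should_persist`) are illustrative, not LDK APIs:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Spread monitor persists over this many blocks (~50 as suggested
/// above), so a monitor is at most 50 blocks stale after a restart.
const PERSIST_INTERVAL: u32 = 50;

/// Hypothetical key identifying a channel monitor, e.g. by its funding
/// outpoint; the real identifier in LDK may differ.
#[derive(Hash)]
struct MonitorKey {
    funding_txid: [u8; 32],
    output_index: u16,
}

/// Map a monitor to a fixed slot in the interval. Any hash that is
/// stable across restarts works; DefaultHasher is used here only to
/// keep the sketch dependency-free.
fn persist_slot(key: &MonitorKey) -> u32 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() % u64::from(PERSIST_INTERVAL)) as u32
}

/// Persist this monitor's best_block update only on its assigned slot.
fn should_persist(key: &MonitorKey, block_height: u32) -> bool {
    block_height % PERSIST_INTERVAL == persist_slot(key)
}
```

On each connected block, the chain-sync loop would call `should_persist` for every monitor and only write the ~1/50th whose slot matches, spreading the writes evenly while keeping each monitor at most `PERSIST_INTERVAL` blocks behind on disk.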
Currently, on every bitcoin block update we persist all channel_monitors with an updated best_block.
This can be troublesome for large node operators with thousands of channels.
It also causes a thundering herd problem (ref) and hammers the storage with many requests all at once.
Ideal outcome: After doing this, LDK's IO footprint should be reduced by ~50 times, and the processing time to sync each block will be drastically reduced (this can be very long for nodes with thousands of channels).