Suggestion: Drop blocks that have been unconfirmed for x amount of time #2113
Yea, that’s a good observation. We added that in v19 at the same time as the PoW prioritization: we kick things out of the memory area dedicated to tracking confirmation for blocks.
If there get to be X number of blocks, the node won’t spend any network traffic on lower-priority blocks until they’ve cleared. Then if there get to be Y number of blocks, the lowest-priority blocks are completely dropped from memory, and someone else on the network, probably the originator, will have to solicit votes for confirmation.
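The two-threshold behavior described above can be sketched roughly as follows. This is an illustrative model only: the names, the X/Y values, and the data structure are assumptions for the sketch, not the node's actual implementation.

```python
# Illustrative sketch of the X/Y threshold behavior described above.
# X_LIMIT and Y_LIMIT are placeholder values; the real node's limits
# and container types differ.
X_LIMIT = 4  # above this, lower-priority blocks get no announcement traffic
Y_LIMIT = 6  # above this, the lowest-priority blocks are dropped entirely

class ActiveConfirmations:
    def __init__(self):
        self.blocks = {}  # block hash -> PoW difficulty (the priority)

    def insert(self, block_hash, difficulty):
        self.blocks[block_hash] = difficulty
        # Past Y, evict the lowest-difficulty blocks from memory; the
        # originator would have to solicit votes for them again later.
        while len(self.blocks) > Y_LIMIT:
            lowest = min(self.blocks, key=self.blocks.get)
            del self.blocks[lowest]

    def blocks_to_announce(self):
        # Past X, spend network traffic only on the highest-difficulty blocks.
        ranked = sorted(self.blocks, key=self.blocks.get, reverse=True)
        return ranked[:X_LIMIT]
```

Under this model, a flood of low-difficulty blocks first loses announcement traffic and then gets evicted, while high-difficulty blocks keep being announced.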
…On Sun, Jun 30, 2019 at 14:21 joohansson ***@***.***> wrote:
If too many blocks linger in the active confirmation queue for too long,
some of the lowest difficulty blocks will be dropped out to make room for
others.
Ok nice. Are blocks still stored on disk, in the ldb?
Yea, they're on disk, though they are above the confirmation height of the account, so they're not in a confirmed state. That means the node could trim blocks above the confirmation height, but this would essentially be the node trimming valid, yet unconfirmed, blocks.
So what does this mean regarding ledger bloating and possible cleanup of the ldb to save space in case it grows toward infinity? Can it be done? Sorry for my noob questions, just verifying what you mean by "trim".
This would only clean unconfirmed transactions, it wouldn't clean confirmed ones, and I'd rather have a holistic solution instead of one for just this case. In the future we can use tiered storage to move infrequently accessed blocks or dust amounts into slower, cloud-based storage. The cloud storage would replicate each block a few times on the network instead of every processing node keeping it stored locally; this trades off bandwidth for storage. But it's important to keep in mind that storage is still fairly cheap; it's just a matter of infrastructure tuning to get the optimal enterprise-scale setup.
Is it confirmed that blocks are being removed from active confirmations? During high-load tests we've seen the active confirmations count from the RPC reach into the hundreds of thousands, and the node starts to slow down: RPC commands take longer to respond and the confirmation rate slows. What's the metric to identify that blocks are being dropped after Y blocks build up? And how would we measure the blocks impacted by X number of active confirmations, to know which are being voted on?

One thought is to be more aggressive in dropping from active confirmations so it doesn't build beyond 10-20k blocks, and then increase the confirmation height processor speed once active confirmations is low: instead of limiting it to 1000 blocks, increase it to 5-10k.

Lastly, I would suggest changing the confirm_req process to prioritize peers with high weight. If it's only sending to random peers, there's a high chance it hits peers with little to no weight, which is wasted traffic. If instead it could prioritize confirmation requests to peers with high weight, it would decrease the time it takes to confirm blocks added by the confirmation height processor. In my recording it takes 120+ seconds to confirm the 1000 blocks that get added to active confirmations, which seems awfully slow when there is negligible network traffic.
@Srayman An issue was identified with the dropping of blocks out of active confirmations, which is being included in the release: #2116.

For the dropping metric: the 2 lowest adjusted-difficulty blocks which have been in the container for 2 or more announcement rounds will be dropped (unless they were produced by the node wallet) after a new block is inserted into the container, but only when the count of transactions in the container is at or above the config limit (active_elections_size, default 8000).

For the more aggressive confirmation height processing, we are also looking into some options, as this will make sense alongside the dropping fix.

For prioritizing high-weight peers on confirm_req, it appears to be a worthwhile item to explore, as I don't believe it happens now. It may make sense to have a floor weight below which no request would be made (perhaps 1000 weight, as is set by default as the weight required for voting), and then it could be worth splitting the reps into tiers based on weight: selecting a higher portion of peers from the heavy-weight nodes, a smaller portion from middle-weight nodes, and the smallest portion from the lowest-weight nodes. This could help improve confirmation times while also providing a better spread of traffic than just hitting the top nodes all the time.

@clemahieu Any thoughts on these points?
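The dropping rule stated above (2 lowest adjusted-difficulty blocks, 2+ announcement rounds, wallet-produced blocks exempt, only at or above the config limit) can be sketched like this. Field and function names here are illustrative, not the node's actual API.

```python
def blocks_to_drop(elections, active_elections_size=8000):
    """Sketch of the dropping rule described above (names are illustrative).

    elections: list of dicts with keys 'hash', 'adjusted_difficulty',
    'announcement_rounds', 'from_local_wallet'.
    Returns up to the 2 lowest adjusted-difficulty elections that have seen
    2 or more announcement rounds and were not produced by the node wallet,
    but only when the container is at or above the configured limit.
    """
    if len(elections) < active_elections_size:
        return []
    candidates = [e for e in elections
                  if e["announcement_rounds"] >= 2
                  and not e["from_local_wallet"]]
    candidates.sort(key=lambda e: e["adjusted_difficulty"])
    return [e["hash"] for e in candidates[:2]]
```

In the real node this check would run after each new block is inserted into the container, so the container size hovers at the limit rather than growing unboundedly.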
I asked that a test be added to the fixing PR but other than that it's good to get feedback. Ballooning memory is definitely an issue. |
And I agree, we should look at the confirm_req prioritization. We have a function for it to order the result by rep weight. |
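The weight-tiered confirm_req targeting floated earlier in the thread could look something like the sketch below. The floor weight of 1000 comes from the discussion; the three-way tier split and the 6/4/2 request proportions are assumptions made for illustration, not the node's behavior.

```python
def select_confirm_req_targets(reps, total=12, floor_weight=1000):
    """Illustrative sketch of tiered confirm_req targeting by rep weight.

    reps: list of (peer_id, weight). Peers below floor_weight are skipped
    entirely. The rest are split into heavy / middle / light thirds by
    weight, and more requests go to the heavier tiers (the 6/4/2 split
    per `total=12` is an assumption for the sketch).
    """
    eligible = sorted((r for r in reps if r[1] >= floor_weight),
                      key=lambda r: r[1], reverse=True)
    n = len(eligible)
    heavy = eligible[:n // 3]
    middle = eligible[n // 3:2 * n // 3]
    light = eligible[2 * n // 3:]
    # Heavier tiers get a larger share of the requests.
    picks = heavy[:total // 2] + middle[:total // 3] + light[:total // 6]
    return [peer for peer, _ in picks]
```

Compared with purely random targeting, this keeps most traffic on peers whose votes actually move a block toward quorum, while still touching lighter reps for spread.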
Updates here for V20:
Closing this issue since it's captured by the newer bounded block backlog issues: Parent issue: Related sub-issues: |
Maybe there already is a mechanism of this kind, but if there isn't, this is the issue:
Consider the edge case where new blocks are being published at a higher rate than the network can confirm, i.e. BPS (blocks/s) > CPS (confirmations/s), for a very long time or indefinitely. The data must be saved somewhere, and that storage will become increasingly harder to access and sort, up to a point of failure (maybe, not proven). It would also take up valuable disk space, i.e. bloating.
The suggestion is to simply remove blocks from that space after a given amount of time, for example 24h. The normal PoW prioritization will make sure only the lowest-PoW blocks are removed, and it will be up to the sender (wallet) to make sure they're republished (with higher PoW to make sure they're prioritized higher). A typical spam attacker would then need to increase the cost of producing the spam, or it will continue to be a worthless attempt to disrupt the system.
There would probably need to be other criteria as well, such as the network being congested; otherwise blocks could accidentally be removed if the network stops voting for x time due to a low quorum.
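The proposal above (time-based eviction, lowest PoW first, gated on congestion so a low-quorum lull doesn't drop blocks by accident) can be sketched as follows. All names here are hypothetical; this is the suggestion as described, not anything the node implements.

```python
MAX_AGE_SECONDS = 24 * 60 * 60  # the 24h example from the suggestion

def evict_stale_blocks(pending, now, congested):
    """Sketch of the suggested eviction policy (all names are illustrative).

    pending: dict of block hash -> (pow_difficulty, arrival_time).
    Blocks older than MAX_AGE_SECONDS are removed, lowest PoW first,
    and only while the network is congested, so a quiet or low-quorum
    network does not accidentally drop blocks.
    Returns the list of evicted hashes; the sender's wallet would be
    expected to republish them with higher PoW.
    """
    if not congested:
        return []
    stale = [(pow_d, h) for h, (pow_d, t) in pending.items()
             if now - t > MAX_AGE_SECONDS]
    stale.sort()  # lowest PoW evicted first
    evicted = [h for _, h in stale]
    for h in evicted:
        del pending[h]
    return evicted
```

This captures the economic argument in the suggestion: a spammer's low-PoW blocks age out and must be re-published at higher cost, while ordinary senders are rarely affected.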
There is a reference in here but I'm not sure if it's related: https://medium.com/nanocurrency/dynamic-proof-of-work-prioritization-4618b78c5be9