This repository has been archived by the owner on Nov 15, 2023. It is now read-only.
Having an unfinalised head grow indefinitely when finality halts for some reason can become extremely problematic as the client suffers a major slowdown in these circumstances. Slowing down block production to some degree until finality comes back online seems like a reasonable practical step towards mitigating this issue.
As the unfinalised head grows longer, we skip roughly that number of slots between authored blocks.

e.g. with an unfinalised head suffix length `l`: we take `interval = ((l - C) / M).clamp(0, X)` and author only when `slot > last_slot + interval`. Example parameters would be `X=100`, `C=5` and `M=2`, which give an incremental slowdown to authoring a block every 10 minutes once the unfinalised length has grown to 205 blocks (which will happen after approximately 8 hours). Until the unfinalised length exceeds 5 blocks, the chain operates as before.

`X`, `C` and `M` can be changed to offset the point at which the slowdown kicks in, reduce how fast it ramps up, or reduce its maximum effect.
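A minimal sketch of the rule above in Rust (function names and the `i64` slot type are illustrative, not from any existing client API; parameters follow the issue text):

```rust
/// Authoring interval as a function of the unfinalised suffix length `l`,
/// per the proposed formula: ((l - C) / M).clamp(0, X).
fn authoring_interval(l: i64, c: i64, m: i64, x: i64) -> i64 {
    ((l - c) / m).clamp(0, x)
}

/// Hypothetical authoring check: only author once the current slot is more
/// than `interval` slots past the last authored slot.
fn should_author(slot: i64, last_slot: i64, unfinalised_len: i64) -> bool {
    // Example parameters from the issue: X=100, C=5, M=2.
    let interval = authoring_interval(unfinalised_len, 5, 2, 100);
    slot > last_slot + interval
}
```

With these parameters and 6-second slots, an unfinalised suffix of 205 blocks gives `interval = ((205 - 5) / 2).clamp(0, 100) = 100` slots, i.e. one block every 10 minutes, matching the example above.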
It doesn't (necessarily) alter any on-chain logic, so theoretically validators could author blocks regardless. This is probably safest for now, but in principle could also be policed on-chain.
In Polkadot, we are planning to delay finality for a couple of minutes intentionally to give time for parachain disputes to come in. So we will probably want to have `C` in the range of 50-100 or something like that.