paraInclusion is massively overestimating its weight #849
Comments
I think we can probably solve this with a weight refund. My guess is this overestimate comes from the fact that we must assume a lot of things about how many parachains there are, how many are upgrading, how many are sending messages, etc. If we can track the real numbers and feed them back into the benchmark function, then we can refund any unused weight.
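For illustration, a minimal sketch of the FRAME weight-refund pattern being suggested here: charge the worst case up front and return the actual weight from the dispatchable, so the difference is refunded. All names besides the FRAME primitives (`WeightInfo::enter`, `MaxCandidates`, `ParasInherentData`, `process_inherent`) are hypothetical, not the real `paraInherent` code.

```rust
use frame_support::pallet_prelude::*;
use frame_system::pallet_prelude::*;

// Inside a `#[pallet::call]` block. Pre-charge the worst-case weight,
// then report what was actually consumed.
#[pallet::weight(T::WeightInfo::enter(T::MaxCandidates::get()))]
pub fn enter(origin: OriginFor<T>, data: ParasInherentData) -> DispatchResultWithPostInfo {
    ensure_none(origin)?;
    let n_processed = Self::process_inherent(data)?;
    // Returning `Some(actual_weight)` makes the executive refund the
    // difference between the pre-charged worst case and the actual weight.
    Ok(Some(T::WeightInfo::enter(n_processed)).into())
}
```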
It depends what weight means too: does off-chain work count? We only do availability when backing a parachain block, but …
I'm not sure I understand your question. The weight should reflect the time it takes validators to import the block. Otherwise we have the problem we see right now: blocks are 25% full without much in them. @burdges
Alright cool. We de facto rate limit approvals, etc. by the number of availability cores anyway, which makes sense.
Yeah @burdges, this only concerns the on-chain weight, not the implicit weight of coordinated off-chain stuff.
I think this should be tackled in the not-too-distant future. Perhaps someone from the FRAME team can give you a hand (as Zeke did last year), especially if someone knowledgeable is willing to help out.
Indeed. We need a better way of tracking weight. The current top-level benchmarking of `enter` … We might need to go back to (semi-)manual tracking, e.g. benchmarking building blocks and combining them. This would significantly reduce the number of parameters needed for any single benchmark and might make "proper" weight tracking actually feasible and manageable.
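A sketch of what such building-block composition could look like, combining a few small benchmark results arithmetically instead of one top-level benchmark over many interacting parameters. The component names (`base_inherent`, `per_bitfield`, `per_backed_candidate`) are hypothetical.

```rust
use frame_support::weights::Weight;

// Compose the inherent's weight from separately benchmarked components.
// Each component benchmark needs at most one parameter, instead of one
// giant benchmark parameterized over everything at once.
fn paras_inherent_weight<T: Config>(n_bitfields: u32, n_backed: u32) -> Weight {
    T::WeightInfo::base_inherent()
        .saturating_add(T::WeightInfo::per_bitfield().saturating_mul(n_bitfields as u64))
        .saturating_add(T::WeightInfo::per_backed_candidate().saturating_mul(n_backed as u64))
}
```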
Why was this so heavy in the first place? Just on-chain messages? If so, we could bill for inclusion (cheap) and message count (expensive).
The way it is calculated is just severely messed up, combined with benchmarking doing weird stuff when you have many parameters. We can chat if you are interested.
related: #959
@ordian you have been looking into this and related issues already, right?
## Overestimation

Looking at https://www.polkadot-weigher.com/history, we see the current block weight numbers for Kusama are around 75%. Similarly, Polkadot's weight is around 42% with 300 bitfields and 20 backed candidates per block. Importing these blocks in practice takes much less time, suggesting the benchmarks overestimate by more than 10x.

## Back-of-the-envelope

Looking at the bench numbers for Kusama, we have:
The weights differ significantly based on whether the weight is calculated against the RocksDB or the ParityDB backend (currently, it's RocksDB). Given that a typical block will contain …

## Causes

Currently, the way we calculate the weight of processing a bitfield and a backed candidate is by running the whole `enter` call …
## Short-term fix

A proposed short-term fix is to address problem 1 and switch to ParityDB weights, as that should be the default. The estimated weights with the proposed fix should be as follows (note that the bench weights would remain the same, but we subtract the weight of the empty parainherent when processing …):
Given these numbers, current Kusama blocks should be estimated to weigh 500 * 1.859ms + 20 * 2.763ms = 0.985s, i.e. 49% of the block weight budget.

With these (projected) numbers we should be able to scale to 1k validators with … See #5082, which addresses point 1. We can optimize this further by addressing point 2 in a follow-up PR (being worked on).

## Long-term fix

Instead of running the whole `enter` call …

## Conclusion

We are not blocked on `paras_inherent` weights for scaling the number of validators and cores further.
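As a quick sanity check on the arithmetic above, a throwaway sketch assuming the relay chain's 2s block weight (ref_time) budget:

```rust
// Back-of-the-envelope: 500 bitfields and 20 backed candidates per block,
// at the projected per-item weights quoted above.
fn main() {
    let per_bitfield_ms = 1.859; // projected weight per bitfield
    let per_candidate_ms = 2.763; // projected weight per backed candidate
    let total_ms = 500.0 * per_bitfield_ms + 20.0 * per_candidate_ms;
    let budget_ms = 2000.0; // assumed 2s block weight budget
    // prints: 984.76ms = 49% of the block budget
    println!("{total_ms:.2}ms = {:.0}% of the block budget", 100.0 * total_ms / budget_ms);
}
```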
closes #849

## Context

For the background on this and the long-term fix, see #849 (comment).

## Changes

* The weight files are renamed from `runtime_(parachains|common).*` to `polkadot_runtime_(parachains|common).*`. The reason is the renaming introduced in #4633; the weight files now generated by the weight command include the `polkadot_` prefix.
* The WeightInfo for `paras_inherent` now includes `enter_empty`, which calculates the cost of processing an empty parachains inherent. This cost is subtracted dynamically when calculating the other weights (so the other weights remain the same).

## Benefits

See #849 (comment), but the TL;DR is that we are not blocked on weights for scaling the number of validators and cores further.

Resolved questions:

- [x] Why are the new benchmarks for Westend doing fewer db IOPS? Is it due to the polkadot-sdk update (db IOPS diff), or is the bench setup no longer valid? https://github.com/polkadot-fellows/runtimes/blob/7723274a2c5cbb10213379271094d5180716ca7d/relay/polkadot/src/weights/runtime_parachains_paras_inherent.rs#L131-L196 Answer: see the background section of #5270

TODOs:

- [x] Rerun benchmarks for Rococo and Westend
- [x] PRDoc

Co-authored-by: command-bot <>
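A sketch of how that dynamic subtraction can work. `enter_empty` is the benchmark added by this PR and `enter_bitfields` exists in the `paras_inherent` WeightInfo, but the composition shown here is illustrative, not the PR's exact code.

```rust
use frame_support::weights::Weight;

// Each `enter_*` benchmark was run as a full inherent, so it bakes in the
// fixed cost of an empty inherent. Subtract `enter_empty()` to isolate the
// per-item cost, then add the fixed cost back exactly once.
fn bitfields_weight<T: Config>(n_bitfields: u32) -> Weight {
    let empty = T::WeightInfo::enter_empty();
    let per_bitfield = T::WeightInfo::enter_bitfields().saturating_sub(empty);
    empty.saturating_add(per_bitfield.saturating_mul(n_bitfields as u64))
}
```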
Hi, I just observed that on Kusama the paraInherent weight halved after we enacted "Update Kusama v1.3.0": it went from around 73% before the enactment to around 35% after. The number of candidates does not seem to have changed, so I guess the weights changed. That's good news, but was that expected?
Yes, that's expected. See #5082 (comment)
Take a look at some recent Polkadot blocks. Even empty blocks consume ~500ms of weight.
Benchmarking such blocks shows that only a fraction (~4%) of the weight is actually needed.
That means paraInclusion is massively overestimating its weight cost. @shawntabrizi noted that we could think about refunding weight here.
Output from `benchmark-block` on reference hardware with the production profile and the wasm executor: …