Conversation
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
@ggwpez
You are probably using a network drive and not an NVMe SSD as mentioned in the wiki. In AWS it is probably possible to use multiple disks like in GCP; we should extend the explanation in the wiki for that. In GCP I used 4 network disks to achieve performance close to a local disk. cc @bakhtin
Why do we need faster disks instead of a faster CPU?
It would be nice to have a faster CPU as well, but the most widely available cloud hosts only offer server CPUs like Xeon and EPYC. These are mostly inferior in single-thread speed compared to consumer hardware like an Intel i7 or i9.
Okay, ty for the explanation!
@ggwpez My Disk Seq Write and Disk Rnd Write values are … My main confusion is: can I use this configuration to run a Polkadot/Kusama validator node? I mainly want to avoid getting my assets slashed, which should be related to the blocks the node produces, so as long as I have … And I think we should offer three configurations: …
I cannot tell you that on a case-by-case basis. These are just recommendations; they are neither hard requirements nor exhaustive. For more concrete advice you can ask in the 1KV Program Matrix/Discord chat; they often talk about their server hardware there.
What is the sTPS after this change? (as compared to 1,500)
@ggwpez I can't find the Matrix/Discord link you mentioned. I would also like to confirm whether the machines currently running Kusama are held to this same standard. Aren't there many parachains on the Kusama network? Does Kusama require the same machine performance as Polkadot?
@99Kies if this PR goes in, you should probably assume that both Polkadot and Kusama are updated to the same standard. Generally speaking, Kusama will always be as close to Polkadot as possible in these kinds of things, as that is the purpose of the canary network.
BTW, the discussion around this change continues in issue #13308. We will not merge this before we have solved the weight noise issue.
@99Kies see https://thousand-validators.kusama.network/#/getting-started; the Matrix channel is …
Currently getting a build error in the VM ref hardware image 🤦‍♂️. Will need to update the DB weights first.
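For context on what "updating the DB weights" entails: the per-backend storage weights live in the runtime as constants that are regenerated from storage benchmarks on the reference hardware. Below is a minimal sketch of such a module, assuming the usual `frame_support` types; the read/write figures are placeholders, not the numbers produced on the new reference machine.

```rust
// Sketch of a runtime DB-weight constants module using the standard
// `frame_support` types. The read/write figures are placeholders only; the
// real values come from re-running the storage benchmarks on the new
// reference hardware.
pub mod constants {
	use frame_support::{
		parameter_types,
		weights::{constants, RuntimeDbWeight},
	};

	parameter_types! {
		/// By default, Substrate uses RocksDB, so this will be the weight used
		/// throughout the runtime.
		pub const RocksDbWeight: RuntimeDbWeight = RuntimeDbWeight {
			// Placeholder: ~25 µs per storage read, expressed in ref-time.
			read: 25_000 * constants::WEIGHT_REF_TIME_PER_NANOS,
			// Placeholder: ~100 µs per storage write, expressed in ref-time.
			write: 100_000 * constants::WEIGHT_REF_TIME_PER_NANOS,
		};
	}
}
```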
Hey, is anyone still working on this? Due to the inactivity this issue has been automatically marked as stale. It will be closed if no further activity occurs. Thank you for your contributions. |
Not sure if the aforementioned issues have been solved, but this seems like something that we want to see merged.
@oleg-plakida just to check: Substrate and Polkadot are updated, and only Cumulus remains?
Rococo and Cumulus remain.
bot rebase
Rebased
bot merge
Error: Required status check "pr-custom-review" is cancelled.
bot merge
* Remove Polkadot Wiki

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Update requirements for new ref hardware

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

* Add test

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: parity-processbot <>
Updating to the new hardware specs: the CPU got slower but the disk got faster. This was the trade-off for choosing cloud VM machines.
The new numbers were generated on a Cloud reference server with:
Value changes:
I rounded the new disk speeds down a bit (971 → 950 and 445 → 420) since the disk benches are known to be less consistent than the CPU ones.
Closes #13308. Marking as noteworthy so this is mentioned in the change log.
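To illustrate how rounded-down minimums like the ones above could be encoded and checked against a machine's benchmark results, here is a hypothetical, self-contained sketch. It does not reproduce the actual `sc-sysinfo` API; the struct, the metric names, the `REFERENCE_DISK_REQUIREMENTS` constant, and the assumption that the figures are MiB/s are all illustrative.

```rust
/// Hypothetical sketch: encode the rounded disk minimums from this PR and
/// flag any measured metric that falls below its floor. Names, structure and
/// the MiB/s unit are illustrative; the real check lives in `sc-sysinfo`.
#[derive(Debug, Clone, Copy)]
struct Requirement {
	/// Name of the benchmarked metric.
	metric: &'static str,
	/// Minimum required throughput in MiB/s (assumed unit).
	minimum: f64,
}

/// Rounded-down disk minimums mentioned in the description (971 -> 950, 445 -> 420).
const REFERENCE_DISK_REQUIREMENTS: &[Requirement] = &[
	Requirement { metric: "DiskSeqWrite", minimum: 950.0 },
	Requirement { metric: "DiskRndWrite", minimum: 420.0 },
];

/// Returns the names of all measured metrics that fall below their minimum.
fn failing_metrics<'a>(
	measured: &[(&'a str, f64)],
	requirements: &[Requirement],
) -> Vec<&'a str> {
	measured
		.iter()
		.filter(|(name, value)| {
			requirements
				.iter()
				.any(|req| req.metric == *name && *value < req.minimum)
		})
		.map(|(name, _)| *name)
		.collect()
}

#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn below_minimum_is_reported() {
		// 900 MiB/s sequential write is below the 950 floor; random write passes.
		let measured = [("DiskSeqWrite", 900.0), ("DiskRndWrite", 430.0)];
		assert_eq!(
			failing_metrics(&measured, REFERENCE_DISK_REQUIREMENTS),
			vec!["DiskSeqWrite"]
		);
	}
}
```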