Support for modifying bitvec values from the chainspec #1705
Hi @alindima, thanks for your feedback. Can you send me an example of the chain spec and the value you want to update? Also, zombienet doesn't modify the sudo account defined in the spec, so most of the time you can just use Alice. Thanks!!
Here's an excerpt of the network config file:
Here's the HostConfiguration field defined: https://github.com/paritytech/polkadot-sdk/blob/84a246c24a5d3c410a2e4f7d63c0215decddb2b1/polkadot/runtime/parachains/src/configuration.rs#L266
Here's the exposed extrinsic for setting it: https://github.com/paritytech/polkadot-sdk/blob/84a246c24a5d3c410a2e4f7d63c0215decddb2b1/polkadot/runtime/parachains/src/configuration.rs#L1220
Cool. I just don't know how I could write a JS script for doing that. There are no docs about that AFAICT, and I haven't written JS in a while :D Thanks!
Hi @alindima, thanks for the info! I checked how the field is displayed in the rococo-local chain spec (I think it's using an empty bitvec):

```json
"node_features": {
  "bits": 0,
  "data": [],
  "head": {
    "index": 0,
    "width": 8
  },
  "order": "bitvec::order::Lsb0"
},
```

Also, it looks like we are using the default. Thx!
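Once the network is up, one way to inspect the live value is to read the configuration pallet's active config over polkadot-js. A minimal sketch, assuming a local node on ws://127.0.0.1:9944 and that the `node_features` field is exposed as `nodeFeatures` via the usual camelCase mapping (both are assumptions, not taken from this thread):

```js
// inspect-node-features.js — standalone sketch using @polkadot/api.
const { ApiPromise, WsProvider } = require("@polkadot/api");

async function main() {
  // The endpoint is a placeholder for whatever port zombienet assigned.
  const api = await ApiPromise.create({
    provider: new WsProvider("ws://127.0.0.1:9944"),
  });
  // ActiveConfig holds the HostConfiguration, including node_features.
  const config = await api.query.configuration.activeConfig();
  console.log(config.nodeFeatures.toHuman());
  await api.disconnect();
}

main().catch(console.error);
```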
Yeah, it would be good if we could have an easy way of setting this when building the chain spec, but I guess it's quite difficult to abstract the BitVec into the TOML format. I think setting this with sudo and the extrinsic would be simpler (as we only pass an index as input). If you could help me with writing a script that does that for zombienet, I think that would suffice. Thanks!
Hey @alindima, yes, I can write a small script. Can you send me the index to set? Thx!
Yes. The index doesn't matter, I can customise it (set it to 1).
Hey @alindima, sorry about the delay. This script sends the extrinsic; it is invoked via the zombienet DSL.
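A `js-script` line of roughly this shape would invoke it from a zombienet test file; the script path and timeout are placeholders, a sketch rather than the exact line used:

```
alice: js-script ./0001-set-node-feature.js return is 0 within 200 secs
```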
And the script (js):

```js
async function run(nodeName, networkInfo, _jsArgs) {
  const { wsUri, userDefinedTypes } = networkInfo.nodesByName[nodeName];
  const api = await zombie.connect(wsUri, userDefinedTypes);
  await zombie.util.cryptoWaitReady();

  // Account that signs and submits the sudo-wrapped extrinsic.
  const keyring = new zombie.Keyring({ type: "sr25519" });
  const alice = keyring.addFromUri("//Alice");

  // Set to true to wait for finalization instead of just inclusion.
  const finalization = false;

  // Wrap the subscription in a Promise so the script only returns once the
  // transaction is included (or finalized, if requested).
  await new Promise(async (resolve, reject) => {
    const unsub = await api.tx.sudo
      .sudo(api.tx.configuration.setNodeFeature(1, true))
      .signAndSend(alice, ({ status, isError }) => {
        if (status.isInBlock) {
          console.log(`Transaction included at blockHash ${status.asInBlock}`);
          if (finalization) {
            console.log("Waiting for finalization...");
          } else {
            unsub();
            return resolve();
          }
        } else if (status.isFinalized) {
          console.log(`Transaction finalized at blockHash ${status.asFinalized}`);
          unsub();
          return resolve();
        } else if (isError) {
          console.log("Transaction error");
          reject("Transaction error");
        }
      });
  });

  return 0;
}

module.exports = { run };
```

But this should be run as part of a testing suite; let me know if you want to just spawn the network and run the script later. Thx!
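If the suite should also assert on the result, a second `js-script` can read the value back. A sketch under the same assumed storage and field names (`activeConfig`, `nodeFeatures`) as above:

```js
async function run(nodeName, networkInfo, _jsArgs) {
  const { wsUri, userDefinedTypes } = networkInfo.nodesByName[nodeName];
  const api = await zombie.connect(wsUri, userDefinedTypes);

  // Read the HostConfiguration and log the node_features BitVec.
  const config = await api.query.configuration.activeConfig();
  console.log(`node_features: ${JSON.stringify(config.nodeFeatures.toHuman())}`);

  // A real test would decode the BitVec and assert that bit 1 is set;
  // returning 0 just marks the script as successful for the DSL assertion.
  return 0;
}

module.exports = { run };
```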
Thanks! I'll try that out soon and let you know if it works for me!
Hi! I get this error when putting this into a zombienet test:
Never mind, I was running the script from another validator (not Alice). It seems to be doing what I need now. Thanks!
**Don't look at the commit history; it's confusing, as this branch is based on another branch that was merged.**

Fixes #598
Also implements [RFC #47](polkadot-fellows/RFCs#47)

## Description

- Availability-recovery now first attempts to request the systematic chunks for large PoVs (the first ~n/3 chunks, which can recover the full data without the costly Reed-Solomon decoding process). If that fails for any reason, it falls back to recovering from all chunks. Additionally, backers are used as a backup for requesting the systematic chunks if the assigned validator is not offering the chunk (each backer is only used for one systematic chunk, so as not to overload them).
- Recovering from systematic chunks is much faster than recovering from regular chunks (4000% faster, as measured on my Apple M2 Pro).
- Introduces a `ValidatorIndex` -> `ChunkIndex` mapping that is different for every core, in order to avoid querying only the first n/3 validators over and over again in the same session. The mapping is the one described in RFC 47.
- The mapping is feature-gated by the [NodeFeatures runtime API](#2177) so that it can only be enabled via a governance call once a sufficient majority of validators have upgraded their client. If the feature is not enabled, the mapping is the identity mapping and backwards compatibility is preserved.
- Adds a new chunk request protocol version (v2), which adds the `ChunkIndex` to the response. This may or may not be checked against the expected chunk index: it is checked for av-distribution and systematic recovery, but not for regular recovery. This is backwards compatible: a v2 request is attempted first, and if that fails during protocol negotiation, v1 is used.
- Systematic recovery is only attempted during approval-voting, where we have easy access to the core index. For disputes and collator pov_recovery, regular chunk requests are used, just as before.

## Performance results

Some results from subsystem-bench:

- with regular chunk recovery: CPU usage per block 39.82s
- with recovery from backers: CPU usage per block 16.03s
- with systematic recovery: CPU usage per block 19.07s

End-to-end results here: #598 (comment)

#### TODO:

- [x] [RFC #47](polkadot-fellows/RFCs#47)
- [x] merge #2177 and rebase on top of those changes
- [x] merge #2771 and rebase
- [x] add tests
- [x] preliminary performance measure on Versi: see #598 (comment)
- [x] Rewrite the implementer's guide documentation
- [x] #3065
- [x] paritytech/zombienet#1705 and fix zombienet tests
- [x] security audit
- [x] final Versi test and performance measure

---------

Signed-off-by: alindima <alin@parity.io>
Co-authored-by: Javier Viola <javier@parity.io>
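A toy illustration of the per-core mapping idea may help: rotating the `ValidatorIndex` -> `ChunkIndex` assignment by core means the systematic chunk indices land on different validators for each core, instead of always hitting the same first n/3 validators. This is a hypothetical rotation for illustration only, not the exact RFC 47 formula (which also shuffles validators per session):

```js
// Toy model: 9 validators; the first ceil(n/3) chunk indices are systematic.
const nValidators = 9;
const systematicLen = Math.ceil(nValidators / 3);

// Hypothetical per-core rotation (NOT the real RFC 47 mapping).
function chunkIndex(validatorIndex, coreIndex) {
  return (validatorIndex + coreIndex) % nValidators;
}

for (let core = 0; core < 3; core++) {
  const holders = [];
  for (let v = 0; v < nValidators; v++) {
    if (chunkIndex(v, core) < systematicLen) holders.push(v);
  }
  // With the identity mapping, every core would print validators 0, 1, 2.
  console.log(`core ${core}: systematic chunks on validators [${holders.join(", ")}]`);
}
```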
I need to modify a HostConfiguration value from the chainspec in a zombienet test. The problem is that I need to flip a bit in a BitVec.
Alternatively, I need to be able to call an extrinsic on the relay chain with the sudo key, in order to flip this bit.
Any help is appreciated!