Support for modifying bitvec values from the chainspec #1705

Closed
alindima opened this issue Jan 26, 2024 · 10 comments
Labels: question (Further information is requested)

@alindima

I need to modify a HostConfiguration value from the chainspec in a zombienet test. The problem is that I need to flip a bit in a BitVec.

Alternatively, I need to be able to call an extrinsic on the relay chain with the sudo key, in order to flip this bit.

Any help is appreciated!

@pepoviola (Collaborator)

Hi @alindima, thanks for your feedback. Can you send me an example of the chain-spec and the value you want to update? Also, zombienet doesn't modify the sudo account defined in the spec, so most of the time you can just use Alice.

Thanks!!

@alindima (Author)

Here's an excerpt of the network config file:

[settings]
timeout = 1000
bootnode = true

[relaychain.genesis.runtimeGenesis.patch.configuration.config]
  max_validators_per_core = 5
  needed_approvals = 10
  # I need to set a bit here.
  # node_features = ?

[relaychain]
default_image = "{{ZOMBIENET_INTEGRATION_TEST_IMAGE}}"
chain = "rococo-local"
default_command = "polkadot"

Here's where the HostConfiguration field is defined: https://github.com/paritytech/polkadot-sdk/blob/84a246c24a5d3c410a2e4f7d63c0215decddb2b1/polkadot/runtime/parachains/src/configuration.rs#L266
And the type is a BitVec: https://github.com/paritytech/polkadot-sdk/blob/84a246c24a5d3c410a2e4f7d63c0215decddb2b1/polkadot/primitives/src/vstaging/mod.rs#L56

Here's the exposed extrinsic for setting it: https://github.com/paritytech/polkadot-sdk/blob/84a246c24a5d3c410a2e4f7d63c0215decddb2b1/polkadot/runtime/parachains/src/configuration.rs#L1220

> Also, zombienet doesn't modify the sudo account defined in the spec, so most of the time you can just use Alice.

Cool. I just don't know how I could write a js script for doing that. There are no docs about that AFAICT, and I haven't written JS in a while :D

Thanks!

@pepoviola (Collaborator)

Hi @alindima, thanks for the info! I checked how the field is displayed in the rococo-local chain spec (I think it's using an empty bitvec):

            "node_features": {
              "bits": 0,
              "data": [],
              "head": {
                "index": 0,
                "width": 8
              },
              "order": "bitvec::order::Lsb0"
            },

Also, it looks like we are using the default serde serialization, and maybe we should add custom logic to make it easy to set the value from the chain-spec.
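
If someone did want to patch it directly in the chain-spec JSON, a small helper along these lines could set a bit in that serialized shape. This is just a sketch based on the structure above (reading `bits` as the bit length and `data` as the backing bytes in Lsb0 order); I haven't run it against the chain-spec builder:

function setNodeFeatureBit(nodeFeatures, index) {
  // make sure the backing byte array is long enough for the target bit
  const byteIndex = Math.floor(index / 8);
  while (nodeFeatures.data.length <= byteIndex) {
    nodeFeatures.data.push(0);
  }
  // Lsb0 order: bit 0 is the least significant bit of the first byte
  nodeFeatures.data[byteIndex] |= 1 << (index % 8);
  // the bit length has to cover the highest set bit
  nodeFeatures.bits = Math.max(nodeFeatures.bits, index + 1);
  return nodeFeatures;
}

e.g. setNodeFeatureBit(config.node_features, 1) on the parsed spec before writing it back out.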

Thx!

@alindima (Author) commented Feb 1, 2024

Yeah, it would be good if we had an easy way of setting this when building the chainspec, but I guess it's quite difficult to express the BitVec in the TOML format.

I think setting this with sudo and the extrinsic would be simpler (as we only pass an index as input). If you could help me with writing a script that does that for zombienet, I think that would suffice. Thanks!

@pepoviola (Collaborator)

Hey @alindima, yes, I can write a small js script that sets the value through an extrinsic. To be on the same page, the flow will be:

  • start the whole network
  • run the js script to set the BitVec value

Can you send me the index to set?

Thx!

pepoviola self-assigned this on Feb 2, 2024
pepoviola added the question (Further information is requested) label on Feb 2, 2024
@alindima (Author) commented Feb 2, 2024

Yes. The index doesn't matter, I can customise it (set it to 1).
This is the extrinsic: https://github.com/paritytech/polkadot-sdk/blob/84a246c24a5d3c410a2e4f7d63c0215decddb2b1/polkadot/runtime/parachains/src/configuration.rs#L1220 (on the configuration pallet). It has to be run with sudo.

Just set_node_feature(1, true)

@pepoviola (Collaborator)

Hey @alindima, sorry about the delay. The script sends the extrinsic and is invoked from the zombienet DSL like this:

# configure relay chain
alice: js-script ./set-nodeFeature.js  return is 0 within 600 secs

And the script (js):

async function run(nodeName, networkInfo, _jsArgs) {
  const { wsUri, userDefinedTypes } = networkInfo.nodesByName[nodeName];
  const api = await zombie.connect(wsUri, userDefinedTypes);

  await zombie.util.cryptoWaitReady();

  // account to submit the tx (Alice is the sudo key in the default spec)
  const keyring = new zombie.Keyring({ type: "sr25519" });
  const alice = keyring.addFromUri("//Alice");

  // wrap signAndSend in a promise so the script only returns once the
  // sudo(setNodeFeature) call has been included in a block
  await new Promise(async (resolve, reject) => {
    const unsub = await api.tx.sudo
      .sudo(api.tx.configuration.setNodeFeature(1, true))
      .signAndSend(alice, ({ status, isError }) => {
        if (status.isInBlock) {
          console.log(
            `Transaction included at blockHash ${status.asInBlock}`,
          );
          unsub();
          resolve();
        } else if (isError) {
          console.log(`Transaction error`);
          unsub();
          reject(new Error("Transaction error"));
        }
      });
  });

  return 0;
}

module.exports = { run };

But this should be run as part of a testing suite, let me know if you want to just spawn the network and run the script later.
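
If you want the test to also check the value afterwards, a second js-script could read the active configuration back, something like the sketch below (check-nodeFeature.js is just a name I made up, configuration.activeConfig is assumed to be the exposed storage query, and I think the configuration pallet only applies the update at a session boundary, so the check may need to wait a bit):

async function run(nodeName, networkInfo, _jsArgs) {
  const { wsUri, userDefinedTypes } = networkInfo.nodesByName[nodeName];
  const api = await zombie.connect(wsUri, userDefinedTypes);

  // dump the active host configuration; node_features should show the new bit
  // once the pending configuration update has been applied
  const config = await api.query.configuration.activeConfig();
  console.log(JSON.stringify(config.toHuman(), null, 2));

  return 0;
}

module.exports = { run };

And in the DSL, something like: alice: js-script ./check-nodeFeature.js return is 0 within 600 secs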

Thx!

@alindima (Author) commented Feb 7, 2024

Thanks! I'll try that out soon and let you know if it works for me!

@alindima (Author) commented Feb 9, 2024

Hi! I get this error when putting this into a zombienet test:

UnhandledRejection:      RpcError: 1014: Priority is too low: (2174005775598 vs 2174005775598): The transaction has too low priority to replace another transaction already in the pool.

@alindima (Author) commented Feb 9, 2024

Never mind, I was running the script from another validator (not Alice). It seems to be doing what I need now. Thanks!

alindima closed this as completed on Feb 9, 2024
github-merge-queue bot pushed a commit to paritytech/polkadot-sdk that referenced this issue May 28, 2024
**Don't look at the commit history, it's confusing, as this branch is
based on another branch that was merged**

Fixes #598 
Also implements [RFC
#47](polkadot-fellows/RFCs#47)

## Description

- Availability-recovery now first attempts to request the systematic
chunks for large POVs (which are the first ~n/3 chunks, which can
recover the full data without doing the costly Reed-Solomon decoding
process). This has a fallback of recovering from all chunks, if for some
reason the process fails. Additionally, backers are also used as a
backup for requesting the systematic chunks if the assigned validator is
not offering the chunk (each backer is only used for one systematic
chunk, to not overload them).
- Quite obviously, recovering from systematic chunks is much faster than
recovering from regular chunks (4000% faster as measured on my Apple M2
Pro).
- Introduces a `ValidatorIndex` -> `ChunkIndex` mapping which is
different for every core, in order to avoid only querying the first n/3
validators over and over again in the same session. The mapping is the
one described in RFC 47.
- The mapping is feature-gated by the [NodeFeatures runtime
API](#2177) so that it
can only be enabled via a governance call once a sufficient majority of
validators have upgraded their client. If the feature is not enabled,
the mapping will be the identity mapping and backwards-compatibility
will be preserved.
- Adds a new chunk request protocol version (v2), which adds the
ChunkIndex to the response. This may or may not be checked against the
expected chunk index: for av-distribution and systematic recovery it
will be checked, but not for regular recovery. This is backwards
compatible. First, a v2 request is attempted. If that fails during
protocol negotiation, v1 is used.
- Systematic recovery is only attempted during approval-voting, where we
have easy access to the core_index. For disputes and collator
pov_recovery, regular chunk requests are used, just as before.

## Performance results

Some results from subsystem-bench:

with regular chunk recovery: CPU usage per block 39.82s
with recovery from backers: CPU usage per block 16.03s
with systematic recovery: CPU usage per block 19.07s

End-to-end results here:
#598 (comment)

#### TODO:

- [x] [RFC #47](polkadot-fellows/RFCs#47)
- [x] merge #2177 and
rebase on top of those changes
- [x] merge #2771 and
rebase
- [x] add tests
- [x] preliminary performance measure on Versi: see
#598 (comment)
- [x] Rewrite the implementer's guide documentation
- [x] #3065 
- [x] paritytech/zombienet#1705 and fix
zombienet tests
- [x] security audit
- [x] final versi test and performance measure

---------

Signed-off-by: alindima <alin@parity.io>
Co-authored-by: Javier Viola <javier@parity.io>
hitchhooker pushed a commit to ibp-network/polkadot-sdk that referenced this issue Jun 5, 2024
TarekkMA pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this issue Aug 2, 2024