[pull] master from paritytech:master #43
Open: pull wants to merge 144 commits into subversive-upstream:master from paritytech:master
+37,628 −19,822
Conversation
Related issue: #7018. This PR replaces the duplicated per-runtime whitelists with `AllPalletsWithSystem::whitelisted_storage_keys()`. --------- Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de>
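For context, a minimal sketch of what such a call site typically looks like in a runtime's benchmark setup (the `AllPalletsWithSystem` alias is generated by `construct_runtime!`; this snippet is an illustration, not code copied from the PR):

```rust
// Illustrative only: collect the whitelisted keys declared by every pallet in
// one call, instead of maintaining a duplicated, hand-written list per runtime.
use frame_support::traits::WhitelistedStorageKeys;
use sp_storage::TrackedStorageKey;

fn benchmark_whitelist() -> Vec<TrackedStorageKey> {
    AllPalletsWithSystem::whitelisted_storage_keys()
}
```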
#7028) Currently `(A, B, C)` and `((A, B), C)` produce different orders of implications in the transaction extension pipeline. This order is not accessible in the metadata, because the metadata is just a flat vector of transaction extensions; the nested structure is not visible. This PR improves the `TransactionExtension` implementation for tuples of tuples, so that `(A, B, C)` and `((A, B), C)` yield the same implication for the validation of `A`. This is a breaking change, but only for code using the `TransactionExtension` trait; code implementing the trait does not break (surprising Rust behavior, but fine). --------- Co-authored-by: command-bot <> Co-authored-by: Bastian Köcher <git@kchr.de>
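Purely as an illustration of the two shapes being discussed (type aliases only, not code from the PR):

```rust
// With this change, both pipelines give the validation of `A` the same
// implication (`B` followed by `C`), regardless of how the tuple is nested.
type Flat<A, B, C> = (A, B, C);
type Nested<A, B, C> = ((A, B), C);
```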
…ion + frame system ReclaimWeight (#6140) (rebasing of #5234)

## Issues:

* Transaction extensions have weights and refund weight, so the reclaiming of unused weight must happen last in the transaction extension pipeline. Currently it happens inside `CheckWeight`.
* The cumulus storage weight reclaim transaction extension misses the proof size of logic happening prior to itself.

## Done:

* A new storage item `ExtrinsicWeightReclaimed` in frame-system. Any logic that attempts to reclaim weight must use this storage to avoid double reclaim.
* A new function `reclaim_weight` in the frame-system pallet: it takes info and post info as arguments, reads the already reclaimed weight, calculates the new unused weight from info and post info, and performs the more accurate reclaim if it is higher.
* `CheckWeight` is unchanged and still reclaims the weight in post dispatch.
* `ReclaimWeight` is a new transaction extension in frame-system. For solo chains it must be used last in the transaction extension pipeline. It does the final, most accurate reclaim.
* `StorageWeightReclaim` is moved from cumulus primitives into its own pallet (in order to define benchmarks) and is changed into a wrapping transaction extension. It records the proof size and does the reclaim using this recording together with the info and post info. Parachains therefore don't need to use `ReclaimWeight`; but if they do use it, there is no bug.

```rust
/// The TransactionExtension to the basic transaction logic.
pub type TxExtension = cumulus_pallet_weight_reclaim::StorageWeightReclaim<
    Runtime,
    (
        frame_system::CheckNonZeroSender<Runtime>,
        frame_system::CheckSpecVersion<Runtime>,
        frame_system::CheckTxVersion<Runtime>,
        frame_system::CheckGenesis<Runtime>,
        frame_system::CheckEra<Runtime>,
        frame_system::CheckNonce<Runtime>,
        frame_system::CheckWeight<Runtime>,
        pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
        BridgeRejectObsoleteHeadersAndMessages,
        (bridge_to_rococo_config::OnBridgeHubWestendRefundBridgeHubRococoMessages,),
        frame_metadata_hash_extension::CheckMetadataHash<Runtime>,
    ),
>;
```

--------- Co-authored-by: GitHub Action <action@github.com> Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Sebastian Kunert <skunert49@gmail.com> Co-authored-by: command-bot <>
I can't find the taplo version in the log, and the current version is incompatible with the latest one.
Co-authored-by: Dónal Murray <donal.murray@parity.io>
Fix this zombienet test. It was failing because in #6452 I enabled v2 receipts for testnet genesis, so the collators started sending v2 receipts with zeroed collator signatures to old validators that were still checking those signatures (which led to disputes, since new validators considered the candidates valid). The fix is to also use an old image for collators, so that we don't create v2 receipts. We cannot remove this test yet because collators also perform chunk recovery, so until all collators are upgraded, we need to maintain compatibility with the old protocol version (which is also why systematic recovery is not yet enabled).
…ue (#7050)

## Problem

In the parachain template we use the [fully verifying import queue](https://github.com/paritytech/polkadot-sdk/blob/3d9eddbeb262277c79f2b93b9efb5af95a3a35a8/cumulus/client/consensus/aura/src/equivocation_import_queue.rs#L224-L224), which does extra equivocation checks. However, when we import a warp-synced block with state, we don't set a fork choice, leading to an incomplete block import pipeline and an error here: https://github.com/paritytech/polkadot-sdk/blob/3d9eddbeb262277c79f2b93b9efb5af95a3a35a8/substrate/client/service/src/client/client.rs#L488-L488 This renders warp sync useless for chains using this import queue.

## Fix

The fix is to always import a block with state as the best block, as we already do in the normal Aura Verifier. In a follow-up we should also take another look at unifying the usage of the different import queues.

fixes paritytech/project-mythical#256 --------- Co-authored-by: command-bot <>
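A minimal sketch of the idea, assuming `sc_consensus`'s `BlockImportParams` with its `with_state()` helper and `fork_choice` field (not the exact diff from the PR):

```rust
use sc_consensus::{BlockImportParams, ForkChoiceStrategy};
use sp_runtime::traits::Block as BlockT;

// If the imported block carries state (the warp sync target), force it to
// become the best block so the block import pipeline is complete.
fn force_best_on_state_import<B: BlockT>(params: &mut BlockImportParams<B>) {
    if params.with_state() {
        params.fork_choice = Some(ForkChoiceStrategy::Custom(true));
    }
}
```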
Changes:
- Add call `import_member` to the core-fellowship pallet.
- Move common logic between `import` and `import_member` into `do_import`.

## `import_member`

Can be used to induct an arbitrary collective member and is callable by any signed origin. Pays no fees upon success. This is useful in the case that members did not induct themselves and are idling on their rank.

--------- Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: command-bot <>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
# Description

This PR removes usage of the deprecated `sp-std` from Substrate (following up on #5010).

## Integration

This PR doesn't remove the re-exported `sp_std` from any crates yet, so downstream projects using the re-exported `sp_std` will not be affected.

## Review Notes

The existing code using `sp-std` is refactored to use `alloc` and `core` directly. The key-value maps are instantiated from a vector of tuples directly instead of using the `sp_std::map!` macro. `sp_std::Writer` is a helper type for using `Vec<u8>` with the `core::fmt::Write` trait. This PR copied it into `sp-runtime`, because all crates using `sp_std::Writer` (including `sp-runtime` itself, `frame-support`, etc.) depend on `sp-runtime`. If this PR is merged, I will write follow-up PRs to remove the remaining usage of `sp-std` from `bridges` and `cumulus`.

--------- Co-authored-by: command-bot <> Co-authored-by: Guillaume Thiolliere <guillaume.thiolliere@parity.io> Co-authored-by: Bastian Köcher <info@kchr.de> Co-authored-by: Bastian Köcher <git@kchr.de>
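For reference, a rough sketch of what a `Writer`-style helper looks like (illustrative and assuming std; the exact code moved into `sp-runtime` may differ):

```rust
use core::fmt;

/// Wraps a byte buffer so it can be used as a `core::fmt::Write` sink.
#[derive(Default)]
pub struct Writer(pub Vec<u8>);

impl fmt::Write for Writer {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        self.0.extend_from_slice(s.as_bytes());
        Ok(())
    }
}
```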
# Description

Introduce a workflow `debug` input for `misc-sync-templates.yml` and use it instead of the `runner.debug` context variable, which is set to '1' when the `ACTIONS_RUNNER_DEBUG` env/secret is set (https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/troubleshooting-workflows/enabling-debug-logging#enabling-runner-diagnostic-logging). This is useful for controlling when to show debug prints.

## Integration

N/A

## Review Notes

Using `runner.debug` requires setting the `ACTIONS_RUNNER_DEBUG` env variable, but setting it to false/true is doable through an input, or by importing a variable from the GitHub env file (which requires a code change). This input alone can replace the entire `runner.debug` + `ACTIONS_RUNNER_DEBUG` setup, which simplifies debug printing, but it doesn't look as standard as `runner.debug`. I don't think it is a big deal overall for this action alone, but happy to account for other opinions. Note: setting `ACTIONS_RUNNER_DEBUG` whenever we want from a separate branch wouldn't be useful, because we cannot run the `misc-sync-templates.yml` action from any branch other than `master` (due to branch protection rules), so we need to expose this input so it is controllable from `master`.

--------- Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
# Description

Migrate pallet-node-authorization to use the umbrella crate. Part of #6504

## Review Notes

* This PR migrates pallet-node-authorization to use the umbrella crate.
* Some imports like the one below have not been added to any prelude, as they have very limited usage across the various pallets.
```rust
use sp_core::OpaquePeerId as PeerId;
```
* Added a commonly used runtime trait for testing to the `testing_prelude` in `substrate/frame/src/lib.rs`:
```rust
pub use sp_runtime::traits::BadOrigin;
```
* `weights.rs` uses the `weights_prelude` like:
```rust
use frame::weights_prelude::*;
```
* `tests.rs` and `mock.rs` use the `testing_prelude`:
```rust
use frame::testing_prelude::*;
```
* `lib.rs` uses the main `prelude` like:
```rust
use frame::prelude::*;
```
* For testing: checked that the local build works and tests run successfully.
# Description

Implements `NetworkRequest::request` for litep2p, which we need for networking benchmarks.

## Review Notes

Duplicates the implementation for `NetworkService`: https://github.com/paritytech/polkadot-sdk/blob/5bf9dd2aa9bf944434203128783925bdc2ad8c01/substrate/client/network/src/service.rs#L1186-L1205

--------- Co-authored-by: command-bot <>
Co-authored-by: Dónal Murray <donalm@seadanda.dev> Co-authored-by: Dónal Murray <donal.murray@parity.io> Co-authored-by: Shawn Tabrizi <shawntabrizi@gmail.com>
# Description

Seems like I added `SKIP_WASM_BUILD=1` 💀 for aarch64 binaries, which results in various errors like #6966. This PR unsets the variable. Closes #6966.

## Integration

People who found workarounds as in #6966 can consume the fixed binaries again.

## Review Notes

I introduced `SKIP_WASM_BUILD=1` for aarch64 for some reason (probably to speed up testing) and forgot to remove it. It slipped through and interfered with the `stable2412` release artifacts. Needs backporting to `stable2412` and then rebuilding/overwriting the aarch64 artifacts.

--------- Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
PR for #3581. Added a cfg to show a deprecation warning when using std. --------- Co-authored-by: command-bot <> Co-authored-by: Adrian Catangiu <adrian@parity.io>
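A hedged illustration of the general pattern (the item name here is hypothetical; the PR applies it to its own API): only emit the deprecation warning when the `std` feature is enabled.

```rust
// Hypothetical example item, not from the PR: the warning fires only for
// builds with the `std` feature enabled.
#[cfg_attr(feature = "std", deprecated(note = "see #3581: this path is deprecated when built with std"))]
pub fn std_only_helper() {}
```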
Changes:
1. Use the 0x0000000000000000000000000000000000000000 token address as Native ETH.
2. Convert it to/from `{ parents: 2, interior: X1(GlobalConsensus(Ethereum{chain_id: 1})) }` when encountered.

Onchain changes: this will require a governance request to register native ETH (with the above location) in the foreign assets pallet and make it sufficient.

Related solidity changes: Snowfork/snowbridge#1354

TODO:
- [x] Emulated Tests

--------- Co-authored-by: Vincent Geddes <117534+vgeddes@users.noreply.github.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Bastian Köcher <info@kchr.de>
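Schematically, the mapping could be expressed like this (a sketch using types assumed from the `staging-xcm` prelude; the real conversion lives in the Snowbridge primitives and may differ):

```rust
use xcm::latest::prelude::*;

/// The all-zero ERC-20 token address stands in for native ETH.
const NATIVE_ETH_ADDRESS: [u8; 20] = [0u8; 20];

/// The location native ETH converts to/from when the zero address is seen.
fn native_eth_location() -> Location {
    Location::new(2, [GlobalConsensus(NetworkId::Ethereum { chain_id: 1 })])
}
```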
Co-authored-by: Dónal Murray <donalm@seadanda.dev>
# Description

Migrating the salary pallet to use the umbrella crate. It is a follow-up to #7025. Why did I create this new branch? So that the unnecessary cargo fmt changes from the previous branch are discarded, hence this new PR.

## Review Notes

This PR migrates pallet-salary to use the umbrella crate. Added change: as requested, an explanation of why `TestExternalities` was replaced by `TestState`: the testing_prelude already includes it via `pub use sp_io::TestExternalities as TestState;`. I have also modified the `defensive!` macro to be compatible with the umbrella crate, as it was being used in the salary pallet.
# Description

- Used 10 notifications and requests within the benchmarks. After moving the network workers' initialization out of the benchmarks, it is acceptable to use this small number without losing precision.
- Removed the 128MB payload that consumed most of the execution time.
Collectives-westend was using `FixedWeightBounds`, meaning the same weight per instruction. Added proper benchmarks. --------- Co-authored-by: GitHub Action <action@github.com> Co-authored-by: Branislav Kontur <bkontur@gmail.com>
## Description

This PR deprecates `UnpaidLocalExporter` in favor of the new `LocalExporter`. First, the name is misleading, as it can be used in both paid and unpaid scenarios. Second, it contains a hard-coded channel 0, whereas `LocalExporter` uses the same algorithm as `xcm-exporter`.

## Future Improvements

Remove the `channel` argument and slightly modify the `ExportXcm::validate` signature as part of [this issue](https://github.com/orgs/paritytech/projects/145/views/8?pane=issue&itemId=84899273).

--------- Co-authored-by: command-bot <>
Update the current approach to attaching the `ref_time`, `pov` and `deposit` parameters to an Ethereum transaction. Previously we would pass these 3 parameters along with the signed payload and check that the fees resulting from `gas x gas_price` matched the actual fees paid by the user for the extrinsic. This approach unfortunately can be attacked: a malicious actor could force such a transaction to fail by injecting low values for some of these extra parameters, as they are not part of the signed payload. The new approach encodes these 3 extra parameters in the lower digits of the transaction gas, approximating the log2 of the actual values to encode each component on 2 digits. --------- Co-authored-by: GitHub Action <action@github.com> Co-authored-by: command-bot <>
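A loose sketch of the described encoding (plain arithmetic for illustration; the actual pallet-revive implementation and digit layout may differ):

```rust
/// Pack approximate log2 values of the three extra components into the six
/// lowest decimal digits of `gas` (two digits each), so they are covered by
/// the signed payload.
fn encode_extras(gas: u64, ref_time: u64, pov: u64, deposit: u64) -> u64 {
    // Roughly log2(v), clamped to two decimal digits.
    fn log2_2digits(v: u64) -> u64 {
        (64 - u64::from(v.leading_zeros())).min(99)
    }
    (gas / 1_000_000) * 1_000_000
        + log2_2digits(ref_time) * 10_000
        + log2_2digits(pov) * 100
        + log2_2digits(deposit)
}
```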
Reference hardware requirements have been bumped to at least 8 cores, so we can now allocate 50% of that capacity to PVF execution. --------- Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
# Description

This PR modifies the hard-coded size of the extrinsics cache within [`PoolRotator`](https://github.com/paritytech/polkadot-sdk/blob/cdf107de700388a52a17b2fb852c98420c78278e/substrate/client/transaction-pool/src/graph/rotator.rs#L36-L45) to be in line with the pool limits. The problem was that, due to the small size (compared to the number of txs in a single block) of the hard-coded size: https://github.com/paritytech/polkadot-sdk/blob/cdf107de700388a52a17b2fb852c98420c78278e/substrate/client/transaction-pool/src/graph/rotator.rs#L34 an excessive number of unnecessary verifications was performed in `prune_tags`: https://github.com/paritytech/polkadot-sdk/blob/cdf107de700388a52a17b2fb852c98420c78278e/substrate/client/transaction-pool/src/graph/pool.rs#L369-L370 This was resulting in quite long durations of `prune_tags` execution time (which was ok for 6s blocks, but becomes noticeable for 2s blocks):
```
Pruning at HashAndNumber { number: 83, ... }. Resubmitting transactions: 6142, reverification took: 237.818955ms
Pruning at HashAndNumber { number: 84, ... }. Resubmitting transactions: 5985, reverification took: 222.118218ms
Pruning at HashAndNumber { number: 85, ... }. Resubmitting transactions: 5981, reverification took: 215.546847ms
```
The fix reduces the overhead:
```
Pruning at HashAndNumber { number: 92, ... }. Resubmitting transactions: 6325, reverification took: 14.728354ms
Pruning at HashAndNumber { number: 93, ... }. Resubmitting transactions: 7030, reverification took: 23.973607ms
Pruning at HashAndNumber { number: 94, ... }. Resubmitting transactions: 4465, reverification took: 9.532472ms
```

## Review Notes

I decided to leave the hardcoded `EXPECTED_SIZE` for the legacy transaction pool. Removing verification of transactions during re-submission may negatively impact the behavior of the legacy (single-state) pool. As we probably want to deprecate the old pool in the long term, I did not invest time to assess the impact of the rotator change on the behavior of the legacy pool.

--------- Co-authored-by: command-bot <> Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
- Removed the old bench from cmd.py and kept an alias for backward compatibility.
- Reverted the frame-weight-template, as the problem was that the umbrella template wasn't picked correctly in the old benchmarks; frame-omni-bencher correctly identifies the dependencies and uses the correct template.

--------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR modifies `named_reserve()` in frame-balances to use checked math instead of defensive saturating math. The use of saturating math relies on the assumption that the sum of the values will always fit in `u128::MAX`. However, there is nothing preventing the implementing pallet from passing a larger value which overflows. This can happen if the implementing pallet does not validate user input and instead relies on `named_reserve()` to return an error (this saves an additional read). This is not a security concern, as the method will subsequently return an error thanks to `<Self as ReservableCurrency<_>>::reserve(who, value)?;`. However, `defensive_saturating_add` will panic under `--all-features`, creating false positive crashes in fuzzing operations. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
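The direction of the change, shown with a tiny standalone example (hypothetical helper, not the frame-balances internals): checked math surfaces the overflow as an error instead of saturating and tripping the defensive panic under `--all-features`.

```rust
/// Returns the new reserved total, or an error if the addition would overflow.
fn new_total_reserved(current: u128, value: u128) -> Result<u128, &'static str> {
    current.checked_add(value).ok_or("arithmetic overflow while reserving")
}
```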
This PR implements the block author API method. Runtimes ought to implement it such that it corresponds to the `coinbase` EVM opcode. --------- Signed-off-by: xermicus <cyrill@parity.io> Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com> Co-authored-by: command-bot <> Co-authored-by: Alexander Theißen <alex.theissen@me.com> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Migrating cumulus-pallet-session-benchmarking to the new benchmarking syntax v2. This is a part of #6202.

--------- Co-authored-by: seemantaggarwal <32275622+seemantaggarwal@users.noreply.github.com> Co-authored-by: Bastian Köcher <git@kchr.de>
This PR contains small fixes and backwards compatibility issues identified during work on the larger PR: #6906. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Related to: #7295 (comment) --------- Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Adrian Catangiu <adrian@parity.io>
…rks and testing (#7379)

# Description

Currently benchmarks and tests on pallet_balances would fail when the feature `insecure_zero_ed` is enabled. This PR allows running such benchmarks and tests, taking into account the fact that accounts are not deleted when their balance goes below a threshold.

--------- Co-authored-by: Rodrigo Quelhas <rodrigo_quelhas@outlook.pt>
# Description

Close #7122. This PR replaces the unmaintained `derivative` dependency with `derive-where`.

## Integration

This PR doesn't change the public interfaces.

## Review Notes

The `derivative` crate, previously used to derive basic traits for structs with generics or enums, is no longer actively maintained. It has been replaced with the `derive-where` crate, which offers a more straightforward syntax while providing the same features as `derivative`.

--------- Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
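For readers unfamiliar with the crate, a hedged example of the `derive-where` syntax (hypothetical struct, not from this PR): the generics listed after the `;` are the only ones that receive trait bounds, which is what lets it replace `derivative` for generic-heavy types.

```rust
use core::marker::PhantomData;
use derive_where::derive_where;

// Clone/Debug/Default are derived with bounds on `T` only; `U` stays unbounded.
#[derive_where(Clone, Debug, Default; T)]
struct Wrapper<T, U> {
    value: T,
    _marker: PhantomData<U>,
}
```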
- Added 3 links for subweight comparison: now, the release from ~1 month ago, and the release tag from ~3 months ago.
- Added `--3way --ours` flags for `git apply` to resolve potential conflicts.
- Stick to the weekly branch from start to end, to prevent race conditions with conflicts.
…n to all backing groups (#6924)

## Issues

- [[#5049] Elastic scaling: zombienet tests](#5049)
- [[#4526] Add zombienet tests for malicious collators](#4526)

## Description

Modified the undying collator to include a malus mode, in which it submits the same collation to all assigned backing groups.

## TODO

* [X] Implement malicious collator that submits the same collation to all backing groups;
* [X] Avoid the core index check in the collation generation subsystem: https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/node/collation-generation/src/lib.rs#L552-L553;
* [X] Resolve the mismatch between the descriptor and the commitments core index: #7104
* [X] Implement `duplicate_collations` test with zombienet-sdk;
* [X] Add PRdoc.
This should fix the error log related to the PoV pre-dispatch weight being lower than post-dispatch for `ParasInherent`:
```
ERROR tokio-runtime-worker runtime::frame-support: Post dispatch weight is greater than pre dispatch weight. Pre dispatch weight may underestimating the actual weight. Greater post dispatch weight components are ignored. Pre dispatch weight: Weight { ref_time: 47793353978, proof_size: 1019 }, Post dispatch weight: Weight { ref_time: 5030321719, proof_size: 135395 }
```
--------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR backports regular version bumps and prdoc reorganization from the stable release branch back to master.
# Description

There is a small error (which slipped through reviews) in the matrix strategy expansion, which results in errors like this: https://github.com/paritytech/polkadot-sdk/actions/runs/13079943579/job/36501002368.

## Integration

N/A

## Review Notes

Need to fix this in master and then rerun it manually against `stable2412-1`.

Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Part of #5079. Removes all usage of the static async backing params, replacing them with dynamically computed equivalent values (based on the claim queue and scheduling lookahead). Adds a new runtime API for querying the scheduling lookahead value. If not present, falls back to 3 (the default value that is backwards compatible with the allowed_ancestry_len values we have on production networks). Also resolves most of #4447, removing code that handles async backing not yet being enabled. While doing this, I removed the support for collation protocol version 1 on collators, as it only worked for leaves not supporting async backing (which are none). I also unhooked the legacy v1 statement-distribution (for the same reason as above). That subsystem is basically dead code now, so I had to remove some of its tests as they would no longer pass (since the subsystem no longer sends messages to the legacy variant). I did not remove the entire legacy subsystem yet, as that would pollute this PR too much. We can remove the entire v1 and v2 validation protocols in a follow-up PR. In another PR: remove test files named `prospective_parachains` (it'd pollute this PR if we did it now).

TODO:
- [x] add deprecation warnings
- [x] prdoc

--------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
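The fallback described above amounts to something like this on the node side (a sketch; the actual plumbing goes through the versioned ParachainHost runtime API, and the names here are assumed):

```rust
/// Value used when the runtime does not expose a scheduling lookahead yet;
/// matches the allowed_ancestry_len used on production networks.
const DEFAULT_SCHEDULING_LOOKAHEAD: u32 = 3;

fn effective_scheduling_lookahead(from_runtime_api: Option<u32>) -> u32 {
    from_runtime_api.unwrap_or(DEFAULT_SCHEDULING_LOOKAHEAD)
}
```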
Address failed CI after full regeneration. Example: #7406. Failed CI: https://github.com/paritytech/polkadot-sdk/actions/runs/13070646240 Monkey-patched weights had been overridden by automation. ![image](https://github.com/user-attachments/assets/ecf69173-f4dd-4113-a319-4f29d779ecae)
…te contracts (#7414) This PR changes the behavior of `instantiate` when the resulting contract address already exists (because the caller tried to instantiate the same contract with the same salt multiple times): instead of trapping the caller, return an error code. Solidity allows `catch`ing this, which doesn't work if we are trapping the caller. For example, the change makes the following snippet work:
```Solidity
try new Foo{salt: hex"00"}() returns (Foo) {
    // Instantiation was successful (contract address was free and constructor did not revert)
} catch {
    // This branch is expected to be taken if the instantiation failed because of a duplicate salt
}
```
`revive` PR: paritytech/revive#188

--------- Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Aligned `polkadot-omni-node` & `polkadot-parachain` versions. There is one `NODE_VERSION` constant, in `polkadot-omni-node-lib`, used by both binaries. Closes #7276.

## Integration

Node operators will know which versions of `polkadot-omni-node` & `polkadot-parachain` they use, since their versions will be kept in sync with the stable release `polkadot` SemVer version.

## Review Notes

TODO:
- [x] update NODE_VERSION of `polkadot-omni-node-lib` when running the branch-off workflow

--------- Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
…7439)

# Description

Another small fix for sync-templates. We copy the `polkadot-sdk` `parachain-template` files (including the `parachain-template-docs` crate's Cargo.toml) into the directory where we create the workspace with all the `parachain-template` member crates and the workspace's toml. The error is that in this workspace directory we first create the workspace's Cargo.toml and then copy the files of the `polkadot-sdk` `parachain-template`, including the `Cargo.toml` of the `parachain-template-docs` crate, which overwrites the workspace Cargo.toml. In the end we delete the `Cargo.toml` (which we assume belongs to the `parachain-template-docs` crate), forgetting that there should've previously been a workspace Cargo.toml, which should still be kept and committed to the template's repository. The error happens here: https://github.com/paritytech/polkadot-sdk/actions/runs/13111697690/job/36577834127

## Integration

N/A

## Review Notes

Once again, merging this into master requires re-running sync-templates based on the latest version on master. Hopefully this will be the last issue related to the workflow itself.

--------- Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
#### Description

During the 2s block investigation it turned out that the [ForkAwareTxPool::register_listeners](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/client/transaction-pool/src/fork_aware_txpool/fork_aware_txpool.rs#L1036) call takes a significant amount of time.
```
register_listeners: at HashAndNumber { number: 12, hash: 0xe9a1...0b1d2 } took 200.041933ms
register_listeners: at HashAndNumber { number: 13, hash: 0x5eb8...a87c6 } took 264.487414ms
register_listeners: at HashAndNumber { number: 14, hash: 0x30cb...2e6ec } took 340.525566ms
register_listeners: at HashAndNumber { number: 15, hash: 0x0450...4f05c } took 405.686659ms
register_listeners: at HashAndNumber { number: 16, hash: 0xfa6f...16c20 } took 477.977836ms
register_listeners: at HashAndNumber { number: 17, hash: 0x5474...5d0c1 } took 483.046029ms
register_listeners: at HashAndNumber { number: 18, hash: 0x3ca5...37b78 } took 482.715468ms
register_listeners: at HashAndNumber { number: 19, hash: 0xbfcc...df254 } took 484.206999ms
register_listeners: at HashAndNumber { number: 20, hash: 0xd748...7f027 } took 414.635236ms
register_listeners: at HashAndNumber { number: 21, hash: 0x2baa...f66b5 } took 418.015897ms
register_listeners: at HashAndNumber { number: 22, hash: 0x5f1d...282b5 } took 423.342397ms
register_listeners: at HashAndNumber { number: 23, hash: 0x7a18...f2d03 } took 472.742939ms
register_listeners: at HashAndNumber { number: 24, hash: 0xc381...3fd07 } took 489.625557ms
```
This PR implements the idea outlined in #7071. Instead of having a separate listener for every transaction in each view, we now use a single stream of aggregated events per view, with each stream providing events for all transactions in that view. Each event is represented as a tuple: (transaction-hash, transaction-status). This significantly reduces the time required for `maintain`.

#### Review Notes

- A single aggregated stream, provided by the individual view, delivers events in the form of `(transaction-hash, transaction-status)`.
- `MultiViewListener` now has a task. This task is responsible for:
  - polling the stream map (which consists of the individual views' aggregated streams) and the `controller_receiver`, which provides side-channel [commands](https://github.com/paritytech/polkadot-sdk/blob/2b18e080cfcd6b56ee638c729f891154e566e52e/substrate/client/transaction-pool/src/fork_aware_txpool/multi_view_listener.rs#L68-L95) (like `AddView` or `FinalizeTransaction`) sent from the _transaction pool_,
  - dispatching individual transaction statuses and control commands into the external (created via API, e.g. over RPC) listeners of individual transactions.
- The external listener is responsible for the status handling _logic_ (e.g. deduplication of events, or ignoring some of them) and for triggering statuses to the external world (_this was not changed_).
- The level of debug messages was adjusted (per-tx messages shall be _trace_).

Closes #7071

--------- Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
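Schematically, the per-view event shape looks something like this (a type-level sketch with assumed names, not the actual `MultiViewListener` code):

```rust
use futures::stream::BoxStream;
use sc_transaction_pool_api::TransactionStatus;

/// One aggregated stream per view, yielding (transaction-hash, status) pairs
/// for every transaction in that view, instead of one listener per transaction.
type ViewEvents<TxHash, BlockHash> =
    BoxStream<'static, (TxHash, TransactionStatus<TxHash, BlockHash>)>;
```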
Remove the specific fee amount checks in integration tests, since they change every time weights are regenerated.
Found via open-web3-stack/polkadot-ecosystem-tests#165. Closes #7370.

# Description

Some extrinsics from `pallet_nomination_pools` were not emitting events:
* `set_configs`
* `set_claim_permission`
* `set_metadata`
* `chill`
* `nominate`

## Integration

N/A

## Review Notes

N/A

--------- Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
…_base_deposit` (#7230)

This PR is centered around a main fix regarding the base deposit and a bunch of drive-by or related fixes that make sense to resolve in one go. It could be broken down more, but I am constantly rebasing this PR and would appreciate getting those fixes in as one.

**This adds a multi-block migration to Westend AssetHub that wipes the pallet state clean. This is necessary because of the changes to the `ContractInfo` storage item. It will not delete the child storage, though. This will leave a tiny bit of garbage behind but won't cause any problems; it will just be orphaned.**

## Record the deposit for immutable data into the `storage_base_deposit`

The `storage_base_deposit` is all the deposit a contract has to pay for existing. It included the deposit for its own metadata and a deposit proportional (< 1.0x) to the size of its code. However, the immutable code size was not recorded there. This would lead to the situation where on terminate this portion wouldn't be refunded, staying locked in the contract. It would also make the calculation of the deposit changes on `set_code_hash` more complicated when it updates the immutable data (to be done in #6985), because it wouldn't know how much was paid before, since the storage prices could have changed in the meantime. In order for this solution to work I needed to delay the deposit calculation for a new contract until after the contract is done executing its constructor, as only then do we know the immutable data size. Before, we just charged this eagerly in `charge_instantiate` before executing the constructor. Now, we merely send the ED as free balance before the constructor in order to create the account. After the constructor is done we calculate the contract base deposit and charge it. This will make `set_code_hash` much easier to implement. As a side effect it is now legal to call `set_immutable_data` multiple times per constructor (even though I see no reason to do so). It simply overrides the immutable data with the new value. The deposit accounting will be done after the constructor returns (as mentioned above) instead of when setting the immutable data.

## Don't pre-charge for reading immutable data

I noticed that we were pre-charging weight for the maximum allowable immutable data when reading those values, and then refunding after the read. This is not necessary, as we know its length without reading the storage because we store it out of band in the contract metadata. This makes reading it free. Less pre-charging, fewer problems.

## Remove delegate locking

Fixes #7092. This is also in the spirit of making #6985 easier to implement. The locking complicates `set_code_hash`, as we might need to block setting the code hash when locks exist. Check #7092 for further rationale.

## Enforce "no terminate in constructor" eagerly

We used to enforce this rule after the contract execution returned. Now we error out early in the host call. This makes it easier to argue that a contract info still exists (wasn't terminated) when a constructor successfully returns. All around, this is just much simpler than dealing with this check.

## Moved refcount functions to `CodeInfo`

They never really made sense to exist on `Stack`. But now with the locking gone this makes even less sense. The refcount is stored inside `CodeInfo`, so let's just move them there.

## Set `CodeHashLockupDepositPercent` for test runtime

The test runtime was setting `CodeHashLockupDepositPercent` to zero. This was trivializing many code paths and excluded them from testing. I set it to `30%`, which is our default value, and fixed up all the tests that broke. This should give us confidence that the lockup deposit collection works properly.

## Reworked the `MockExecutable` to have both a `deploy` and a `call` entry point

This type used for testing could only have either entry point but not both. In order to fix `immutable_data_set_overrides` I needed to add a new function `add_both` to `MockExecutable` that allows having both entry points. Make sure to make use of it in the future :)

--------- Co-authored-by: command-bot <> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: PG Herveou <pgherveou@gmail.com> Co-authored-by: Bastian Köcher <git@kchr.de> Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
We are using the substrate weights on the test net. Removing the benches so that they are not generated by accident and then not used.
This PR will make omni-node dev mode once again compatible with older runtimes. The changes introduced in #6825 changed constraints that are enforced in the runtime. For normal chains this should work fine, since we have real parameters there, like relay chain slots and parachain slots. For the manual seal parameters we need to respect the constraints while faking all the parameters. This PR should fix manual seal in omni-node to work with runtimes built before and after #6825 (I tested that). In the future, we should look into improving the parameterization here, possibly by introducing proper aura pre-digests so that the parachain slot moves forward. This will require quite a bit of refactoring on the manual seal node side, however. Issue: #7453 Also, the dev chain spec in the parachain template is updated. This makes it work with stable2412-1 and master omni-node. Once the changes here are backported and in a release, all combinations will work again. fixes #7341 --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Copy-pasted the `parachain-template-node` offchain worker setup to omni-node-lib for both aura and manual seal nodes. Closes #7447

## Integration

Enabled offchain workers for both `polkadot-omni-node` and `polkadot-parachain` nodes. This allows executing offchain logic in the runtime and considering it on the node side.

--------- Signed-off-by: Iulian Barbu <iulian.barbu@parity.io> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
See Commits and Changes for more details.
Created by pull[bot] (v2.0.0-alpha.1)
Can you help keep this open source service alive? 💖 Please sponsor : )