
Unbalanced and Balanced fungible conformance tests, and fungible fixes #1296

Merged 34 commits into master from liam/fungible-conformance-tests-balanced-unbalanced on Jan 15, 2024

Conversation

liamaharon
Contributor

@liamaharon liamaharon commented Aug 30, 2023

Original PR paritytech/substrate#14655


Partial #225

  • Adds conformance tests for Unbalanced
  • Adds conformance tests for Balanced
  • Several minor fixes to fungible default implementations and the Balances pallet
    • Unbalanced::decrease_balance can reap account when Preservation is Preserve
    • Balanced::pair can return pairs of imbalances which do not cancel each other out
    • Balances pallet active_issuance 'underflow'
    • Refactors the conformance test file structure to match the fungible file structure: tests for traits in regular.rs go into a test file named regular.rs, tests for traits in freezes.rs go into a test file named freezes.rs, etc.
  • Improve doc comments
  • Simplify macros

Fixes

Unbalanced::decrease_balance can reap account when called with Preservation::Preserve

There is a potential issue in the default implementation of Unbalanced::decrease_balance. The implementation can delete an account even when it is called with preservation: Preservation::Preserve. This seems to contradict the documentation of Preservation::Preserve:

```rust
	/// The account may not be killed and our provider reference must remain (in the context of
	/// tokens, this means that the account may not be dusted).
	Preserve,
```

I updated Unbalanced::decrease_balance to return Err(TokenError::BelowMinimum) when a withdrawal would cause the account to be reaped and preservation is Preservation::Preserve.

  • TODO Confirm with @gavofyork that this is correct behavior
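
To make the intended behavior concrete, here is a minimal standalone sketch of the kind of guard this adds. The `Preservation` and `TokenError` enums below only mirror the real ones in spirit, and `decrease_balance_sketch` is a hypothetical stand-in for the trait's default implementation, not the actual code:

```rust
/// Illustrative stand-in for the balance type used by the real trait.
type Balance = u128;

#[derive(Debug, PartialEq)]
enum Preservation {
	/// The account may not be killed (may not be dusted).
	Preserve,
	/// The account may be killed if the withdrawal takes it below the minimum.
	Expendable,
}

#[derive(Debug, PartialEq)]
enum TokenError {
	BelowMinimum,
}

/// Hypothetical model of the added guard: refuse a withdrawal that would dust
/// the account while the caller asked for `Preservation::Preserve`.
fn decrease_balance_sketch(
	balance: Balance,
	amount: Balance,
	minimum_balance: Balance,
	preservation: Preservation,
) -> Result<Balance, TokenError> {
	let new_balance = balance.saturating_sub(amount);
	if new_balance < minimum_balance && preservation == Preservation::Preserve {
		return Err(TokenError::BelowMinimum);
	}
	Ok(new_balance)
}

fn main() {
	// Account sits 10 units above a minimum of 5; withdrawing 11 would dust it.
	assert_eq!(
		decrease_balance_sketch(15, 11, 5, Preservation::Preserve),
		Err(TokenError::BelowMinimum)
	);
	// With `Expendable` the same withdrawal goes through and the account may be reaped.
	assert_eq!(decrease_balance_sketch(15, 11, 5, Preservation::Expendable), Ok(4));
}
```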

Test for this behavior:

```rust
/// Tests [`Unbalanced::decrease_balance`] called with [`Preservation::Preserve`].
pub fn decrease_balance_preserve<T, AccountId>()
where
	T: Unbalanced<AccountId>,
	<T as Inspect<AccountId>>::Balance: AtLeast8BitUnsigned + Debug,
	AccountId: AtLeast8BitUnsigned,
{
	// Setup account with some balance
	let account_0 = AccountId::from(0);
	let account_0_initial_balance = T::minimum_balance() + 10.into();
	T::increase_balance(&account_0, account_0_initial_balance, Precision::Exact).unwrap();
	// Decreasing the balance below the minimum with `Preservation::Preserve` should fail.
	let amount = 11.into();
	assert_eq!(
		T::decrease_balance(
			&account_0,
			amount,
			Precision::Exact,
			Preservation::Preserve,
			Fortitude::Polite,
		),
		Err(TokenError::BelowMinimum.into()),
	);
	// Balance should not have changed.
	assert_eq!(T::balance(&account_0), account_0_initial_balance);
}
```

Balanced::pair returning non-canceling pairs

Balanced::pair is supposed to create a pair of imbalances that cancel each other out. However, this is not the case when the method is called with an amount greater than the total supply.

In the existing default implementation, Balanced::pair creates a pair by first rescinding the balance, creating Debt, and then issuing the balance, creating Credit.

When creating Debt, if the amount to create exceeds the total_supply, total_supply units of Debt are created instead of amount units of Debt. This can lead to non-canceling amounts of Credit and Debt being created.

To address this, I create the credit and debt directly in the method instead of calling issue and rescind.
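
For intuition, here is a self-contained numeric model of the mismatch and of the fix. The `old_pair` and `new_pair` functions are illustrative stand-ins, not the trait's actual `issue`/`rescind` machinery:

```rust
type Balance = u128;

/// Models the old default: the debt side is produced by "rescinding" supply,
/// which saturates at the current total supply, while the credit side is
/// produced by "issuing" the full amount.
fn old_pair(total_supply: Balance, amount: Balance) -> (Balance, Balance) {
	let debt = amount.min(total_supply);
	let credit = amount;
	(credit, debt)
}

/// Models the fix: both halves are created directly from the same figure,
/// so they always cancel out.
fn new_pair(amount: Balance) -> (Balance, Balance) {
	(amount, amount)
}

fn main() {
	let total_supply = 100;
	let amount = 150; // greater than the total supply

	let (credit, debt) = old_pair(total_supply, amount);
	assert_ne!(credit, debt); // 150 of credit vs only 100 of debt

	let (credit, debt) = new_pair(amount);
	assert_eq!(credit, debt); // the pair cancels out again
}
```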

Test for this behavior:

```rust
/// Tests [`Balanced::pair`].
pub fn pair<T, AccountId>()
where
	T: Balanced<AccountId>,
	<T as Inspect<AccountId>>::Balance: AtLeast8BitUnsigned + Debug,
	AccountId: AtLeast8BitUnsigned,
{
	// Pair zero balance works
	let (credit, debt) = T::pair(0.into());
	assert_eq!(debt.peek(), 0.into());
	assert_eq!(credit.peek(), 0.into());
	// Pair with non-zero balance: the credit and debt cancel each other out
	let balance = 10.into();
	let (credit, debt) = T::pair(balance);
	assert_eq!(credit.peek(), balance);
	assert_eq!(debt.peek(), balance);
	// Pair with max balance: the credit and debt still cancel each other out
	let balance = T::Balance::max_value() - 1.into();
	let (debt, credit) = T::pair(balance);
	assert_eq!(debt.peek(), balance);
	assert_eq!(credit.peek(), balance);
}
```

Balances pallet active_issuance 'underflow'

This PR resolves an issue in the Balances pallet that can lead to odd behavior of active_issuance.

Currently, the Balances pallet doesn't check if InactiveIssuance remains less than or equal to TotalIssuance when supply is deactivated. This allows InactiveIssuance to be greater than TotalIssuance, which can result in unexpected behavior from the perspective of the fungible API.

active_issuance is derived from TotalIssuance.saturating_sub(InactiveIssuance).

If an amount is deactivated that causes InactiveIssuance to become greater than TotalIssuance, active_issuance will return 0. However, once in that state, reactivating an amount will not increase active_issuance by the reactivated amount as expected.

Consider this test where the last assertion would fail due to this issue:

```rust
/// Tests [`Unbalanced::deactivate`] and [`Unbalanced::reactivate`].
pub fn deactivate_and_reactivate<T, AccountId>()
where
	T: Unbalanced<AccountId>,
	<T as Inspect<AccountId>>::Balance: AtLeast8BitUnsigned + Debug,
	AccountId: AtLeast8BitUnsigned,
{
	T::set_total_issuance(10.into());
	assert_eq!(T::total_issuance(), 10.into());
	assert_eq!(T::active_issuance(), 10.into());
	T::deactivate(2.into());
	assert_eq!(T::total_issuance(), 10.into());
	assert_eq!(T::active_issuance(), 8.into());
	// Saturates at total_issuance
	T::reactivate(4.into());
	assert_eq!(T::total_issuance(), 10.into());
	assert_eq!(T::active_issuance(), 10.into());
	// Decrements correctly after saturating at total_issuance
	T::deactivate(1.into());
	assert_eq!(T::total_issuance(), 10.into());
	assert_eq!(T::active_issuance(), 9.into());
	// Saturates at zero
	T::deactivate(15.into());
	assert_eq!(T::total_issuance(), 10.into());
	assert_eq!(T::active_issuance(), 0.into());
	// Increments correctly after saturating at zero
	T::reactivate(1.into());
	assert_eq!(T::total_issuance(), 10.into());
	assert_eq!(T::active_issuance(), 1.into());
}
```

To address this, I've modified the deactivate function to ensure InactiveIssuance never surpasses TotalIssuance.
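
A tiny standalone model of the capped bookkeeping follows; the struct and method names here are illustrative, not the pallet's actual storage items or functions:

```rust
type Balance = u128;

/// Illustrative model of the pallet's issuance bookkeeping.
struct Issuance {
	total: Balance,
	inactive: Balance,
}

impl Issuance {
	/// The fix: clamp `inactive` so it can never exceed `total`.
	fn deactivate(&mut self, amount: Balance) {
		self.inactive = self.inactive.saturating_add(amount).min(self.total);
	}

	fn reactivate(&mut self, amount: Balance) {
		self.inactive = self.inactive.saturating_sub(amount);
	}

	/// `active_issuance` is total issuance minus inactive issuance.
	fn active(&self) -> Balance {
		self.total.saturating_sub(self.inactive)
	}
}

fn main() {
	let mut issuance = Issuance { total: 10, inactive: 0 };

	// Deactivate more than the total supply; without the clamp,
	// `inactive` would become 15 here.
	issuance.deactivate(15);
	assert_eq!(issuance.active(), 0);

	// With the clamp, reactivating now moves `active_issuance` as expected.
	issuance.reactivate(1);
	assert_eq!(issuance.active(), 1);
}
```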

@liamaharon liamaharon added T1-FRAME This PR/Issue is related to core FRAME, the framework. T10-tests This PR/Issue is related to tests. labels Aug 30, 2023
liamaharon and others added 2 commits September 27, 2023 23:29
Co-authored-by: Muharem <ismailov.m.h@gmail.com>
…/fungible-conformance-tests-balanced-unbalanced
@liamaharon liamaharon requested review from a team October 16, 2023 01:10
@liamaharon
Contributor Author

@muharem finally got around to addressing your comments. Thanks.

…/fungible-conformance-tests-balanced-unbalanced
@paritytech-review-bot paritytech-review-bot bot requested a review from a team November 2, 2023 16:42
@paritytech-review-bot paritytech-review-bot bot requested a review from a team November 3, 2023 10:29
@liamaharon liamaharon changed the title fungible conformance tests: Unbalanced and Balanced Unbalanced and Balanced fungible conformance tests, and fungible fixes Jan 15, 2024
@liamaharon liamaharon added this pull request to the merge queue Jan 15, 2024
Merged via the queue into master with commit 46090ff Jan 15, 2024
122 of 123 checks passed
@liamaharon liamaharon deleted the liam/fungible-conformance-tests-balanced-unbalanced branch January 15, 2024 13:20
ahmadkaouk pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request Jan 21, 2024
ahmadkaouk pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request Jan 29, 2024
bgallois pushed a commit to duniter/duniter-polkadot-sdk that referenced this pull request Mar 25, 2024
github-merge-queue bot pushed a commit that referenced this pull request Apr 4, 2024
Part of #226 
Related #1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
- Required creating a new `StakingPotAccountId` struct that implements `TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and `AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of `ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter` instead of `CurrencyAdapter`
- [x] Blocked by #1296, needs the `Unbalanced::decrease_balance` fix
Ank4n pushed a commit that referenced this pull request Apr 9, 2024
dharjeezy pushed a commit to dharjeezy/polkadot-sdk that referenced this pull request Apr 9, 2024
serban300 pushed a commit to serban300/parity-bridges-common that referenced this pull request Apr 9, 2024
serban300 added a commit to paritytech/parity-bridges-common that referenced this pull request Apr 9, 2024
* Migrate fee payment from `Currency` to `fungible` (#2292)

Part of paritytech/polkadot-sdk#226
Related paritytech/polkadot-sdk#1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
- Required creating a new `StakingPotAccountId` struct that implements `TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and `AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of `ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter` instead of `CurrencyAdapter`
- [x] Blocked by paritytech/polkadot-sdk#1296, needs the `Unbalanced::decrease_balance` fix

(cherry picked from commit bda4e75ac49786a7246531cf729b25c208cd38e6)

* Upgrade `trie-db` from `0.28.0` to `0.29.0` (#3982)

- What does this PR do?
1. Upgrades `trie-db`'s version to the latest release. This release includes, among others, an implementation of `DoubleEndedIterator` for the `TrieDB` struct, allowing iteration both backwards and forwards within the leaves of a trie.
2. Upgrades `trie-bench` to `0.39.0` for compatibility.
3. Upgrades `criterion` to `0.5.1` for compatibility.
- Why are these changes needed?
Besides keeping up with the upgrade of `trie-db`, this specifically adds the ability to iterate backwards over the leaves of a trie with `sp-trie`. In a project we're currently working on, this comes in very handy for verifying a Merkle proof that is the response to a challenge. The challenge is a random hash that (most likely) will not be an existing leaf in the trie, so the challenged user has to provide a Merkle proof of the previous and next existing leaves in the trie, which surround the random challenged hash.

Without having DoubleEnded iterators, we're forced to iterate until we
find the first existing leaf, like so:
```rust
        // ************* VERIFIER (RUNTIME) *************
        // Verify proof. This generates a partial trie based on the proof and
        // checks that the root hash matches the `expected_root`.
        let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
        let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

        // Print all leaf node keys and values.
        println!("\nPrinting leaf nodes of partial tree...");
        for key in trie.key_iter().unwrap() {
            if key.is_ok() {
                println!("Leaf node key: {:?}", key.clone().unwrap());

                let val = trie.get(&key.unwrap());

                if val.is_ok() {
                    println!("Leaf node value: {:?}", val.unwrap());
                } else {
                    println!("Leaf node value: None");
                }
            }
        }

        println!("RECONSTRUCTED TRIE {:#?}", trie);

        // Create an iterator over the leaf nodes.
        let mut iter = trie.iter().unwrap();

        // First element with a value should be the previous existing leaf to the challenged hash.
        let mut prev_key = None;
        for element in &mut iter {
            if element.is_ok() {
                let (key, _) = element.unwrap();
                prev_key = Some(key);
                break;
            }
        }
        assert!(prev_key.is_some());

        // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
        assert!(prev_key.unwrap() <= challenge_hash.to_vec());

        // The next element should exist (meaning there is no other existing leaf between the
        // previous and next leaf) and it should be greater than the challenged hash.
        let next_key = iter.next().unwrap().unwrap().0;
        assert!(next_key >= challenge_hash.to_vec());
```

With DoubleEnded iterators, we can avoid that, like this:
```rust
        // ************* VERIFIER (RUNTIME) *************
        // Verify proof. This generates a partial trie based on the proof and
        // checks that the root hash matches the `expected_root`.
        let (memdb, root) = proof.to_memory_db(Some(&root)).unwrap();
        let trie = TrieDBBuilder::<LayoutV1<RefHasher>>::new(&memdb, &root).build();

        // Print all leaf node keys and values.
        println!("\nPrinting leaf nodes of partial tree...");
        for key in trie.key_iter().unwrap() {
            if key.is_ok() {
                println!("Leaf node key: {:?}", key.clone().unwrap());

                let val = trie.get(&key.unwrap());

                if val.is_ok() {
                    println!("Leaf node value: {:?}", val.unwrap());
                } else {
                    println!("Leaf node value: None");
                }
            }
        }

        // println!("RECONSTRUCTED TRIE {:#?}", trie);
        println!("\nChallenged key: {:?}", challenge_hash);

        // Create an iterator over the leaf nodes.
        let mut double_ended_iter = trie.into_double_ended_iter().unwrap();

        // First element with a value should be the previous existing leaf to the challenged hash.
        double_ended_iter.seek(&challenge_hash.to_vec()).unwrap();
        let next_key = double_ended_iter.next_back().unwrap().unwrap().0;
        let prev_key = double_ended_iter.next_back().unwrap().unwrap().0;

        // Since hashes are `Vec<u8>` ordered in big-endian, we can compare them directly.
        println!("Prev key: {:?}", prev_key);
        assert!(prev_key <= challenge_hash.to_vec());

        println!("Next key: {:?}", next_key);
        assert!(next_key >= challenge_hash.to_vec());
```
- How were these changes implemented and what do they affect?
All that is needed for this functionality to be exposed is changing the `trie-db` version number in all the applicable `Cargo.toml`s and re-exporting some additional structs from `trie-db` in `sp-trie`.

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
(cherry picked from commit 4e73c0fcd37e4e8c14aeb83b5c9e680981e16079)

* Update polkadot-sdk refs

* Fix Cargo.lock

---------

Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
Co-authored-by: Facundo Farall <37149322+ffarall@users.noreply.github.com>
serban300 added a commit to paritytech/parity-bridges-common that referenced this pull request Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request Apr 9, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request Apr 10, 2024
serban300 added a commit to serban300/polkadot-sdk that referenced this pull request Apr 10, 2024
* Migrate fee payment from `Currency` to `fungible` (paritytech#2292)

* Upgrade `trie-db` from `0.28.0` to `0.29.0` (paritytech#3982)
bkchr pushed a commit that referenced this pull request Apr 10, 2024
* Migrate fee payment from `Currency` to `fungible` (#2292)

* Upgrade `trie-db` from `0.28.0` to `0.29.0` (#3982)
EgorPopelyaev pushed a commit that referenced this pull request May 27, 2024
Part of #226
Related #1833

- Deprecate `CurrencyAdapter` and introduce `FungibleAdapter`
- Deprecate `ToStakingPot` and replace usage with `ResolveTo`
- This required creating a new `StakingPotAccountId` struct that implements
`TypedGet` for the staking pot account ID
- Update parachain common utils `DealWithFees`, `ToAuthor` and
`AssetsToBlockAuthor` implementations to use `fungible`
- Update runtime XCM Weight Traders to use `ResolveTo` instead of
`ToStakingPot`
- Update runtime Transaction Payment pallets to use `FungibleAdapter`
instead of `CurrencyAdapter` (a hedged sketch of this swap follows this list)
- [x] Blocked by #1296,
needs the `Unbalanced::decrease_balance` fix
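
For orientation, a minimal sketch of what the `CurrencyAdapter` to `FungibleAdapter` swap looks like in a runtime's transaction payment configuration. This is not code from the PR: `Runtime`, `RuntimeEvent`, `Balances`, `Balance` and `DealWithFees` are generic placeholders, and the exact set of associated types on `pallet_transaction_payment::Config` varies between releases.
```rust
// Hypothetical runtime snippet; the placeholder types are assumed to exist in
// the surrounding runtime (e.g. via `construct_runtime!`).
use frame_support::{traits::ConstU8, weights::IdentityFee};

impl pallet_transaction_payment::Config for Runtime {
    type RuntimeEvent = RuntimeEvent;
    // Previously, fees were charged through the `Currency`-based adapter:
    // type OnChargeTransaction =
    //     pallet_transaction_payment::CurrencyAdapter<Balances, DealWithFees<Runtime>>;
    // With the `fungible`-based adapter introduced by this migration:
    type OnChargeTransaction =
        pallet_transaction_payment::FungibleAdapter<Balances, DealWithFees<Runtime>>;
    type OperationalFeeMultiplier = ConstU8<5>;
    type WeightToFee = IdentityFee<Balance>;
    type LengthToFee = IdentityFee<Balance>;
    type FeeMultiplierUpdate = ();
}
```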
EgorPopelyaev pushed a commit that referenced this pull request May 27, 2024
Labels
T1-FRAME This PR/Issue is related to core FRAME, the framework. T10-tests This PR/Issue is related to tests.
Projects
Status: Audited
3 participants