Add solo-to-para pallet to all runtimes #163
Merged
Commit 9922119

Crab Parachain
RuntimeVersion {
spec_name: "Crab Parachain",
impl_name: "Darwinia Crab Parachain",
authoring_version: 1,
- spec_version: 5350,
+ spec_version: 5351,
impl_version: 1,
transaction_version: 1,
}
+ Pallet: "SoloToPara" Darwinia Parachain
Darwinia Parachain
RuntimeVersion {
spec_name: "Darwinia Parachain",
impl_name: "Darwinia Parachain",
authoring_version: 1,
- spec_version: 5330,
+ spec_version: 5351,
impl_version: 1,
transaction_version: 1,
}
+ Pallet: "BridgeDarwiniaGrandpa"
+ Pallet: "BridgeDarwiniaMessages"
+ Pallet: "DarwiniaFeeMarket"
+ Pallet: "FromDarwiniaIssuing"
+ Pallet: "MessageRouter"
+ Pallet: "RemoteGovernance"
+ Pallet: "SoloToPara"
Pallet ParachainSystem
+ Entry: StorageEntryMetadata { name: "AuthorizedUpgrade", modifier: Optional, ty: Plain(UntrackedSymbol { id: 9, marker: PhantomData }), default: [0], docs: [" The next authorized upgrade, if there is one."] }
- Entry: StorageEntryMetadata { name: "AuthorizedUpgrade", modifier: Optional, ty: Plain(UntrackedSymbol { id: 9, marker: PhantomData }), default: [0], docs: [" The next authorized upgrade, if there is one."] }
+ Entry: StorageEntryMetadata { name: "LastDmqMqcHead", modifier: Default, ty: Plain(UntrackedSymbol { id: 151, marker: PhantomData }), default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" The last downward message queue chain head we have observed.", "", " This value is loaded before and saved after processing inbound downward messages carried", " by the system inherent."] }
- Entry: StorageEntryMetadata { name: "LastDmqMqcHead", modifier: Default, ty: Plain(UntrackedSymbol { id: 125, marker: PhantomData }), default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" The last downward message queue chain head we have observed.", "", " This value is loaded before and saved after processing inbound downward messages carried", " by the system inherent."] }
+ Entry: StorageEntryMetadata { name: "LastHrmpMqcHeads", modifier: Default, ty: Plain(UntrackedSymbol { id: 152, marker: PhantomData }), default: [0], docs: [" The message queue chain heads we have observed per each channel incoming channel.", "", " This value is loaded before and saved after processing inbound downward messages carried", " by the system inherent."] }
- Entry: StorageEntryMetadata { name: "LastHrmpMqcHeads", modifier: Default, ty: Plain(UntrackedSymbol { id: 126, marker: PhantomData }), default: [0], docs: [" The message queue chain heads we have observed per each channel incoming channel.", "", " This value is loaded before and saved after processing inbound downward messages carried", " by the system inherent."] }
+ Entry: StorageEntryMetadata { name: "RelevantMessagingState", modifier: Optional, ty: Plain(UntrackedSymbol { id: 146, marker: PhantomData }), default: [0], docs: [" The snapshot of some state related to messaging relevant to the current parachain as per", " the relay parent.", "", " This field is meant to be updated each block with the validation data inherent. Therefore,", " before processing of the inherent, e.g. in `on_initialize` this data may be stale.", "", " This data is also absent from the genesis."] }
- Entry: StorageEntryMetadata { name: "RelevantMessagingState", modifier: Optional, ty: Plain(UntrackedSymbol { id: 120, marker: PhantomData }), default: [0], docs: [" The snapshot of some state related to messaging relevant to the current parachain as per", " the relay parent.", "", " This field is meant to be updated each block with the validation data inherent. Therefore,", " before processing of the inherent, e.g. in `on_initialize` this data may be stale.", "", " This data is also absent from the genesis."] }
+ Entry: StorageEntryMetadata { name: "ValidationData", modifier: Optional, ty: Plain(UntrackedSymbol { id: 140, marker: PhantomData }), default: [0], docs: [" The [`PersistedValidationData`] set for this block.", " This value is expected to be set only once per block and it's never stored", " in the trie."] }
- Entry: StorageEntryMetadata { name: "ValidationData", modifier: Optional, ty: Plain(UntrackedSymbol { id: 114, marker: PhantomData }), default: [0], docs: [" The [`PersistedValidationData`] set for this block.", " This value is expected to be set only once per block and it's never stored", " in the trie."] }
Pallet PolkadotXcm
+ Entry: StorageEntryMetadata { name: "AssetTraps", modifier: Default, ty: Map { hashers: [Identity], key: UntrackedSymbol { id: 9, marker: PhantomData }, value: UntrackedSymbol { id: 4, marker: PhantomData } }, default: [0, 0, 0, 0], docs: [" The existing asset traps.", "", " Key is the blake2 256 hash of (origin, versioned `MultiAssets`) pair. Value is the number of", " times this pair has been trapped (usually just 1 if it exists at all)."] }
- Entry: StorageEntryMetadata { name: "AssetTraps", modifier: Default, ty: Map { hashers: [Identity], key: UntrackedSymbol { id: 9, marker: PhantomData }, value: UntrackedSymbol { id: 4, marker: PhantomData } }, default: [0, 0, 0, 0], docs: [" The existing asset traps.", "", " Key is the blake2 256 hash of (origin, versioned `MultiAssets`) pair. Value is the number of", " times this pair has been trapped (usually just 1 if it exists at all)."] }
Pallet Proxy
+ Entry: StorageEntryMetadata { name: "Announcements", modifier: Default, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData }, value: UntrackedSymbol { id: 326, marker: PhantomData } }, default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" The announcements made by the proxy (key)."] }
- Entry: StorageEntryMetadata { name: "Announcements", modifier: Default, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData }, value: UntrackedSymbol { id: 266, marker: PhantomData } }, default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" The announcements made by the proxy (key)."] }
Pallet Session
+ Entry: StorageEntryMetadata { name: "NextKeys", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData }, value: UntrackedSymbol { id: 197, marker: PhantomData } }, default: [0], docs: [" The next session keys for a validator."] }
- Entry: StorageEntryMetadata { name: "NextKeys", modifier: Optional, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 0, marker: PhantomData }, value: UntrackedSymbol { id: 172, marker: PhantomData } }, default: [0], docs: [" The next session keys for a validator."] }
+ Entry: StorageEntryMetadata { name: "QueuedKeys", modifier: Default, ty: Plain(UntrackedSymbol { id: 195, marker: PhantomData }), default: [0], docs: [" The queued keys for the next session. When the next session begins, these keys", " will be used to determine the validator's session keys."] }
- Entry: StorageEntryMetadata { name: "QueuedKeys", modifier: Default, ty: Plain(UntrackedSymbol { id: 170, marker: PhantomData }), default: [0], docs: [" The queued keys for the next session. When the next session begins, these keys", " will be used to determine the validator's session keys."] }
Pallet System
+ Entry: StorageEntryMetadata { name: "BlockHash", modifier: Default, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 4, marker: PhantomData }, value: UntrackedSymbol { id: 9, marker: PhantomData } }, default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" Map of block numbers to block hashes."] }
- Entry: StorageEntryMetadata { name: "BlockHash", modifier: Default, ty: Map { hashers: [Twox64Concat], key: UntrackedSymbol { id: 4, marker: PhantomData }, value: UntrackedSymbol { id: 9, marker: PhantomData } }, default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" Map of block numbers to block hashes."] }
+ Entry: StorageEntryMetadata { name: "EventTopics", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 9, marker: PhantomData }, value: UntrackedSymbol { id: 118, marker: PhantomData } }, default: [0], docs: [" Mapping between a topic (represented by T::Hash) and a vector of indexes", " of events in the `<Events<T>>` list.", "", " All topic vectors have deterministic storage locations depending on the topic. This", " allows light-clients to leverage the changes trie storage tracking mechanism and", " in case of changes fetch the list of events of interest.", "", " The value has the type `(T::BlockNumber, EventIndex)` because if we used only just", " the `EventIndex` then in case if the topic has the same contents on the next block", " no notification will be triggered thus the event might be lost."] }
- Entry: StorageEntryMetadata { name: "EventTopics", modifier: Default, ty: Map { hashers: [Blake2_128Concat], key: UntrackedSymbol { id: 9, marker: PhantomData }, value: UntrackedSymbol { id: 92, marker: PhantomData } }, default: [0], docs: [" Mapping between a topic (represented by T::Hash) and a vector of indexes", " of events in the `<Events<T>>` list.", "", " All topic vectors have deterministic storage locations depending on the topic. This", " allows light-clients to leverage the changes trie storage tracking mechanism and", " in case of changes fetch the list of events of interest.", "", " The value has the type `(T::BlockNumber, EventIndex)` because if we used only just", " the `EventIndex` then in case if the topic has the same contents on the next block", " no notification will be triggered thus the event might be lost."] }
+ Entry: StorageEntryMetadata { name: "Events", modifier: Default, ty: Plain(UntrackedSymbol { id: 15, marker: PhantomData }), default: [0], docs: [" Events deposited for the current block.", "", " NOTE: The item is unbound and should therefore never be read on chain.", " It could otherwise inflate the PoV size of a block.", "", " Events have a large in-memory size. Box the events to not go out-of-memory", " just in case someone still reads them from within the runtime."] }
- Entry: StorageEntryMetadata { name: "Events", modifier: Default, ty: Plain(UntrackedSymbol { id: 15, marker: PhantomData }), default: [0], docs: [" Events deposited for the current block.", "", " NOTE: The item is unbound and should therefore never be read on chain.", " It could otherwise inflate the PoV size of a block.", "", " Events have a large in-memory size. Box the events to not go out-of-memory", " just in case someone still reads them from within the runtime."] }
+ Entry: StorageEntryMetadata { name: "ParentHash", modifier: Default, ty: Plain(UntrackedSymbol { id: 9, marker: PhantomData }), default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" Hash of the previous block."] }
- Entry: StorageEntryMetadata { name: "ParentHash", modifier: Default, ty: Plain(UntrackedSymbol { id: 9, marker: PhantomData }), default: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], docs: [" Hash of the previous block."] } |