frame_support::storage: Add StorageStreamIter #12721
Conversation
/// An iterator that streams values directly from storage.
We should also document that the value must not be modified as long as at least one stream iterator operates on it.
We could still make the errors and the code a little nicer, but functionally looks good to me.
if self.read >= self.length {
    None
Could reduce one level of indentation by doing an early exit.
if self.read >= self.length {
return None;
}
let num_cached = self.buffer.len() - self.buffer_pos;

into[..num_cached].copy_from_slice(&self.buffer[self.buffer_pos..]);
We could simplify this a little bit with `split_at_mut`, e.g.
let (out_already_read, out_remaining) = into.split_at_mut(self.buffer.len() - self.buffer_pos);
out_already_read.copy_from_slice(&self.buffer[self.buffer_pos..]);
And then later:
if let Some(length_minus_offset) =
sp_io::storage::read(&self.key, &mut out_remaining, self.offset)
{
if length_minus_offset as usize < out_remaining.len() {
return Err("Not enough data to fill the buffer".into())
}
self.ensure_total_length_did_not_change(length_minus_offset)?;
self.offset += out_remaining.len() as u32;
Ok(())
} else {
bot merge
Waiting for commit status.
Merge cancelled due to error. Error: Statuses failed for a0302c5
bot merge
Waiting for commit status.
Merge cancelled due to error. Error: Statuses failed for 10629aa
bot merge
Waiting for commit status.
This pull request has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/polkadot-release-analysis-v0-9-37/1736/1
## fixes KILTProtocol/ticket#2392

## Breaking Changes for us

~~None! 🥳~~ Edit: Forgot to also check with the try-runtime feature enabled. There is a small tweak necessary because of [this PR about on-runtime-upgrade](paritytech/substrate#13045). No database migrations, no runtime migrations and no new host functions.

## Polkadot Release Link

https://github.com/paritytech/polkadot/releases/tag/v0.9.37

## Release Analysis Forum Post

https://forum.polkadot.network/t/polkadot-release-analysis-v0-9-37/1736

## Cool new stuff that might be useful (or not)

* [frame_support::storage: Add StorageStreamIter](paritytech/substrate#12721)
  * If we have a `StorageValue` that contains something iterable, we can iterate over it directly, without first copying the memory via a regular `get()` call.
* [Add ensure_* mathematical methods](paritytech/substrate#12754)
  * The `checked_*` family of calls returns an `Option`, which in 99% of cases is mapped to an error.
  * `ensure_*` calls return an error directly, which can be propagated more easily with the `?` operator (see the sketch after this list).
* [Kusama shows how to express complex local origins in XCM messages](paritytech/polkadot#6273)
  * Perhaps the most interesting one in this release; it would be a good idea for @weichweich and @ntn-x2 to have a look into this.
* [pallet_uniques successor NFTv2 is out! 🥳 😄](paritytech/substrate#12765)
  * Finally we can have NFTs with owner-controlled metadata on our chain.
  * They even literally mention that this way users can write DIDs directly on their NFT!
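A rough sketch of the `ensure_*` ergonomics mentioned above; the `sp_runtime::traits::EnsureAdd` import path is an assumption based on where #12754 landed at the time of this release:

```rust
use sp_runtime::{traits::EnsureAdd, ArithmeticError};

// The `checked_*` style: the `Option` has to be mapped to an error by hand.
fn checked_style(a: u32, b: u32) -> Result<u32, ArithmeticError> {
    a.checked_add(b).ok_or(ArithmeticError::Overflow)
}

// The `ensure_*` style: the error comes back directly, so `?` composes
// without any `ok_or` boilerplate.
fn ensure_style(a: u32, b: u32) -> Result<u32, ArithmeticError> {
    let sum = a.ensure_add(b)?;
    Ok(sum)
}
```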
* Save
* Add some test
* Yep
* Move to its own file
* More work
* More
* Finish implementation
* Start resolving comments and fixing bugs
* Fix all review comments
* Update frame/support/src/storage/stream_iter.rs (Co-authored-by: Koute <koute@users.noreply.github.com>)
* Update frame/support/src/storage/stream_iter.rs (Co-authored-by: Koute <koute@users.noreply.github.com>)
* Review feedback
* FMT
* Okay, let's initialize the values...
* Fix... (Co-authored-by: Koute <koute@users.noreply.github.com>)
This PR adds the `StorageStreamIter` trait. The trait is currently only implemented for `StorageValue`s that store a "SCALE container type", e.g. `Vec`, `BTreeMap`, etc. A SCALE container type is a type that follows the encoding structure `Compact<u32>(len) ++ #( item.encode() )*`. The streaming iterator decodes container values with almost constant memory usage, in contrast to decoding the entire value into memory at once with a regular `get()` call. The memory of a runtime is constrained, especially when it comes to a single allocation. This can be used for things like decoding events in an offchain worker or other huge data. It could probably also be used for on-chain operations, but that may require some more performance optimizations.

Internally this works by using `sp_io::storage::read` to read chunks from the state as requested by the decoder. As each call to `read` is a host function call, there is an internal cache that currently stores 2048 bytes. This reduces the number of host calls and improves the performance of the iterator.
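To make the container encoding concrete, a minimal sketch using parity-scale-codec (imported as `codec`, as it is in Substrate); `MyStorageValue` in the comment is a hypothetical storage item, not part of this PR:

```rust
use codec::{Compact, Encode};

fn main() {
    let values: Vec<u32> = vec![1, 2, 3];

    // A SCALE container type encodes as `Compact<u32>(len)` followed by
    // the concatenated encoding of every item.
    let mut expected = Compact(values.len() as u32).encode();
    for v in &values {
        expected.extend(v.encode());
    }
    assert_eq!(values.encode(), expected);

    // With this PR, a `StorageValue` holding such a container can be
    // consumed item by item, e.g. `MyStorageValue::<T>::stream_iter()`,
    // instead of decoding the whole `Vec` into memory via `get()`.
}
```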