
Bound pallet-scheduler #11589

Closed · wants to merge 10 commits

Conversation

@ggwpez (Member) commented Jun 3, 2022

Fixes #11565

WIP:

  • MaxEncodedLen on Call and OriginCaller
  • If a call gets rescheduled because the preimage was not found, the rescheduling can fail if the agenda of that block is already full
  • rescheduling does not remove empty schedule vectors

Changes:

  • MaxScheduledPerBlock is now a hard limit instead of a soft limit.

Problems:

  • A schedule call is currently guaranteed to succeed when called with valid arguments. If we now bound the number of agendas per block (as needed for MaxEncodedLen), then someone could pre-fill a block's agenda and a governance schedule call targeting that block would fail. Currently this is not a problem, since the number of schedules per block is unbounded (sketched below).
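To make that failure mode concrete, here is a minimal standalone sketch (plain Rust with illustrative names, not the pallet's actual code) of why a hard MaxScheduledPerBlock limit turns scheduling into a fallible operation:

```rust
// Illustrative only: `Agenda`, `MAX_SCHEDULED_PER_BLOCK` and the task type stand in for
// the pallet's `Agenda` storage map, the `MaxScheduledPerBlock` bound and `Scheduled`.
const MAX_SCHEDULED_PER_BLOCK: usize = 50; // hypothetical bound

struct Agenda {
    // Stands in for a `BoundedVec<Scheduled, MaxScheduledPerBlock>`.
    tasks: Vec<&'static str>,
}

impl Agenda {
    // With a hard limit, appending to a full agenda must fail instead of growing it.
    fn try_append(&mut self, task: &'static str) -> Result<(), &'static str> {
        if self.tasks.len() >= MAX_SCHEDULED_PER_BLOCK {
            return Err("agenda for this block is already full");
        }
        self.tasks.push(task);
        Ok(())
    }
}

fn main() {
    let mut agenda = Agenda { tasks: Vec::new() };
    // Anyone can pre-fill the agenda of a target block...
    for _ in 0..MAX_SCHEDULED_PER_BLOCK {
        agenda.try_append("filler").unwrap();
    }
    // ...after which a later (e.g. governance) schedule call for that block fails.
    assert!(agenda.try_append("governance call").is_err());
}
```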

Special attention from reviewers please:

  • Changed current_storage_version to on_chain_storage_version, since the check otherwise does not verify that the migration actually updated the on-chain storage version (see the sketch after this list)
  • I put some relations between the new constants into the doc. Needs double checking.
  • Event::CallLookupFailed is unused, can I just remove it?
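On the storage-version point above, here is a standalone sketch (plain Rust with mock names; the real check uses FRAME's on-chain storage version) of why the comparison must read the version actually persisted on chain rather than the constant compiled into the pallet:

```rust
// The version the pallet code expects after the migration (mock of FRAME's
// `current_storage_version`).
const CURRENT_STORAGE_VERSION: u16 = 3;

// Mock of the chain state; in FRAME the on-chain version lives in storage.
struct MockChainState {
    on_chain_storage_version: u16,
}

fn run_migration(state: &mut MockChainState) {
    // ... translate the old storage layout here ...
    // Forgetting this bump is exactly the bug the post-upgrade check should catch:
    state.on_chain_storage_version = CURRENT_STORAGE_VERSION;
}

// Checking the *on-chain* value catches a migration that never bumped it; checking the
// compiled-in constant against itself would always pass.
fn post_upgrade_check(state: &MockChainState) -> Result<(), &'static str> {
    if state.on_chain_storage_version == CURRENT_STORAGE_VERSION {
        Ok(())
    } else {
        Err("migration did not update the on-chain storage version")
    }
}

fn main() {
    let mut state = MockChainState { on_chain_storage_version: 2 };
    run_migration(&mut state);
    assert!(post_upgrade_check(&state).is_ok());
}
```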

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
github-actions bot added the A3-in_progress (Pull request is in progress. No review needed at this stage.) label on Jun 3, 2022
ggwpez added 4 commits June 3, 2022 18:19
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
ggwpez added 3 commits June 5, 2022 14:08
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
@@ -680,21 +898,23 @@ impl<T: Config> Pallet<T> {
) -> Result<TaskAddress<T::BlockNumber>, DispatchError> {
let when = Self::resolve_time(when)?;
call.ensure_requested::<T::PreimageProvider>();
let call = EncodedCallOrHashOf::<T>::new(call)?;
@ggwpez (Member, Author) commented:

#[derive(MaxEncodedLen, Debug, Decode, Clone, Encode, PartialEq, Eq, scale_info::TypeInfo)]
#[codec(mel_bound(T: Config))]
#[scale_info(skip_type_params(T))]
pub struct EncodedCallOrHashOf<T: Config>(pub BoundedVec<u8, <T as Config>::MaxCallLen>);
defined here @shawntabrizi @KiChjang
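For context, a hedged sketch of how such a bounded wrapper can be constructed; DemoCall, EncodedCall and MaxLen are illustrative names, not the PR's actual new implementation, and the parity-scale-codec (imported as codec) and frame_support crates are assumed:

```rust
use codec::Encode; // `codec` = parity-scale-codec
use frame_support::{traits::ConstU32, BoundedVec};

// Stands in for `<T as Config>::MaxCallLen`.
type MaxLen = ConstU32<128>;

// A toy call type; the real pallet stores the runtime's `Call`.
#[derive(Encode)]
enum DemoCall {
    Remark { data: Vec<u8> },
}

/// Stands in for `EncodedCallOrHashOf<T>`: the call is kept SCALE-encoded and bounded,
/// which is what makes `MaxEncodedLen` derivable for the scheduler's storage items.
struct EncodedCall(BoundedVec<u8, MaxLen>);

impl EncodedCall {
    /// Fails (instead of truncating) when the encoded call exceeds the bound.
    fn new(call: &DemoCall) -> Result<Self, ()> {
        let bounded: BoundedVec<u8, MaxLen> = call.encode().try_into().map_err(|_| ())?;
        Ok(Self(bounded))
    }
}

fn main() {
    assert!(EncodedCall::new(&DemoCall::Remark { data: vec![0u8; 16] }).is_ok());
    assert!(EncodedCall::new(&DemoCall::Remark { data: vec![0u8; 512] }).is_err());
}
```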

@@ -93,12 +123,49 @@ struct ScheduledV1<Call, BlockNumber> {
maybe_periodic: Option<schedule::Period<BlockNumber>>,
}

// NOTE: use `Derivative` here until <https://github.com/rust-lang/rust/issues/26925> is fixed.
// The problem is that `#[derive(Clone)]` requires `Clone` for ALL its generic types,
// which does not make sense for `Get<>`.
@KiChjang (Contributor) commented Jun 5, 2022:

This means you should derive CloneNoBound.

@ggwpez (Member, Author) replied:

Was looking for something like this. Thank you, sir!
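For illustration, a small hedged sketch of what the suggestion amounts to (illustrative type names, not the PR's diff; assumes frame_support for the CloneNoBound derive): a plain #[derive(Clone)] would demand M: Clone even though M is only a marker, while CloneNoBound only requires the field types themselves to be Clone.

```rust
use core::marker::PhantomData;
use frame_support::CloneNoBound;

// A marker type used only as a type parameter; it is deliberately not `Clone`.
struct MaxLen;

// `#[derive(Clone)]` here would add an `M: Clone` bound, which makes no sense for a
// `Get<>`-style marker. `CloneNoBound` derives `Clone` without bounding `M`.
#[derive(CloneNoBound)]
struct Scheduled<M> {
    encoded_call: Vec<u8>,
    priority: u8,
    _marker: PhantomData<M>,
}

fn main() {
    let s: Scheduled<MaxLen> = Scheduled {
        encoded_call: vec![1, 2, 3],
        priority: 0,
        _marker: PhantomData,
    };
    let _copy = s.clone(); // compiles even though `MaxLen` is not `Clone`
}
```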

@@ -231,11 +310,37 @@ pub mod pallet {
/// be used. This will only check if two given origins are equal.
type OriginPrivilegeCmp: PrivilegeCmp<Self::PalletsOrigin>;

/// Maximum number of tasks that an be scheduled in total.
A contributor commented:

Suggested change
/// Maximum number of tasks that an be scheduled in total.
/// Maximum number of tasks that can be scheduled in total.


if Agenda::<T>::try_append(until, s).is_err() {
// There is not enough room in the future block where the call
// should be re-scheduled. Nothing that we can do about it.
A contributor commented:

Log a warning perhaps?

@ggwpez (Member, Author) replied:

I will maybe emit a Canceled event here; last time I tried it, it somehow did not work.
But yeah, a log would also be useful.
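A minimal sketch of that idea (plain Rust with illustrative names, not the PR's code; assumes the log crate and that some logger backend is installed for the output to show up):

```rust
// When the destination block's agenda is full, surface the dropped re-schedule via a
// warning (and, in the pallet, possibly an event) instead of failing silently.
fn try_append(agenda: &mut Vec<&'static str>, max: usize, task: &'static str) -> Result<(), ()> {
    if agenda.len() >= max {
        return Err(()); // no room left: mirrors `Agenda::<T>::try_append(..).is_err()`
    }
    agenda.push(task);
    Ok(())
}

fn main() {
    let mut agenda = vec!["existing task"];

    if try_append(&mut agenda, 1, "re-scheduled call").is_err() {
        // In the pallet this is where a `Canceled`-style event could be deposited and/or
        // something like `log::warn!(target: "runtime::scheduler", ...)` emitted.
        log::warn!("could not re-schedule call: target block agenda is full");
    }
}
```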

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
@ggwpez (Member, Author) commented Jun 20, 2022

Superseded by #11649

@ggwpez closed this on Jun 20, 2022
Labels: A3-in_progress (Pull request is in progress. No review needed at this stage.)

Development: Successfully merging this pull request may close these issues:

  • Scheduler pallet bloats Agenda too easily when there's lots of insertions/removals

3 participants