Tracking issue for bounding pallet-staking storage items #255
Comments
Hello @kianenigma, I'd like to keep working on this. As for the trivial ones, I'll look into those as well, if that's alright with you.

Then you can close the existing PRs and build on top of paritytech/substrate@4682f3f, thanks!

@Doordashcon, please note that @Ank4n will be looking into this.

Dropping this.
Updated list of unbounded storage items
A lot of the bounds that we need are storage items. We should probably convert them to config constants.
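The comment above suggests moving bounds out of storage and into config constants. A minimal self-contained sketch of that pattern, using hand-rolled stand-ins for FRAME's `Get` trait and `ConstU32` (not the real `frame_support` items):

```rust
// Stand-in for `frame_support::traits::Get`: a type-level constant provider.
pub trait Get<T> {
    fn get() -> T;
}

// Const-generic implementor, analogous to `frame_support::traits::ConstU32`.
pub struct ConstU32<const N: u32>;
impl<const N: u32> Get<u32> for ConstU32<N> {
    fn get() -> u32 {
        N
    }
}

// The pallet then takes the bound from its `Config` instead of a storage item;
// in real FRAME this associated type would carry `#[pallet::constant]`.
pub trait Config {
    /// Maximum number of eras to keep in history.
    type HistoryDepth: Get<u32>;
}

struct Runtime;
impl Config for Runtime {
    type HistoryDepth = ConstU32<84>;
}

/// Read the bound back out of the runtime configuration.
fn history_depth() -> u32 {
    <<Runtime as Config>::HistoryDepth as Get<u32>>::get()
}

fn main() {
    // The bound is now a compile-time constant of the runtime, not storage.
    println!("{}", history_depth()); // 84
}
```

The benefit is that the bound becomes part of the runtime's static configuration, so `MaxEncodedLen` derivations and benchmarking can rely on it.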
related to #323 and paritytech/substrate#9724
If we simply remove `pallet::without_storage_info`, we get an error about the following items (full error output collapsed in the original issue):

- `Invulnerables`
- `StakingLedger`
- `Payee`
- `Validators` / `ValidatorPrefs`
- `Nominators` / `Nominations`
- `ErasStakers` / `ErasStakersClipped` / `Exposure`
- `ActiveEraInfo`
- `ErasRewardPoints`
- `Forcing`
- `UnappliedSlashes`
- `BondedEras`
- `SlashingSpans`
- `OffendingValidators`
- `Releases`
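For context on why these items error out: removing `without_storage_info` forces every storage item to implement `MaxEncodedLen`, i.e. to report a worst-case SCALE-encoded size. A hand-rolled sketch of the idea (the real trait lives in `parity-scale-codec`; the struct and field names here are illustrative):

```rust
// Hand-rolled stand-in for parity-scale-codec's `MaxEncodedLen` trait.
trait MaxEncodedLen {
    fn max_encoded_len() -> usize;
}

// Fixed-width primitives have an obvious worst case.
impl MaxEncodedLen for u32 {
    fn max_encoded_len() -> usize {
        4
    }
}
impl MaxEncodedLen for u128 {
    fn max_encoded_len() -> usize {
        16
    }
}

// A struct's bound is the sum of its fields' bounds, which is roughly what
// `#[derive(MaxEncodedLen)]` generates for the "trivial" group of items.
struct UnlockChunk {
    value: u128, // illustrative fields, not the exact pallet definition
    era: u32,
}

impl MaxEncodedLen for UnlockChunk {
    fn max_encoded_len() -> usize {
        u128::max_encoded_len() + u32::max_encoded_len()
    }
}

fn main() {
    // An unbounded `Vec<UnlockChunk>` has no such worst case, which is exactly
    // why items like `StakingLedger` need an explicit bound such as a
    // `BoundedVec` capped by a config constant.
    println!("{}", UnlockChunk::max_encoded_len()); // 20
}
```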
I categorize these into 4 groups:

1. Trivial ones that just need `derive(MaxEncodedLen)`.
2. `HistoryDepth`. This includes only `StakingLedger`. Also, a number of storage maps and double maps have an invariant that the number of their keys should never exceed `HistoryDepth`. We should detect these and make sure they are always respected (use `try_state`).
3. `MaxActiveValidators`. These are storage items whose bound is missing because we don't enforce a bound on the maximum number of active validators. This involves `election-provider-multi-phase`, and staking would express: `type ElectionProvider<MaxWinners = Self::MaxActiveValidators>`. There is prior work on bounding `RewardPoints` in pallet-staking (substrate#12125), but I think we should start from scratch based on my draft in https://github.com/paritytech/substrate/tree/kiz-properly-bound-staking-validators.
4. `MaxBackersPerWinner`. @ggwpez started doing this in "Staking: Introduce `MaxBackersPerWinner`" (substrate#11935); we need someone to finish it.

So, these are the bounds we know:
- `MaxActiveValidators`: `Invulnerables`, `ErasRewardPoints`
- `MaxBackersPerWinner`: `ErasStakers` / `ErasStakersClipped` / `Exposure`
- `HistoryDepth`: `StakingLedger`
- Trivial (`derive(MaxEncodedLen)` only): `ActiveEraInfo`, `Forcing`, `Payee`, `Validators` / `ValidatorPrefs`, `Nominators` / `Nominations`
- `-` (no bound identified yet): `Releases`, `UnappliedSlashes`, `BondedEras`, `SlashingSpans`, `OffendingValidators`
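The `try_state` idea from group 2 can be sketched as a plain function: assert, outside of consensus-critical code, that an era-keyed map never holds more than `HistoryDepth` entries. Names here are illustrative stand-ins, not the real pallet hook:

```rust
use std::collections::BTreeMap;

type EraIndex = u32;

/// A `try_state`-style invariant check: the number of stored eras must never
/// exceed `history_depth`. Returns an error string instead of panicking so a
/// caller could aggregate failures across many invariants.
fn try_state_era_bound<V>(
    map: &BTreeMap<EraIndex, V>,
    history_depth: u32,
) -> Result<(), String> {
    if map.len() as u32 > history_depth {
        return Err(format!(
            "{} eras stored, but HistoryDepth is {}",
            map.len(),
            history_depth
        ));
    }
    Ok(())
}

fn main() {
    // Illustrative stand-in for an era-keyed item such as `ErasRewardPoints`.
    let mut eras: BTreeMap<EraIndex, u64> = BTreeMap::new();
    for era in 0..84 {
        eras.insert(era, 0);
    }
    assert!(try_state_era_bound(&eras, 84).is_ok());

    eras.insert(84, 0); // one era too many
    assert!(try_state_era_bound(&eras, 84).is_err());
    println!("ok");
}
```

Such checks catch a drifting invariant in tests and try-runtime runs before it becomes a storage-bloat or PoV-size problem on chain.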
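The `ElectionProvider<MaxWinners = Self::MaxActiveValidators>` idea from group 3 boils down to tying two associated types together so the bound cannot diverge. A plain-Rust sketch, with simplified stand-ins for the real `frame_election_provider_support` traits:

```rust
// Stand-ins for `Get` and `ConstU32`; not the real frame_support items.
trait Get<T> {
    fn get() -> T;
}
struct ConstU32<const N: u32>;
impl<const N: u32> Get<u32> for ConstU32<N> {
    fn get() -> u32 {
        N
    }
}

// Simplified stand-in for the election-provider trait.
trait ElectionProvider {
    /// Upper bound on the number of winners the election can return.
    type MaxWinners: Get<u32>;
}

// Staking's config forces the election provider's winner bound to be exactly
// its own `MaxActiveValidators`, so the two can never drift apart.
trait StakingConfig {
    type MaxActiveValidators: Get<u32>;
    type ElectionProvider: ElectionProvider<MaxWinners = Self::MaxActiveValidators>;
}

struct OnChainElection;
impl ElectionProvider for OnChainElection {
    type MaxWinners = ConstU32<1000>;
}

struct Runtime;
impl StakingConfig for Runtime {
    // Using a different constant here would be a compile error, which is
    // the whole point of the associated-type equality bound.
    type MaxActiveValidators = ConstU32<1000>;
    type ElectionProvider = OnChainElection;
}

fn max_active_validators() -> u32 {
    <<Runtime as StakingConfig>::MaxActiveValidators as Get<u32>>::get()
}

fn main() {
    println!("{}", max_active_validators()); // 1000
}
```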
- `MaxActiveValidators`: I might prefer doing this myself since it is rather important and touches a few pallets. Nonetheless, I am grateful for @Doordashcon's effort here so far and will acknowledge it.
- `MaxBackersPerWinner`: needs completion by me and @ggwpez.