This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

safe multi-era slashing for NPoS #3846

Merged 56 commits · Nov 27, 2019

Conversation

rphmeier
Contributor

@rphmeier rphmeier commented Oct 17, 2019

Based on https://research.web3.foundation/en/latest/polkadot/slashing/npos/

This PR implements safe multi-era slashing for NPoS. The fundamental problem it tries to solve is that any particular validator or nominator's stake is largely shared across many eras, yet those participants are liable for up to 100% of their stake in each era. When performing multi-era slashing, we must strike a careful balance: 100% slashes from multiple eras should not overslash (i.e. punish more than the validator or nominator was ever on the hook for in that period of time), but we also should not place too low a cap on the amount of stake that can be slashed.

For the purposes of the economic model, it is easiest to think of each validator
as a nominator which nominates only its own identity.

The act of nomination signals intent to unify economic identity with the validator - to take part in the
rewards of a job well done, and to take part in the punishment of a job done badly.

There are 3 main difficulties to account for with slashing in NPoS:

  • A nominator can nominate multiple validators and be slashed via any of them.
  • Until slashed, stake is reused from era to era. Nominating with N coins for E eras in a row
    does not mean you have N*E coins to be slashed - you've only ever had N.
  • Slashable offences can be found after the fact and out of order.

The algorithm implemented in this module tries to balance these 3 difficulties.

First, we only slash participants for the maximum slash they receive in some time period,
rather than the sum. This protects against overslashing.
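The "maximum, not sum" rule can be sketched in plain Rust (a hypothetical helper, not the pallet's actual code): within a span we keep the highest slash recorded so far, and a new offence only deducts the amount by which it exceeds that record.

```rust
/// Given the maximum slash already recorded in this span and a newly
/// reported slash, return (amount newly deducted, updated span maximum).
fn effective_slash(recorded_max: u64, new_slash: u64) -> (u64, u64) {
    if new_slash > recorded_max {
        // Only the portion exceeding the previous maximum is newly deducted.
        (new_slash - recorded_max, new_slash)
    } else {
        // Already covered by an earlier, larger slash in this span.
        (0, recorded_max)
    }
}
```

So two offences of 40 and 70 within one span deduct 40 and then 30, totalling the 70 maximum rather than a sum of 110.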

Second, we do not want the time period (or "span") that the maximum is computed
over to last indefinitely. That would allow participants to begin acting with
impunity after some point, fearing no further repercussions. For that reason, we
automatically "chill" validators and withdraw a nominator's nomination after a slashing event,
requiring them to re-enlist voluntarily (acknowledging the slash) and begin a new
slashing span.

Typically, you will have a single slashing event per slashing span. Only in the case
where a validator releases many misbehaviors at once, or goes "back in time" to misbehave in
eras that have already passed, would you encounter situations where a slashing span
has multiple misbehaviors. However, accounting for such cases is necessary
to deter a class of "rage-quit" attacks.
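Putting the two ideas together, out-of-order offence reports can be handled by grouping offences by slashing span and summing the per-span maxima. A minimal sketch (illustrative, assuming spans are identified by a plain index):

```rust
use std::collections::HashMap;

/// Total amount deducted for a list of (span index, slash amount) offences,
/// possibly reported out of order: the sum over spans of each span's
/// maximum slash, never the raw sum of all offences.
fn total_slash(offences: &[(u32, u64)]) -> u64 {
    let mut max_per_span: HashMap<u32, u64> = HashMap::new();
    for &(span, slash) in offences {
        let entry = max_per_span.entry(span).or_insert(0);
        if slash > *entry {
            *entry = slash;
        }
    }
    max_per_span.values().sum()
}
```

For offences of 40 and 70 in span 0 plus 20 in span 1, this deducts 90 rather than the raw sum of 130.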

major TODOs:

  • Better description :)
  • Rewards to reporters
  • Pruning of slashing-related metadata
  • Integration of the new slashing logic
  • Testing

@rphmeier rphmeier added the A3-in_progress Pull request is in progress. No review needed at this stage. label Oct 17, 2019
@rphmeier rphmeier added A0-please_review Pull request needs code review. and removed A3-in_progress Pull request is in progress. No review needed at this stage. labels Oct 27, 2019
@rphmeier rphmeier marked this pull request as ready for review October 27, 2019 22:18
@rphmeier rphmeier requested a review from kianenigma as a code owner October 27, 2019 22:18

/// Records information about the maximum slash of a stash within a slashing span,
/// as well as how much reward has been paid out.
SpanSlash: map (T::AccountId, slashing::SpanIndex)
Contributor Author


this is something that's fairly likely to change in the near future. what's the typical approach in SRML for making storage entries versionable?

Contributor


Oh, if it is deployed then it needs manual, one-time migration code in on_initialize.

Contributor


This https://github.com/paritytech/substrate/pull/3947/files should make it at least less of an annoyance.

@rphmeier
Contributor Author

rphmeier commented Nov 19, 2019

One of the later additions to this PR is that slashes are immediately computed but not-so-immediately applied. Slashes are deferred in a FIFO queue (keyed by era of detection) for SlashDeferDuration eras, during which the root origin can intervene and prevent a slash from actually being applied by calling Staking::cancel_deferred_slash(era, indices: Vec<u32>).
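The deferral mechanism described above can be sketched in plain Rust. This is an illustrative model only: `DeferredSlashes` and its methods are hypothetical names mirroring the behaviour of the queue and of `cancel_deferred_slash`, not the pallet's actual types.

```rust
use std::collections::BTreeMap;

/// Illustrative model of the deferred-slash queue.
struct DeferredSlashes {
    defer_duration: u32,
    // era of detection -> pending slash amounts, in FIFO order
    queue: BTreeMap<u32, Vec<u64>>,
}

impl DeferredSlashes {
    fn new(defer_duration: u32) -> Self {
        DeferredSlashes { defer_duration, queue: BTreeMap::new() }
    }

    /// A computed slash is queued under the era it was detected in.
    fn push(&mut self, detected_era: u32, amount: u64) {
        self.queue.entry(detected_era).or_default().push(amount);
    }

    /// Mirrors cancel_deferred_slash(era, indices): remove the selected
    /// pending slashes before they are ever applied.
    fn cancel(&mut self, era: u32, mut indices: Vec<u32>) {
        if let Some(pending) = self.queue.get_mut(&era) {
            // Remove from the back so earlier indices stay valid.
            indices.sort_unstable_by(|a, b| b.cmp(a));
            for i in indices {
                if (i as usize) < pending.len() {
                    pending.remove(i as usize);
                }
            }
        }
    }

    /// In era `now`, apply whatever was detected defer_duration eras ago
    /// and not cancelled in the meantime.
    fn apply_due(&mut self, now: u32) -> Vec<u64> {
        now.checked_sub(self.defer_duration)
            .and_then(|era| self.queue.remove(&era))
            .unwrap_or_default()
    }
}
```

For example, with a defer duration of 2, a slash detected in era 10 sits in the queue through eras 10 and 11 (when it can still be cancelled) and is applied in era 12.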

@gavofyork
Member

where any root origin

there's only one :)

but it would be good to get this done with EnsureOrigin so that the origin can be configured as part of the config trait. See the treasury pallet for a simple example of how this can be done.

@gavofyork gavofyork added this to the polkadot-0.6.18 milestone Nov 22, 2019
@rphmeier
Contributor Author

it would be good to get this done with EnsureOrigin so that the origin can be configured as part of the config trait

Done - I set this in node-runtime to be supermajority of council.

@@ -265,6 +265,8 @@ impl staking::Trait for Runtime {
type SessionsPerEra = SessionsPerEra;
type BondingDuration = BondingDuration;
type SlashDeferDuration = SlashDeferDuration;
/// A super-majority of the council can cancel the slash.
type SlashCancelOrigin = collective::EnsureProportionAtLeast<_3, _4, AccountId, CouncilCollective>;
Member


👍

@gavofyork
Member

at least the slash-deferring logic seems reasonable.

@gavofyork gavofyork closed this Nov 22, 2019
@gavofyork gavofyork reopened this Nov 22, 2019
use `ensure!`

Co-Authored-By: Gavin Wood <gavin@parity.io>
Contributor

@andresilva andresilva left a comment


very light review, but lgtm.

Labels
A0-please_review Pull request needs code review.
6 participants