RANDAO reveal slashing, custody period staggering and integration of custody and RANDAO reveals #880

Merged: 46 commits merged into dev from dankrad-patch-1 on May 3, 2019

Conversation

@dankrad (Contributor) commented Apr 5, 2019

Currently, there is no mechanism to punish validators for revealing their RANDAO reveals early. This could lead to a situation where reveals can easily be bought and sold, without risk, a long time in advance, which aggravates attacks on RANDAO.

The logic is very similar to the early custody reveal logic.

@djrtwo (Contributor) left a comment

Added some tests and have a couple of questions.

@vbuterin (Contributor) commented Apr 6, 2019

Can we think about this a lot more before we put something like this in?

I'm very worried about this approach because it means that if you propose a block, you are making yourself vulnerable to slashing if the chain forks, even if you did nothing wrong. In all other cases, the chain would need to reorg by at least a week for that to happen.

I would propose this alternative: if you make an early randao reveal, you face a small penalty (e.g. a week's worth of rewards), with a rule that this penalty can be applied at most once every 72 days.
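For illustration, a minimal sketch of this alternative under stated assumptions: the cooldown constant, the bookkeeping field and the function name are hypothetical illustrations of the proposal, not spec identifiers; decrease_balance is assumed to be the phase 0 balance helper.

    # Sketch of the rate-limited small penalty proposed above.
    # ~225 epochs per day at 6.4 minutes per epoch, so 72 days ~= 72 * 225 epochs.
    EARLY_REVEAL_PENALTY_COOLDOWN_EPOCHS = 72 * 225  # illustrative constant

    def maybe_penalize_early_reveal(state, validator_index, current_epoch, one_week_of_rewards):
        last_penalty_epoch = state.last_early_reveal_penalty_epoch[validator_index]  # hypothetical field
        if current_epoch - last_penalty_epoch >= EARLY_REVEAL_PENALTY_COOLDOWN_EPOCHS:
            decrease_balance(state, validator_index, one_week_of_rewards)
            state.last_early_reveal_penalty_epoch[validator_index] = current_epoch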

@vbuterin (Contributor) commented Apr 6, 2019

Also want to think more about combining the two kinds of slashing reveals as a single message type and a single verification function.

@vbuterin (Contributor) commented Apr 6, 2019

> Also want to think more about combining the two kinds of slashing reveals as a single message type and a single verification function.

I have an idea in that regard: the custody key for period N is also the randao subkey for the slot at the start of period N+2. Then, we could have a rule that revealing a randao subkey early (i.e. any time before the start of period N+2) would only have a small penalty, but revealing any randao subkey more than one full period early (which coincides with when the randao subkey would be used as a PoC key, if it's used that way) gets you slashed. So we get both covered with one message type and one function.

@dankrad (Contributor, Author) commented Apr 7, 2019

> I have an idea in that regard: the custody key for period N is also the randao subkey for the slot at the start of period N+2. Then, we could have a rule that revealing a randao subkey early (i.e. any time before the start of period N+2) would only have a small penalty, but revealing any randao subkey more than one full period early (which coincides with when the randao subkey would be used as a PoC key if it's used that way) gets you slashed. So we get both covered with one message type and one function.

Very interesting. I like this solution. So the custody key for the custody period starting at epoch N would then be the RANDAO reveal for epoch N + EPOCHS_PER_CUSTODY_PERIOD + 2, right?

> I would propose this alternative: if you make an early randao reveal, you face a small penalty (e.g. a week's worth of rewards), with a rule that this penalty can be applied a maximum of once every 72 days.

What's the reason for the 72-day rule?
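For concreteness, a minimal sketch of the mapping being discussed, taking the restatement above at face value; the helper name matches the one that appears later in this thread, but the constant's value and the exact offset were still being settled at this point, so this is an assumption rather than the merged spec.

    # Sketch only: the custody key for custody period N doubles as the RANDAO
    # reveal for an epoch past the end of that period, so revealing it too early
    # is detectable. Offsets follow the discussion above, not the final spec.
    EPOCHS_PER_CUSTODY_PERIOD = 2**11  # spec configuration constant; value here is illustrative

    def get_randao_epoch_for_custody_period(period: int) -> int:
        period_start_epoch = period * EPOCHS_PER_CUSTODY_PERIOD  # "epoch N" above
        return period_start_epoch + EPOCHS_PER_CUSTODY_PERIOD + 2  # i.e. N + EPOCHS_PER_CUSTODY_PERIOD + 2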

@dankrad (Contributor, Author) commented Apr 10, 2019

I have now unified this with the custody reveal and removed the masking logic from the custody reveal scheme, since all slashing can now be performed through the RANDAO reveal logic. I had to introduce another constant to do this (a rough sketch of the resulting rule follows the list):

  • The custody reveal for a given period is now the RANDAO reveal at epoch period * EPOCHS_PER_CUSTODY_PERIOD + CUSTODY_PERIOD_TO_RANDAO_PADDING (with #872, "Custody punishments for late reveals & deterministic staggering", this will have to be adapted)
  • If the reveal is less than RANDAO_SLASHING_EPOCHS (= 2) epochs early, it is invalid
  • If the reveal is more than CUSTODY_PERIOD_TO_RANDAO_PADDING early, the validator gets slashed
  • Otherwise, a minor penalty of 1 / RANDAO_REVEAL_PENALTY_QUOTIENT (= 1/32) is applied
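As referenced above, a rough sketch of how an incoming reveal for reveal_epoch might be classified relative to the current epoch. This paraphrases the three bullets under stated assumptions: the spec's actual condition works in terms of custody reveal periods, the padding value shown is illustrative, and the penalty/slashing handling itself is elided.

    RANDAO_SLASHING_EPOCHS = 2                # from the bullet list above
    CUSTODY_PERIOD_TO_RANDAO_PADDING = 2**11  # illustrative; its length is discussed below
    RANDAO_REVEAL_PENALTY_QUOTIENT = 32       # 1/32 minor penalty, per the bullet above

    def classify_early_reveal(current_epoch: int, reveal_epoch: int) -> str:
        epochs_early = reveal_epoch - current_epoch
        if epochs_early < RANDAO_SLASHING_EPOCHS:
            return "invalid"        # too close to a legitimate reveal to punish
        elif epochs_early > CUSTODY_PERIOD_TO_RANDAO_PADDING:
            return "slash"          # far enough ahead to expose a custody key
        else:
            return "minor_penalty"  # balance // RANDAO_REVEAL_PENALTY_QUOTIENT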

@dankrad (Contributor, Author) commented Apr 12, 2019

Consensus of today was that we will merge this. Vitalik had some reservations about the length of CUSTODY_PERIOD_TO_RANDAO_PADDING and suggested possibly setting CUSTODY_PERIOD_TO_RANDAO_PADDING = EPOCHS_PER_CUSTODY_PERIOD. I prefer it being shorter; in any case, the code is written so that we can adjust this constant as we see fit.

@dankrad dankrad changed the title from "Adding a mechanism to slash for an early Randao reveal" to "RANDAO reveal slashing, custody period staggering and integration of custody and RANDAO reveals" on Apr 29, 2019
@dankrad (Contributor, Author) commented Apr 29, 2019

I merged #934 into this, so now this implements:

  • Merging of RANDAO and custody reveals
  • Punishing/slashing for early RANDAO reveals
  • Staggering of custody periods (a rough sketch of the staggered period calculation follows)
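As noted in the last bullet, a minimal sketch of how custody reveal periods can be staggered per validator; the helper name matches the one quoted in the review below, but the exact formula here is an assumption rather than a quote of the merged spec.

    # Sketch only: offset each validator's custody period boundary by
    # (validator_index mod EPOCHS_PER_CUSTODY_PERIOD) so that custody reveals
    # are spread over the period instead of all falling due at the same epoch.
    def get_validators_custody_reveal_period(state, validator_index, epoch=None):
        if epoch is None:
            epoch = get_current_epoch(state)  # phase 0 helper
        # EPOCHS_PER_CUSTODY_PERIOD is the custody game configuration constant
        return (epoch + validator_index % EPOCHS_PER_CUSTODY_PERIOD) // EPOCHS_PER_CUSTODY_PERIOD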

Contributor:

IMO we don't even need the verify_custody_key function here; just use the one-liner we use in phase 0:

    epoch = get_randao_epoch_for_custody_period(
        get_validators_custody_reveal_period(
            state=state,
            index=challenge.responder_index,
            epoch=slot_to_epoch(attestation.data.slot)),
        challenge.responder_index)
    assert bls_verify(responder.pubkey, hash_tree_root(epoch), challenge.responder_key, get_domain(state, DOMAIN_RANDAO, epoch))

This way we can remove the complexity of wrapping the custody key in a pointless reveal object with extraneous fields (the empty mask) and furthermore reduce the number of verify_custody_key calls to 1, which allows the function to be removed entirely (see comment above).

Contributor Author:

Yes, done


    # Case 2: masked punitive early reveal
    # Handle the case of a regular custody reveal
    process_custody_reveal(state, reveal)
@vbuterin (Contributor) commented Apr 30, 2019

Maybe for symmetry put the logic below this into a separate function process_randao_key_punitive_reveal and have the top-level function just be:

    revealed_validator = state.validator_registry[reveal.revealed_index]

    if reveal.mask == ZERO_HASH: 
        pubkeys = [revealed_validator.pubkey]
        message_hashes = [hash_tree_root(reveal.epoch)]
    else:
        masker = state.validator_registry[reveal.masker_index]
        pubkeys = [revealed_validator.pubkey, masker.pubkey]
        message_hashes = [
            hash_tree_root(reveal.epoch),
            reveal.mask,
        ]

    assert bls_verify_multiple(
        pubkeys=pubkeys,
        message_hashes=message_hashes,
        signature=reveal.reveal,
        domain=get_domain(
            state=state,
            domain_type=DOMAIN_RANDAO,
            message_epoch=reveal.epoch,
        ),
    )
    if reveal.mask == ZERO_HASH:
        process_custody_reveal(state, reveal)
    else:
        process_randao_key_punitive_reveal(state, reveal)

This, plus the suggestion below, would allow the verify_custody_key function to be cut out entirely.

Contributor:

Though honestly, if we have this parallel structure, the right approach may be to just have two objects, a CustodyKeyReveal and an EarlyReveal, where only the latter has the masking mechanism and the former does not (the former would just be a thin wrapper {validator_index: uint64, custody_key: bytes96}). This way we don't need the empty mask fields in legitimate key reveals, and the code can be separated out more instead of being an if statement that runs totally different code depending on what role the object is serving.

Contributor Author:

I did this using two objects now. Pls review and check if this is your preferred method :)
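For context, a rough sketch of what the two objects could look like, using plain dataclasses in place of SSZ containers. The field names are pieced together from this thread (the thin-wrapper suggestion and the reveal.revealed_index / epoch / masker_index / mask fields above) and are assumptions, not the merged spec's definitions; the second object is also renamed later in the thread.

    from dataclasses import dataclass

    # Sketch only: regular custody key reveals need no mask; the early-reveal
    # object keeps the masking mechanism.
    @dataclass
    class CustodyKeyReveal:
        revealer_index: int   # validator revealing their own custody key
        reveal: bytes         # bytes96 custody key (a BLS signature over the reveal epoch)

    @dataclass
    class EarlyReveal:
        revealed_index: int   # validator whose derived secret was revealed too early
        epoch: int            # epoch the revealed secret is derived for
        reveal: bytes         # bytes96 signature (possibly aggregated with the masker's)
        masker_index: int     # validator supplying the mask, if any
        mask: bytes           # bytes32 mask covered by the aggregate signature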

@dankrad dankrad requested a review from vbuterin April 30, 2019 12:53
@dankrad (Contributor, Author) commented Apr 30, 2019

TBD after call:
  • Rename "Randao key reveal" to "Early derived secret reveal"
  • Make the penalty proportional to the number of validators already punished for that epoch

@dankrad (Contributor, Author) commented Apr 30, 2019

The penalty for an early RANDAO reveal (one that is no longer valid as a custody key) is now proportional to the number of secrets already exposed. This is the simplest way of increasing the punishment for a coordinated attack, and probably quite good, because RANDAO reveals are mainly useful to an attacker when many validators' reveals for the same epoch are exposed (a rough sketch of the scaling follows below).
Possible changes:

  • This is a bit unfair, because the first validator to be exposed gets the smallest penalty. We could extend it to also work retroactively on those already exposed, at the cost of some complexity (and of having to decide whether we want to let a validator exit once one of their secrets has been exposed).
  • Another approach would be to make the penalty proportional to all exposed reveals, not just those for the same epoch.
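A rough sketch of the proportional scaling described above, assuming the state keeps a per-epoch record of secrets already exposed; the field and parameter names here are illustrative, not the merged spec's identifiers.

    # Sketch only: the penalty grows with the number of secrets already exposed
    # for the same epoch, so a coordinated reveal of one epoch's secrets gets
    # progressively more expensive.
    def early_reveal_penalty(state, reveal_epoch: int, base_penalty: int) -> int:
        already_exposed = len(state.exposed_derived_secrets[reveal_epoch])  # hypothetical field
        return base_penalty * (already_exposed + 1)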

    | `MIN_VALIDATOR_WITHDRAWABILITY_DELAY` | `2**8` (= 256) | epochs | ~27 hours |
    | `PERSISTENT_COMMITTEE_PERIOD` | `2**11` (= 2,048) | epochs | 9 days |
    | `MAX_CROSSLINK_EPOCHS` | `2**6` (= 64) | epochs | ~7 hours |
    | `RANDAO_PENALTY_EPOCHS` | `2` | epochs | 12.8 minutes |
Collaborator:

Formatting here is inconsistent

Contributor Author:

Now in 1_custody-game -- looks consistent to me? What is the problem?

@CarlBeek (Contributor) left a comment

Makes the formatting consistent

@vbuterin vbuterin merged commit c0f3453 into dev May 3, 2019
@hwwhww hwwhww deleted the dankrad-patch-1 branch May 9, 2019 07:19