
Non-laziness hashes #639

Closed
Sophia-Gold opened this issue May 14, 2023 · 6 comments

@Sophia-Gold
Contributor

We need proof that validators are actually checking parablocks rather than being lazy and just always backing and approving them. This will be particularly important when we implement rewards for parachain consensus.

A general format is to have backers and approval checkers take some succinct "proof of work" data that is not posted on-chain, hash it with their public key, and include it with their backing statement or approval vote. It can then be checked at a later time by anyone with the parablock data.

An easy option is to just prove that validators recovered the parablock data by using a hash of the data as the non-laziness receipt. It would be ideal, though, to do better and actually prove execution. For this, we want an output of the wasm compiler that has the following properties:

  • Deterministic
  • Unique to both the PVF and parablock data
  • Succinct
  • Cheap to generate

From discussions with @s0me0ne-unkn0wn, one option that fits these criteria might be a hash of a particular memory page.
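A minimal sketch of such a receipt, using the simplest variant (binding the recovered parablock data, or a compiler output standing in for it, to the validator's key). The function names and the choice of BLAKE2b are illustrative assumptions, not the actual protocol:

```python
import hashlib

def make_receipt(validator_pubkey: bytes, parablock_data: bytes) -> bytes:
    # Bind the recovered parablock data to this validator's public key.
    return hashlib.blake2b(validator_pubkey + parablock_data,
                           digest_size=32).digest()

def verify_receipt(receipt: bytes, validator_pubkey: bytes,
                   parablock_data: bytes) -> bool:
    # Anyone who later holds the parablock data can recompute and compare.
    return receipt == make_receipt(validator_pubkey, parablock_data)
```

The receipt is succinct (32 bytes), deterministic, and unique to both the data and the validator, but on its own it only proves recovery, not execution.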

Finally, there's the question of how we check the non-laziness hashes. We'd ideally do this as part of the protocol, but we can't fold it into approval checking: approval checkers who distributed their votes earlier would have to check the hashes of checkers who distributed votes for the same parablock after them. Instead, we should check non-laziness hashes at a later time: any time before the parablock data is pruned, but likely at the end of each era, assuming parachain consensus rewards are distributed at the same time as those for relay chain consensus. We need only randomly sample a portion of them, sized so that lazy validators have a sufficiently high chance of eventually being caught. We can then deny their rewards, and probably also slash and disable them.
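The required sampling rate follows from simple probability: if a fraction p of receipts is checked each era, a validator who is lazy every era escapes detection for E eras with probability (1 - p)^E. A quick illustration (the numbers are arbitrary):

```python
def evasion_probability(sample_fraction: float, eras: int) -> float:
    # Chance a consistently lazy validator is never sampled over `eras` eras.
    return (1.0 - sample_fraction) ** eras

# e.g. checking 10% of receipts per era drives evasion odds below 1%
# within roughly 50 eras.
p_evade = evasion_probability(0.10, 50)
```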

@rphmeier
Contributor

Is this strictly necessary?

The risk if all validators are lazy is that they will not be able to detect bad blocks. However, if dispute initiators earn a proportion of the reward then there is an incentive never to be the last lazy validator. Also, the downsides of being slashed are so large that it should dissuade validators from being lazy in the first place.

@Sophia-Gold
Contributor Author

> Is this strictly necessary?
>
> The risk if all validators are lazy is that they will not be able to detect bad blocks. However, if dispute initiators earn a proportion of the reward then there is an incentive never to be the last lazy validator. Also, the downsides of being slashed are so large that it should dissuade validators from being lazy in the first place.

I think it's just nice to have. This was @burdges's idea, which I resurrected when discussing time overruns, because @eskimor suggested using execution times to catch and penalize lazy validators: e.g. if the median execution time is 20s and you report 2s, you probably didn't execute it.

@burdges

burdges commented May 15, 2023

We've two costly development steps here:

  1. Implementing logic that responds to incorrect non-laziness hashes -- this could clearly wait until we believe this matters.
  2. Adjusting the wasm compiler to export some intermediate states -- imho, this should wait until after both the riscv32e experiment and the more deterministic wasm compiler experiment by @s0me0ne-unkn0wn. It changes the wasm abstraction too.

What is left that could happen now? We could put H(authority_key || alt_block_hash) into approval votes where alt_block_hash is either:

  • H("alternative hash" || block_header) if you only want to prove they downloaded the data.
  • H( erasure_chunk[3f+1 .. 4f] ) if you want to prove they reconstructed the erasure chunks. We only send erasure_chunk[0 .. 3f] over the wire, so if you hash the higher erasure chunks then you prove they nearly checked the erasure code commitments. This changes the abstraction of the erasure coding too, but that's not technically avoidable if you want a complete non-laziness hash.
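Both alternatives can be sketched as follows; the helper names and the use of BLAKE2b are assumptions, and the chunk ranges follow the indices given above:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.blake2b(b"".join(parts), digest_size=32).digest()

def alt_hash_downloaded(block_header: bytes) -> bytes:
    # Option 1: only proves the validator downloaded the data.
    return h(b"alternative hash", block_header)

def alt_hash_reconstructed(erasure_chunks: list, f: int) -> bytes:
    # Option 2: hash chunks 3f+1 .. 4f, which are never sent over the
    # wire, so producing them implies reconstructing the erasure code.
    return h(*erasure_chunks[3 * f + 1 : 4 * f])

def non_laziness_hash(authority_key: bytes, alt_block_hash: bytes) -> bytes:
    # Included with the approval vote: H(authority_key || alt_block_hash).
    return h(authority_key, alt_block_hash)
```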

We'd need a migration path for this hash in the approval votes for whenever the wasm changes, so another option is simply to prepare this migration path in the approval votes.

@eskimor
Member

eskimor commented May 15, 2023

I think I found a suitable slashing strategy that should also resolve this issue and should be pretty trivial to implement.

@Sophia-Gold
Contributor Author

This is very old, but I just came across the idea of “attention challenges”: https://medium.com/offchainlabs/cheater-checking-how-attention-challenges-solve-the-verifiers-dilemma-681a92d9948e. Not sure whether Arbitrum ended up using this.

> Each Checker has a private key k and corresponding public key gᵏ, defined in a suitable group. Each Checker’s public key is known to everyone. We will also rely on a suitable hash function H.
>
> To issue a challenge for the computation of f(x), where the function f is known to everyone in advance, the Asserter generates a random value r, then publishes (x, gʳ) as the challenge.
>
> A Checker who has private key k should respond to the challenge by posting a tiny transaction on-chain if and only if H(gʳᵏ, f(x)) < T, where T is a suitably chosen threshold value. Note that only the Asserter (who knows r) and that specific Checker (who knows its private key k) can compute the hash, because they are the only ones who can compute gʳᵏ. Note also that computing the hash requires knowledge of f(x).

f(x) would be whatever we extract from PVF execution, and r would be chosen by the backer. It requires a bit more computation, but most of the time it doesn't require including the non-laziness hashes with approval votes or checking a sample of them before paying paravalidator rewards.
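The challenge/response condition can be sketched with a toy prime-field group. All parameters here (the group, the threshold, the function names) are illustrative assumptions; a real deployment would use a proper elliptic-curve group and a carefully chosen threshold:

```python
import hashlib

P = 2**255 - 19   # toy prime modulus; illustrative only
G = 2             # generator
T = 1 << 253      # threshold: respond roughly 1/8 of the time

def h_int(shared: int, fx: bytes) -> int:
    # H(g^(rk), f(x)) interpreted as a 256-bit integer.
    digest = hashlib.blake2b(shared.to_bytes(32, "big") + fx,
                             digest_size=32).digest()
    return int.from_bytes(digest, "big")

def issue_challenge(r: int, x: bytes):
    # Asserter publishes (x, g^r); r stays private.
    return x, pow(G, r, P)

def checker_should_respond(k: int, g_r: int, fx: bytes) -> bool:
    # Checker knows k, so it can compute g^(rk) = (g^r)^k.
    return h_int(pow(g_r, k, P), fx) < T

def asserter_expects_response(r: int, g_k: int, fx: bytes) -> bool:
    # Asserter knows r, so it can compute g^(rk) = (g^k)^r.
    return h_int(pow(g_k, r, P), fx) < T
```

Since both sides derive the same shared value g^(rk), they always agree on whether a response is due, while anyone without r or k cannot predict it.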

@Sophia-Gold
Contributor Author

Near’s Nightshade sharding design uses an availability protocol based on Polkadot’s with proof of non-laziness for availability recovery. Note that here “chunks” are like parachain blocks and “onepart messages” are like chunks:

> 3.4.1 Dealing with lazy block producers
>
> If a block producer has a block for which a onepart message is missing, they might choose to still sign on it, because if the block ends up being on chain it will maximize the reward for the block producer. There’s no risk for the block producer since it is impossible to prove later that the block producer didn’t have the onepart message.
>
> To address it we make each chunk producer, when creating the chunk, choose a color (red or blue) for each part of the future encoded chunk, and store the bitmask of assigned colors in the chunk before it is encoded. Each onepart message then contains the color assigned to the part, and the color is used when computing the merkle root of the encoded parts. If the chunk producer deviates from the protocol, it can be easily proven, since either the merkle root will not correspond to the onepart messages, or the colors in the onepart messages that correspond to the merkle root will not match the mask in the chunk.
>
> When a block producer signs on a block, they include a bitmask of all the red parts they received for the chunks included in the block. Publishing an incorrect bitmask is a slashable behavior. If a block producer hasn’t received a onepart message, they have no way of knowing the color of the message, and thus have a 50% chance of being slashed if they attempt to blindly sign the block.
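A sketch of the coloring mechanism under the mapping above; the helper names and hashing are assumptions, and a real implementation would commit to the colored leaves in a merkle tree:

```python
import hashlib
import secrets

def assign_colors(num_parts: int) -> list:
    # Chunk producer picks red (1) or blue (0) for each future encoded
    # part and stores this bitmask inside the chunk before encoding.
    return [secrets.randbits(1) for _ in range(num_parts)]

def part_leaf(part: bytes, color: int) -> bytes:
    # The color is mixed into each leaf, so a merkle root built over
    # these leaves commits to the color assignment.
    return hashlib.blake2b(bytes([color]) + part, digest_size=32).digest()

def red_bitmask(received_colors: dict, num_parts: int) -> list:
    # Block producer reports which of the parts it received were red.
    # For a part it never received it can only guess, and a wrong guess
    # is provable against the commitment, so each missing part carries
    # a 50% chance of being caught.
    return [1 if received_colors.get(i) == 1 else 0
            for i in range(num_parts)]
```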

@Sophia-Gold transferred this issue from paritytech/polkadot Aug 24, 2023
@tdimitrov mentioned this issue Oct 20, 2023
@eskimor closed this as completed Jan 30, 2024