From 57386724b6248a018a16924f5d16e460d0282b71 Mon Sep 17 00:00:00 2001 From: Joe Date: Thu, 14 Apr 2022 16:30:48 +0100 Subject: [PATCH 01/26] first round of updates to pos page --- .../docs/consensus-mechanisms/pos/index.md | 66 ++++++------------- 1 file changed, 19 insertions(+), 47 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index b3dc30e9cd0..c867517bd67 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -6,7 +6,7 @@ sidebar: true incomplete: true --- -Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan as it's a key part in the community's strategy to scale Ethereum via [upgrades](/upgrades/). However getting PoS right is a big technical challenge and not as straightforward as using PoW to reach consensus across the network. +Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan because PoS is demonstrably more secure than PoW, uses drastically less energy and enables new scaling solutions to be implemented. However PoS is also more complex than PoW. Refining the PoS mechanism has taken years of research and development, and the challenge now is implementing it on the live Ethereum network - a process known as ["the merge"](/upgrades/merge/). ## Prerequisites {#prerequisites} @@ -14,73 +14,45 @@ To better understand this page, we recommend you first read up on [consensus mec ## What is proof-of-stake (PoS)? {#what-is-pos} -Proof-of-stake is a type of [consensus mechanism](/developers/docs/consensus-mechanisms/) used by blockchain networks to achieve distributed consensus. - -It requires users to stake their ETH to become a validator in the network. Validators are responsible for the same thing as miners in [proof-of-work](/developers/docs/consensus-mechanisms/pow/): ordering transactions and creating new blocks so that all nodes can agree on the state of the network. +Proof-of-stake is a type of [consensus mechanism](/developers/docs/consensus-mechanisms/) used by blockchains to achieve distributed consensus. Whereas in PoW miners prove they have capital at risk by expending energy, in PoS validators explicitly stake capital in the form of ether into a smart contract on Ethereum. This staked ether then acts as collateral that can be destroyed if the validator behaves dishonestly or lazily. The validator is then responsible for checking that new blocks propagated over the network are valid and occasionally creating and propagating new blocks themselves. 
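To make the idea of staked ether acting as slashable collateral concrete, here is a minimal, illustrative Python sketch. It is not the real deposit contract or any client's accounting; the class, the method names and the reward/penalty sizes are all invented purely for illustration:

```python
# Toy illustration of staked ether acting as slashable collateral.
# Numbers and names are illustrative only, not real protocol values.

class Validator:
    def __init__(self, deposit_eth: float = 32.0):
        self.balance = deposit_eth   # staked ether held as collateral
        self.active = True

    def reward(self, amount: float) -> None:
        """Honest, timely participation slowly grows the stake."""
        self.balance += amount

    def slash(self, fraction: float) -> None:
        """Provable dishonesty destroys part (or all) of the collateral."""
        self.balance -= self.balance * fraction
        self.active = False  # slashed validators are queued for ejection

v = Validator()
v.reward(0.01)   # e.g. attestation rewards accruing over time
v.slash(0.05)    # e.g. a penalty for equivocation; the scale is made up
print(round(v.balance, 4), v.active)
```

The point of the sketch is only the asymmetry: rewards accrue gradually, while provable misbehaviour can destroy the collateral outright.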
Proof-of-stake comes with a number of improvements to the proof-of-work system: -- better energy efficiency – you don't need to use lots of energy mining blocks +- better energy efficiency – you don't need to use lots of energy on PoW computations - lower barriers to entry, reduced hardware requirements – you don't need elite hardware to stand a chance of creating new blocks -- stronger immunity to centralization – proof-of-stake should lead to more nodes in the network -- stronger support for [shard chains](/upgrades/shard-chains/) – a key upgrade in scaling the Ethereum network - -## Proof-of-stake, staking, and validators {#pos-staking-validators} - -Proof-of-stake is the underlying mechanism that activates validators upon receipt of enough stake. For Ethereum, users will need to stake 32 ETH to become a validator. Validators are chosen at random to create blocks and are responsible for checking and confirming blocks they don't create. A user's stake is also used as a way to incentivise good validator behavior. For example, a user can lose a portion of their stake for things like going offline (failing to validate) or their entire stake for deliberate collusion. - -## How does Ethereum's proof-of-stake work? {#how-does-pos-work} - -Unlike proof-of-work, validators don't need to use significant amounts of computational power because they're selected at random and aren't competing. They don't need to mine blocks; they just need to create blocks when chosen and validate proposed blocks when they're not. This validation is known as attesting. You can think of attesting as saying "this block looks good to me." Validators get rewards for proposing new blocks and for attesting to ones they've seen. - -If you attest to malicious blocks, you lose your stake. - -### The beacon chain {#the-beacon-chain} - -When Ethereum replaces proof-of-work with proof-of-stake, there will be the added complexity of [shard chains](/upgrades/shard-chains/). These are separate blockchains that will need validators to process transactions and create new blocks. The plan is to have 64 shard chains, with each having a shared understanding of the state of the network. As a result, extra coordination is necessary and will be done by [the beacon chain](/upgrades/beacon-chain/). - -The beacon chain receives state information from shards and makes it available for other shards, allowing the network to stay in sync. The beacon chain will also manage the validators from registering their stake deposits to issuing their rewards and penalties. +- reduced centralization risk – proof-of-stake should lead to more nodes in the network +- because of the low energy requirement less ETH issuance is required to incentivize participation +- economic penalties for misbehaviour make 51% style attacks much more costly for an attacker compared to PoW +- the community can resort to social recovery of an honest chain if a 51% attack were to overcome the crypto-economic defenses. -Here's how that process works. +## Proof-of-stake, staking, and validators {#staking-validators} -### How validation works {#how-does-validation-work} +To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client and a validator. On depositing their ether, the user joins an activation queue that limits the rate at which new validators join the network. Once activated, validators receive new blocks from peers on the Ethereum network. 
The transactions delivered in the block are re-executed and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favour of that block across the network. -When you submit a transaction on a shard, a validator will be responsible for adding your transaction to a shard block. Validators are algorithmically chosen by the beacon chain to propose new blocks. +Whereas under PoW the timing of blocks is determined byt he mining difficulty, in PoS the tempo is fixed. Time in PoS Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot a committee of validators is randomly chosen whose votes are used to determine the validity of the block proposed in that slot. Also in every slot one validator is randomly selected to be a block proposer. That validator is responsible for creating a new block and sending it out to other nodes on the network. -#### Attestation {#attestation} +## Crypto-economic security {#crypto-economic-security} -If a validator isn't chosen to propose a new shard block, they'll have to attest to another validator's proposal and confirm that everything looks as it should. It's the attestation that is recorded in the beacon chain rather than the transaction itself. +Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for a user to maliciously attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviours that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on various network conditions, as we will explain later. -At least 128 validators are required to attest to each shard block – this is known as a "committee." +## Finality {#finality} -The committee has a time-frame in which to propose and validate a shard block. This is known as a "slot." Only one valid block is created per slot, and there are 32 slots in an "epoch." After each epoch, the committee is disbanded and reformed with different, random participants. This helps keep shards safe from committees of bad actors. - -#### Crosslinks {#rewards-and-penalties} - -Once a new shard block proposal has enough attestations, a "crosslink" is created which confirms the inclusion of the block and your transaction in the beacon chain. - -Once there's a crosslink, the validator who proposed the block gets their reward. - -#### Finality {#finality} - -In distributed networks, a transaction has "finality" when it's part of a block that can't change. - -To do this in proof-of-stake, Casper, a finality protocol, gets validators to agree on the state of a block at certain checkpoints. So long as 2/3 of the validators agree, the block is finalised. Validators will lose their entire stake if they try and revert this later on via a 51% attack. - -As Vlad Zamfir put it, this is like a miner participating in a 51% attack, causing their mining hardware to immediately burn down. 
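The slot and epoch cadence described a few paragraphs above (12-second slots, 32 slots per epoch, with the first block of each epoch acting as a checkpoint) can be made concrete with a few lines of arithmetic. This is an illustrative sketch only; the constants mirror the values quoted in the text, and the genesis timestamp is an assumption rather than something defined on this page:

```python
# Sketch of the slot/epoch arithmetic described above.
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
GENESIS_TIME = 1_606_824_023  # assumed beacon chain genesis timestamp

def slot_at(timestamp: int) -> int:
    return (timestamp - GENESIS_TIME) // SECONDS_PER_SLOT

def epoch_of(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH

def epoch_start_slot(epoch: int) -> int:
    # the first slot of each epoch carries the checkpoint block
    return epoch * SLOTS_PER_EPOCH

now = GENESIS_TIME + 100_000
s = slot_at(now)
print(s, epoch_of(s), epoch_start_slot(epoch_of(s)))
```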
+In distributed networks, a transaction has "finality" when it's part of a block that can't change without a large amounf to ether being burned. On PoS Ethereum this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that it considers to be valid. If a pair of checkpoints attracts votes representing at least 2/3 of the total staked ether the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". To revert a finalized block, an attacker would commit to losing at least 1/3 of the total supply of staked ether (currently around $10,000,000,000 USD). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). ## Proof-of-stake and security {#pos-and-security} -The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in proof-of-stake, but it's even more risky for the attackers. To do so, you'd need to control 51% of the staked ETH. Not only is this a lot of money, but it would probably cause ETH's value to drop. There's very little incentive to destroy the value of a currency you have a majority stake in. There are stronger incentives to keep the network secure and healthy. +The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in PoS as it does in PoW, but it's even more risky for the attackers. To do so, a attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to enmsure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of PoS over PoW is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. + +51% attacks are just one flavour of malicious activity. Bad actors could attempt long-range attacks (although the finality gadget neutralizes this attack vector), short range 'reorgs' (although proposer boosting and attestation deadlines mitigate this), bouncing and balancing attacks (also mitigated by proposer boosting, and these attacks have anyway only been demonstrated under idealized network conditions) or avalanche attacks (neutralized by the fork choice algorithms rule of only considering the latest message). -Stake slashings, ejections, and other penalties, coordinated by the beacon chain, will exist to prevent other acts of bad behavior. Validators will also be responsible for flagging these incidents. +Overall, PoS as it is implemented on Ethereum has been demonstrated to be more economically secure than PoW. 
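As a rough illustration of the checkpoint voting described in the finality paragraph above, the sketch below upgrades a target checkpoint to "justified" once votes backed by at least 2/3 of the total stake are seen, and finalizes the already-justified source checkpoint. It is a deliberate simplification of the real Casper FFG rules, with invented data structures and an arbitrary stake total:

```python
# Hedged sketch of checkpoint justification/finalization (not client code).
TOTAL_STAKE = 100  # arbitrary units

def process_checkpoint_votes(source, target, voting_stake):
    if voting_stake * 3 >= TOTAL_STAKE * 2:      # 2/3 supermajority
        target["justified"] = True
        if source["justified"]:
            source["finalized"] = True           # source was the previous target
    return source, target

prev = {"epoch": 9, "justified": True, "finalized": False}
curr = {"epoch": 10, "justified": False, "finalized": False}
prev, curr = process_checkpoint_votes(prev, curr, voting_stake=70)
print(prev["finalized"], curr["justified"])  # True True
```

Reverting a block finalized this way requires contradicting votes from at least 1/3 of the stake, which is exactly the slashable offence described in the text.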
## Pros and cons {#pros-and-cons} | Pros | Cons | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | | Staking makes it easier for you to run a node. It doesn't require huge investments in hardware or energy, and if you don't have enough ETH to stake, you can join staking pools. | Proof-of-stake is still in its infancy, and less battle-tested, compared to proof-of-work | -| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, like with mining. | | +| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, like with mining. | test test | | Staking allows for secure sharding. Shard chains allow Ethereum to create multiple blocks at the same time, increasing transaction throughput. Sharding the network in a proof-of-work system would simply lower the power needed to compromise a portion of the network. | | ## Further reading {#further-reading} From e76a9664c07e9bd4388088de12d002b2f7424d54 Mon Sep 17 00:00:00 2001 From: Joe Date: Thu, 14 Apr 2022 16:50:49 +0100 Subject: [PATCH 02/26] update pros/cons list --- .../docs/consensus-mechanisms/pos/index.md | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index c867517bd67..bc007ed4ced 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -35,6 +35,10 @@ Whereas under PoW the timing of blocks is determined byt he mining difficulty, i Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for a user to maliciously attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviours that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on various network conditions, as we will explain later. +## Fork choice {#fork-choice} + +When the network performs optimally and honestly, there is only ever one new block at the head of the chain and all validators attest to it. However, it is possible for validators to have different views of the head of the chain due to network latency or because a block proposer has equivocated. Therefore, consensus clients require an algorithm to decide which one to favor. The algorithm used in PoS Ethereum is called LMD-GHOST and it works by identifying the fork that has the greatest weight of attestations in its history. + ## Finality {#finality} In distributed networks, a transaction has "finality" when it's part of a block that can't change without a large amounf to ether being burned. 
On PoS Ethereum this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that it considers to be valid. If a pair of checkpoints attracts votes representing at least 2/3 of the total staked ether the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". To revert a finalized block, an attacker would commit to losing at least 1/3 of the total supply of staked ether (currently around $10,000,000,000 USD). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). @@ -49,11 +53,11 @@ Overall, PoS as it is implemented on Ethereum has been demonstrated to be more e ## Pros and cons {#pros-and-cons} -| Pros | Cons | -| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | -| Staking makes it easier for you to run a node. It doesn't require huge investments in hardware or energy, and if you don't have enough ETH to stake, you can join staking pools. | Proof-of-stake is still in its infancy, and less battle-tested, compared to proof-of-work | -| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, like with mining. | test test | -| Staking allows for secure sharding. Shard chains allow Ethereum to create multiple blocks at the same time, increasing transaction throughput. Sharding the network in a proof-of-work system would simply lower the power needed to compromise a portion of the network. | | +| Pros | Cons | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | +| Staking makes it easier for individuals to participate in securing the network, promoting decentralization. validator node can be run on a normal laptop. Staking pools allow users to stake without having 32 ETH. | PoS is younger and less battle-tested, compared to PoW | +| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, whereas more hashpower means higher % returns with PoW mining. | PoS is more complex to implement than PoW | +| PoS offers greater crypto-economic security than PoW | Users currently need to run three pieces of software to participate in Ethereum's PoS compared to one for PoW. 
| ## Further reading {#further-reading} From e0eaac2b79baf4bcfcc26f353961fbbe4ecc6419 Mon Sep 17 00:00:00 2001 From: Joe Date: Thu, 14 Apr 2022 17:04:16 +0100 Subject: [PATCH 03/26] add new pages --- .../developers/docs/consensus-mechanisms/pos/casper-ffg/index.md | 0 .../developers/docs/consensus-mechanisms/pos/fork-choice/index.md | 0 .../docs/consensus-mechanisms/pos/weak-subjectivity/index.md | 0 3 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md create mode 100644 src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md create mode 100644 src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md diff --git a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md new file mode 100644 index 00000000000..e69de29bb2d diff --git a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md new file mode 100644 index 00000000000..e69de29bb2d diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md new file mode 100644 index 00000000000..e69de29bb2d From 9edaf4e9b88ea8d2abf82d2ffa414454ed7ae432 Mon Sep 17 00:00:00 2001 From: Joe Date: Fri, 15 Apr 2022 11:23:11 +0100 Subject: [PATCH 04/26] title bar in new pages, fix table in pos/index.md --- .../pos/casper-ffg/index.md | 7 +++++ .../pos/fork-choice/index.md | 7 +++++ .../docs/consensus-mechanisms/pos/index.md | 29 ++++++++++--------- .../pos/weak-subjectivity/index.md | 22 ++++++++++++++ 4 files changed, 52 insertions(+), 13 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md index e69de29bb2d..1ebd19619df 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md @@ -0,0 +1,7 @@ +--- +title: Casper-FFG +description: An explanation of the Casper-FFG mechanism. +lang: en +sidebar: true +incomplete: true +--- diff --git a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md index e69de29bb2d..66fbab1a96f 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md @@ -0,0 +1,7 @@ +--- +title: Fork choice +description: An explanation of the fork chopice algorithm implemented in proof-of-stake Ethereum. +lang: en +sidebar: true +incomplete: true +--- diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index bc007ed4ced..5bbce886b5f 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -3,7 +3,7 @@ title: Proof-of-stake (PoS) description: An explanation of the proof-of-stake consensus protocol and its role in Ethereum. lang: en sidebar: true -incomplete: true +incomplete: false --- Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). 
This was always the plan because PoS is demonstrably more secure than PoW, uses drastically less energy and enables new scaling solutions to be implemented. However PoS is also more complex than PoW. Refining the PoS mechanism has taken years of research and development, and the challenge now is implementing it on the live Ethereum network - a process known as ["the merge"](/upgrades/merge/). @@ -25,27 +25,27 @@ Proof-of-stake comes with a number of improvements to the proof-of-work system: - economic penalties for misbehaviour make 51% style attacks much more costly for an attacker compared to PoW - the community can resort to social recovery of an honest chain if a 51% attack were to overcome the crypto-economic defenses. -## Proof-of-stake, staking, and validators {#staking-validators} +## Validators {#validators} To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client and a validator. On depositing their ether, the user joins an activation queue that limits the rate at which new validators join the network. Once activated, validators receive new blocks from peers on the Ethereum network. The transactions delivered in the block are re-executed and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favour of that block across the network. Whereas under PoW the timing of blocks is determined byt he mining difficulty, in PoS the tempo is fixed. Time in PoS Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot a committee of validators is randomly chosen whose votes are used to determine the validity of the block proposed in that slot. Also in every slot one validator is randomly selected to be a block proposer. That validator is responsible for creating a new block and sending it out to other nodes on the network. +## Finality {#finality} + +In distributed networks, a transaction has "finality" when it's part of a block that can't change without a large amount of ether being burned. On PoS Ethereum this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that they consider to be valid. If a pair of checkpoints attracts votes representing at least 2/3 of the total staked ether, the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". To revert a finalized block, an attacker would commit to losing at least 1/3 of the total supply of staked ether (currently around $10,000,000,000 USD). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). Since finality requires a 2/3 majority, an attacker could prevent the network reaching finality by voting with 1/3 of the total stake. There is a mechanism to defend against this: the inactivity leak. This activates whenever the chain fails to finalize for more than 4 epochs. The inactivity leak bleeds away the staked ether from validators voting against the majority, allowing the majority to regain a 2/3 majority and finalize the chain. ## Crypto-economic security {#crypto-economic-security} Running a validator is a commitment.
The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for a user to maliciously attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviours that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on various network conditions, as we will explain later. +Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for a user to maliciously attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviours that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on how many validators are also being slashed at aroud the same time. This is known as the "correlation penalty" and it can be minor (~1% stake for a single validator slashed on their own) or can result in 100% of the validators stake being destroyed (mass slashing event). It is imposed half way through a forced exit period that begins with an immediate penalty (up to 0.5 ETH) on Day 1, the correlation penalty on Day 18 and finally ejection from the network on Day 36. They receive small attestation penalties every day because they are present on the network but not submitting votes. This all means a coordinated attack would be very costly for the attacker. ## Fork choice {#fork-choice} When the network performs optimally and honestly, there is only ever one new block at the head of the chain and all validators attest to it. However, it is possible for validators to have different views of the head of the chain due to network latency or because a block proposer has equivocated. Therefore, consensus clients require an algorithm to decide which one to favor. The algorithm used in PoS Ethereum is called LMD-GHOST and it works by identifying the fork that has the greatest weight of attestations in its history. -## Finality {#finality} - -In distributed networks, a transaction has "finality" when it's part of a block that can't change without a large amounf to ether being burned. On PoS Ethereum this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that it considers to be valid. If a pair of checkpoints attracts votes representing at least 2/3 of the total staked ether the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". 
To revert a finalized block, an attacker would commit to losing at least 1/3 of the total supply of staked ether (currently around $10,000,000,000 USD). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). - ## Proof-of-stake and security {#pos-and-security} -The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in PoS as it does in PoW, but it's even more risky for the attackers. To do so, a attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to enmsure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of PoS over PoW is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. +The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in PoS as it does in PoW, but it's even more risky for the attackers. To do so, a attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to ensure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of PoS over PoW is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. 51% attacks are just one flavour of malicious activity. Bad actors could attempt long-range attacks (although the finality gadget neutralizes this attack vector), short range 'reorgs' (although proposer boosting and attestation deadlines mitigate this), bouncing and balancing attacks (also mitigated by proposer boosting, and these attacks have anyway only been demonstrated under idealized network conditions) or avalanche attacks (neutralized by the fork choice algorithms rule of only considering the latest message). @@ -53,11 +53,14 @@ Overall, PoS as it is implemented on Ethereum has been demonstrated to be more e ## Pros and cons {#pros-and-cons} -| Pros | Cons | -| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | -| Staking makes it easier for individuals to participate in securing the network, promoting decentralization. validator node can be run on a normal laptop. Staking pools allow users to stake without having 32 ETH. | PoS is younger and less battle-tested, compared to PoW | -| Staking is more decentralized. 
It allows for increased participation, and more nodes doesn't mean increased % returns, whereas more hashpower means higher % returns with PoW mining. | PoS is more complex to implement than PoW | -| PoS offers greater crypto-economic security than PoW | Users currently need to run three pieces of software to participate in Ethereum's PoS compared to one for PoW. | +| Pros | Cons | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | +| Staking makes it easier for individuals to participate in securing the network, promoting decentralization. validator node can be run on a normal laptop. Staking pools allow users to stake without having 32 ETH. | PoS is younger and less battle-tested, compared to PoW | +| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, whereas more hashpower means higher % returns with PoW mining. | PoS is more complex to implement than PoW | +| PoS offers greater crypto-economic security than PoW | Users need to run three pieces of software to participate in Ethereum's PoS compared to one for PoW. | +| Less issuance of new ether is required to incentivize network participants | | + +More information about the PoS design rationale is available in this [blog post by Vitalik](https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51). ## Further reading {#further-reading} diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index e69de29bb2d..ac6067f1cb9 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -0,0 +1,22 @@ +--- +title: Weak subjectivity +description: An explanation of weak subjectivity and its role in PoS Ethereum. +lang: en +sidebar: true +incomplete: false +--- + +# What is “weak subjectivity”? + +It is important to note that the mechanism of using deposits to ensure there is “something at stake” does lead to one change in the security model. Suppose that deposits are locked for four months, and can later be withdrawn. Suppose that an attempted 51% attack happens that reverts 10 days worth of transactions. The blocks created by the attackers can simply be imported into the main chain as proof-of-malfeasance (or “dunkles”) and the validators can be punished. However, suppose that such an attack happens after six months. Then, even though the blocks can certainly be re-imported, by that time the malfeasant validators will be able to withdraw their deposits on the main chain, and so they cannot be punished. + +To solve this problem, we introduce a “revert limit” - a rule that nodes must simply refuse to revert further back in time than the deposit length (ie. in our example, four months), and we additionally require nodes to log on at least once every deposit length to have a secure view of the chain. Note that this rule is different from every other consensus rule in the protocol, in that it means that nodes may come to different conclusions depending on when they saw certain messages. 
The time that a node saw a given message may be different between different nodes; hence we consider this rule “subjective” (alternatively, one well-versed in Byzantine fault tolerance theory may view it as a kind of synchrony assumption). + +However, the “subjectivity” here is very weak: in order for a node to get on the “wrong” chain, they must receive the original message four months later than they otherwise would have. This is only possible in two cases: + +When a node connects to the blockchain for the first time. +If a node has been offline for more than four months. + +We can solve (1) by making it the user’s responsibility to authenticate the latest state out of band. They can do this by asking their friends, block explorers, businesses that they interact with, etc. for a recent block hash in the chain that they see as the canonical one. In practice, such a block hash may well simply come as part of the software they use to verify the blockchain; an attacker that can corrupt the checkpoint in the software can arguably just as easily corrupt the software itself, and no amount of pure cryptoeconomic verification can solve that problem. (2) does genuinely add an additional security requirement for nodes, though note once again that the possibility of hard forks and security vulnerabilities, and the requirement to stay up to date to know about them and install any needed software updates, exists in proof of work too. + +Note that all of this is a problem only in the very limited case where a majority of previous stakeholders from some point in time collude to attack the network and create an alternate chain; most of the time we expect there will only be one canonical chain to choose from. From 931c6918dda210dde1077a222667be35b2cefa53 Mon Sep 17 00:00:00 2001 From: Joe Date: Fri, 22 Apr 2022 09:56:19 +0100 Subject: [PATCH 05/26] update weak subjectivity page --- .../docs/consensus-mechanisms/pos/index.md | 2 +- .../pos/weak-subjectivity/index.md | 21 ++++++++++++------- 2 files changed, 14 insertions(+), 9 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index 5bbce886b5f..7d808e8f454 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -45,7 +45,7 @@ When the network performs optimally and honestly, there is only ever one new blo ## Proof-of-stake and security {#pos-and-security} -The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in PoS as it does in PoW, but it's even more risky for the attackers. To do so, a attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to ensure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of PoS over PoW is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. 
+The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in PoS as it does in PoW, but it's even more risky for the attackers. To do so, an attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to ensure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of PoS over PoW is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork, while also encouraging apps, exchanges and pools to do the same. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. 51% attacks are just one flavour of malicious activity. Bad actors could attempt long-range attacks (although the finality gadget neutralizes this attack vector), short range 'reorgs' (although proposer boosting and attestation deadlines mitigate this), bouncing and balancing attacks (also mitigated by proposer boosting, and these attacks have anyway only been demonstrated under idealized network conditions) or avalanche attacks (neutralized by the fork choice algorithm's rule of only considering the latest message). diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index ac6067f1cb9..ec0f1145ee0 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -6,17 +6,22 @@ sidebar: true incomplete: false --- -# What is “weak subjectivity”? +## Prerequisites -It is important to note that the mechanism of using deposits to ensure there is “something at stake” does lead to one change in the security model. Suppose that deposits are locked for four months, and can later be withdrawn. Suppose that an attempted 51% attack happens that reverts 10 days worth of transactions. The blocks created by the attackers can simply be imported into the main chain as proof-of-malfeasance (or “dunkles”) and the validators can be punished. However, suppose that such an attack happens after six months. Then, even though the blocks can certainly be re-imported, by that time the malfeasant validators will be able to withdraw their deposits on the main chain, and so they cannot be punished. +To understand this page it is necessary to first understand the fundamentals of [proof-of-stake](/developers/docs/consensus-mechanisms/pos/). -To solve this problem, we introduce a “revert limit” - a rule that nodes must simply refuse to revert further back in time than the deposit length (ie. in our example, four months), and we additionally require nodes to log on at least once every deposit length to have a secure view of the chain. Note that this rule is different from every other consensus rule in the protocol, in that it means that nodes may come to different conclusions depending on when they saw certain messages.
The time that a node saw a given message may be different between different nodes; hence we consider this rule “subjective” (alternatively, one well-versed in Byzantine fault tolerance theory may view it as a kind of synchrony assumption). +## Weak Subjectivity -However, the “subjectivity” here is very weak: in order for a node to get on the “wrong” chain, they must receive the original message four months later than they otherwise would have. This is only possible in two cases: +At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". Any nodes that enter the network later on are required to download that genesis block and every block that came after it and re-execute the transactions inside them. The entire blockchain is built on top of the genesis block - it is the universal "ground truth". This means that the genesis block has to be irreversible and always present in the canonical chain. -When a node connects to the blockchain for the first time. -If a node has been offline for more than four months. +In proof-of-work blockchains there is a single canonical chain that all honest nodes agree upon (e.g. the one that has taken the most energy, measured in proof-of-work difficulty, to mine). This is "objective". Alternatively, in "subjective" blockchains there are multiple valid states and nodes choose between them based on social information from their peers. Ethereum PoS is "weakly subjective" because there is one correct chain that all honest nodes agree upon. Nodes that are continuously connected are effectively objective because they simply follow the consensus mechanism to the head of the chain. However, nodes entering the network for the first time or after some long delay rely upon information about the state of the blockchain gathered from some trusted source such as a friend's node or a block explorer. Given an honest recent state and the full set of blocks, any new node entering the network will independently arrive at the correct head. -We can solve (1) by making it the user’s responsibility to authenticate the latest state out of band. They can do this by asking their friends, block explorers, businesses that they interact with, etc. for a recent block hash in the chain that they see as the canonical one. In practice, such a block hash may well simply come as part of the software they use to verify the blockchain; an attacker that can corrupt the checkpoint in the software can arguably just as easily corrupt the software itself, and no amount of pure cryptoeconomic verification can solve that problem. (2) does genuinely add an additional security requirement for nodes, though note once again that the possibility of hard forks and security vulnerabilities, and the requirement to stay up to date to know about them and install any needed software updates, exists in proof of work too. +## Weak subjectivity checkpoints -Note that all of this is a problem only in the very limited case where a majority of previous stakeholders from some point in time collude to attack the network and create an alternate chain; most of the time we expect there will only be one canonical chain to choose from. +The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are blocks that all nodes on the network agree belong in the canonical chain. They serve a similar purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. 
The fork choice algorithm executed by each node treats the weak subjectivity checkpoints as a genesis block, trusting the blockchain state defined in that checkpoint to be correct, and then independently verifying the chain from that point onwards. The fork choice algorithm automatically rejects any block that does not build upon the most recent weak subjectivity checkpoint. + +## How weak is weak? + +" +However, arguably this is a very weak requirement; in fact, users need to trust client developers and/or "the community" to about this extent already. At the very least, users need to trust someone (usually client developers) to tell them what the protocol is and what any updates to the protocol have been. This is unavoidable in any software application. Hence, the marginal additional trust requirement that PoS imposes is still quite low. +" From 3a2f4a667f0e3ffcf9fecde15a49f0f8ec28ef8e Mon Sep 17 00:00:00 2001 From: Joe Date: Mon, 25 Apr 2022 12:12:38 +0100 Subject: [PATCH 06/26] placeholders for casper, fork-choice & subj pgs --- .../consensus-mechanisms/pos/casper-ffg/index.md | 16 ++++++++++++++++ .../pos/fork-choice/index.md | 10 ++++++++++ .../pos/weak-subjectivity/index.md | 10 ++++++---- 3 files changed, 32 insertions(+), 4 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md index 1ebd19619df..ca495fe5e4e 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md @@ -5,3 +5,19 @@ lang: en sidebar: true incomplete: true --- + +# Consensus mechanism + +Casper-FFG (Casper the friendly finality gadget) is the mechanism used to ossify the chain at specific intervals. This is a process of upgrading blocks to "justified" if they are voted for by at least 2/3 of the total staked ether, and to "finalized" if another block is justified on top of it. + +## Justification + +Justified blocks are essentially candidates for finalization. They probably won't be reorg'd, but they technically could be. + +## Finalization + +Finalization guarantees that the block will not be reverted unless the chain has suffered a critical consensus failure. + +## Economic Finality + +Finalized blocks then cannot be reverted unless an attacker has burned at least 33% of the total staked ether (because they must have created two competing chains each with 2/3 attestations in order to create competing finalized blocks, meaning at least 1/3 of validators are contradicting themselves - they will be slashed maximally meaning 33% of the total stake is destroyed). diff --git a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md index 66fbab1a96f..78fffdd05ed 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md @@ -5,3 +5,13 @@ lang: en sidebar: true incomplete: true --- + +## Why do clients need a fork choice algorithm? + +Under ideal conditions of no network latency and 100% participation of completely honest validators, there is no need for a fork choice algorithm in proof-of-stake Ethereum. This is because in each slot, a single block proposer is randomly selected to create and propagate a block to other nodes, who all validate and add it to the head of their blockchain.
However, in reality there are several circumstances that can lead different nodes to have different views of the head of the chain (for example because some nodes receive blocks later than others) or even for two blocks to exist in the same slot (if a malicious validator has "equivocated" by proposing twice). In these scenarios, Ethereum clients must have a fixed set of rules to follow to determine which block to add to the chain and which to discard. This set of rules is encoded in the fork choice algorithm. + +## LMD-GHOST + +Ethereum's fork-choice algorithm is known as LMD-GHOST, standing for "latest message driven greedy heaviest observed subtree". The basic idea is to choose the fork that has accumulated the greatest weight of attestations. The attestation weight is the total number of attestations weighted by the balance of each validator, so that validators that have been lazy or badly behaved have a smaller influence. This explains the GHOST part of the algorithm - if the client observes a subtree (>=1 fork) at the head of the chain, it chooses the heaviest one. + +The LMD part is a modification that ensures the client only accepts a single message from each validator. If it receives additional messages, older ones are simply discarded. This closes a theoretical attack vector that uses delayed messages to trick LMD-naive GHOST algorithms into choosing a particular fork that they would otherwise discard. diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index ec0f1145ee0..fed5b811c8c 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -14,7 +14,7 @@ To understand this page it is necessary to first understand the fundamentals of At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". Any nodes that enter the network later on are required to download that genesis block and every block that came after it and re-execute the transactions inside them. The entire blockchain is built on top of the genesis block - it is the universal "ground truth". This means that the genesis block has to be irreversible and always present in the canonical chain. -In proof-of-work blockchains there is a single canonical chain that all honest nodes agree upon (e.g. the one that has taken the most energy, measured in proof-of-work difficulty, to mine).
This is "objective". Alternatively, in "subjective" blockchains there are multiple valid states and nodes choose between them based on social information from their peers. Ethereum PoS is "weakly subjective" because there is one correct chain that all honest nodes agree upon. Nodes that are continuously connected are effectively objective because they simply follow the consensus mechanism to the head of the chain. However, nodes entering the network for the first time or after some long delay rely upon information about the state of the blockchain gathered from some trusted source such as a friend's node or a block explorer (or it could be bundled into the client source code so that the trusted source is the client developer team). Given an honest recent state and the full set of blocks, any new node entering the network will independently arrive at the correct head. ## Weak subjectivity checkpoints @@ -22,6 +22,8 @@ The way weak subjectivity is implemented in proof-of-stake Ethereum is by using ## How weak is weak? -" -However, arguably this is a very weak requirement; in fact, users need to trust client developers and/or "the community" to about this extent already. At the very least, users need to trust someone (usually client developers) to tell them what the protocol is and what any updates to the protocol have been. This is unavoidable in any software application. Hence, the marginal additional trust requirement that PoS imposes is still quite low. -" +Subjectivity in a blockchain consensus mechanism is undesirable because it allows a attacker to take over the chain if they can control enough nodes. However, this is not the case in Ethereum's proof-of-stake because the checkpoints make the subjectivity "weak". The issue is reliance upon trusted sources for recent states to build upon. However, the risk of getting a bad weak subjectivity checkpoint is low. There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. Adding a requirement to trust the community to provide honest weak subjectivity checkpoints can be considered about as problematic as trusting the client developers. The overall trust required is low, and the checkpoint only adds marginally. + +## Difference between weak subjectivity checkpoints and finalized blocks + +"It is a block that the entire network agrees on as always being part of the canonical chain. Note that this is quite different than the concept of a “finalized” block – if a node sees two conflicting finalized blocks, then the network has experienced consensus failure and the node cannot identify a canonical fork. On the other hand, if a node sees a block conflicting with a weak subjectivity checkpoint, then it immediately rejects that block. As far as the fork choice of nodes is concerned, the latest weak subjectivity checkpoint is the new genesis block of the network." 
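The practical effect described above can be sketched in a few lines: the fork choice simply refuses any block that does not descend from the weak subjectivity checkpoint, exactly as it would refuse a block that did not descend from genesis. The data structures below are hypothetical and exist only for illustration; no client represents blocks this way:

```python
# Illustrative sketch: the weak subjectivity checkpoint acts like genesis,
# so any block that does not descend from it is rejected outright.

def descends_from(block, checkpoint, parents):
    """Walk parent links until we hit the checkpoint or run out of ancestors."""
    while block is not None:
        if block == checkpoint:
            return True
        block = parents.get(block)
    return False

parents = {"C": "B", "B": "A", "X": "long-range-fork"}  # child -> parent
ws_checkpoint = "A"

for candidate in ["C", "X"]:
    ok = descends_from(candidate, ws_checkpoint, parents)
    print(candidate, "accepted" if ok else "rejected")
```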
From 6432fc1d0c4b58778429ddddf3d11e5d769ec5a3 Mon Sep 17 00:00:00 2001 From: Joe Date: Mon, 25 Apr 2022 19:15:40 +0100 Subject: [PATCH 07/26] make tenses consistent, fin 1st draft of WS page --- .../docs/consensus-mechanisms/pos/index.md | 8 +++---- .../pos/weak-subjectivity/index.md | 21 ++++++++++++++----- 2 files changed, 20 insertions(+), 9 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index 7d808e8f454..0aa6ef9eaf5 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -18,9 +18,9 @@ Proof-of-stake is a type of [consensus mechanism](/developers/docs/consensus-mec Proof-of-stake comes with a number of improvements to the proof-of-work system: -- better energy efficiency – you don't need to use lots of energy on PoW computations -- lower barriers to entry, reduced hardware requirements – you don't need elite hardware to stand a chance of creating new blocks -- reduced centralization risk – proof-of-stake should lead to more nodes in the network +- better energy efficiency – there is no need to use lots of energy on PoW computations +- lower barriers to entry, reduced hardware requirements – there is no need for elite hardware to stand a chance of creating new blocks +- reduced centralization risk – proof-of-stake should lead to more nodes securing the network - because of the low energy requirement less ETH issuance is required to incentivize participation - economic penalties for misbehaviour make 51% style attacks much more costly for an attacker compared to PoW - the community can resort to social recovery of an honest chain if a 51% attack were to overcome the crypto-economic defenses. @@ -29,7 +29,7 @@ Proof-of-stake comes with a number of improvements to the proof-of-work system: To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client and a validator. On depositing their ether, the user joins an activation queue that limits the rate at which new validators join the network. Once activated, validators receive new blocks from peers on the Ethereum network. The transactions delivered in the block are re-executed and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favour of that block across the network. -Whereas under PoW the timing of blocks is determined byt he mining difficulty, in PoS the tempo is fixed. Time in PoS Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot a committee of validators is randomly chosen whose votes are used to determine the validity of the block proposed in that slot. Also in every slot one validator is randomly selected to be a block proposer. That validator is responsible for creating a new block and sending it out to other nodes on the network. +Whereas under PoW the timing of blocks is determined by the mining difficulty, in PoS the tempo is fixed. Time in PoS Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot a committee of validators is randomly chosen whose votes are used to determine the validity of the block proposed in that slot. Also in every slot one validator is randomly selected to be a block proposer. That validator is responsible for creating a new block and sending it out to other nodes on the network. 
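The fixed tempo described above can be expressed with simple arithmetic. A hedged sketch - the 12-second slots and 32-slot epochs come from the text, while the genesis timestamp is a placeholder rather than the real network value:

```python
# Sketch of the fixed PoS tempo: 12-second slots grouped into 32-slot epochs.
# GENESIS_TIME is a hypothetical placeholder, not the real value.

SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
GENESIS_TIME = 1_600_000_000  # hypothetical genesis timestamp (unix seconds)

def slot_at(timestamp: int) -> int:
    """Slot number for a given unix timestamp."""
    return (timestamp - GENESIS_TIME) // SECONDS_PER_SLOT

def epoch_of(slot: int) -> int:
    """Epoch containing a given slot."""
    return slot // SLOTS_PER_EPOCH

def slot_start(slot: int) -> int:
    """Unix timestamp at which a slot begins."""
    return GENESIS_TIME + slot * SECONDS_PER_SLOT

# One proposer and one committee are chosen per slot, so an epoch spans
# 32 slots * 12 s = 384 seconds.
assert slot_start(SLOTS_PER_EPOCH) - slot_start(0) == 384
```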
## Finality {#finality} diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index fed5b811c8c..08790d99946 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -12,18 +12,29 @@ To understand this page it is necessary to first understand the fundamentals of ## Weak Subjectivity -At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". Any nodes that enter the network later on are required to download that genesis block and every block that came after it and re-execute the transactions inside them. The entire blockchain is built on top of the genesis block - it is the universal "ground truth". This means that the genesis block has to be irreversible and always present in the canonical chain. +At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". The entire blockchain is built on top of the genesis block - it is the universal "ground truth". This means that the genesis block has to be irreversible and always present in the canonical chain. The blockchain can then grow from this genesis block "objectively" if there is only one valid chain that is chosen entirely deterministically. Alternatively, nodes might come to different conclusions about which blocks to add to the canonical blockchain depending upon the timing of certain messages received from their peers, possibly trusting some peers more than others. In this model there exist multiple possible valid blocks that a node chooses between by trusting social information from other nodes. This is "subjective". The subjectivity exposes the blockchain to certain types of attack, such as "long range attacks" where nodes that participated very early the chain genesis maintain an alternative fork that they release much later to their own advantage, having already withdrawn their staked ether. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the finalized canonical one. Nodes that lag the rest of the network because they have been offline for a long time or are simply new entrants to the network might not be aware that these attacking validators have withdrawn their funds, so they could follow these dishonest validators onto an incorrect chain. -In proof-of-work blockchains there is a single canonical chain that all honest nodes agree upon (e.g. the one that has taken the most energy, measured in proof-of-work difficulty, to mine). This is "objective". Alternatively, in "subjective" blockchains there are multiple valid states and nodes choose between them based on social information from their peers. Ethereum PoS is "weakly subjective" because there is one correct chain that all honest nodes agree upon. Nodes that are continuously connected are effectively objective because they simply follow the consensus mechanism to the head of the chain. However, nodes entering the network for the first time or after some long delay rely upon information about the state of the blockchain gathered from some trusted source such as a friend's node or a block explorer (or it could be bundled into the client source code so that the trusted source is the client developer team). 
Given an honest recent state and the full set of blocks, any new node entering the network will independently arrive at the correct head. +Closing this attack vector requires reducing the consensus mechanism's reliance on social information. For Ethereum, nodes that are always online and began participating at genesis are protected by the consensus mechanism. The valid chain is simply the one that has accumulated the most attestation-weight since genesis. However, new nodes entering the network or nodes that have been offline for long periods of time are not reliably protected by the consensus mechanism in the same way. They require some trusted information from peers about a recent block that is definitely part of the canonical chain - without this they could be tricked into following an alternative chain. There is still a subjective element to this process because the new node needs to find a trusted source to retrieve the state of a recent canonical block to build upon, but once this is available the node can sync to the head of the chain deterministically. The subjective aspect is therefore weak. ## Weak subjectivity checkpoints -The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are blocks that all nodes on the network agree belong in the canonical chain. They serve a similar purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm executed by each node treats the weak subjectivity checkpoints as a genesis block, trusting that the blockchain state defined in that checkpoint to be correct, and then independently verifying the chain from that point onwards. The fork choice algorithm automatically rejects any block that does not build upon the most recent weak subjectivity checkpoint. +The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are blocks that all nodes on the network agree belong in the canonical chain. They serve a similar purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm executed by each node treats the weak subjectivity checkpoints as de-facto genesis blocks, trusting that the blockchain state defined in that checkpoint to be correct, and then independently verifying the chain from that point onwards. The fork choice algorithm automatically rejects any block that does not build upon the most recent weak subjectivity checkpoint. The checkpoints can be thought of as "revert-limits" because blocks added to the chain before a weak-subjectivity checkpoint simply cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. ## How weak is weak? -Subjectivity in a blockchain consensus mechanism is undesirable because it allows a attacker to take over the chain if they can control enough nodes. However, this is not the case in Ethereum's proof-of-stake because the checkpoints make the subjectivity "weak". The issue is reliance upon trusted sources for recent states to build upon. 
However, the risk of getting a bad weak subjectivity checkpoint is low. There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. Adding a requirement to trust the community to provide honest weak subjectivity checkpoints can be considered about as problematic as trusting the client developers. The overall trust required is low, and the checkpoint only adds marginally. +The subjective aspect of Ethereum's proof-of-stake is the requirement for a recent state (weak subjectivity checkpoint) from a trusted source to sync from. The risk of getting a bad weak subjectivity checkpoint is very low, partly because they can be checked against several independent public sources such as block explorers or multiple nodes. There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. Adding a requirement to trust the community to provide honest weak subjectivity checkpoints can be considered about as problematic as trusting the client developers. The overall trust required is low, and the checkpoint only adds marginally. + +It is also important to realize that these considerations are become important in the very unlikely event of a majority of validators colluding to produce an alternate fork of the blockchain. Under any other circumstances there is only one Ethereum chain to choose from. ## Difference between weak subjectivity checkpoints and finalized blocks -"It is a block that the entire network agrees on as always being part of the canonical chain. Note that this is quite different than the concept of a “finalized” block – if a node sees two conflicting finalized blocks, then the network has experienced consensus failure and the node cannot identify a canonical fork. On the other hand, if a node sees a block conflicting with a weak subjectivity checkpoint, then it immediately rejects that block. As far as the fork choice of nodes is concerned, the latest weak subjectivity checkpoint is the new genesis block of the network." +Finalized blocks and weak subjectivity checkpoints are treated differently by Ethereum nodes. If a node becomes aware of two competing finalized blocks then it is torn between the two - it has no way to identify automatically which is the canonical fork. This is symptomatic of a consensus failure. In contrast, a node simply rejects any block that conflicts with its weak subjectivity checkpoint. From the node's perspective the weak subjectivity checkpoint is represents an absolute truth that cannot be undermined by any new knowledge arriving from its peers. + +## Who to trust? + +In practice, a weak subjectivity checkpoint may come as part of the client software used to verify the blockchain. Arguably an attacker can corrupt the checkpoint in the software can just as easily corrupt the software itself. There is no real crypto-economic route around this problem, but the impact of untrustworthy developers is minimized in Ethereum by having multiple independent client teams each building equivalent software in different langages, all with a vested interest in maintaining an honest chain. Block explorers may also provide weak subjecticity checkpoints, or at least provide a way to cross-reference checkpoints obtained from elsewhere against an additional source. 
Finally, checkpoints can simply be requested from another node, perhaps another Etheruem user that runs a full node can provide a checkpoint that can then be verified against data from a block explorer. + +## Further Reading + +[Weak subjectivity in Eth2](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2) +[Vitalik: How I learned to love weak subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) From abccca4a191290ebd2262e0e1a2927298cf1af5a Mon Sep 17 00:00:00 2001 From: Joe Date: Mon, 25 Apr 2022 21:04:14 +0100 Subject: [PATCH 08/26] refine WS page, fix typos --- .../pos/casper-ffg/index.md | 24 +++++++++++++------ .../pos/fork-choice/index.md | 2 +- .../pos/weak-subjectivity/index.md | 22 ++++++++--------- 3 files changed, 29 insertions(+), 19 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md index ca495fe5e4e..3b819df5e88 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md @@ -6,18 +6,28 @@ sidebar: true incomplete: true --- -# Consensus mechanism +## What is a finality gadget? -Casper-FFG (Casper the friendly finalty gadget) is the mechanism used to ossify the chain at specific intervals. This is a process of upgrading blocks to "justified" if they are voted for by at least 2/3 of the total staked ether, and "finalized" is another block is justified on top of it. +Casper the Friendly Finality Gadget (Casper-FFG) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted unless there has been a critical consensus failure. Selecting a unique canonical chain by providing finality is Casper-FFG's only purpose, meaning it has to be paired with other components, such as block proposal and fork-choice rules to form a complete consensus mechanism. Casper-FFG could be applied as an upgrade to several existing blockchain designs. It currently runs on top of Ethereum's proof-of-work blockchain and will soon switch to finalizing the proof-of-stake chain. This modularity is the reason Casper is referred to as a "finality gadget". -## Justification +## Why does Ethereum need a finality gadget? -Justified blocks are essentially candidates for finalization. They probably wont be reorg'd, but they technically could be. +Casper-FFG provides safety and liveness assurances to Ethereum. Once a block has been finalized, an attacker would have to destroy millions of ether (i.e. billions of USD) to change it. -## Finalization +## How does Casper-FFG work? -Finalization guarantees that the block will not be reverted unless the chain has suffered a critical consensus failure. +Casper-FFG (Casper the friendly finality gadget) is the mechanism used to ossify the chain at specific intervals. This is a process of upgrading blocks to "justified" if they are voted for by at least 2/3 of the total staked ether, and "finalized" is another block is justified on top of it. 
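As a rough illustration of that two-step upgrade, here is a toy Python sketch. It only shows the 2/3-of-total-stake threshold and the justified-then-finalized promotion described above; the stake figures are invented, and the real mechanism operates on epoch-boundary checkpoints rather than arbitrary blocks:

```python
# Simplified sketch of the two-step upgrade: a block is justified once 2/3 of the
# total stake has voted for it, and finalized once another justified block is built
# on top of it. Values are illustrative, not real protocol parameters.

TOTAL_STAKE = 100  # illustrative units of staked ether

def is_justified(votes_for_block: float, total_stake: float = TOTAL_STAKE) -> bool:
    """A block is justified once at least 2/3 of the total stake has voted for it."""
    return votes_for_block * 3 >= total_stake * 2

def finalize(chain_of_checkpoints, votes):
    """Finalize a checkpoint when the checkpoint built on top of it is also justified."""
    finalized = []
    for parent, child in zip(chain_of_checkpoints, chain_of_checkpoints[1:]):
        if is_justified(votes[parent]) and is_justified(votes[child]):
            finalized.append(parent)
    return finalized

votes = {"A": 70, "B": 68, "C": 40}
print(finalize(["A", "B", "C"], votes))  # ['A'] - B is justified on top of A, so A is finalized
```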
-## Economic Finality +### Justification + +### Finality Finalized blocks then cannot be reverted unless an attacker has burned at least 33% of the total staked ether (because they must have created two competing chains each with 2/3 attestations in order to create competing finalized blocks, meaning at least 1/3 of validators are contradicting themselves - they will be slashed maximally meaning 33% of the total stake is destroyed). + +### Slashing + +### Inactivity Leak + +### Fork choice + +The original definition of Casper-FFG included a fork choice algorithm that imposed the rule: `follow the chain containing the justified checkpoint that has the greatest height` where height is defined as the greatest distance from the genesis block. This has been deprecated in favour of a more sophisticated algorithm called LMD-GHOST. The combination of Casper-FFG and LMD-GHOST is soemtimes called "Gasper" and it is the consenuss mechanism that will be used in proof-of-stake Ethereum. diff --git a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md index 78fffdd05ed..f68de658d60 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md @@ -1,6 +1,6 @@ --- title: Fork choice -description: An explanation of the fork chopice algorithm implemented in proof-of-stake Ethereum. +description: An explanation of the fork choice algorithm implemented in proof-of-stake Ethereum. lang: en sidebar: true incomplete: true diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 08790d99946..41113fd3cf5 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -12,27 +12,27 @@ To understand this page it is necessary to first understand the fundamentals of ## Weak Subjectivity -At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". The entire blockchain is built on top of the genesis block - it is the universal "ground truth". This means that the genesis block has to be irreversible and always present in the canonical chain. The blockchain can then grow from this genesis block "objectively" if there is only one valid chain that is chosen entirely deterministically. Alternatively, nodes might come to different conclusions about which blocks to add to the canonical blockchain depending upon the timing of certain messages received from their peers, possibly trusting some peers more than others. In this model there exist multiple possible valid blocks that a node chooses between by trusting social information from other nodes. This is "subjective". The subjectivity exposes the blockchain to certain types of attack, such as "long range attacks" where nodes that participated very early the chain genesis maintain an alternative fork that they release much later to their own advantage, having already withdrawn their staked ether. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the finalized canonical one. 
Nodes that lag the rest of the network because they have been offline for a long time or are simply new entrants to the network might not be aware that these attacking validators have withdrawn their funds, so they could follow these dishonest validators onto an incorrect chain. +At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". The entire blockchain is built on top of the genesis block - it is the universal "ground truth" agreed by all participants to be irreversible and always present in the canonical chain. Successive blocks are added objectively if there is only one valid chain that is chosen entirely deterministically. Alternatively, nodes might come to different conclusions about which blocks to add depending upon information received from their peers, possibly trusting some peers more than others. -Closing this attack vector requires reducing the consensus mechanism's reliance on social information. For Ethereum, nodes that are always online and began participating at genesis are protected by the consensus mechanism. The valid chain is simply the one that has accumulated the most attestation-weight since genesis. However, new nodes entering the network or nodes that have been offline for long periods of time are not reliably protected by the consensus mechanism in the same way. They require some trusted information from peers about a recent block that is definitely part of the canonical chain - without this they could be tricked into following an alternative chain. There is still a subjective element to this process because the new node needs to find a trusted source to retrieve the state of a recent canonical block to build upon, but once this is available the node can sync to the head of the chain deterministically. The subjective aspect is therefore weak. +The subjectivity in this model exposes the blockchain to certain types of attack, such as "long range attacks" where nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could follow these dishonest validators onto an incorrect chain. -## Weak subjectivity checkpoints - -The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are blocks that all nodes on the network agree belong in the canonical chain. They serve a similar purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm executed by each node treats the weak subjectivity checkpoints as de-facto genesis blocks, trusting that the blockchain state defined in that checkpoint to be correct, and then independently verifying the chain from that point onwards. The fork choice algorithm automatically rejects any block that does not build upon the most recent weak subjectivity checkpoint. The checkpoints can be thought of as "revert-limits" because blocks added to the chain before a weak-subjectivity checkpoint simply cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. 
Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. - -## How weak is weak? +Closing these attack vector requires reducing the reliance of the consensus mechanism on social information. Instead of relying on social information from peers to form consensus, Ethereum nodes only require their peers to provide a recent state hash, known as a weak subjectvity checkpoint. Then, they can use that as a univeral ground truth, equivalent to a genesis block. From that genesis block, the new node can sync the remainder of the blocks deterministically using their consensus mechanism, confident that they are on the correct chain. This is known as "weak subjectivity". -The subjective aspect of Ethereum's proof-of-stake is the requirement for a recent state (weak subjectivity checkpoint) from a trusted source to sync from. The risk of getting a bad weak subjectivity checkpoint is very low, partly because they can be checked against several independent public sources such as block explorers or multiple nodes. There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. Adding a requirement to trust the community to provide honest weak subjectivity checkpoints can be considered about as problematic as trusting the client developers. The overall trust required is low, and the checkpoint only adds marginally. +## Weak subjectivity checkpoints -It is also important to realize that these considerations are become important in the very unlikely event of a majority of validators colluding to produce an alternate fork of the blockchain. Under any other circumstances there is only one Ethereum chain to choose from. +The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are blocks that all nodes on the network agree belong in the canonical chain. They serve a similar purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm run by each node treats the weak subjectivity checkpoints as de-facto genesis blocks, trusting that the blockchain state defined in that checkpoint is correct, and independently verifying the chain from that point onwards. The checkpoints can be thought of as "revert-limits" because blocks added to the chain before a weak-subjectivity checkpoint simply cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. ## Difference between weak subjectivity checkpoints and finalized blocks Finalized blocks and weak subjectivity checkpoints are treated differently by Ethereum nodes. If a node becomes aware of two competing finalized blocks then it is torn between the two - it has no way to identify automatically which is the canonical fork. 
This is symptomatic of a consensus failure. In contrast, a node simply rejects any block that conflicts with its weak subjectivity checkpoint. From the node's perspective the weak subjectivity checkpoint is represents an absolute truth that cannot be undermined by any new knowledge arriving from its peers. -## Who to trust? +## How weak is weak? + +The subjective aspect of Ethereum's proof-of-stake is the requirement for a recent state (weak subjectivity checkpoint) from a trusted source to sync from. The risk of getting a bad weak subjectivity checkpoint is very low, partly because they can be checked against several independent public sources such as block explorers or multiple nodes. There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. + +A weak subjectivity checkpoint may even come as part of the client software. Arguably an attacker can corrupt the checkpoint in the software can just as easily corrupt the software itself. There is no real crypto-economic route around this problem, but the impact of untrustworthy developers is minimized in Ethereum by having multiple independent client teams each building equivalent software in different langages, all with a vested interest in maintaining an honest chain. Block explorers may also provide weak subjectivity checkpoints, or at least provide a way to cross-reference checkpoints obtained from elsewhere against an additional source. -In practice, a weak subjectivity checkpoint may come as part of the client software used to verify the blockchain. Arguably an attacker can corrupt the checkpoint in the software can just as easily corrupt the software itself. There is no real crypto-economic route around this problem, but the impact of untrustworthy developers is minimized in Ethereum by having multiple independent client teams each building equivalent software in different langages, all with a vested interest in maintaining an honest chain. Block explorers may also provide weak subjecticity checkpoints, or at least provide a way to cross-reference checkpoints obtained from elsewhere against an additional source. Finally, checkpoints can simply be requested from another node, perhaps another Etheruem user that runs a full node can provide a checkpoint that can then be verified against data from a block explorer. +Finally, checkpoints can simply be requested from other nodes, perhaps another Etheruem user that runs a full node can provide a checkpoint that can then be verified against data from a block explorer. Overall, trusting the provider of a weak subjectivity checkpoint can be considered about as problematic as trusting the client developers. The overall trust required is low. It is also important to note that these considerations only become important in the very unlikely event where a majority of validators collude to produce an alternate fork of the blockchain. Under any other circumstances there is only one Ethereum chain to choose from. 
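As an illustration of that kind of cross-referencing, a minimal sketch follows. The source names and root values are invented; in practice the checkpoint would be a recent state root obtained from, for example, a client default, a block explorer and a friend's node:

```python
# Illustrative sketch of cross-referencing a weak subjectivity checkpoint against
# several independent sources. Sources and root values here are made up.

from collections import Counter

def cross_check(checkpoints: dict) -> str:
    """Accept a checkpoint only if every consulted source reports the same root."""
    roots = Counter(checkpoints.values())
    if len(roots) == 1:
        return next(iter(roots))
    raise ValueError(f"sources disagree, do not sync from any of them: {checkpoints}")

sources = {
    "client-default": "0xabc...",   # hypothetical values
    "block-explorer": "0xabc...",
    "friends-node":   "0xabc...",
}
print(cross_check(sources))  # all sources agree, safe to use as a de-facto genesis
```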
## Further Reading From 9b8ad79b6a3e86c6495390d8af8a745546c9839b Mon Sep 17 00:00:00 2001 From: Joe Date: Tue, 26 Apr 2022 10:26:05 +0100 Subject: [PATCH 09/26] update WS page, mv casper ->gasper, rm fork choice --- .../pos/fork-choice/index.md | 17 ----------------- .../pos/{casper-ffg => gasper}/index.md | 6 ++++-- .../pos/weak-subjectivity/index.md | 11 +++++++---- 3 files changed, 11 insertions(+), 23 deletions(-) delete mode 100644 src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md rename src/content/developers/docs/consensus-mechanisms/pos/{casper-ffg => gasper}/index.md (80%) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md b/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md deleted file mode 100644 index f68de658d60..00000000000 --- a/src/content/developers/docs/consensus-mechanisms/pos/fork-choice/index.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Fork choice -description: An explanation of the fork choice algorithm implemented in proof-of-stake Ethereum. -lang: en -sidebar: true -incomplete: true ---- - -## Why do clients need a fork choice algorithm? - -Under ideal conditions of no network latency and 100% participation of completely honest validators, there is no need for a fork choice algorithm in proof-of-stake Ethereum. This is because in each slot, a single block proposer is randomly selected to create and propagate a block to other nodes, who all validate and add it to the head of their blockchain. However, in reality there are several circumstances that can lead different nodes to have different views of the head of the chain (for example because some nodes receive blocks later than other) or even for two blocks to exist in the same slot (if a malicious validator has "equivocated" by proposing twice). In these scenarios, Ethereum clients must have a fixed set of rules to follow to determine which block to add to the chain and which to discard. This set of rules is encoded in the fork choice algorithm. - -## LMD-GHOST - -Ethereum's fork-choice algorithm is known as LMD_GHOST, standing for "latest message driven greedy heaviest observed subtree". The basic idea is to choose the fork that has accumulated the greatest weight of attestations. The attestation weight is the total number of attestations weighted by the balance of each validator, so that validators that have been lazy or badly behaved have a smaller influence. This expains the GHOST part of the algorithm - if the client observes a subtree (>=1 fork) at the head of the chain, it chooses the heaviest one. - -The LMD part is a modification that ensures the client only accepts a single message from each validator. If it receives additional messages, older ones are simply discarded. This closes a theoretical attack vectors that uses delayed messages to trick LMD-naive GHOST algorithms into choosing a particular fork that it would otherwise discard. 
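For illustration, the fork choice rule described in the (now removed) page above can be sketched in a few lines of Python. The data structures are invented toy versions (a child map, per-validator latest votes and balances); this is not how any client actually implements it:

```python
# Toy sketch of the LMD-GHOST rule: keep only each validator's latest vote, then
# repeatedly step into the child subtree carrying the most attesting balance.

def lmd_ghost_head(children, latest_votes, balances, root="genesis"):
    """
    children:     dict mapping block -> list of child blocks
    latest_votes: dict mapping validator -> block it last attested to
    balances:     dict mapping validator -> stake used to weight its vote
    """
    def subtree_weight(block):
        # Weight of a subtree = stake of all validators whose latest vote is inside it.
        weight = sum(balances[v] for v, b in latest_votes.items() if b == block)
        return weight + sum(subtree_weight(child) for child in children.get(block, []))

    head = root
    while children.get(head):
        # Greedily follow the heaviest observed subtree (GHOST).
        head = max(children[head], key=subtree_weight)
    return head

# Two competing forks B and C on top of A; the heavier one wins.
children = {"genesis": ["A"], "A": ["B", "C"], "B": [], "C": []}
latest_votes = {"v1": "B", "v2": "C", "v3": "C"}
balances = {"v1": 32, "v2": 32, "v3": 32}
print(lmd_ghost_head(children, latest_votes, balances))  # "C"
```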
diff --git a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md similarity index 80% rename from src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md rename to src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 3b819df5e88..6d40ec57284 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/casper-ffg/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -1,11 +1,13 @@ --- -title: Casper-FFG -description: An explanation of the Casper-FFG mechanism. +title: Gasper +description: An explanation of the Gasper PoS mechanism. lang: en sidebar: true incomplete: true --- +Gasper is a combination of Casper the Friendly Finality Gadget and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that uprgades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork chocie algorithm uses accumulated votes to ensure that when forks arise in the blockchain nodes can easily select the correct one. + ## What is a finality gadget? Casper the Friendly Finality Gadget (Casper-FFG) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted unless there has been a critical consensus failure. Selecting a unique canonical chain by providing finality is Casper-FFG's only purpose, meaning it has to be paired with other components, such as block proposal and fork-choice rules to form a complete consensus mechanism. Casper-FFG could be applied as an upgrade to several existing blockchain designs. It currently runs on top of Ethereum's proof-of-work blockchain and will soon switch to finalizing the proof-of-stake chain. This modularity is the reason Casper is referred to as a "finality gadget". diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 41113fd3cf5..8d0e2c53b6e 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -12,15 +12,15 @@ To understand this page it is necessary to first understand the fundamentals of ## Weak Subjectivity -At the very beginning of a blockchain, every node on the network agrees on a specific first block - a "genesis block". The entire blockchain is built on top of the genesis block - it is the universal "ground truth" agreed by all participants to be irreversible and always present in the canonical chain. Successive blocks are added objectively if there is only one valid chain that is chosen entirely deterministically. Alternatively, nodes might come to different conclusions about which blocks to add depending upon information received from their peers, possibly trusting some peers more than others. +Subjectivity in blockchains refers to reliance upon social information to agree on the current or past states. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. 
There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. This initial seed is a subjective element, but there is only one valid chain that all clients will objectively coverge upon provided there has not been a critical consensus failure. Weak subjectivity is a feature of Ethereum's proof-of-stake mechanism. -The subjectivity in this model exposes the blockchain to certain types of attack, such as "long range attacks" where nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could follow these dishonest validators onto an incorrect chain. +## What problems does weak subjectivity solve? -Closing these attack vector requires reducing the reliance of the consensus mechanism on social information. Instead of relying on social information from peers to form consensus, Ethereum nodes only require their peers to provide a recent state hash, known as a weak subjectvity checkpoint. Then, they can use that as a univeral ground truth, equivalent to a genesis block. From that genesis block, the new node can sync the remainder of the blocks deterministically using their consensus mechanism, confident that they are on the correct chain. This is known as "weak subjectivity". +Subjectivity is inherent to proof-of-stake blockchains because selecting the correct chain from multiple forks is done by counting votes. This exposes the blockchain to several attack vectors, including long range attacks whereby nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could be tricked into following an incorrect chain. These attack vectors can be solved by imposing constraints that diminish the subjective aspects of the mechanism - and therefore trust assumptions - to the bare minimum. ## Weak subjectivity checkpoints -The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are blocks that all nodes on the network agree belong in the canonical chain. They serve a similar purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm run by each node treats the weak subjectivity checkpoints as de-facto genesis blocks, trusting that the blockchain state defined in that checkpoint is correct, and independently verifying the chain from that point onwards. The checkpoints can be thought of as "revert-limits" because blocks added to the chain before a weak-subjectivity checkpoint simply cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. 
Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. +The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are state roots that all nodes on the network agree belong in the canonical chain. They serve a similar "universal truth" purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm trusts that the blockchain state defined in that checkpoint is correct and independently and objectively verifies the chain from that point onwards. The checkpoints act as "revert-limits" because blocks located before weak-subjectivity checkpoints cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. ## Difference between weak subjectivity checkpoints and finalized blocks @@ -38,3 +38,6 @@ Finally, checkpoints can simply be requested from other nodes, perhaps another E [Weak subjectivity in Eth2](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2) [Vitalik: How I learned to love weak subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) +[Weak subjectivity (Teku docs)](https://docs.teku.consensys.net/en/latest/Concepts/Weak-Subjectivity/) +[Phase-0 Weak subjectivity guide](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/weak-subjectivity.md) +[Analysis n weak subjectivity in Ethereum 2.0](https://github.com/runtimeverification/beacon-chain-verification/blob/master/weak-subjectivity/weak-subjectivity-analysis.pdf) From 721713fdff26eee00db0d4c41f1a8b75a54a080b Mon Sep 17 00:00:00 2001 From: Joe Date: Tue, 26 Apr 2022 12:07:12 +0100 Subject: [PATCH 10/26] finish first draft of Gasper page --- .../consensus-mechanisms/pos/gasper/index.md | 35 +++++++++++-------- 1 file changed, 21 insertions(+), 14 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 6d40ec57284..1b48ba4def6 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -3,33 +3,40 @@ title: Gasper description: An explanation of the Gasper PoS mechanism. lang: en sidebar: true -incomplete: true +incomplete: false --- -Gasper is a combination of Casper the Friendly Finality Gadget and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that uprgades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork chocie algorithm uses accumulated votes to ensure that when forks arise in the blockchain nodes can easily select the correct one. 
+Gasper is a combination of Casper the Friendly Finality Gadget and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that uprgades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork choice algorithm uses accumulated votes to ensure that when forks arise in the blockchain nodes can easily select the correct one. -## What is a finality gadget? +**note** that the original definition of Casper-FFG was updated slightly for inclusion in Gasper. On this page we consider the updated version. -Casper the Friendly Finality Gadget (Casper-FFG) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted unless there has been a critical consensus failure. Selecting a unique canonical chain by providing finality is Casper-FFG's only purpose, meaning it has to be paired with other components, such as block proposal and fork-choice rules to form a complete consensus mechanism. Casper-FFG could be applied as an upgrade to several existing blockchain designs. It currently runs on top of Ethereum's proof-of-work blockchain and will soon switch to finalizing the proof-of-stake chain. This modularity is the reason Casper is referred to as a "finality gadget". +## The role of Gasper -## Why does Ethereum need a finality gadget? +Gasper is designed to sit atop a proof-of-stake blockchain where nodes provide ether as a security deposit that can be destroyed if they are lazy or dishonest in proposing or validating blocks. Gasper is the mechanism that defines how and why validators are rewarded and punished, how they decide which blocks to accept and reject, and which fork of the blockchain to build on. -Casper-FFG provides safety and liveness assurances to Ethereum. Once a block has been finalized, an attacker would have to destroy millions of ether (i.e. billions of USD) to change it. +## What is finality? -## How does Casper-FFG work? +[Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted (unless there has been a critical consensus failure). Finalized blocks can be thought of as information the blockchain is certain about. In order for a block to be finalized it has to pass through a two-step uprgade procedure. First, 2/3 of the total staked ether must have voted in favor of that block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted but technically they could be. The justified block is then upgraded to "finalized" when another block is justified on top of it. This is a commitment to include the block in the canonical chain so that it cannot be reverted unless an attacker destroys millions of ether (billions of $USD). -Casper-FFG (Casper the friendly finality gadget) is the mechanism used to ossify the chain at specific intervals. This is a process of upgrading blocks to "justified" if they are voted for by at least 2/3 of the total staked ether, and "finalized" is another block is justified on top of it. +These block upgrades do not happen in every slot. Instead, only epoch-boundary blocks can be justified and finalized. These blocks are known as "checkpoints". Upgrading considers pairs of checkpoints. 
A "supermajority link" must exist between two successive checkpoints (i.e. 2/3 of the total staked ether voting that checkpoint B is the correct descendant of checkpoint A) in order to upgrade the less recent checkpoint to finalized and the more recent block to justified.
+
+Because finality requires 2/3 agreement that a block is canonical, an attacker cannot possibly create an alternative finalized chain without either a) owning or manipulating 2/3 of the total staked ether, or b) destroying at least 1/3 of the total staked ether. The first condition arises because 2/3 of the staked ether is required to finalize a chain. The second condition arises because if 2/3 of the total stake has voted in favour of both forks then 1/3 must have voted on both - this is a slashing condition that would be maximally punished and 1/3 of the total stake would be destroyed. At the time of writing this requires an attacker to be willing to lose about $10,000,000,000 worth of ether.
+
+### Incentives and Slashing
+
+Validators are rewarded for honestly proposing and validating blocks. The rewards come in the form of ether added to their stake. On the other hand, validators that are absent and fail to act when called upon miss out on these rewards and sometimes lose a small portion of their existing stake. However, the penalties for being offline are small and in most cases amount to opportunity costs of missing rewards. There are some validator actions that are very difficult to do accidentally and signify some malicious intent, such as proposing multiple blocks for the same slot, attesting to multiple blocks for the same slot or contradicting previous checkpoint votes. These are “slashable” behaviors that are penalized more harshly. Slashing results in some portion of the validator's stake being destroyed and the validator being removed from the network. This takes 36 days. On Day 1 there is an initial penalty of up to 0.5 ETH. Then the slashed validator’s ether slowly drains away across the exit period, but on Day 18 they receive a “correlation penalty” which is larger when more validators are slashed around the same time. The maximum penalty is the entire stake. These rewards and penalties are designed to incentivize honest validators and disincentivize attacks on the network.
+
+### Inactivity Leak
+
+As well as security, Gasper also provides "plausible liveness". This is the condition that as long as 2/3 of the total staked ether is voting honestly and following the protocol, the chain will be able to finalize irrespective of any other activity (such as attacks, latency issues or slashings). Put another way, 1/3 of the total staked ether must be somehow compromised to prevent the chain from finalizing. In Gasper there is an additional line of defense against a liveness failure, known as the "inactivity leak". This mechanism activates when the chain has failed to finalize for more than 4 epochs. The validators that are not actively attesting to the majority chain have their stake gradually drained away until the majority regains 2/3 of the total stake, ensuring that liveness failures are only temporary.
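As a toy illustration of the inactivity leak idea, the sketch below drains offline stake until the online validators again control 2/3 of the total. The fixed 1% drain per epoch is an invented number; the real penalty schedule defined in the consensus specs grows the longer a validator stays offline:

```python
# Toy illustration of the inactivity leak: offline validators' stake drains away
# until the online validators again control 2/3 of the total. The fixed 1% drain
# here is purely illustrative, not the real penalty schedule.

def inactivity_leak(online_stake, offline_stake, drain_rate=0.01):
    epochs = 0
    while online_stake * 3 < (online_stake + offline_stake) * 2:  # online < 2/3 of total
        offline_stake *= (1 - drain_rate)                          # leak offline balances
        epochs += 1
    return epochs, offline_stake

epochs, remaining = inactivity_leak(online_stake=60.0, offline_stake=40.0)
print(f"finality restored after {epochs} epochs; offline stake leaked down to {remaining:.1f}")
```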
+
+### Fork choice
+
+The original definition of Casper-FFG included a fork choice algorithm that imposed the rule: `follow the chain containing the justified checkpoint that has the greatest height` where height is defined as the greatest distance from the genesis block. In Gasper the original fork choice rule has been deprecated in favour of a more sophisticated algorithm called LMD-GHOST. It is important to realize that under normal conditions a fork choice rule is unnecessary - there is a single block proposer for every slot and honest validators attest to it. It is only in cases of large network asynchronicity or when a dishonest block proposer has equivocated that a fork choice algorithm is required. However, when those cases do arise, the fork choice algorithm is a critical defense that secures the correct chain.
+
+LMD-GHOST stands for "latest message driven greedy heaviest observed sub-tree". This is a jargon-heavy way to define an algorithm that selects the fork with the greatest accumulated weight of attestations as the canonical one (greedy heaviest subtree) and that, if multiple messages are received from a validator, only considers the latest one (latest-message driven). Every validator assesses each block using this rule before adding the heaviest block to its canonical chain.
+
+## Further Reading
+
+[Gasper: Combining GHOST and Casper](https://arxiv.org/pdf/2003.03052.pdf)
+[Casper the Friendly Finality Gadget](https://arxiv.org/pdf/1710.09437.pdf)

From 13a7e70572ed46a0c0badff31d7251101beb28f8 Mon Sep 17 00:00:00 2001
From: Joe
Date: Tue, 26 Apr 2022 13:10:02 +0100
Subject: [PATCH 11/26] fix typos, add faq page

---
 .../consensus-mechanisms/pos/faqs/index.md   | 246 ++++++++++++++++++
 .../consensus-mechanisms/pos/gasper/index.md |   4 +-
 2 files changed, 248 insertions(+), 2 deletions(-)
 create mode 100644 src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md

diff --git a/src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md b/src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md
new file mode 100644
index 00000000000..78da15b63a3
--- /dev/null
+++ b/src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md
@@ -0,0 +1,246 @@
+# Proof of Stake FAQs
+
+## What is Proof of Stake
+
+**Proof of Stake (PoS) is a category of consensus algorithms for public blockchains that depend on a validator's economic stake in the network**. In proof of work (PoW) based public blockchains (e.g. Bitcoin and the current implementation of Ethereum), the algorithm rewards participants who solve cryptographic puzzles in order to validate transactions and create new blocks (i.e. mining). In PoS-based public blockchains (e.g. Ethereum's upcoming Casper implementation), a set of validators take turns proposing and voting on the next block, and the weight of each validator's vote depends on the size of its deposit (i.e. stake). Significant advantages of PoS include **security, reduced risk of centralization, and energy efficiency**.
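A minimal sketch of the deposit-weighted voting described in that paragraph (the validator names, deposits and votes are invented examples):

```python
# Minimal sketch of deposit-weighted voting: each validator's vote counts in
# proportion to its deposit. Deposits and votes are made-up example values.

def winning_block(votes, deposits):
    """votes: validator -> block voted for; deposits: validator -> staked amount."""
    weight = {}
    for validator, block in votes.items():
        weight[block] = weight.get(block, 0) + deposits[validator]
    return max(weight, key=weight.get)

deposits = {"alice": 32, "bob": 32, "carol": 96}
votes = {"alice": "block_x", "bob": "block_x", "carol": "block_y"}
print(winning_block(votes, deposits))  # "block_y" - carol's larger deposit outweighs two smaller ones
```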
+ +In general, a proof of stake algorithm looks as follows. The blockchain keeps track of a set of validators, and anyone who holds the blockchain's base cryptocurrency (in Ethereum's case, ether) can become a validator by sending a special type of transaction that **locks up their ether into a deposit**. The process of creating and agreeing to new blocks is then done through a consensus algorithm that all current validators can participate in. + +There are many kinds of consensus algorithms, and many ways to assign rewards to validators who participate in the consensus algorithm, so there are many "flavors" of proof of stake. From an algorithmic perspective, there are two major types: chain-based proof of stake and [BFT](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)-style proof of stake. + +In **chain-based proof of stake**, the algorithm pseudo-randomly selects a validator during each time slot (eg. every period of 10 seconds might be a time slot), and assigns that validator the right to create a single block, and this block must point to some previous block (normally the block at the end of the previously longest chain), and so over time most blocks converge into a single constantly growing chain. + +In **BFT-style proof of stake**, validators are **randomly** assigned the right to _propose_ blocks, but _agreeing on which block is canonical_ is done through a multi-round process where every validator sends a "vote" for some specific block during each round, and at the end of the process all (honest and online) validators permanently agree on whether or not any given block is part of the chain. Note that blocks may still be _chained together_; the key difference is that consensus on a block can come within one block, and does not depend on the length or size of the chain after it. + +## What are the benefits of proof of stake as opposed to proof of work? + +See [A Proof of Stake Design Philosophy](https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51) for a more long-form argument. + +In short: + +- **No need to consume large quantities of electricity** in order to secure a blockchain (eg. it's estimated that both Bitcoin and Ethereum burn over $1 million worth of electricity and hardware costs per day as part of their consensus mechanism). +- Because of the lack of high electricity consumption, there is **not as much need to issue as many new coins** in order to motivate participants to keep participating in the network. It may theoretically even be possible to have _negative_ net issuance, where a portion of transaction fees is "burned" and so the supply goes down over time. +- Proof of stake opens the door to a wider array of techniques that use game-theoretic mechanism design in order to better **discourage centralized cartels** from forming and, if they do form, from acting in ways that are harmful to the network (eg. like [selfish mining](https://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf) in proof of work). +- **Reduced centralization risks**, as economies of scale are much less of an issue. $10 million of coins will get you exactly 10 times higher returns than $1 million of coins, without any additional disproportionate gains because at the higher level you can afford better mass-production equipment. 
+- Ability to use economic penalties to **make various forms of 51% attacks vastly more expensive** to carry out than proof of work - to paraphrase Vlad Zamfir, "it's as though your ASIC farm burned down if you participated in a 51% attack". + +## How does proof of stake fit into traditional Byzantine fault tolerance research? + +There are several fundamental results from Byzantine fault tolerance research that apply to all consensus algorithms, including traditional consensus algorithms like PBFT but also any proof of stake algorithm and, with the appropriate mathematical modeling, proof of work. + +The key results include: + +- [**CAP theorem**](https://en.wikipedia.org/wiki/CAP_theorem) - "in the cases that a network partition takes place, you have to choose either consistency or availability, you cannot have both". The intuitive argument is simple: if the network splits in half, and in one half I send a transaction "send my 10 coins to A" and in the other I send a transaction "send my 10 coins to B", then either the system is unavailable, as one or both transactions will not be processed, or it becomes inconsistent, as one half of the network will see the first transaction completed and the other half will see the second transaction completed. Note that the CAP theorem has nothing to do with scalability; it applies to sharded and non-sharded systems equally. +- [**FLP impossibility**](http://the-paper-trail.org/blog/a-brief-tour-of-flp-impossibility/) - in an asynchronous setting (ie. there are no guaranteed bounds on network latency even between correctly functioning nodes), it is not possible to create an algorithm which is guaranteed to reach consensus in any specific finite amount of time if even a single faulty/dishonest node is present. Note that this does NOT rule out ["Las Vegas" algorithms](https://en.wikipedia.org/wiki/Las_Vegas_algorithm) that have some probability each round of achieving consensus and thus will achieve consensus within T seconds with probability exponentially approaching 1 as T grows; this is in fact the "escape hatch" that many successful consensus algorithms use. +- **Bounds on fault tolerance** - from [the DLS paper](http://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf) we have: (i) protocols running in a partially synchronous network model (ie. there is a bound on network latency but we do not know ahead of time what it is) can tolerate up to 1/3 arbitrary (ie. "Byzantine") faults, (ii) deterministic protocols in an asynchronous model (ie. no bounds on network latency) cannot tolerate faults (although their paper fails to mention that [randomized algorithms can](http://link.springer.com/chapter/10.1007%2F978-3-540-77444-0_7) with up to 1/3 fault tolerance), (iii) protocols in a synchronous model (ie. network latency is guaranteed to be less than a known `d`) can, surprisingly, tolerate up to 100% fault tolerance, although there are restrictions on what can happen when more than or equal to 1/2 of nodes are faulty. Note that the "authenticated Byzantine" model is the one worth considering, not the "Byzantine" one; the "authenticated" part essentially means that we can use public key cryptography in our algorithms, which is in modern times very well-researched and very cheap. + +Proof of work has been [rigorously analyzed by Andrew Miller and others](https://socrates1024.s3.amazonaws.com/consensus.pdf) and fits into the picture as an algorithm reliant on a synchronous network model. 
We can model the network as being made up of a near-infinite number of nodes, with each node representing a very small unit of computing power and having a very small probability of being able to create a block in a given period. In this model, the protocol has 50% fault tolerance assuming zero network latency, ~46% (Ethereum) and ~49.5% (Bitcoin) fault tolerance under actually observed conditions, but goes down to 33% if network latency is equal to the block time, and reduces to zero as network latency approaches infinity. + +Proof of stake consensus fits more directly into the Byzantine fault tolerant consensus mould, as all validators have known identities (stable Ethereum addresses) and the network keeps track of the total size of the validator set. There are two general lines of proof of stake research, one looking at synchronous network models and one looking at partially asynchronous network models. "Chain-based" proof of stake algorithms almost always rely on synchronous network models, and their security can be formally proven within these models similarly to how security of [proof of work algorithms](http://nakamotoinstitute.org/static/docs/anonymous-byzantine-consensus.pdf) can be proven. A line of research connecting traditional Byzantine fault tolerant consensus in partially synchronous networks to proof of stake also exists, but is more complex to explain; it will be covered in more detail in later sections. + +Proof of work algorithms and chain-based proof of stake algorithms choose availability over consistency, but BFT-style consensus algorithms lean more toward consistency; [Tendermint](https://github.com/tendermint/tendermint) chooses consistency explicitly, and Casper uses a hybrid model that prefers availability but provides as much consistency as possible and makes both on-chain applications and clients aware of how strong the consistency guarantee is at any given time. + +Note that Ittay Eyal and Emin Gun Sirer's [selfish mining](https://bitcoinmagazine.com/articles/selfish-mining-a-25-attack-against-the-bitcoin-network-1383578440) discovery, which places 25% and 33% bounds on the incentive compatibility of Bitcoin mining depending on the network model (ie. mining is only incentive compatible if collusions larger than 25% or 33% are impossible) has NOTHING to do with results from traditional consensus algorithm research, which does not touch incentive compatibility. + +## What is the "nothing at stake" problem and how can it be fixed? + +In many early (all chain-based) proof of stake algorithms, including Peercoin, there are only rewards for producing blocks, and no penalties. This has the unfortunate consequence that, in the case that there are multiple competing chains, it is in a validator's incentive to try to make blocks on top of every chain at once, just to be sure: + +![](https://raw.githubusercontent.com/vbuterin/diagrams/master/possec.png) + +In proof of work, doing so would require splitting one's computing power in half, and so would not be lucrative: + +![](https://github.com/vbuterin/diagrams/blob/master/powsec.png?raw=true) + +The result is that if all actors are narrowly economically rational, then even if there are no attackers, a blockchain may never reach consensus. 
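The dominance of staking on every chain is easy to see with a toy expected-value calculation (illustrative numbers only; no penalties, as in the early chain-based designs described above):

```python
# Expected reward for one validator during a temporary fork between chains A and B,
# in a penalty-free ("nothing at stake") proof of stake design. Numbers are made up.
BLOCK_REWARD = 1.0
P_A_WINS = 0.9   # validator's estimate that chain A ends up canonical
P_B_WINS = 0.1

# A block reward only pays out if the chain it was created on becomes canonical.
ev_stake_on_a_only = P_A_WINS * BLOCK_REWARD                # 0.9
ev_stake_on_b_only = P_B_WINS * BLOCK_REWARD                # 0.1
ev_stake_on_both = (P_A_WINS + P_B_WINS) * BLOCK_REWARD     # 1.0

print(ev_stake_on_a_only, ev_stake_on_b_only, ev_stake_on_both)
# Staking on both chains dominates regardless of the probabilities,
# so a purely rational validator never helps the fork resolve.
```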
If there is an attacker, then the attacker need only overpower altruistic nodes (who would exclusively stake on the original chain), and not rational nodes (who would stake on both the original chain and the attacker's chain), in contrast to proof of work, where the attacker must overpower both altruists and rational nodes (or at least credibly threaten to: see [P + epsilon attacks](https://blog.ethereum.org/2015/01/28/p-epsilon-attack/)). + +Some argue that stakeholders have an incentive to act correctly and only stake on the longest chain in order to "preserve the value of their investment"; however, this ignores that this incentive suffers from [tragedy of the commons](https://en.wikipedia.org/wiki/Tragedy_of_the_commons) problems: each individual stakeholder might only have a 1% chance of being "pivotal" (ie. being in a situation where if they participate in an attack then it succeeds and if they do not participate it fails), and so the bribe needed to convince them personally to join an attack would be only 1% of the size of their deposit; hence, the required combined bribe would be only 0.5-1% of the total sum of all deposits. Additionally, this argument implies that any zero-chance-of-failure situation is not a stable equilibrium, as if the chance of failure is zero then everyone has a 0% chance of being pivotal. + +This can be solved via two strategies. The first, described in broad terms under the name "Slasher" [here](https://blog.ethereum.org/2014/01/15/slasher-a-punitive-proof-of-stake-algorithm/) and developed further by Iddo Bentov [here](https://arxiv.org/pdf/1406.5694.pdf), involves penalizing validators if they simultaneously create blocks on multiple chains, by means of including proof of misbehavior (ie. two conflicting signed block headers) into the blockchain at a later point in time, at which point the malfeasant validator's deposit is deducted appropriately. This changes the incentive structure thus: + +![](https://github.com/vbuterin/diagrams/blob/master/slasher1sec.png?raw=true) + +Note that for this algorithm to work, the validator set needs to be determined well ahead of time. Otherwise, if a validator has 1% of the stake, then if there are two branches A and B then 0.99% of the time the validator will be eligible to stake only on A and not on B, 0.99% of the time the validator will be eligible to stake on B and not on A, and only 0.01% of the time will the validator be eligible to stake on both. Hence, the validator can with 99% efficiency probabilistically double-stake: stake on A if possible, stake on B if possible, and only if the choice between both is open stake on the longer chain. This can only be avoided if the validator selection is the same for every block on both branches, which requires the validators to be selected at a time before the fork takes place. + +This has its own flaws, including requiring nodes to be frequently online to get a secure view of the blockchain, and opening up medium-range validator collusion risks (ie. situations where, for example, 25 out of 30 consecutive validators get together and agree ahead of time to implement a 51% attack on the previous 19 blocks), but if these risks are deemed acceptable then it works well. + +The second strategy is to simply punish validators for creating blocks on the _wrong_ chain.
That is, if there are two competing chains, A and B, then if a validator creates a block on B, they get a reward of +R on B, but the block header can be included into A (in Casper this is called a "dunkle") and on A the validator suffers a penalty of -F (possibly F = R). This changes the economic calculation thus: + +![](https://github.com/vbuterin/diagrams/blob/master/slasher2sec.png?raw=true) + +The intuition here is that we can replicate the economics of proof of work inside of proof of stake. In proof of work, there is also a penalty for creating a block on the wrong chain, but this penalty is implicit in the external environment: miners have to spend extra electricity and obtain or rent extra hardware. Here, we simply make the penalties explicit. This mechanism has the disadvantage that it imposes slightly more risk on validators (although the effect should be smoothed out over time), but has the advantage that it does not require validators to be known ahead of time. + +## That shows how chain-based algorithms solve nothing-at-stake. Now how do BFT-style proof of stake algorithms work? + +BFT-style (partially synchronous) proof of stake algorithms allow validators to "vote" on blocks by sending one or more types of signed messages, and specify two kinds of rules: + +- **Finality conditions** - rules that determine when a given hash can be considered finalized. +- **Slashing conditions** - rules that determine when a given validator can be deemed beyond reasonable doubt to have misbehaved (eg. voting for multiple conflicting blocks at the same time). If a validator triggers one of these rules, their entire deposit gets deleted. + +To illustrate the different forms that slashing conditions can take, we will give two examples of slashing conditions (hereinafter, "2/3 of all validators" is shorthand for "2/3 of all validators weighted by deposited coins", and likewise for other fractions and percentages). In these examples, "PREPARE" and "COMMIT" should be understood as simply referring to two types of messages that validators can send. + +1. If `MESSAGES` contains messages of the form `["COMMIT", HASH1, view]` and `["COMMIT", HASH2, view]` for the same `view` but differing `HASH1` and `HASH2` signed by the same validator, then that validator is slashed. +2. If `MESSAGES` contains a message of the form `["COMMIT", HASH, view1]`, then UNLESS either view1 = -1 or there also exist messages of the form `["PREPARE", HASH, view1, view2]` for some specific `view2`, where `view2 < view1`, signed by 2/3 of all validators, then the validator that made the COMMIT is slashed. + +There are two important desiderata for a suitable set of slashing conditions to have: + +- **Accountable safety** - if conflicting `HASH1` and `HASH2` (ie. `HASH1` and `HASH2` are different, and neither is a descendant of the other) are finalized, then at least 1/3 of all validators must have violated some slashing condition. +- **Plausible liveness** - unless at least 1/3 of all validators have violated some slashing condition, there exists a set of messages that 2/3 of validators can produce that finalize some value. + +If we have a set of slashing conditions that satisfies both properties, then we can incentivize participants to send messages, and start benefiting from economic finality. + +## What is "economic finality" in general? 
+ +Economic finality is the idea that once a block is finalized, or more generally once enough messages of certain types have been signed, then the only way that at any point in the future the canonical history will contain a conflicting block is if a large number of people are willing to burn very large amounts of money. If a node sees that this condition has been met for a given block, then they have a very economically strong assurance that that block will always be part of the canonical history that everyone agrees on. + +There are two "flavors" of economic finality: + +1. A block can be economically finalized if a sufficient number of validators have signed cryptoeconomic claims of the form "I agree to lose X in all histories where block B is not included". This gives clients assurance that either (i) B is part of the canonical chain, or (ii) validators lost a large amount of money in order to trick them into thinking that this is the case. +2. A block can be economically finalized if a sufficient number of validators have signed messages expressing support for block B, and there is a mathematical proof that _if some B' != B is also finalized under the same definition_ then validators lose a large amount of money. If clients see this, and also validate the chain, and validity plus finality is a sufficient condition for precedence in the canonical fork choice rule, then they get an assurance that either (i) B is part of the canonical chain, or (ii) validators lost a large amount of money in making a conflicting chain that was also finalized. + +The two approaches to finality inherit from the two solutions to the nothing at stake problem: finality by penalizing incorrectness, and finality by penalizing equivocation. The main benefit of the first approach is that it is more light-client friendly and is simpler to reason about, and the main benefits of the second approach are that (i) it's easier to see that honest validators will not be punished, and (ii) griefing factors are more favorable to honest validators. + +Casper follows the second flavor, though it is possible that an on-chain mechanism will be added where validators can voluntarily opt-in to signing finality messages of the first flavor, thereby enabling much more efficient light clients. + +## So how does this relate to Byzantine fault tolerance theory? + +Traditional byzantine fault tolerance theory posits similar safety and liveness desiderata, except with some differences. First of all, traditional byzantine fault tolerance theory simply requires that safety is achieved if 2/3 of validators are _honest_. This is a strictly easier model to work in; traditional fault tolerance tries to prove "if mechanism M has a safety failure, then at least 1/3 of nodes are faulty", whereas our model tries to prove "if mechanism M has a safety failure, then at least 1/3 of nodes are faulty, _and you know which ones, even if you were offline at the time the failure took place_". From a liveness perspective, our model is the easier one, as we do not demand a proof that the network _will_ come to consensus, we just demand a proof that it does not get _stuck_. + +Fortunately, we can show the additional accountability requirement is not a particularly difficult one; in fact, with the right "protocol armor", we can convert _any_ traditional partially synchronous or asynchronous Byzantine fault-tolerant algorithm into an accountable algorithm. 
The proof of this basically boils down to the fact that faults can be exhaustively categorized into a few classes, and each one of these classes is either accountable (ie. if you commit that type of fault you can get caught, so we can make a slashing condition for it) or indistinguishable from latency (note that even the fault of sending messages too early is indistinguishable from latency, as one can model it by speeding up everyone's clocks and assigning the messages that _weren't_ sent too early a higher latency). + +## What is "weak subjectivity"? + +It is important to note that the mechanism of using deposits to ensure there is "something at stake" does lead to one change in the security model. Suppose that deposits are locked for four months, and can later be withdrawn. Suppose that an attempted 51% attack happens that reverts 10 days worth of transactions. The blocks created by the attackers can simply be imported into the main chain as proof-of-malfeasance (or "dunkles") and the validators can be punished. However, suppose that such an attack happens after six months. Then, even though the blocks can certainly be re-imported, by that time the malfeasant validators will be able to withdraw their deposits on the main chain, and so they cannot be punished. + +To solve this problem, we introduce a "revert limit" - a rule that nodes must simply refuse to revert further back in time than the deposit length (ie. in our example, four months), and we additionally require nodes to log on at least once every deposit length to have a secure view of the chain. Note that this rule is different from every other consensus rule in the protocol, in that it means that nodes may come to different conclusions depending on when they saw certain messages. The time that a node saw a given message may be different between different nodes; hence we consider this rule "subjective" (alternatively, one well-versed in Byzantine fault tolerance theory may view it as a kind of synchrony assumption). + +However, the "subjectivity" here is very weak: in order for a node to get on the "wrong" chain, they must receive the original message four months later than they otherwise would have. This is only possible in two cases: + +1. When a node connects to the blockchain for the first time. +2. If a node has been offline for more than four months. + +We can solve (1) by making it the user's responsibility to authenticate the latest state out of band. They can do this by asking their friends, block explorers, businesses that they interact with, etc. for a recent block hash in the chain that they see as the canonical one. In practice, such a block hash may well simply come as part of the software they use to verify the blockchain; an attacker that can corrupt the checkpoint in the software can arguably just as easily corrupt the software itself, and no amount of pure cryptoeconomic verification can solve that problem. (2) does genuinely add an additional security requirement for nodes, though note once again that the possibility of hard forks and security vulnerabilities, and the requirement to stay up to date to know about them and install any needed software updates, exists in proof of work too. + +Note that all of this is a problem only in the very limited case where a majority of previous stakeholders from some point in time collude to attack the network and create an alternate chain; most of the time we expect there will only be one canonical chain to choose from. 
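As a rough sketch of how a client could enforce the revert limit described above (hypothetical types and helper names; real clients work with finalized checkpoints rather than raw timestamps):

```python
from dataclasses import dataclass
from typing import Optional

# Assumed deposit length of four months, expressed in seconds.
DEPOSIT_LENGTH = 4 * 30 * 24 * 60 * 60

@dataclass
class Block:
    hash: str
    timestamp: int
    parent: Optional["Block"] = None

def fork_point(a: Block, b: Block) -> Optional[Block]:
    """Walk both chains back to their most recent common ancestor."""
    ancestors_of_a = set()
    node = a
    while node is not None:
        ancestors_of_a.add(node.hash)
        node = node.parent
    node = b
    while node is not None:
        if node.hash in ancestors_of_a:
            return node
        node = node.parent
    return None

def may_switch_to(current_head: Block, candidate_head: Block, now: int) -> bool:
    """Refuse any fork that diverges further back than the deposit length."""
    common = fork_point(current_head, candidate_head)
    if common is None:
        # No shared history at all: fall back to an out-of-band checkpoint.
        return False
    return (now - common.timestamp) <= DEPOSIT_LENGTH
```

A node following a rule like this can only be tricked onto an attacker's chain if it is syncing from scratch or has been offline for longer than the deposit length, matching cases (1) and (2) above.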
+ +## Can we try to automate the social authentication to reduce the load on users? + +One approach is to bake it into natural user workflow: a [BIP 70](https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki)-style payment request could include a recent block hash, and the user's client software would make sure that they are on the same chain as the vendor before approving a payment (or for that matter, any on-chain interaction). The other is to use Jeff Coleman's [universal hash time](https://www.youtube.com/watch?v=phXohYF0xGo). If UHT is used, then a successful attack chain would need to be generated secretly _at the same time_ as the legitimate chain was being built, requiring a majority of validators to secretly collude for that long. + +## Can one economically penalize censorship in proof of stake? + +Unlike reverts, censorship is much more difficult to prove. The blockchain itself cannot directly tell the difference between "user A tried to send transaction X but it was unfairly censored", "user A tried to send transaction X but it never got in because the transaction fee was insufficient" and "user A never tried to send transaction X at all". However, there are a number of techniques that can be used to mitigate censorship issues. + +The first is censorship resistance by halting problem. In the weaker version of this scheme, the protocol is designed to be Turing-complete in such a way that a validator cannot even tell whether or not a given transaction will lead to an undesired action without spending a large amount of processing power executing the transaction, and thus opening itself up to denial-of-service attacks. This is what [prevented the DAO soft fork](http://hackingdistributed.com/2016/07/05/eth-is-more-resilient-to-censorship/). + +In the stronger version of the scheme, transactions can trigger guaranteed effects at some point in the near to mid-term future. Hence, a user could send multiple transactions which interact with each other and with predicted third-party information to lead to some future event, but the validators cannot possibly tell that this is going to happen until the transactions are already included (and economically finalized) and it is far too late to stop them; even if all future transactions are excluded, the event that validators wish to halt would still take place. Note that in this scheme, validators could still try to prevent **all** transactions, or perhaps all transactions that do not come packaged with some formal proof that they do not lead to anything undesired, but this would entail forbidding a very wide class of transactions to the point of essentially breaking the entire system, which would cause validators to lose value as the price of the cryptocurrency in which their deposits are denominated would drop. + +The second, [described by Adam Back here](https://www.reddit.com/r/Bitcoin/comments/4j7pfj/adam_backs_clever_mechanism_to_prevent_miners/d34t9xa), is to require transactions to be [timelock-encrypted](https://www.gwern.net/Self-decrypting%20files). Hence, validators will include the transactions without knowing the contents, and only later could the contents automatically be revealed, by which point once again it would be far too late to un-include the transactions. If validators were sufficiently malicious, however, they could simply only agree to include transactions that come with a cryptographic proof (eg. 
ZK-SNARK) of what the decrypted version is; this would force users to download new client software, but an adversary could conveniently provide such client software for easy download, and in a game-theoretic model users would have the incentive to play along. + +Perhaps the best that can be said in a proof-of-stake context is that users could also install a software update that includes a hard fork that deletes the malicious validators and this is not that much harder than installing a software update to make their transactions "censorship-friendly". Hence, all in all this scheme is also moderately effective, though it does come at the cost of slowing interaction with the blockchain down (note that the scheme must be mandatory to be effective; otherwise malicious validators could much more easily simply filter encrypted transactions without filtering the quicker unencrypted transactions). + +A third alternative is to include censorship detection in the fork choice rule. The idea is simple. Nodes watch the network for transactions, and if they see a transaction that has a sufficiently high fee for a sufficient amount of time, then they assign a lower "score" to blockchains that do not include this transaction. If all nodes follow this strategy, then eventually a minority chain would automatically coalesce that includes the transactions, and all honest online nodes would follow it. The main weakness of such a scheme is that offline nodes would still follow the majority branch, and if the censorship is temporary and they log back on after the censorship ends then they would end up on a different branch from online nodes. Hence, this scheme should be viewed more as a tool to facilitate automated emergency coordination on a hard fork than something that would play an active role in day-to-day fork choice. + +## How does validator selection work, and what is stake grinding? + +In any chain-based proof of stake algorithm, there is a need for some mechanism which randomly selects which validator out of the currently active validator set can make the next block. For example, if the currently active validator set consists of Alice with 40 ether, Bob with 30 ether, Charlie with 20 ether and David with 10 ether, then you want there to be a 40% chance that Alice will be the next block creator, 30% chance that Bob will be, etc (in practice, you want to randomly select not just one validator, but rather an infinite sequence of validators, so that if Alice doesn't show up there is someone who can replace her after some time, but this doesn't change the fundamental problem). In non-chain-based algorithms randomness is also often needed for different reasons. + +"Stake grinding" is a class of attack where a validator performs some computation or takes some other step to try to bias the randomness in their own favor. For example: + +1. In [Peercoin](https://bitcointalk.org/index.php?topic=131901.0), a validator could "grind" through many combinations of parameters and find favorable parameters that would increase the probability of their coins generating a valid block. +2. In one now-defunct implementation, the randomness for block N+1 was dependent on the signature of block N. This allowed a validator to repeatedly produce new signatures until they found one that allowed them to get the next block, thereby seizing control of the system forever. +3. In NXT, the randomness for block N+1 is dependent on the validator that creates block N. 
This allows a validator to manipulate the randomness by simply skipping an opportunity to create a block. This carries an opportunity cost equal to the block reward, but sometimes the new random seed would give the validator an above-average number of blocks over the next few dozen blocks. See [here](http://vitalik.ca/files/randomness.html) for a more detailed analysis. + +(1) and (2) are easy to solve; the general approach is to require validators to deposit their coins well in advance, and not to use information that can be easily manipulated as source data for the randomness. There are several main strategies for solving problems like (3). The first is to use schemes based on [secret sharing](https://en.wikipedia.org/wiki/Secret_sharing) or [deterministic threshold signatures](https://eprint.iacr.org/2002/081.pdf) and have validators collaboratively generate the random value. These schemes are robust against all manipulation unless a majority of validators collude (in some cases though, depending on the implementation, between 33-50% of validators can interfere in the operation, leading to the protocol having a 67% liveness assumption). + +The second is to use cryptoeconomic schemes where validators commit to information (ie. publish `sha3(x)`) well in advance, and then must publish `x` in the block; `x` is then added into the randomness pool. There are two theoretical attack vectors against this: + +1. Manipulate `x` at commitment time. This is impractical because the randomness result would take many actors' values into account, and if even one of them is honest then the output will be a uniform distribution. A uniform distribution XORed together with arbitrarily many arbitrarily biased distributions still gives a uniform distribution. +2. Selectively avoid publishing blocks. However, this attack costs one block reward of opportunity cost, and because the scheme prevents anyone from seeing any future validators except for the next, it almost never provides more than one block reward worth of revenue. The only exception is the case where, if a validator skips, the next validator in line AND the first child of that validator will both be the same validator; if these situations are a grave concern then we can punish skipping further via an explicit skipping penalty. + +The third is to use [Iddo Bentov's "majority beacon"](https://arxiv.org/pdf/1406.5694.pdf), which generates a random number by taking the bit-majority of the previous N random numbers generated through some other beacon (ie. the first bit of the result is 1 if the majority of the first bits in the source numbers is 1 and otherwise it's 0, the second bit of the result is 1 if the majority of the second bits in the source numbers is 1 and otherwise it's 0, etc). This gives a cost-of-exploitation of `~C * sqrt(N)` where `C` is the cost of exploitation of the underlying beacons. Hence, all in all, many known solutions to stake grinding exist; the problem is more like [differential cryptanalysis](https://en.wikipedia.org/wiki/Differential_cryptanalysis) than [the halting problem](https://en.wikipedia.org/wiki/Halting_problem) - an annoyance that proof of stake designers eventually understood and now know how to overcome, not a fundamental and inescapable flaw. + +## What would the equivalent of a 51% attack against Casper look like? 
The most basic form of "51% attack" is a simple **finality reversion**: validators that already finalized block A then finalize some competing block A', thereby breaking the blockchain's finality guarantee. In this case, there now exist two incompatible finalized histories, both of which full nodes would be willing to accept, creating a split of the blockchain, and so it is up to the community to coordinate out of band to focus on one of the branches and ignore the other(s). + +This coordination could take place on social media, through private channels between block explorer providers, businesses and exchanges, various online discussion forums, and the like. The principle according to which the decision would be made is "whichever one was finalized _first_ is the real one". Another alternative is to rely on "market consensus": both branches would briefly be traded on exchanges, until network effects rapidly make one branch much more valuable than the others. In this case, the "first finalized chain wins" principle would be a Schelling point for what the market would choose. It's very possible that a combination of both approaches will get used in practice. + +Once there is consensus on which chain is real, users (ie. validators and light and full node operators) would be able to manually insert the winning block hash into their client software through a special option in the interface, and their nodes would then ignore all other chains. No matter which chain wins, there exists evidence that can immediately be used to destroy at least 1/3 of the validators' deposits. + +Another kind of attack is **liveness denial**: instead of trying to revert blocks, a cartel of >=34% of validators could simply refuse to finalize any more blocks. In this case, blocks would never finalize. Casper uses a hybrid chain/BFT-style consensus, and so the blockchain would still grow, but it would have a much lower level of security. If no blocks are finalized for some long period of time (eg. 1 day), then there are several options: + +1. The protocol can include an automatic feature to rotate the validator set. Blocks under the new validator set would finalize, but clients would get an indication that the new finalized blocks are in some sense suspect, as it's very possible that the old validator set will resume operating and finalize some other blocks. Clients could then manually override this warning once it's clear that the old validator set is not coming back online. There would be a protocol rule that under such an event all old validators that did not try to participate in the consensus process take a large penalty to their deposits. +2. A hard fork is used to add in new validators and delete the attackers' balances. + +In case (2), the fork would once again be coordinated via social consensus and possibly via market consensus (ie. the branch with the old and new validator set briefly both being traded on exchanges). In the latter case, there is a strong argument that the market would want to choose the branch where "the good guys win", as such a chain has validators that have demonstrated their goodwill (or at least, their alignment with the interest of the users) and so is a more useful chain for application developers.
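To illustrate the kind of slashing evidence mentioned above, here is a toy routine (the message format is hypothetical, loosely following the earlier COMMIT example) that identifies every validator who committed to two conflicting blocks in the same view:

```python
def slashable_validators(commits):
    """commits: iterable of (validator_id, block_hash, view) tuples observed on any branch."""
    first_seen = {}   # (validator_id, view) -> block_hash
    slashable = set()
    for validator, block_hash, view in commits:
        key = (validator, view)
        if key in first_seen and first_seen[key] != block_hash:
            # Two COMMITs for different blocks in the same view: provable equivocation.
            slashable.add(validator)
        else:
            first_seen[key] = block_hash
    return slashable

# Commits gathered from both branches of a finality reversion (illustrative data).
observed = [
    ("v1", "0xaaa", 7), ("v2", "0xaaa", 7), ("v3", "0xaaa", 7),
    ("v2", "0xbbb", 7), ("v3", "0xbbb", 7),
]
print(slashable_validators(observed))  # {'v2', 'v3'}
```

Whichever branch the community settles on, evidence of this form can be included on it to penalize the equivocating validators.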
+ +Note that there is a spectrum of response strategies here between social coordination and in-protocol automation, and it is generally considered desirable to push as far toward automated resolution as possible so as to minimize the risk of simultaneous 51% attacks and attacks on the social layer (and market consensus tools such as exchanges). One can imagine an implementation of (1) where nodes automatically accept a switch to a new validator set if they do not see a new block being committed for a long enough time, which would reduce the need for social coordination but at the cost of requiring those nodes that do not wish to rely on social coordination to remain constantly online. In either case, a solution can be designed where attackers take a large hit to their deposits. + +A more insidious kind of attack is a **censorship attack**, where >= 34% of validators refuse to finalize blocks that contain certain kinds of transactions that they do not like, but otherwise the blockchain keeps going and blocks keep getting finalized. This could range from a mild censorship attack which only censors to interfere with a few specific applications (eg. selectively censoring transactions in something like Raiden or the lightning network is a fairly easy way for a cartel to steal money) to an attack that blocks all transactions. + +There are two sub-cases. The first is where the attacker has 34-67% of the stake. Here, we can program validators to refuse to finalize or build on blocks that they subjectively believe are clearly censoring transactions, which turns this kind of attack into a more standard liveness attack. The more dangerous case is where the attacker has more than 67% of the stake. Here, the attacker can freely block any transactions they wish to block and refuse to build on any blocks that do contain such transactions. + +There are two lines of defense. First, because Ethereum is Turing-complete it is [naturally somewhat resistant to censorship](http://hackingdistributed.com/2016/07/05/eth-is-more-resilient-to-censorship/) as censoring transactions that have a certain effect is in some ways similar to solving the halting problem. Because there is a gas limit, it is not literally impossible, though the "easy" ways to do it do open up denial-of-service attack vulnerabilities. + +This resistance [is not perfect](https://pdaian.com/blog/on-soft-fork-security/), and there are ways to improve it. The most interesting approach is to add in-protocol features where transactions can automatically schedule future events, as it would be extremely difficult to try to foresee what the result of executing scheduled events and the events resulting from those scheduled events would be ahead of time. Validators could then use obfuscated sequences of scheduled events to deposit their ether, and dilute the attacker to below 33%. + +Second, one can introduce the notion of an "active fork choice rule", where part of the process for determining whether or not a given chain is valid is trying to interact with it and verifying that it is not trying to censor you. The most effective way to do this would be for nodes to repeatedly send a transaction to schedule depositing their ether and then cancel the deposit at the last moment. If nodes detect censorship, they could then follow through with the deposit, and so temporarily join the validator pool en masse, diluting the attacker to below 33%. 
If the validator cartel censors their attempts to deposit, then nodes running this "active fork choice rule" would not recognize the chain as valid; this would collapse the censorship attack into a liveness denial attack, at which point it can be resolved through the same means as other liveness denial attacks. + +## That sounds like a lot of reliance on out-of-band social coordination; is that not dangerous? + +Attacks against Casper are extremely expensive; as we will see below, attacks against Casper cost as much, if not more, than the cost of buying enough mining power in a proof of work chain to permanently 51% attack it over and over again to the point of uselessness. Hence, the recovery techniques described above will only be used in very extreme circumstances; in fact, advocates of proof of work also generally express willingness to use social coordination in similar circumstances by, for example, [changing the proof of work algorithm](https://news.bitcoin.com/bitcoin-developers-changing-proof-work-algorithm/). Hence, it is not even clear that the need for social coordination in proof of stake is larger than it is in proof of work. + +In reality, we expect the amount of social coordination required to be near-zero, as attackers will realize that it is not in their benefit to burn such large amounts of money to simply take a blockchain offline for one or two days. + +## Doesn't MC => MR mean that all consensus algorithms with a given security level are equally efficient (or in other words, equally wasteful)? + +This is an argument that many have raised, perhaps best explained by [Paul Sztorc in this article](http://www.truthcoin.info/blog/pow-cheapest/). Essentially, if you create a way for people to earn $100, then people will be willing to spend anywhere up to $99.9 (including the cost of their own labor) in order to get it; marginal cost approaches marginal revenue. Hence, the theory goes, any algorithm with a given block reward will be equally "wasteful" in terms of the quantity of socially unproductive activity that is carried out in order to try to get the reward. + +There are three flaws with this: + +1. It's not enough to simply say that marginal cost approaches marginal revenue; one must also posit a plausible mechanism by which someone can actually expend that cost. For example, if tomorrow I announce that every day from then on I will give $100 to a randomly selected one of a given list of ten people (using my laptop's /dev/urandom as randomness), then there is simply no way for anyone to send $99 to try to get at that randomness. Either they are not in the list of ten, in which case they have no chance no matter what they do, or they are in the list of ten, in which case they don't have any reasonable way to manipulate my randomness so they're stuck with getting the expected-value $10 per day. +2. MC => MR does NOT imply total cost approaches total revenue. For example, suppose that there is an algorithm which pseudorandomly selects 1000 validators out of some very large set (each validator getting a reward of $1), you have 10% of the stake so on average you get 100, and at a cost of $1 you can force the randomness to reset (and you can repeat this an unlimited number of times). 
Due to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem), the standard deviation of your reward is $10, and based on [other known results in math](http://math.stackexchange.com/questions/89030/expectation-of-the-maximum-of-gaussian-random-variables) the expected maximum of N random samples is slightly under `M + S * sqrt(2 * log(N))` where `M` is the mean and `S` is the standard deviation. Hence the reward for making additional trials (ie. increasing N) drops off sharply, eg. with 0 re-trials your expected reward is $100, with one re-trial it's $105.5, with two it's $108.5, with three it's $110.3, with four it's $111.6, with five it's $112.6 and with six it's $113.5. Hence, after five retrials it stops being worth it. As a result, an economically motivated attacker with ten percent of stake will inefficiently spend $5 to get an additional revenue of $13, though the total revenue is $113. If the exploitable mechanisms only expose small opportunities, the economic loss will be small; it is decidedly NOT the case that a single drop of exploitability brings the entire flood of PoW-level economic waste rushing back in. This point will also be very relevant in our below discussion on capital lockup costs. +3. Proof of stake can be secured with much lower total rewards than proof of work. + +## What about capital lockup costs? + +Locking up X ether in a deposit is not free; it entails a sacrifice of optionality for the ether holder. Right now, if I have 1000 ether, I can do whatever I want with it; if I lock it up in a deposit, then it's stuck there for months, and I do not have, for example, the insurance utility of the money being there to pay for sudden unexpected expenses. I also lose some freedom to change my token allocations away from ether within that timeframe; I could simulate selling ether by shorting an amount equivalent to the deposit on an exchange, but this itself carries costs including exchange fees and paying interest. Some might argue: isn't this capital lockup inefficiency really just a highly indirect way of achieving the exact same level of economic inefficiency as exists in proof of work? The answer is no, for both reasons (2) and (3) above. + +Let us start with (3) first. Consider a model where proof of stake deposits are infinite-term, ASICs last forever, ASIC technology is fixed (ie. no Moore's law) and electricity costs are zero. Let's say the equilibrium interest rate is 5% per annum. In a proof of work blockchain, I can take $1000, convert it into a miner, and the miner will pay me $50 in rewards per year forever. In a proof of stake blockchain, I would buy $1000 of coins, deposit them (ie. losing them forever), and get $50 in rewards per year forever. So far, the situation looks completely symmetrical (technically, even here, in the proof of stake case my destruction of coins isn't fully socially destructive as it makes others' coins worth more, but we can leave that aside for the moment). The cost of a "Maginot-line" 51% attack (ie. buying up more hardware than the rest of the network) increases by $1000 in both cases. + +Now, let's perform the following changes to our model in turn: + +1. Moore's law exists, ASICs depreciate by 50% every 2.772 years (that's a continuously-compounded 25% per annum; picked to make the numbers simpler). 
If I want to retain the same "pay once, get money forever" behavior, I can do so: I would put $1000 into a fund, where $167 would go into an ASIC and the remaining $833 would go into investments at 5% interest; the $41.67 dividends per year would be just enough to keep renewing the ASIC hardware (assuming technological development is fully continuous, once again to make the math simpler). Rewards would go down to $8.33 per year; hence, 83.3% of miners will drop out until the system comes back into equilibrium with me earning $50 per year, and so the Maginot-line cost of an attack on PoW given the same rewards drops by a factor of 6. +2. Electricity plus maintenance makes up 1/3 of mining costs. We estimate the 1/3 from recent mining statistics: one of Bitfury's new data centers consumes [0.06 joules per gigahash](http://www.coindesk.com/bitfury-details-100-million-georgia-data-center/), or 60 J/TH or 0.000017 kWh/TH, and if we assume the entire Bitcoin network has similar efficiencies we get 27.9 kWh per second given [1.67 million TH/s total Bitcoin hashpower](http://bitcoinwatch.com/). Electricity in China costs [$0.11 per kWh](http://www.statista.com/statistics/477995/global-prices-of-electricity-by-select-country/), so that's about $3 per second, or $260,000 per day. Bitcoin block rewards plus fees are $600 per BTC _ 13 BTC per block _ 144 blocks per day = $1.12m per day. Thus electricity itself would make up 23% of costs, and we can back-of-the-envelope estimate maintenance at 10% to give a clean 1/3 ongoing costs, 2/3 fixed costs split. This means that out of your $1000 fund, only $111 would go into the ASIC, $55 would go into paying ongoing costs, and $833 would go into hardware investments; hence the Maginot-line cost of attack is 9x lower than in our original setting. +3. Deposits are temporary, not permanent. Sure, if I voluntarily keep staking forever, then this changes nothing. However, I regain some of the optionality that I had before; I could quit within a medium timeframe (say, 4 months) at any time. This means that I would be willing to put more than $1000 of ether in for the $50 per year gain; perhaps in equilibrium it would be something like $3000. Hence, the cost of the Maginot line attack on PoS _increases_ by a factor of three, and so on net PoS gives 27x more security than PoW for the same cost. + +The above included a large amount of simplified modeling, however it serves to show how multiple factors stack up heavily in favor of PoS in such a way that PoS gets _more_ bang for its buck in terms of security. The meta-argument for why this [perhaps suspiciously multifactorial argument](http://lesswrong.com/lw/kpj/multiple_factor_explanations_should_not_appear/) leans so heavily in favor of PoS is simple: in PoW, we are working directly with the laws of physics. In PoS, we are able to design the protocol in such a way that it has the precise properties that we want - in short, we can _optimize the laws of physics in our favor_. The "hidden trapdoor" that gives us (3) is the change in the security model, specifically the introduction of weak subjectivity. + +Now, we can talk about the marginal/total distinction. In the case of capital lockup costs, this is very important. For example, consider a case where you have $100,000 of ether. You probably intend to hold a large portion of it for a long time; hence, locking up even $50,000 of the ether should be nearly free. 
Locking up $80,000 would be slightly more inconvenient, but $20,000 of breathing room still gives you a large space to maneuver. Locking up $90,000 is more problematic, $99,000 is very problematic, and locking up all $100,000 is absurd, as it means you would not even have a single bit of ether left to pay basic transaction fees. Hence, your marginal costs increase quickly. We can show the difference between this state of affairs and the state of affairs in proof of work as follows: + +![](https://blog.ethereum.org/wp-content/uploads/2014/07/liquidity.png) + +Hence, the total cost of proof of stake is potentially much lower than the marginal cost of depositing 1 more ETH into the system multiplied by the amount of ether currently deposited. + +Note that this component of the argument unfortunately does not fully translate into reduction of the "safe level of issuance". It does help us because it shows that we can get substantial proof of stake participation even if we keep issuance very low; however, it also means that a large portion of the gains will simply be borne by validators as economic surplus. + +## Will exchanges in proof of stake pose a similar centralization risk to pools in proof of work? + +From a centralization perspective, in both [Bitcoin](https://blockchain.info/pools) and [Ethereum](https://etherscan.io/stat/miner?range=7&blocktype=blocks) it's the case that roughly three pools are needed to coordinate on a 51% attack (4 in Bitcoin, 3 in Ethereum at the time of this writing). In PoS, if we assume 30% participation including all exchanges, then [three exchanges](https://etherscan.io/accounts) would be enough to make a 51% attack; if participation goes up to 40% then the required number goes up to eight. However, exchanges will not be able to participate with all of their ether; the reason is that they need to accommodate withdrawals. + +Additionally, pooling in PoS is discouraged because it has a much higher trust requirement - a proof of stake pool can pretend to be hacked, destroy its participants' deposits and claim a reward for it. On the other hand, the ability to earn interest on one's coins without oneself running a node, even if trust is required, is something that many may find attractive; all in all, the centralization balance is an empirical question for which the answer is unclear until the system is actually running for a substantial period of time. With sharding, we expect pooling incentives to reduce further, as (i) there is even less concern about variance, and (ii) in a sharded model, transaction verification load is proportional to the amount of capital that one puts in, and so there are no direct infrastructure savings from pooling. + +A final point is that centralization is less harmful in proof of stake than in proof of work, as there are much cheaper ways to recover from successful 51% attacks; one does not need to switch to a new mining algorithm. + +## Are there economic ways to discourage centralization? + +One strategy suggested by Vlad Zamfir is to only partially destroy deposits of validators that get slashed, setting the percentage destroyed to be proportional to the percentage of other validators that have been slashed recently. This ensures that validators lose all of their deposits in the event of an actual attack, but only a small part of their deposits in the event of a one-off mistake.
This makes lower-security staking strategies possible, and also specifically incentivizes validators to have their errors be as uncorrelated (or ideally, anti-correlated) with other validators as possible; this involves not being in the largest pool, putting one's node on the largest virtual private server provider and even using secondary software implementations, all of which increase decentralization. + +## Can proof of stake be used in private/consortium chains? + +Generally, yes; any proof of stake algorithm can be used as a consensus algorithm in private/consortium chain settings. The only change is that the way the validator set is selected would be different: it would start off as a set of trusted users that everyone agrees on, and then it would be up to the validator set to vote on adding in new validators. + +## Further reading + +- [Casper proof of stake compedium](./casper-proof-of-stake-compendium.md) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 1b48ba4def6..7b124aa7257 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -28,7 +28,7 @@ Validators are rewarded for honestly proposing and validating blocks. The reward ### Inactivity Leak -As well as security, Gasper also provides "plausible liveness". This is the condition that as logn as 2/3 of the total staked ether is voting honestly and following the protocol, the chain will be able to finalize irrespective of any other activity (such as attacks, latency issues or slashings). Put another way, 1/3 of the total staked ether must be somehow compromised to prevent the chain from finalizing. In Gasper there is an additional line of defense against a liveness failure, known as the "inactivity leak". This mechanism activates when the chain has failed to finalize for more than 4 epochs. The validators that are not actively attesting to the majority chain have their stake gradually drained away until the majority regains 2/3 of the total stake, ensuring that liveness failures are only temporary. +As well as security, Gasper also provides "plausible liveness". This is the condition that as long as 2/3 of the total staked ether is voting honestly and following the protocol, the chain will be able to finalize irrespective of any other activity (such as attacks, latency issues or slashings). Put another way, 1/3 of the total staked ether must be somehow compromised to prevent the chain from finalizing. In Gasper there is an additional line of defense against a liveness failure, known as the "inactivity leak". This mechanism activates when the chain has failed to finalize for more than 4 epochs. The validators that are not actively attesting to the majority chain have their stake gradually drained away until the majority regains 2/3 of the total stake, ensuring that liveness failures are only temporary. ### Fork choice @@ -38,5 +38,5 @@ LMD-GHOST stands for "latest message driven greedy heaviest observed sub-tree". 
## Further Reading -[Gasper: Combinign GHOST and Casper](https://arxiv.org/pdf/2003.03052.pdf) +[Gasper: Combining GHOST and Casper](https://arxiv.org/pdf/2003.03052.pdf) [Capser the Friendly Finality Gadget](https://arxiv.org/pdf/1710.09437.pdf) From eae73e78bd88afabac3d8ca67abb4515d30a474d Mon Sep 17 00:00:00 2001 From: Joe Date: Tue, 26 Apr 2022 13:29:29 +0100 Subject: [PATCH 12/26] rm faqs as replicated on vitalik.ca (linked). --- .../consensus-mechanisms/pos/faqs/index.md | 246 ------------------ 1 file changed, 246 deletions(-) delete mode 100644 src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md diff --git a/src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md b/src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md deleted file mode 100644 index 78da15b63a3..00000000000 --- a/src/content/developers/docs/consensus-mechanisms/pos/faqs/index.md +++ /dev/null @@ -1,246 +0,0 @@ -# Proof of Stake FAQs - -## What is Proof of Stake - -**Proof of Stake (PoS) is a category of consensus algorithms for public blockchains that depend on a validator's economic stake in the network**. In proof of work (PoW) based public blockchains (e.g. Bitcoin and the current implementation of Ethereum), the algorithm rewards participants who solve cryptographic puzzles in order to validate transactions and create new blocks (i.e. mining). In PoS-based public blockchains (e.g. Ethereum's upcoming Casper implementation), a set of validators take turns proposing and voting on the next block, and the weight of each validator's vote depends on the size of its deposit (i.e. stake). Significant advantages of PoS include **security, reduced risk of centralization, and energy efficiency**. - -In general, a proof of stake algorithm looks as follows. The blockchain keeps track of a set of validators, and anyone who holds the blockchain's base cryptocurrency (in Ethereum's case, ether) can become a validator by sending a special type of transaction that **locks up their ether into a deposit**. The process of creating and agreeing to new blocks is then done through a consensus algorithm that all current validators can participate in. - -There are many kinds of consensus algorithms, and many ways to assign rewards to validators who participate in the consensus algorithm, so there are many "flavors" of proof of stake. From an algorithmic perspective, there are two major types: chain-based proof of stake and [BFT](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)-style proof of stake. - -In **chain-based proof of stake**, the algorithm pseudo-randomly selects a validator during each time slot (eg. every period of 10 seconds might be a time slot), and assigns that validator the right to create a single block, and this block must point to some previous block (normally the block at the end of the previously longest chain), and so over time most blocks converge into a single constantly growing chain. - -In **BFT-style proof of stake**, validators are **randomly** assigned the right to _propose_ blocks, but _agreeing on which block is canonical_ is done through a multi-round process where every validator sends a "vote" for some specific block during each round, and at the end of the process all (honest and online) validators permanently agree on whether or not any given block is part of the chain. 
Note that blocks may still be _chained together_; the key difference is that consensus on a block can come within one block, and does not depend on the length or size of the chain after it. - -## What are the benefits of proof of stake as opposed to proof of work? - -See [A Proof of Stake Design Philosophy](https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51) for a more long-form argument. - -In short: - -- **No need to consume large quantities of electricity** in order to secure a blockchain (eg. it's estimated that both Bitcoin and Ethereum burn over $1 million worth of electricity and hardware costs per day as part of their consensus mechanism). -- Because of the lack of high electricity consumption, there is **not as much need to issue as many new coins** in order to motivate participants to keep participating in the network. It may theoretically even be possible to have _negative_ net issuance, where a portion of transaction fees is "burned" and so the supply goes down over time. -- Proof of stake opens the door to a wider array of techniques that use game-theoretic mechanism design in order to better **discourage centralized cartels** from forming and, if they do form, from acting in ways that are harmful to the network (eg. like [selfish mining](https://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf) in proof of work). -- **Reduced centralization risks**, as economies of scale are much less of an issue. $10 million of coins will get you exactly 10 times higher returns than $1 million of coins, without any additional disproportionate gains because at the higher level you can afford better mass-production equipment. -- Ability to use economic penalties to **make various forms of 51% attacks vastly more expensive** to carry out than proof of work - to paraphrase Vlad Zamfir, "it's as though your ASIC farm burned down if you participated in a 51% attack". - -## How does proof of stake fit into traditional Byzantine fault tolerance research? - -There are several fundamental results from Byzantine fault tolerance research that apply to all consensus algorithms, including traditional consensus algorithms like PBFT but also any proof of stake algorithm and, with the appropriate mathematical modeling, proof of work. - -The key results include: - -- [**CAP theorem**](https://en.wikipedia.org/wiki/CAP_theorem) - "in the cases that a network partition takes place, you have to choose either consistency or availability, you cannot have both". The intuitive argument is simple: if the network splits in half, and in one half I send a transaction "send my 10 coins to A" and in the other I send a transaction "send my 10 coins to B", then either the system is unavailable, as one or both transactions will not be processed, or it becomes inconsistent, as one half of the network will see the first transaction completed and the other half will see the second transaction completed. Note that the CAP theorem has nothing to do with scalability; it applies to sharded and non-sharded systems equally. -- [**FLP impossibility**](http://the-paper-trail.org/blog/a-brief-tour-of-flp-impossibility/) - in an asynchronous setting (ie. there are no guaranteed bounds on network latency even between correctly functioning nodes), it is not possible to create an algorithm which is guaranteed to reach consensus in any specific finite amount of time if even a single faulty/dishonest node is present. 
Note that this does NOT rule out ["Las Vegas" algorithms](https://en.wikipedia.org/wiki/Las_Vegas_algorithm) that have some probability each round of achieving consensus and thus will achieve consensus within T seconds with probability exponentially approaching 1 as T grows; this is in fact the "escape hatch" that many successful consensus algorithms use. -- **Bounds on fault tolerance** - from [the DLS paper](http://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf) we have: (i) protocols running in a partially synchronous network model (ie. there is a bound on network latency but we do not know ahead of time what it is) can tolerate up to 1/3 arbitrary (ie. "Byzantine") faults, (ii) deterministic protocols in an asynchronous model (ie. no bounds on network latency) cannot tolerate faults (although their paper fails to mention that [randomized algorithms can](http://link.springer.com/chapter/10.1007%2F978-3-540-77444-0_7) with up to 1/3 fault tolerance), (iii) protocols in a synchronous model (ie. network latency is guaranteed to be less than a known `d`) can, surprisingly, tolerate up to 100% fault tolerance, although there are restrictions on what can happen when more than or equal to 1/2 of nodes are faulty. Note that the "authenticated Byzantine" model is the one worth considering, not the "Byzantine" one; the "authenticated" part essentially means that we can use public key cryptography in our algorithms, which is in modern times very well-researched and very cheap. - -Proof of work has been [rigorously analyzed by Andrew Miller and others](https://socrates1024.s3.amazonaws.com/consensus.pdf) and fits into the picture as an algorithm reliant on a synchronous network model. We can model the network as being made up of a near-infinite number of nodes, with each node representing a very small unit of computing power and having a very small probability of being able to create a block in a given period. In this model, the protocol has 50% fault tolerance assuming zero network latency, ~46% (Ethereum) and ~49.5% (Bitcoin) fault tolerance under actually observed conditions, but goes down to 33% if network latency is equal to the block time, and reduces to zero as network latency approaches infinity. - -Proof of stake consensus fits more directly into the Byzantine fault tolerant consensus mould, as all validators have known identities (stable Ethereum addresses) and the network keeps track of the total size of the validator set. There are two general lines of proof of stake research, one looking at synchronous network models and one looking at partially asynchronous network models. "Chain-based" proof of stake algorithms almost always rely on synchronous network models, and their security can be formally proven within these models similarly to how security of [proof of work algorithms](http://nakamotoinstitute.org/static/docs/anonymous-byzantine-consensus.pdf) can be proven. A line of research connecting traditional Byzantine fault tolerant consensus in partially synchronous networks to proof of stake also exists, but is more complex to explain; it will be covered in more detail in later sections. 
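To make the latency figures quoted above concrete, here is a back-of-the-envelope sketch (an illustration only, not a result from the cited analyses). It assumes that honest mining power is effectively diluted by a factor of 1/(1 + d/t) because of stale blocks, where d is network latency and t is the block time, while a single coordinated attacker suffers no such dilution; the function name is made up for this example.

```python
# Tolerable attacker fraction f satisfies f = (1 - f) / (1 + d/t) under the
# simplifying assumption that only honest work is wasted on stale blocks.

def pow_fault_tolerance(latency_over_blocktime: float) -> float:
    """Approximate attacker fraction that proof of work can tolerate."""
    r = latency_over_blocktime
    return 1.0 / (2.0 + r)

for r in [0.0, 0.5, 1.0, 10.0]:
    print(f"d/t = {r}: tolerance ~ {pow_fault_tolerance(r):.0%}")
# d/t = 0 gives 50%, d/t = 1 gives 33%, and the tolerance falls toward zero as
# latency grows relative to the block time, matching the qualitative picture above.
```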
- -Proof of work algorithms and chain-based proof of stake algorithms choose availability over consistency, but BFT-style consensus algorithms lean more toward consistency; [Tendermint](https://github.com/tendermint/tendermint) chooses consistency explicitly, and Casper uses a hybrid model that prefers availability but provides as much consistency as possible and makes both on-chain applications and clients aware of how strong the consistency guarantee is at any given time. - -Note that Ittay Eyal and Emin Gun Sirer's [selfish mining](https://bitcoinmagazine.com/articles/selfish-mining-a-25-attack-against-the-bitcoin-network-1383578440) discovery, which places 25% and 33% bounds on the incentive compatibility of Bitcoin mining depending on the network model (ie. mining is only incentive compatible if collusions larger than 25% or 33% are impossible) has NOTHING to do with results from traditional consensus algorithm research, which does not touch incentive compatibility. - -## What is the "nothing at stake" problem and how can it be fixed? - -In many early (all chain-based) proof of stake algorithms, including Peercoin, there are only rewards for producing blocks, and no penalties. This has the unfortunate consequence that, in the case that there are multiple competing chains, it is in a validator's incentive to try to make blocks on top of every chain at once, just to be sure: - -![](https://raw.githubusercontent.com/vbuterin/diagrams/master/possec.png) - -In proof of work, doing so would require splitting one's computing power in half, and so would not be lucrative: - -![](https://github.com/vbuterin/diagrams/blob/master/powsec.png?raw=true) - -The result is that if all actors are narrowly economically rational, then even if there are no attackers, a blockchain may never reach consensus. If there is an attacker, then the attacker need only overpower altruistic nodes (who would exclusively stake on the original chain), and not rational nodes (who would stake on both the original chain and the attacker's chain), in contrast to proof of work, where the attacker must overpower both altruists and rational nodes (or at least credibly threaten to: see [P + epsilon attacks](https://blog.ethereum.org/2015/01/28/p-epsilon-attack/)). - -Some argue that stakeholders have an incentive to act correctly and only stake on the longest chain in order to "preserve the value of their investment", however this ignores that this incentive suffers from [tragedy of the commons](https://en.wikipedia.org/wiki/Tragedy_of_the_commons) problems: each individual stakeholder might only have a 1% chance of being "pivotal" (ie. being in a situation where if they participate in an attack then it succeeds and if they do not participate it fails), and so the bribe needed to convince them personally to join an attack would be only 1% of the size of their deposit; hence, the required combined bribe would be only 0.5-1% of the total sum of all deposits. Additionally, this argument implies that any zero-chance-of-failure situation is not a stable equilibrium, as if the chance of failure is zero then everyone has a 0% chance of being pivotal. - -This can be solved via two strategies. 
The first, described in broad terms under the name "Slasher" [here](https://blog.ethereum.org/2014/01/15/slasher-a-punitive-proof-of-stake-algorithm/) and developed further by Iddo Bentov [here](https://arxiv.org/pdf/1406.5694.pdf), involves penalizing validators if they simultaneously create blocks on multiple chains, by means of including proof of misbehavior (ie. two conflicting signed block headers) into the blockchain at a later point in time at which point the malfeasant validator's deposit is deducted appropriately. This changes the incentive structure thus: - -![](https://github.com/vbuterin/diagrams/blob/master/slasher1sec.png?raw=true) - -Note that for this algorithm to work, the validator set needs to be determined well ahead of time. Otherwise, if a validator has 1% of the stake, then if there are two branches A and B then 0.99% of the time the validator will be eligible to stake only on A and not on B, 0.99% of the time the validator will be eligible to stake on B and not on A, and only 0.01% of the time will the validator be eligible to stake on both. Hence, the validator can with 99% efficiency probabilistically double-stake: stake on A if possible, stake on B if possible, and only if the choice between both is open stake on the longer chain. This can only be avoided if the validator selection is the same for every block on both branches, which requires the validators to be selected at a time before the fork takes place. - -This has its own flaws, including requiring nodes to be frequently online to get a secure view of the blockchain, and opening up medium-range validator collusion risks (ie. situations where, for example, 25 out of 30 consecutive validators get together and agree ahead of time to implement a 51% attack on the previous 19 blocks), but if these risks are deemed acceptable then it works well. - -The second strategy is to simply punish validators for creating blocks on the _wrong_ chain. That is, if there are two competing chains, A and B, then if a validator creates a block on B, they get a reward of +R on B, but the block header can be included into A (in Casper this is called a "dunkle") and on A the validator suffers a penalty of -F (possibly F = R). This changes the economic calculation thus: - -![](https://github.com/vbuterin/diagrams/blob/master/slasher2sec.png?raw=true) - -The intuition here is that we can replicate the economics of proof of work inside of proof of stake. In proof of work, there is also a penalty for creating a block on the wrong chain, but this penalty is implicit in the external environment: miners have to spend extra electricity and obtain or rent extra hardware. Here, we simply make the penalties explicit. This mechanism has the disadvantage that it imposes slightly more risk on validators (although the effect should be smoothed out over time), but has the advantage that it does not require validators to be known ahead of time. - -## That shows how chain-based algorithms solve nothing-at-stake. Now how do BFT-style proof of stake algorithms work? - -BFT-style (partially synchronous) proof of stake algorithms allow validators to "vote" on blocks by sending one or more types of signed messages, and specify two kinds of rules: - -- **Finality conditions** - rules that determine when a given hash can be considered finalized. -- **Slashing conditions** - rules that determine when a given validator can be deemed beyond reasonable doubt to have misbehaved (eg. voting for multiple conflicting blocks at the same time).
If a validator triggers one of these rules, their entire deposit gets deleted. - -To illustrate the different forms that slashing conditions can take, we will give two examples of slashing conditions (hereinafter, "2/3 of all validators" is shorthand for "2/3 of all validators weighted by deposited coins", and likewise for other fractions and percentages). In these examples, "PREPARE" and "COMMIT" should be understood as simply referring to two types of messages that validators can send. - -1. If `MESSAGES` contains messages of the form `["COMMIT", HASH1, view]` and `["COMMIT", HASH2, view]` for the same `view` but differing `HASH1` and `HASH2` signed by the same validator, then that validator is slashed. -2. If `MESSAGES` contains a message of the form `["COMMIT", HASH, view1]`, then UNLESS either view1 = -1 or there also exist messages of the form `["PREPARE", HASH, view1, view2]` for some specific `view2`, where `view2 < view1`, signed by 2/3 of all validators, then the validator that made the COMMIT is slashed. - -There are two important desiderata for a suitable set of slashing conditions to have: - -- **Accountable safety** - if conflicting `HASH1` and `HASH2` (ie. `HASH1` and `HASH2` are different, and neither is a descendant of the other) are finalized, then at least 1/3 of all validators must have violated some slashing condition. -- **Plausible liveness** - unless at least 1/3 of all validators have violated some slashing condition, there exists a set of messages that 2/3 of validators can produce that finalize some value. - -If we have a set of slashing conditions that satisfies both properties, then we can incentivize participants to send messages, and start benefiting from economic finality. - -## What is "economic finality" in general? - -Economic finality is the idea that once a block is finalized, or more generally once enough messages of certain types have been signed, then the only way that at any point in the future the canonical history will contain a conflicting block is if a large number of people are willing to burn very large amounts of money. If a node sees that this condition has been met for a given block, then they have a very economically strong assurance that that block will always be part of the canonical history that everyone agrees on. - -There are two "flavors" of economic finality: - -1. A block can be economically finalized if a sufficient number of validators have signed cryptoeconomic claims of the form "I agree to lose X in all histories where block B is not included". This gives clients assurance that either (i) B is part of the canonical chain, or (ii) validators lost a large amount of money in order to trick them into thinking that this is the case. -2. A block can be economically finalized if a sufficient number of validators have signed messages expressing support for block B, and there is a mathematical proof that _if some B' != B is also finalized under the same definition_ then validators lose a large amount of money. If clients see this, and also validate the chain, and validity plus finality is a sufficient condition for precedence in the canonical fork choice rule, then they get an assurance that either (i) B is part of the canonical chain, or (ii) validators lost a large amount of money in making a conflicting chain that was also finalized. - -The two approaches to finality inherit from the two solutions to the nothing at stake problem: finality by penalizing incorrectness, and finality by penalizing equivocation. 
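To make the first slashing condition and the "accountable safety" property above more concrete, here is a toy sketch. The message format and names are invented for this illustration (they are not Casper's actual types): given the COMMIT messages behind two conflicting hashes finalized in the same view, it identifies the validators that must have equivocated; if each hash was finalized by 2/3 of all validators, that overlap is guaranteed to be at least 1/3.

```python
# Toy illustration of slashing condition (1): a COMMIT is (validator_id, hash, view).

def equivocators(commits, view):
    """Return validators that committed to more than one hash in the same view."""
    seen = {}  # validator_id -> set of hashes committed to in `view`
    for validator, block_hash, v in commits:
        if v == view:
            seen.setdefault(validator, set()).add(block_hash)
    return {v for v, hashes in seen.items() if len(hashes) > 1}

# Ten equal-weight validators: 1-7 finalize HASH1 and 4-10 finalize HASH2 in the
# same view (each set is at least 2/3 of the total), so validators 4-7 - at least
# 1/3 of the validator set - are provably slashable.
commits = [(v, "HASH1", 5) for v in range(1, 8)] + [(v, "HASH2", 5) for v in range(4, 11)]
print(sorted(equivocators(commits, view=5)))  # [4, 5, 6, 7]
```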
The main benefit of the first approach is that it is more light-client friendly and is simpler to reason about, and the main benefits of the second approach are that (i) it's easier to see that honest validators will not be punished, and (ii) griefing factors are more favorable to honest validators. - -Casper follows the second flavor, though it is possible that an on-chain mechanism will be added where validators can voluntarily opt-in to signing finality messages of the first flavor, thereby enabling much more efficient light clients. - -## So how does this relate to Byzantine fault tolerance theory? - -Traditional byzantine fault tolerance theory posits similar safety and liveness desiderata, except with some differences. First of all, traditional byzantine fault tolerance theory simply requires that safety is achieved if 2/3 of validators are _honest_. This is a strictly easier model to work in; traditional fault tolerance tries to prove "if mechanism M has a safety failure, then at least 1/3 of nodes are faulty", whereas our model tries to prove "if mechanism M has a safety failure, then at least 1/3 of nodes are faulty, _and you know which ones, even if you were offline at the time the failure took place_". From a liveness perspective, our model is the easier one, as we do not demand a proof that the network _will_ come to consensus, we just demand a proof that it does not get _stuck_. - -Fortunately, we can show the additional accountability requirement is not a particularly difficult one; in fact, with the right "protocol armor", we can convert _any_ traditional partially synchronous or asynchronous Byzantine fault-tolerant algorithm into an accountable algorithm. The proof of this basically boils down to the fact that faults can be exhaustively categorized into a few classes, and each one of these classes is either accountable (ie. if you commit that type of fault you can get caught, so we can make a slashing condition for it) or indistinguishable from latency (note that even the fault of sending messages too early is indistinguishable from latency, as one can model it by speeding up everyone's clocks and assigning the messages that _weren't_ sent too early a higher latency). - -## What is "weak subjectivity"? - -It is important to note that the mechanism of using deposits to ensure there is "something at stake" does lead to one change in the security model. Suppose that deposits are locked for four months, and can later be withdrawn. Suppose that an attempted 51% attack happens that reverts 10 days worth of transactions. The blocks created by the attackers can simply be imported into the main chain as proof-of-malfeasance (or "dunkles") and the validators can be punished. However, suppose that such an attack happens after six months. Then, even though the blocks can certainly be re-imported, by that time the malfeasant validators will be able to withdraw their deposits on the main chain, and so they cannot be punished. - -To solve this problem, we introduce a "revert limit" - a rule that nodes must simply refuse to revert further back in time than the deposit length (ie. in our example, four months), and we additionally require nodes to log on at least once every deposit length to have a secure view of the chain. Note that this rule is different from every other consensus rule in the protocol, in that it means that nodes may come to different conclusions depending on when they saw certain messages. 
The time that a node saw a given message may be different between different nodes; hence we consider this rule "subjective" (alternatively, one well-versed in Byzantine fault tolerance theory may view it as a kind of synchrony assumption). - -However, the "subjectivity" here is very weak: in order for a node to get on the "wrong" chain, they must receive the original message four months later than they otherwise would have. This is only possible in two cases: - -1. When a node connects to the blockchain for the first time. -2. If a node has been offline for more than four months. - -We can solve (1) by making it the user's responsibility to authenticate the latest state out of band. They can do this by asking their friends, block explorers, businesses that they interact with, etc. for a recent block hash in the chain that they see as the canonical one. In practice, such a block hash may well simply come as part of the software they use to verify the blockchain; an attacker that can corrupt the checkpoint in the software can arguably just as easily corrupt the software itself, and no amount of pure cryptoeconomic verification can solve that problem. (2) does genuinely add an additional security requirement for nodes, though note once again that the possibility of hard forks and security vulnerabilities, and the requirement to stay up to date to know about them and install any needed software updates, exists in proof of work too. - -Note that all of this is a problem only in the very limited case where a majority of previous stakeholders from some point in time collude to attack the network and create an alternate chain; most of the time we expect there will only be one canonical chain to choose from. - -## Can we try to automate the social authentication to reduce the load on users? - -One approach is to bake it into natural user workflow: a [BIP 70](https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki)-style payment request could include a recent block hash, and the user's client software would make sure that they are on the same chain as the vendor before approving a payment (or for that matter, any on-chain interaction). The other is to use Jeff Coleman's [universal hash time](https://www.youtube.com/watch?v=phXohYF0xGo). If UHT is used, then a successful attack chain would need to be generated secretly _at the same time_ as the legitimate chain was being built, requiring a majority of validators to secretly collude for that long. - -## Can one economically penalize censorship in proof of stake? - -Unlike reverts, censorship is much more difficult to prove. The blockchain itself cannot directly tell the difference between "user A tried to send transaction X but it was unfairly censored", "user A tried to send transaction X but it never got in because the transaction fee was insufficient" and "user A never tried to send transaction X at all". However, there are a number of techniques that can be used to mitigate censorship issues. - -The first is censorship resistance by halting problem. In the weaker version of this scheme, the protocol is designed to be Turing-complete in such a way that a validator cannot even tell whether or not a given transaction will lead to an undesired action without spending a large amount of processing power executing the transaction, and thus opening itself up to denial-of-service attacks. This is what [prevented the DAO soft fork](http://hackingdistributed.com/2016/07/05/eth-is-more-resilient-to-censorship/). 
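As a toy illustration of why this kind of filtering is hard (hypothetical code only, nothing here corresponds to real Ethereum or EVM APIs): if a transaction's effective target is derived from on-chain state at execution time, a censoring validator cannot tell from the raw transaction whether it will end up touching the application they want to block without actually doing the execution work themselves.

```python
import hashlib

# All names here are invented for the illustration.
BLOCKED_APP = "0xCENSORED"

def apply_tx(state, tx):
    """The effective target is only knowable by executing against current state."""
    digest = hashlib.sha256((state["seed"] + tx["payload"]).encode()).hexdigest()
    target = state["apps"][int(digest, 16) % len(state["apps"])]
    state["calls"].append(target)
    return target

state = {"seed": "block-12345", "apps": ["0xAAA", "0xBBB", BLOCKED_APP], "calls": []}
tx = {"payload": "innocuous-looking bytes"}
# Inspecting `tx` alone does not reveal whether it will hit BLOCKED_APP; deciding
# requires paying the execution cost, which is exactly the denial-of-service exposure.
print(apply_tx(state, tx))
```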
- -In the stronger version of the scheme, transactions can trigger guaranteed effects at some point in the near to mid-term future. Hence, a user could send multiple transactions which interact with each other and with predicted third-party information to lead to some future event, but the validators cannot possibly tell that this is going to happen until the transactions are already included (and economically finalized) and it is far too late to stop them; even if all future transactions are excluded, the event that validators wish to halt would still take place. Note that in this scheme, validators could still try to prevent **all** transactions, or perhaps all transactions that do not come packaged with some formal proof that they do not lead to anything undesired, but this would entail forbidding a very wide class of transactions to the point of essentially breaking the entire system, which would cause validators to lose value as the price of the cryptocurrency in which their deposits are denominated would drop. - -The second, [described by Adam Back here](https://www.reddit.com/r/Bitcoin/comments/4j7pfj/adam_backs_clever_mechanism_to_prevent_miners/d34t9xa), is to require transactions to be [timelock-encrypted](https://www.gwern.net/Self-decrypting%20files). Hence, validators will include the transactions without knowing the contents, and only later could the contents automatically be revealed, by which point once again it would be far too late to un-include the transactions. If validators were sufficiently malicious, however, they could simply only agree to include transactions that come with a cryptographic proof (eg. ZK-SNARK) of what the decrypted version is; this would force users to download new client software, but an adversary could conveniently provide such client software for easy download, and in a game-theoretic model users would have the incentive to play along. - -Perhaps the best that can be said in a proof-of-stake context is that users could also install a software update that includes a hard fork that deletes the malicious validators and this is not that much harder than installing a software update to make their transactions "censorship-friendly". Hence, all in all this scheme is also moderately effective, though it does come at the cost of slowing interaction with the blockchain down (note that the scheme must be mandatory to be effective; otherwise malicious validators could much more easily simply filter encrypted transactions without filtering the quicker unencrypted transactions). - -A third alternative is to include censorship detection in the fork choice rule. The idea is simple. Nodes watch the network for transactions, and if they see a transaction that has a sufficiently high fee for a sufficient amount of time, then they assign a lower "score" to blockchains that do not include this transaction. If all nodes follow this strategy, then eventually a minority chain would automatically coalesce that includes the transactions, and all honest online nodes would follow it. The main weakness of such a scheme is that offline nodes would still follow the majority branch, and if the censorship is temporary and they log back on after the censorship ends then they would end up on a different branch from online nodes. Hence, this scheme should be viewed more as a tool to facilitate automated emergency coordination on a hard fork than something that would play an active role in day-to-day fork choice. - -## How does validator selection work, and what is stake grinding? 
- -In any chain-based proof of stake algorithm, there is a need for some mechanism which randomly selects which validator out of the currently active validator set can make the next block. For example, if the currently active validator set consists of Alice with 40 ether, Bob with 30 ether, Charlie with 20 ether and David with 10 ether, then you want there to be a 40% chance that Alice will be the next block creator, 30% chance that Bob will be, etc (in practice, you want to randomly select not just one validator, but rather an infinite sequence of validators, so that if Alice doesn't show up there is someone who can replace her after some time, but this doesn't change the fundamental problem). In non-chain-based algorithms randomness is also often needed for different reasons. - -"Stake grinding" is a class of attack where a validator performs some computation or takes some other step to try to bias the randomness in their own favor. For example: - -1. In [Peercoin](https://bitcointalk.org/index.php?topic=131901.0), a validator could "grind" through many combinations of parameters and find favorable parameters that would increase the probability of their coins generating a valid block. -2. In one now-defunct implementation, the randomness for block N+1 was dependent on the signature of block N. This allowed a validator to repeatedly produce new signatures until they found one that allowed them to get the next block, thereby seizing control of the system forever. -3. In NXT, the randomness for block N+1 is dependent on the validator that creates block N. This allows a validator to manipulate the randomness by simply skipping an opportunity to create a block. This carries an opportunity cost equal to the block reward, but sometimes the new random seed would give the validator an above-average number of blocks over the next few dozen blocks. See [here](http://vitalik.ca/files/randomness.html) for a more detailed analysis. - -(1) and (2) are easy to solve; the general approach is to require validators to deposit their coins well in advance, and not to use information that can be easily manipulated as source data for the randomness. There are several main strategies for solving problems like (3). The first is to use schemes based on [secret sharing](https://en.wikipedia.org/wiki/Secret_sharing) or [deterministic threshold signatures](https://eprint.iacr.org/2002/081.pdf) and have validators collaboratively generate the random value. These schemes are robust against all manipulation unless a majority of validators collude (in some cases though, depending on the implementation, between 33-50% of validators can interfere in the operation, leading to the protocol having a 67% liveness assumption). - -The second is to use cryptoeconomic schemes where validators commit to information (ie. publish `sha3(x)`) well in advance, and then must publish `x` in the block; `x` is then added into the randomness pool. There are two theoretical attack vectors against this: - -1. Manipulate `x` at commitment time. This is impractical because the randomness result would take many actors' values into account, and if even one of them is honest then the output will be a uniform distribution. A uniform distribution XORed together with arbitrarily many arbitrarily biased distributions still gives a uniform distribution. -2. Selectively avoid publishing blocks. 
However, this attack costs one block reward of opportunity cost, and because the scheme prevents anyone from seeing any future validators except for the next, it almost never provides more than one block reward worth of revenue. The only exception is the case where, if a validator skips, the next validator in line AND the first child of that validator will both be the same validator; if these situations are a grave concern then we can punish skipping further via an explicit skipping penalty. - -The third is to use [Iddo Bentov's "majority beacon"](https://arxiv.org/pdf/1406.5694.pdf), which generates a random number by taking the bit-majority of the previous N random numbers generated through some other beacon (ie. the first bit of the result is 1 if the majority of the first bits in the source numbers is 1 and otherwise it's 0, the second bit of the result is 1 if the majority of the second bits in the source numbers is 1 and otherwise it's 0, etc). This gives a cost-of-exploitation of `~C * sqrt(N)` where `C` is the cost of exploitation of the underlying beacons. Hence, all in all, many known solutions to stake grinding exist; the problem is more like [differential cryptanalysis](https://en.wikipedia.org/wiki/Differential_cryptanalysis) than [the halting problem](https://en.wikipedia.org/wiki/Halting_problem) - an annoyance that proof of stake designers eventually understood and now know how to overcome, not a fundamental and inescapable flaw. - -## What would the equivalent of a 51% attack against Casper look like? - -The most basic form of "51% attack" is a simple **finality reversion**: validators that already finalized block A then finalize some competing block A', thereby breaking the blockchain's finality guarantee. In this case, there now exist two incompatible finalized histories, creating a split of the blockchain, that full nodes would be willing to accept, and so it is up to the community to coordinate out of band to focus on one of the branches and ignore the other(s). - -This coordination could take place on social media, through private channels between block explorer providers, businesses and exchanges, various online discussion forums, and the like. The principle according to which the decision would be made is "whichever one was finalized _first_ is the real one". Another alternative is to rely on "market consensus": both branches would briefly be traded on exchanges for a very short period of time, until network effects rapidly make one branch much more valuable than the others. In this case, the "first finalized chain wins" principle would be a Schelling point for what the market would choose. It's very possible that a combination of both approaches will get used in practice. - -Once there is consensus on which chain is real, users (ie. validators and light and full node operators) would be able to manually insert the winning block hash into their client software through a special option in the interface, and their nodes would then ignore all other chains. No matter which chain wins, there exists evidence that can immediately be used to destroy at least 1/3 of the validators' deposits. - -Another kind of attack is **liveness denial**: instead of trying to revert blocks, a cartel of >=34% of validators could simply refuse to finalize any more blocks. In this case, blocks would never finalize. Casper uses a hybrid chain/BFT-style consensus, and so the blockchain would still grow, but it would have a much lower level of security.
If no blocks are finalized for some long period of time (eg. 1 day), then there are several options: - -1. The protocol can include an automatic feature to rotate the validator set. Blocks under the new validator set would finalize, but clients would get an indication that the new finalized blocks are in some sense suspect, as it's very possible that the old validator set will resume operating and finalize some other blocks. Clients could then manually override this warning once it's clear that the old validator set is not coming back online. There would be a protocol rule that under such an event all old validators that did not try to participate in the consensus process take a large penalty to their deposits. -2. A hard fork is used to add in new validators and delete the attackers' balances. - -In case (2), the fork would once again be coordinated via social consensus and possibly via market consensus (ie. the branch with the old and new validator set briefly both being traded on exchanges). In the latter case, there is a strong argument that the market would want to choose the branch where "the good guys win", as such a chain has validators that have demonstrated their goodwill (or at least, their alignment with the interest of the users) and so is a more useful chain for application developers. - -Note that there is a spectrum of response strategies here between social coordination and in-protocol automation, and it is generally considered desirable to push as far toward automated resolution as possible so as to minimize the risk of simultaneous 51% attacks and attacks on the social layer (and market consensus tools such as exchanges). One can imagine an implementation of (1) where nodes automatically accept a switch to a new validator set if they do not see a new block being committed for a long enough time, which would reduce the need for social coordination but at the cost of requiring those nodes that do not wish to rely on social coordination to remain constantly online. In either case, a solution can be designed where attackers take a large hit to their deposits. - -A more insidious kind of attack is a **censorship attack**, where >= 34% of validators refuse to finalize blocks that contain certain kinds of transactions that they do not like, but otherwise the blockchain keeps going and blocks keep getting finalized. This could range from a mild censorship attack which only censors to interfere with a few specific applications (eg. selectively censoring transactions in something like Raiden or the lightning network is a fairly easy way for a cartel to steal money) to an attack that blocks all transactions. - -There are two sub-cases. The first is where the attacker has 34-67% of the stake. Here, we can program validators to refuse to finalize or build on blocks that they subjectively believe are clearly censoring transactions, which turns this kind of attack into a more standard liveness attack. The more dangerous case is where the attacker has more than 67% of the stake. Here, the attacker can freely block any transactions they wish to block and refuse to build on any blocks that do contain such transactions. - -There are two lines of defense. First, because Ethereum is Turing-complete it is [naturally somewhat resistant to censorship](http://hackingdistributed.com/2016/07/05/eth-is-more-resilient-to-censorship/) as censoring transactions that have a certain effect is in some ways similar to solving the halting problem. 
Because there is a gas limit, it is not literally impossible, though the "easy" ways to do it do open up denial-of-service attack vulnerabilities. - -This resistance [is not perfect](https://pdaian.com/blog/on-soft-fork-security/), and there are ways to improve it. The most interesting approach is to add in-protocol features where transactions can automatically schedule future events, as it would be extremely difficult to try to foresee what the result of executing scheduled events and the events resulting from those scheduled events would be ahead of time. Validators could then use obfuscated sequences of scheduled events to deposit their ether, and dilute the attacker to below 33%. - -Second, one can introduce the notion of an "active fork choice rule", where part of the process for determining whether or not a given chain is valid is trying to interact with it and verifying that it is not trying to censor you. The most effective way to do this would be for nodes to repeatedly send a transaction to schedule depositing their ether and then cancel the deposit at the last moment. If nodes detect censorship, they could then follow through with the deposit, and so temporarily join the validator pool en masse, diluting the attacker to below 33%. If the validator cartel censors their attempts to deposit, then nodes running this "active fork choice rule" would not recognize the chain as valid; this would collapse the censorship attack into a liveness denial attack, at which point it can be resolved through the same means as other liveness denial attacks. - -## That sounds like a lot of reliance on out-of-band social coordination; is that not dangerous? - -Attacks against Casper are extremely expensive; as we will see below, attacks against Casper cost as much, if not more, than the cost of buying enough mining power in a proof of work chain to permanently 51% attack it over and over again to the point of uselessness. Hence, the recovery techniques described above will only be used in very extreme circumstances; in fact, advocates of proof of work also generally express willingness to use social coordination in similar circumstances by, for example, [changing the proof of work algorithm](https://news.bitcoin.com/bitcoin-developers-changing-proof-work-algorithm/). Hence, it is not even clear that the need for social coordination in proof of stake is larger than it is in proof of work. - -In reality, we expect the amount of social coordination required to be near-zero, as attackers will realize that it is not in their benefit to burn such large amounts of money to simply take a blockchain offline for one or two days. - -## Doesn't MC => MR mean that all consensus algorithms with a given security level are equally efficient (or in other words, equally wasteful)? - -This is an argument that many have raised, perhaps best explained by [Paul Sztorc in this article](http://www.truthcoin.info/blog/pow-cheapest/). Essentially, if you create a way for people to earn $100, then people will be willing to spend anywhere up to $99.9 (including the cost of their own labor) in order to get it; marginal cost approaches marginal revenue. Hence, the theory goes, any algorithm with a given block reward will be equally "wasteful" in terms of the quantity of socially unproductive activity that is carried out in order to try to get the reward. - -There are three flaws with this: - -1. 
It's not enough to simply say that marginal cost approaches marginal revenue; one must also posit a plausible mechanism by which someone can actually expend that cost. For example, if tomorrow I announce that every day from then on I will give $100 to a randomly selected one of a given list of ten people (using my laptop's /dev/urandom as randomness), then there is simply no way for anyone to send $99 to try to get at that randomness. Either they are not in the list of ten, in which case they have no chance no matter what they do, or they are in the list of ten, in which case they don't have any reasonable way to manipulate my randomness so they're stuck with getting the expected-value $10 per day. -2. MC => MR does NOT imply total cost approaches total revenue. For example, suppose that there is an algorithm which pseudorandomly selects 1000 validators out of some very large set (each validator getting a reward of $1), you have 10% of the stake so on average you get 100, and at a cost of $1 you can force the randomness to reset (and you can repeat this an unlimited number of times). Due to the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem), the standard deviation of your reward is $10, and based on [other known results in math](http://math.stackexchange.com/questions/89030/expectation-of-the-maximum-of-gaussian-random-variables) the expected maximum of N random samples is slightly under `M + S * sqrt(2 * log(N))` where `M` is the mean and `S` is the standard deviation. Hence the reward for making additional trials (ie. increasing N) drops off sharply, eg. with 0 re-trials your expected reward is $100, with one re-trial it's $105.5, with two it's $108.5, with three it's $110.3, with four it's $111.6, with five it's $112.6 and with six it's $113.5. Hence, after five retrials it stops being worth it. As a result, an economically motivated attacker with ten percent of stake will inefficiently spend $5 to get an additional revenue of $13, though the total revenue is $113. If the exploitable mechanisms only expose small opportunities, the economic loss will be small; it is decidedly NOT the case that a single drop of exploitability brings the entire flood of PoW-level economic waste rushing back in. This point will also be very relevant in our below discussion on capital lockup costs. -3. Proof of stake can be secured with much lower total rewards than proof of work. - -## What about capital lockup costs? - -Locking up X ether in a deposit is not free; it entails a sacrifice of optionality for the ether holder. Right now, if I have 1000 ether, I can do whatever I want with it; if I lock it up in a deposit, then it's stuck there for months, and I do not have, for example, the insurance utility of the money being there to pay for sudden unexpected expenses. I also lose some freedom to change my token allocations away from ether within that timeframe; I could simulate selling ether by shorting an amount equivalent to the deposit on an exchange, but this itself carries costs including exchange fees and paying interest. Some might argue: isn't this capital lockup inefficiency really just a highly indirect way of achieving the exact same level of economic inefficiency as exists in proof of work? The answer is no, for both reasons (2) and (3) above. - -Let us start with (3) first. Consider a model where proof of stake deposits are infinite-term, ASICs last forever, ASIC technology is fixed (ie. no Moore's law) and electricity costs are zero. 
Let's say the equilibrium interest rate is 5% per annum. In a proof of work blockchain, I can take $1000, convert it into a miner, and the miner will pay me $50 in rewards per year forever. In a proof of stake blockchain, I would buy $1000 of coins, deposit them (ie. losing them forever), and get $50 in rewards per year forever. So far, the situation looks completely symmetrical (technically, even here, in the proof of stake case my destruction of coins isn't fully socially destructive as it makes others' coins worth more, but we can leave that aside for the moment). The cost of a "Maginot-line" 51% attack (ie. buying up more hardware than the rest of the network) increases by $1000 in both cases. - -Now, let's perform the following changes to our model in turn: - -1. Moore's law exists, ASICs depreciate by 50% every 2.772 years (that's a continuously-compounded 25% per annum; picked to make the numbers simpler). If I want to retain the same "pay once, get money forever" behavior, I can do so: I would put $1000 into a fund, where $167 would go into an ASIC and the remaining $833 would go into investments at 5% interest; the $41.67 dividends per year would be just enough to keep renewing the ASIC hardware (assuming technological development is fully continuous, once again to make the math simpler). Rewards would go down to $8.33 per year; hence, 83.3% of miners will drop out until the system comes back into equilibrium with me earning $50 per year, and so the Maginot-line cost of an attack on PoW given the same rewards drops by a factor of 6. -2. Electricity plus maintenance makes up 1/3 of mining costs. We estimate the 1/3 from recent mining statistics: one of Bitfury's new data centers consumes [0.06 joules per gigahash](http://www.coindesk.com/bitfury-details-100-million-georgia-data-center/), or 60 J/TH or 0.000017 kWh/TH, and if we assume the entire Bitcoin network has similar efficiencies we get 27.9 kWh per second given [1.67 million TH/s total Bitcoin hashpower](http://bitcoinwatch.com/). Electricity in China costs [$0.11 per kWh](http://www.statista.com/statistics/477995/global-prices-of-electricity-by-select-country/), so that's about $3 per second, or $260,000 per day. Bitcoin block rewards plus fees are $600 per BTC _ 13 BTC per block _ 144 blocks per day = $1.12m per day. Thus electricity itself would make up 23% of costs, and we can back-of-the-envelope estimate maintenance at 10% to give a clean 1/3 ongoing costs, 2/3 fixed costs split. This means that out of your $1000 fund, only $111 would go into the ASIC, $55 would go into paying ongoing costs, and $833 would go into hardware investments; hence the Maginot-line cost of attack is 9x lower than in our original setting. -3. Deposits are temporary, not permanent. Sure, if I voluntarily keep staking forever, then this changes nothing. However, I regain some of the optionality that I had before; I could quit within a medium timeframe (say, 4 months) at any time. This means that I would be willing to put more than $1000 of ether in for the $50 per year gain; perhaps in equilibrium it would be something like $3000. Hence, the cost of the Maginot line attack on PoS _increases_ by a factor of three, and so on net PoS gives 27x more security than PoW for the same cost. - -The above included a large amount of simplified modeling, however it serves to show how multiple factors stack up heavily in favor of PoS in such a way that PoS gets _more_ bang for its buck in terms of security. 
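The factor-of-6, factor-of-9 and 27x figures in this model can be reproduced with a few lines of arithmetic. The sketch below simply restates the simplified model above (capital flows in until its net return equals the 5% interest rate); the numbers are the model's assumptions, not measurements.

```python
# Equilibrium "Maginot line" cost per unit of annual reward (R = 1), under the
# simplified model described above.

INTEREST = 0.05

# Base PoW model: hardware never depreciates and there are no ongoing costs.
base_hw = 1 / INTEREST                       # 20 units of hardware per unit reward

# Change 1: ASICs depreciate at a continuous 25%/yr, so gross return must be 30%.
moore_hw = 1 / (INTEREST + 0.25)

# Change 2: ongoing costs absorb 1/3 of revenue, leaving 2/3 to pay for hardware.
realistic_hw = (2 / 3) / (INTEREST + 0.25)

# Change 3 (PoS): temporary deposits mean stakers accept roughly a third of the
# return (the "$3000 staked per $50 per year" assumption), so deposits are 3x the base.
pos_deposits = 3 * base_hw

print(round(base_hw / moore_hw, 1))           # 6.0  - attack on PoW becomes 6x cheaper
print(round(base_hw / realistic_hw, 1))       # 9.0  - then 9x cheaper
print(round(pos_deposits / realistic_hw, 1))  # 27.0 - PoS attack cost vs realistic PoW
```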
The meta-argument for why this [perhaps suspiciously multifactorial argument](http://lesswrong.com/lw/kpj/multiple_factor_explanations_should_not_appear/) leans so heavily in favor of PoS is simple: in PoW, we are working directly with the laws of physics. In PoS, we are able to design the protocol in such a way that it has the precise properties that we want - in short, we can _optimize the laws of physics in our favor_. The "hidden trapdoor" that gives us (3) is the change in the security model, specifically the introduction of weak subjectivity. - -Now, we can talk about the marginal/total distinction. In the case of capital lockup costs, this is very important. For example, consider a case where you have $100,000 of ether. You probably intend to hold a large portion of it for a long time; hence, locking up even $50,000 of the ether should be nearly free. Locking up $80,000 would be slightly more inconvenient, but $20,000 of breathing room still gives you a large space to maneuver. Locking up $90,000 is more problematic, $99,000 is very problematic, and locking up all $100,000 is absurd, as it means you would not even have a single bit of ether left to pay basic transaction fees. Hence, your marginal costs increase quickly. We can show the difference between this state of affairs and the state of affairs in proof of work as follows: - -![](https://blog.ethereum.org/wp-content/uploads/2014/07/liquidity.png) - -Hence, the total cost of proof of stake is potentially much lower than the marginal cost of depositing 1 more ETH into the system multiplied by the amount of ether currently deposited. - -Note that this component of the argument unfortunately does not fully translate into reduction of the "safe level of issuance". It does help us because it shows that we can get substantial proof of stake participation even if we keep issuance very low; however, it also means that a large portion of the gains will simply be borne by validators as economic surplus. - -## Will exchanges in proof of stake pose a similar centralization risk to pools in proof of work? - -From a centralization perspective, in both [Bitcoin](https://blockchain.info/pools) and [Ethereum](https://etherscan.io/stat/miner?range=7&blocktype=blocks) it's the case that roughly three pools are needed to coordinate on a 51% attack (4 in Bitcoin, 3 in Ethereum at the time of this writing). In PoS, if we assume 30% participation including all exchanges, then [three exchanges](https://etherscan.io/accounts) would be enough to make a 51% attack; if participation goes up to 40% then the required number goes up to eight. However, exchanges will not be able to participate with all of their ether; the reason is that they need to accommodate withdrawals. - -Additionally, pooling in PoS is discouraged because it has a much higher trust requirement - a proof of stake pool can pretend to be hacked, destroy its participants' deposits and claim a reward for it. On the other hand, the ability to earn interest on one's coins without oneself running a node, even if trust is required, is something that many may find attractive; all in all, the centralization balance is an empirical question for which the answer is unclear until the system is actually running for a substantial period of time.
With sharding, we expect pooling incentives to reduce further, as (i) there is even less concern about variance, and (ii) in a sharded model, transaction verification load is proportional to the amount of capital that one puts in, and so there are no direct infrastructure savings from pooling. - -A final point is that centralization is less harmful in proof of stake than in proof of work, as there are much cheaper ways to recover from successful 51% attacks; one does not need to switch to a new mining algorithm. - -## Are there economic ways to discourage centralization? - -One strategy suggested by Vlad Zamfir is to only partially destroy deposits of validators that get slashed, setting the percentage destroyed to be proportional to the percentage of other validators that have been slashed recently. This ensures that validators lose all of their deposits in the event of an actual attack, but only a small part of their deposits in the event of a one-off mistake. This makes lower-security staking strategies possible, and also specifically incentivizes validators to have their errors be as uncorrelated (or ideally, anti-correlated) with other validators as possible; this involves not being in the largest pool, putting one's node on the largest virtual private server provider and even using secondary software implementations, all of which increase decentralization. - -## Can proof of stake be used in private/consortium chains? - -Generally, yes; any proof of stake algorithm can be used as a consensus algorithm in private/consortium chain settings. The only change is that the way the validator set is selected would be different: it would start off as a set of trusted users that everyone agrees on, and then it would be up to the validator set to vote on adding in new validators. - -## Further reading - -- [Casper proof of stake compedium](./casper-proof-of-stake-compendium.md) From ec489729535831f60647afa7a37248c73c98eed8 Mon Sep 17 00:00:00 2001 From: Joe Date: Tue, 26 Apr 2022 13:38:52 +0100 Subject: [PATCH 13/26] update links yaml --- .../docs/consensus-mechanisms/pos/weak-subjectivity/index.md | 4 ++-- src/data/developer-docs-links.yaml | 5 +++++ 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 8d0e2c53b6e..22f5887a65c 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -12,11 +12,11 @@ To understand this page it is necessary to first understand the fundamentals of ## Weak Subjectivity -Subjectivity in blockchains refers to reliance upon social information to agree on the current or past states. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. This initial seed is a subjective element, but there is only one valid chain that all clients will objectively coverge upon provided there has not been a critical consensus failure. 
Weak subjectivity is a feature of Ethereum's proof-of-stake mechanism. +Subjectivity in blockchains refers to reliance upon social information to agree on the current state. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. ## What problems does weak subjectivity solve? -Subjectivity is inherent to proof-of-stake blockchains because selecting the correct chain from multiple forks is done by counting votes. This exposes the blockchain to several attack vectors, including long range attacks whereby nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could be tricked into following an incorrect chain. These attack vectors can be solved by imposing constraints that diminish the subjective aspects of the mechanism - and therefore trust assumptions - to the bare minimum. +Subjectivity is inherent to proof-of-stake blockchains because selecting the correct chain from multiple forks is done by counting historical votes. This exposes the blockchain to several attack vectors, including long range attacks whereby nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could be tricked into following an incorrect chain. These attack vectors can be solved by imposing constraints that diminish the subjective aspects of the mechanism - and therefore trust assumptions - to the bare minimum. 
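A minimal sketch of what such a constraint might look like in a node (illustrative only - the names, the epoch arithmetic and the example period are assumptions for this sketch, not actual client logic): the node treats a recent, socially-obtained checkpoint as non-revertible and refuses to start syncing from a checkpoint that is older than the weak subjectivity period.

```python
# Illustrative sketch only; names and values are invented, not client code.

def checkpoint_is_recent(checkpoint_epoch: int, current_epoch: int, ws_period: int) -> bool:
    """A checkpoint obtained out-of-band is only useful while it is recent enough."""
    return current_epoch - checkpoint_epoch <= ws_period

def accept_chain(chain_contains_checkpoint: bool, checkpoint_recent: bool) -> bool:
    """Reject any candidate chain that does not build on the trusted checkpoint."""
    return checkpoint_recent and chain_contains_checkpoint

ws_checkpoint_epoch = 131000          # obtained from a friend, block explorer, etc.
recent = checkpoint_is_recent(ws_checkpoint_epoch, current_epoch=131500, ws_period=2048)
print(accept_chain(chain_contains_checkpoint=True, checkpoint_recent=recent))   # True
print(accept_chain(chain_contains_checkpoint=False, checkpoint_recent=recent))  # False
```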
## Weak subjectivity checkpoints diff --git a/src/data/developer-docs-links.yaml b/src/data/developer-docs-links.yaml index 1ba59bac1de..2cd11852b79 100644 --- a/src/data/developer-docs-links.yaml +++ b/src/data/developer-docs-links.yaml @@ -62,6 +62,11 @@ to: /developers/docs/consensus-mechanisms/pow/mining/ - id: docs-nav-proof-of-stake to: /developers/docs/consensus-mechanisms/pos/ + items: + - id: docs-nav-gasper + to: /developers/docs/consensus-mechanisms/pos/gasper/ + - id: docs-nav-weak-subjectivity + to: /developers/docs/consensus-mechanisms/pos/weak-subjectivity - id: docs-nav-ethereum-stack path: /developers/docs/ items: From 86a700055379e85e80245fe7919431076ad7a488 Mon Sep 17 00:00:00 2001 From: Joe Date: Tue, 26 Apr 2022 13:49:53 +0100 Subject: [PATCH 14/26] add links to sidebar --- .../docs/consensus-mechanisms/pos/gasper/index.md | 4 ++-- .../pos/weak-subjectivity/index.md | 10 +++++----- src/intl/en/page-developers-docs.json | 2 ++ 3 files changed, 9 insertions(+), 7 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 7b124aa7257..52ea70cb176 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -38,5 +38,5 @@ LMD-GHOST stands for "latest message driven greedy heaviest observed sub-tree". ## Further Reading -[Gasper: Combining GHOST and Casper](https://arxiv.org/pdf/2003.03052.pdf) -[Capser the Friendly Finality Gadget](https://arxiv.org/pdf/1710.09437.pdf) +- [Gasper: Combining GHOST and Casper](https://arxiv.org/pdf/2003.03052.pdf) +- [Capser the Friendly Finality Gadget](https://arxiv.org/pdf/1710.09437.pdf) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 22f5887a65c..8e34ae7119e 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -36,8 +36,8 @@ Finally, checkpoints can simply be requested from other nodes, perhaps another E ## Further Reading -[Weak subjectivity in Eth2](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2) -[Vitalik: How I learned to love weak subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) -[Weak subjectivity (Teku docs)](https://docs.teku.consensys.net/en/latest/Concepts/Weak-Subjectivity/) -[Phase-0 Weak subjectivity guide](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/weak-subjectivity.md) -[Analysis n weak subjectivity in Ethereum 2.0](https://github.com/runtimeverification/beacon-chain-verification/blob/master/weak-subjectivity/weak-subjectivity-analysis.pdf) +- [Weak subjectivity in Eth2](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2) +- [Vitalik: How I learned to love weak subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) +- [Weak subjectivity (Teku docs)](https://docs.teku.consensys.net/en/latest/Concepts/Weak-Subjectivity/) +- [Phase-0 Weak subjectivity guide](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/weak-subjectivity.md) +- [Analysis n weak subjectivity in Ethereum 2.0](https://github.com/runtimeverification/beacon-chain-verification/blob/master/weak-subjectivity/weak-subjectivity-analysis.pdf) diff --git 
a/src/intl/en/page-developers-docs.json b/src/intl/en/page-developers-docs.json index ea46a01a1e0..954865088f5 100644 --- a/src/intl/en/page-developers-docs.json +++ b/src/intl/en/page-developers-docs.json @@ -10,6 +10,8 @@ "docs-nav-composability": "Composability", "docs-nav-consensus-mechanisms": "Consensus mechanisms", "docs-nav-consensus-mechanisms-description": "How the individual nodes of a distributed network agree on the current state of the system", + "docs-nav-gasper": "Gasper", + "docs-nav-weak-subjectivity": "Weak subjectivity", "docs-nav-data-and-analytics": "Data and analytics", "docs-nav-data-and-analytics-description": "How blockchain data is aggregated, organized and implemented into dapps", "docs-nav-dart": "Dart", From 1c1306d23954faab3a1893a4b0fcdad63139c99a Mon Sep 17 00:00:00 2001 From: Joe Date: Thu, 28 Apr 2022 11:47:50 +0100 Subject: [PATCH 15/26] update according to jshua review comments --- .../pos/weak-subjectivity/index.md | 16 +++++++--------- src/data/developer-docs-links.yaml | 2 +- 2 files changed, 8 insertions(+), 10 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 8e34ae7119e..5d64aca6f0d 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -6,27 +6,25 @@ sidebar: true incomplete: false --- +Subjectivity in blockchains refers to reliance upon social information to agree on the current state. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. + ## Prerequisites To understand this page it is necessary to first understand the fundamentals of [proof-of-stake](/developers/docs/consensus-mechanisms/pos/). -## Weak Subjectivity - -Subjectivity in blockchains refers to reliance upon social information to agree on the current state. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. - -## What problems does weak subjectivity solve? +## What problems does weak subjectivity solve? {#problems-ws-solves} Subjectivity is inherent to proof-of-stake blockchains because selecting the correct chain from multiple forks is done by counting historical votes. This exposes the blockchain to several attack vectors, including long range attacks whereby nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. 
New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could be tricked into following an incorrect chain. These attack vectors can be solved by imposing constraints that diminish the subjective aspects of the mechanism - and therefore trust assumptions - to the bare minimum. -## Weak subjectivity checkpoints +## Weak subjectivity checkpoints {#ws-checkpoints} The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are state roots that all nodes on the network agree belong in the canonical chain. They serve a similar "universal truth" purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. The fork choice algorithm trusts that the blockchain state defined in that checkpoint is correct and independently and objectively verifies the chain from that point onwards. The checkpoints act as "revert-limits" because blocks located before weak-subjectivity checkpoints cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. -## Difference between weak subjectivity checkpoints and finalized blocks +## Difference between weak subjectivity checkpoints and finalized blocks {#difference-between-ws-and-finalized-blocks} Finalized blocks and weak subjectivity checkpoints are treated differently by Ethereum nodes. If a node becomes aware of two competing finalized blocks then it is torn between the two - it has no way to identify automatically which is the canonical fork. This is symptomatic of a consensus failure. In contrast, a node simply rejects any block that conflicts with its weak subjectivity checkpoint. From the node's perspective the weak subjectivity checkpoint is represents an absolute truth that cannot be undermined by any new knowledge arriving from its peers. -## How weak is weak? +## How weak is weak? {#how-weak-is-weak} The subjective aspect of Ethereum's proof-of-stake is the requirement for a recent state (weak subjectivity checkpoint) from a trusted source to sync from. The risk of getting a bad weak subjectivity checkpoint is very low, partly because they can be checked against several independent public sources such as block explorers or multiple nodes. There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. @@ -34,7 +32,7 @@ A weak subjectivity checkpoint may even come as part of the client software. Arg Finally, checkpoints can simply be requested from other nodes, perhaps another Etheruem user that runs a full node can provide a checkpoint that can then be verified against data from a block explorer. Overall, trusting the provider of a weak subjectivity checkpoint can be considered about as problematic as trusting the client developers. The overall trust required is low. It is also important to note that these considerations only become important in the very unlikely event where a majority of validators collude to produce an alternate fork of the blockchain. 
Under any other circumstances there is only one Ethereum chain to choose from. -## Further Reading +## Further Reading {#further-reading} - [Weak subjectivity in Eth2](https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2) - [Vitalik: How I learned to love weak subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) diff --git a/src/data/developer-docs-links.yaml b/src/data/developer-docs-links.yaml index 2cd11852b79..4f2b8afb3d6 100644 --- a/src/data/developer-docs-links.yaml +++ b/src/data/developer-docs-links.yaml @@ -66,7 +66,7 @@ - id: docs-nav-gasper to: /developers/docs/consensus-mechanisms/pos/gasper/ - id: docs-nav-weak-subjectivity - to: /developers/docs/consensus-mechanisms/pos/weak-subjectivity + to: /developers/docs/consensus-mechanisms/pos/weak-subjectivity/ - id: docs-nav-ethereum-stack path: /developers/docs/ items: From ff125c281943e6d59c995b7a4558adac47b5571e Mon Sep 17 00:00:00 2001 From: Joe Date: Thu, 28 Apr 2022 11:50:02 +0100 Subject: [PATCH 16/26] header-id's to gasper page --- .../docs/consensus-mechanisms/pos/gasper/index.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 52ea70cb176..f8299a7ed82 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -10,11 +10,11 @@ Gasper is a combination of Casper the Friendly Finality Gadget and the LMD-GHOST **note** that the original definition of Casper-FFG was updated slightly for inclusion in Gasper. On this page we consider the updated version. -## The role of Gasper +## The role of Gasper {#role-of-gasper} Gasper is designed to sit atop a proof-of-stake blockchain where nodes provide ether as a security deposit that can be destroyed if they are lazy or dishonest in proposing or validating blocks. Gasper is the mechanism that defines how and why validators are rewarded and punished, how they decide which blocks to accept and reject, and which fork of the blockchain to build on. -## What is finality? +## What is finality? {#what-is-finality} [Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted (unless there has been a critical consensus failure). Finalized blocks can be thought of as information the blockchain is certain about. In order for a block to be finalized it has to pass through a two-step uprgade procedure. First, 2/3 of the total staked ether must have voted in favor of that block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted but technically they could be. The justified block is then upgraded to "finalized" when another block is justified on top of it. This is a commitment to include the block in the canonical chain so that it cannot be reverted unless an attacker destroys millions of ether (billions of $USD). @@ -22,21 +22,21 @@ These block upgrades do not happen in every slot. Instead, only epoch-boundary b Because finality requires 2/3 agreement that a block is canonical, an attacker cannot possibly create an alternative finalized chain without a) owning or manipulating 2/3 of the total staked ether, b) destroying at least 1/3 of the total staked ether. 
The first condition arises because 2/3 of the staked ether is required to finalize a chain. The second condition arises because if 2/3 of the total stake has voted in favour of both forks then 1/3 must have voted on both - this is a slashing condition that would be maximally punished and 1/3 of the total stake would be destroyed. At the time of writing this requires an attacker be willing to lose about $10,000,000,000 worth of ether. -### Incentives and Slashing +### Incentives and Slashing {#incentives-and-slashing} Validators are rewarded for honestly proposing and validating blocks. The rewards come in the form of ether added to their stake. On the other hand, validators that are absent and fail to act when called upon miss out on these rewards and sometimes lose a small portion of their existing stake. However, the penalties for being offline are small and in most cases amount to opportunity costs of missing rewards. There are some validator actions that are very difficult to do accidentally and signify some malicious intent such as proposing multiple blocks for the same slot, attesting to multiple blocks for the same slot or contradicting previous checkpoint votes. These are “slashable” behaviors that are penalized mroe harshly. Slashing results in some portion of the validator's stake being destroyed and the validator being removed from the network. This takes 36 days. On Day 1 there is an initial penalty of up to 0.5 ETH. Then the slashed validator’s ether slowly drains away across the exit period, but on Day 18 they receive a “correlation penalty” which is larger when more validators are slashed around the same time. The maximum penalty is the entire stake. These rewards and penalties are designed to incentivize honest validators and disincentivize attacks on the network. -### Inactivity Leak +### Inactivity Leak {#inactivity-leak} As well as security, Gasper also provides "plausible liveness". This is the condition that as long as 2/3 of the total staked ether is voting honestly and following the protocol, the chain will be able to finalize irrespective of any other activity (such as attacks, latency issues or slashings). Put another way, 1/3 of the total staked ether must be somehow compromised to prevent the chain from finalizing. In Gasper there is an additional line of defense against a liveness failure, known as the "inactivity leak". This mechanism activates when the chain has failed to finalize for more than 4 epochs. The validators that are not actively attesting to the majority chain have their stake gradually drained away until the majority regains 2/3 of the total stake, ensuring that liveness failures are only temporary. -### Fork choice +### Fork choice {#fork-choice} The original definition of Casper-FFG included a fork choice algorithm that imposed the rule: `follow the chain containing the justified checkpoint that has the greatest height` where height is defined as the greatest distance from the genesis block. In Gasper the original fork choice-rule has been deprecated in favour of a more sophisticated algorithm called LMD-GHOST. It is important to realize that under normal conditions a fork choice rule is uneccessary - there is a single block proposer for every slot and honest validators attest to it. It is only in cases of large network asynchronicity or when a dishonest block proposer has equivoated that a fork choice algorithm is required. However, when those cases do arise, the fork choice algorithm is a critical defense that secures the correct chain. 
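The next paragraph defines the LMD-GHOST rule itself. As a rough illustration of the underlying idea (weigh forks by the stake behind each validator's latest attestation and pick the heaviest), here is a simplified sketch; it is not the consensus-spec algorithm and ignores slots, committees and sub-tree aggregation, and all names and figures are invented:

```python
# Simplified sketch of "heaviest fork, counting only each validator's latest message".

def latest_message_weights(attestations, stake):
    # Keep only each validator's most recent attestation (latest-message driven).
    latest = {}
    for att in sorted(attestations, key=lambda a: a["slot"]):
        latest[att["validator"]] = att["block"]
    # Sum the stake standing behind each block.
    weights = {}
    for validator, block in latest.items():
        weights[block] = weights.get(block, 0) + stake[validator]
    return weights

def fork_choice(attestations, stake):
    weights = latest_message_weights(attestations, stake)
    # Greedily pick the heaviest option (stands in for "greedy heaviest sub-tree").
    return max(weights, key=weights.get)

stake = {"v1": 32, "v2": 32, "v3": 32}
attestations = [
    {"validator": "v1", "slot": 1, "block": "A"},
    {"validator": "v2", "slot": 1, "block": "B"},
    {"validator": "v1", "slot": 2, "block": "B"},  # v1's later vote supersedes the earlier one
    {"validator": "v3", "slot": 2, "block": "A"},
]
print(fork_choice(attestations, stake))  # -> "B" (64 ETH of latest votes vs 32)
```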
LMD-GHOST stands for "latest message driven greedy heaviest observed sub-tree". This is a jargon-heavy way to define an algorithm the selects the fork with the greatest accumulated weight of attestations as the canonical one (greedy heaviest subtree) and that if multiple messages are received from a validator, only the latest one is considered (latest-message driven). Every validator assesses each block using this rule before adding the heaviest block to its canonical chain. ## Further Reading {#further-reading} - [Gasper: Combining GHOST and Casper](https://arxiv.org/pdf/2003.03052.pdf) - [Casper the Friendly Finality Gadget](https://arxiv.org/pdf/1710.09437.pdf) From a1b600f5efb36c9b03ab379687f11bdb34d17987 Mon Sep 17 00:00:00 2001 From: Joshua <62268199+minimalsm@users.noreply.github.com> Date: Mon, 2 May 2022 13:09:56 +0100 Subject: [PATCH 17/26] Apply suggestions from code review --- .../consensus-mechanisms/pos/gasper/index.md | 21 ++++++++----- .../docs/consensus-mechanisms/pos/index.md | 30 +++++++++---------- .../pos/weak-subjectivity/index.md | 14 ++++----- 3 files changed, 35 insertions(+), 30 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index f8299a7ed82..af6851af1e4 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -1,18 +1,18 @@ --- title: Gasper -description: An explanation of the Gasper PoS mechanism. +description: An explanation of the Gasper proof-of-stake mechanism. lang: en sidebar: true incomplete: false --- -Gasper is a combination of Casper the Friendly Finality Gadget and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that uprgades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork choice algorithm uses accumulated votes to ensure that when forks arise in the blockchain nodes can easily select the correct one. +Gasper is a combination of Casper the Friendly Finality Gadget (Casper-FFG) and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that upgrades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork choice algorithm uses accumulated votes to ensure that nodes can easily select the correct one when forks arise in the blockchain. **note** that the original definition of Casper-FFG was updated slightly for inclusion in Gasper. On this page we consider the updated version. ## The role of Gasper {#role-of-gasper} -Gasper is designed to sit atop a proof-of-stake blockchain where nodes provide ether as a security deposit that can be destroyed if they are lazy or dishonest in proposing or validating blocks. Gasper is the mechanism that defines how and why validators are rewarded and punished, how they decide which blocks to accept and reject, and which fork of the blockchain to build on. +Gasper sits on top of a proof-of-stake blockchain where nodes provide ether as a security deposit that can be destroyed if they are lazy or dishonest in proposing or validating blocks.
Gasper is the mechanism defining how validators get rewarded and punished, decide which blocks to accept and reject, and which fork of the blockchain to build on. ## What is finality? {#what-is-finality} @@ -20,21 +20,26 @@ Gasper is designed to sit atop a proof-of-stake blockchain where nodes provide e These block upgrades do not happen in every slot. Instead, only epoch-boundary blocks can be justified and finalized. These blocks are known as "checkpoints". Upgrading considers pairs of checkpoints. A "supermajority link" must exist between two successive checkpoints (i.e. 2/3 of the total staked ether voting that checkpoint B is the correct descendant of checkpoint A) in order to upgrade the less recent checkpoint to finalized and the more recent block to justified. -Because finality requires 2/3 agreement that a block is canonical, an attacker cannot possibly create an alternative finalized chain without a) owning or manipulating 2/3 of the total staked ether, b) destroying at least 1/3 of the total staked ether. The first condition arises because 2/3 of the staked ether is required to finalize a chain. The second condition arises because if 2/3 of the total stake has voted in favour of both forks then 1/3 must have voted on both - this is a slashing condition that would be maximally punished and 1/3 of the total stake would be destroyed. At the time of writing this requires an attacker be willing to lose about $10,000,000,000 worth of ether. +Because finality requires a two-thirds agreement that a block is canonical, an attacker cannot possibly create an alternative finalized chain without: + +1. Owning or manipulating two-thirds of the total staked ether. +2. Destroying at least one-third of the total staked ether. + +The first condition arises because two-thirds of the staked ether is required to finalize a chain. The second condition arises because if two-thirds of the total stake has voted in favor of both forks, then one-third must have voted on both. Double-voting is a slashing condition that would be maximally punished, and one-third of the total stake would be destroyed. As of May 2022, this requires an attacker to burn around $10 billion worth of ether. ### Incentives and Slashing {#incentives-and-slashing} -Validators are rewarded for honestly proposing and validating blocks. The rewards come in the form of ether added to their stake. On the other hand, validators that are absent and fail to act when called upon miss out on these rewards and sometimes lose a small portion of their existing stake. However, the penalties for being offline are small and in most cases amount to opportunity costs of missing rewards. There are some validator actions that are very difficult to do accidentally and signify some malicious intent such as proposing multiple blocks for the same slot, attesting to multiple blocks for the same slot or contradicting previous checkpoint votes. These are “slashable” behaviors that are penalized mroe harshly. Slashing results in some portion of the validator's stake being destroyed and the validator being removed from the network. This takes 36 days. On Day 1 there is an initial penalty of up to 0.5 ETH. Then the slashed validator’s ether slowly drains away across the exit period, but on Day 18 they receive a “correlation penalty” which is larger when more validators are slashed around the same time. The maximum penalty is the entire stake. These rewards and penalties are designed to incentivize honest validators and disincentivize attacks on the network. 
+Validators get rewarded for honestly proposing and validating blocks. The rewards are ether added to their stake. On the other hand, validators that are absent and fail to act when called upon miss out on these rewards and sometimes lose a small portion of their existing stake. However, the penalties for being offline are small and, in most cases, amount to opportunity costs of missing rewards. However, some validator actions are very difficult to do accidentally and signify some malicious intent, such as proposing multiple blocks for the same slot, attesting to multiple blocks for the same slot, or contradicting previous checkpoint votes. These are "slashable" behaviors that are penalized more harshly—slashing results in some portion of the validator's stake being destroyed and the validator being removed from the network. This process takes 36 days. On Day 1, there is an initial penalty of up to 0.5 ETH. Then the slashed validator's ether slowly drains away across the exit period, but on Day 18, they receive a "correlation penalty", which is larger when more validators are slashed around the same time. The maximum penalty is the entire stake. These rewards and penalties are designed to incentivize honest validators and disincentivize attacks on the network. ### Inactivity Leak {#inactivity-leak} -As well as security, Gasper also provides "plausible liveness". This is the condition that as long as 2/3 of the total staked ether is voting honestly and following the protocol, the chain will be able to finalize irrespective of any other activity (such as attacks, latency issues or slashings). Put another way, 1/3 of the total staked ether must be somehow compromised to prevent the chain from finalizing. In Gasper there is an additional line of defense against a liveness failure, known as the "inactivity leak". This mechanism activates when the chain has failed to finalize for more than 4 epochs. The validators that are not actively attesting to the majority chain have their stake gradually drained away until the majority regains 2/3 of the total stake, ensuring that liveness failures are only temporary. +As well as security, Gasper also provides "plausible liveness". This is the condition that as long as two-thirds of the total staked ether is voting honestly and following the protocol, the chain will be able to finalize irrespective of any other activity (such as attacks, latency issues, or slashings). Put another way, one-third of the total staked ether must be somehow compromised to prevent the chain from finalizing. In Gasper, there is an additional line of defense against a liveness failure, known as the "inactivity leak". This mechanism activates when the chain has failed to finalize for more than four epochs. The validators that are not actively attesting to the majority chain have their stake gradually drained away until the majority regains two-thirds of the total stake, ensuring that liveness failures are only temporary. ### Fork choice {#fork-choice} -The original definition of Casper-FFG included a fork choice algorithm that imposed the rule: `follow the chain containing the justified checkpoint that has the greatest height` where height is defined as the greatest distance from the genesis block. In Gasper the original fork choice-rule has been deprecated in favour of a more sophisticated algorithm called LMD-GHOST.
It is important to realize that under normal conditions a fork choice rule is uneccessary - there is a single block proposer for every slot and honest validators attest to it. It is only in cases of large network asynchronicity or when a dishonest block proposer has equivoated that a fork choice algorithm is required. However, when those cases do arise, the fork choice algorithm is a critical defense that secures the correct chain. +The original definition of Casper-FFG included a fork choice algorithm that imposed the rule: `follow the chain containing the justified checkpoint that has the greatest height` where height is defined as the greatest distance from the genesis block. In Gasper, the original fork choice rule is deprecated in favor of a more sophisticated algorithm called LMD-GHOST. It is important to realize that under normal conditions, a fork choice rule is unnecessary - there is a single block proposer for every slot, and honest validators attest to it. It is only in cases of large network asynchronicity or when a dishonest block proposer has equivocated that a fork choice algorithm is required. However, when those cases do arise, the fork choice algorithm is a critical defense that secures the correct chain. -LMD-GHOST stands for "latest message driven greedy heaviest observed sub-tree". This is a jargon-heavy way to define an algorithm the selects the fork with the greatest accumulated weight of attestations as the canonical one (greedy heaviest subtree) and that if multiple messages are received from a validator, only the latest one is considered (latest-message driven). Every validator assesses each block using this rule before adding the heaviest block to its canonical chain. +LMD-GHOST stands for "latest message-driven greedy heaviest observed sub-tree". This is a jargon-heavy way to define an algorithm that selects the fork with the greatest accumulated weight of attestations as the canonical one (greedy heaviest subtree) and that if multiple messages are received from a validator, only the latest one is considered (latest-message driven). Before adding the heaviest block to its canonical chain, every validator assesses each block using this rule. ## Further Reading {#further-reading} diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index 0aa6ef9eaf5..9de9e2c5caf 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -6,7 +6,7 @@ sidebar: true incomplete: false --- -Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan because PoS is demonstrably more secure than PoW, uses drastically less energy and enables new scaling solutions to be implemented. However PoS is also more complex than PoW. Refining the PoS mechanism has taken years of research and development, and the challenge now is implementing it on the live Ethereum network - a process known as ["the merge"](/upgrades/merge/). +Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan because PoS is demonstrably more secure than PoW, uses drastically less energy, and enables new scaling solutions to be implemented. However, proof-of-stake is also more complex than proof-of-work. 
Refining the proof-of-stake mechanism has taken years of research and development, and the challenge now is implementing it on the live Ethereum network - a process known as ["The Merge"](/upgrades/merge/). ## Prerequisites {#prerequisites} @@ -14,50 +14,50 @@ To better understand this page, we recommend you first read up on [consensus mec ## What is proof-of-stake (PoS)? {#what-is-pos} -Proof-of-stake is a type of [consensus mechanism](/developers/docs/consensus-mechanisms/) used by blockchains to achieve distributed consensus. Whereas in PoW miners prove they have capital at risk by expending energy, in PoS validators explicitly stake capital in the form of ether into a smart contract on Ethereum. This staked ether then acts as collateral that can be destroyed if the validator behaves dishonestly or lazily. The validator is then responsible for checking that new blocks propagated over the network are valid and occasionally creating and propagating new blocks themselves. +Proof-of-stake is a type of [consensus mechanism](/developers/docs/consensus-mechanisms/) used by blockchains to achieve distributed consensus. In proof-of-work, miners prove they have capital at risk by expending energy. In proof-of-stake, validators explicitly stake capital in the form of ether into a smart contract on Ethereum. This staked ether then acts as collateral that can be destroyed if the validator behaves dishonestly or lazily. The validator is then responsible for checking that new blocks propagated over the network are valid and occasionally creating and propagating new blocks themselves. Proof-of-stake comes with a number of improvements to the proof-of-work system: -- better energy efficiency – there is no need to use lots of energy on PoW computations +- better energy efficiency – there is no need to use lots of energy on proof-of-work computations - lower barriers to entry, reduced hardware requirements – there is no need for elite hardware to stand a chance of creating new blocks - reduced centralization risk – proof-of-stake should lead to more nodes securing the network - because of the low energy requirement less ETH issuance is required to incentivize participation -- economic penalties for misbehaviour make 51% style attacks much more costly for an attacker compared to PoW +- economic penalties for misbehaviour make 51% style attacks exponentially more costly for an attacker compared to proof-of-work - the community can resort to social recovery of an honest chain if a 51% attack were to overcome the crypto-economic defenses. ## Validators {#validators} -To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client and a validator. On depositing their ether, the user joins an activation queue that limits the rate at which new validators join the network. Once activated, validators receive new blocks from peers on the Ethereum network. The transactions delivered in the block are re-executed and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favour of that block across the network. +To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client, and a validator. On depositing their ether, the user joins an activation queue that limits the rate of new validators joining the network. 
Once activated, validators receive new blocks from peers on the Ethereum network. The transactions delivered in the block are re-executed, and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favor of that block across the network. -Whereas under PoW the timing of blocks is determined by the mining difficulty, in PoS the tempo is fixed. Time in PoS Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot a committee of validators is randomly chosen whose votes are used to determine the validity of the block proposed in that slot. Also in every slot one validator is randomly selected to be a block proposer. That validator is responsible for creating a new block and sending it out to other nodes on the network. +Whereas under proof-of-work, the timing of blocks is determined by the mining difficulty, in proof-of-stake, the tempo is fixed. Time in proof-of-stake Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot, a committee of validators is randomly chosen, whose votes are used to determine the validity of the block proposed in that slot. Also, one validator is randomly selected to be a block proposer in every slot. This validator is responsible for creating a new block and sending it out to other nodes on the network. ## Finality {#finality} -In distributed networks, a transaction has "finality" when it's part of a block that can't change without a large amount to ether being burned. On PoS Ethereum this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that it considers to be valid. If a pair of checkpoints attracts votes representing at least 2/3 of the total staked ether the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". To revert a finalized block, an attacker would commit to losing at least 1/3 of the total supply of staked ether (currently around $10,000,000,000 USD). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). Since finality requires a 2/3 majority, an attacker could prevent the network reaching finality by voting with 1/3 of the total stake. There is a mechanism to defend against this: the inactivity leak. This activates whenever the chain fails to finalize for more than 4 epochs. The inactivity leak bleeds away the staked ether from validators voting against the majority, allowing the majority to regain a 2/3 majority and finalize the chain. +A transaction has "finality" in distributed networks when it's part of a block that can't change without a significant amount of ether getting burned. On proof-of-stake Ethereum, this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that it considers to be valid. If a pair of checkpoints attracts votes representing at least two-thirds of the total staked ether, the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". 
To revert a finalized block, an attacker would commit to losing at least one-third of the total supply of staked ether (currently around $10,000,000,000). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). Since finality requires a two-thirds majority, an attacker could prevent the network from reaching finality by voting with one-third of the total stake. There is a mechanism to defend against this: the inactivity leak. This activates whenever the chain fails to finalize for more than four epochs. The inactivity leak bleeds away the staked ether from validators voting against the majority, allowing the majority to regain a two-thirds majority and finalize the chain. ## Crypto-economic security {#crypto-economic-security} -Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for a user to maliciously attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviours that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on how many validators are also being slashed at aroud the same time. This is known as the "correlation penalty" and it can be minor (~1% stake for a single validator slashed on their own) or can result in 100% of the validators stake being destroyed (mass slashing event). It is imposed half way through a forced exit period that begins with an immediate penalty (up to 0.5 ETH) on Day 1, the correlation penalty on Day 18 and finally ejection from the network on Day 36. They receive small attestation penalties every day because they are present on the network but not submitting votes. This all means a coordinated attack would be very costly for the attacker. +Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for users to attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviors that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on how many validators are also being slashed at around the same time. This is known as the "correlation penalty", and it can be minor (~1% stake for a single validator slashed on their own) or can result in 100% of the validator's stake getting destroyed (mass slashing event). It is imposed halfway through a forced exit period that begins with an immediate penalty (up to 0.5 ETH) on Day 1, the correlation penalty on Day 18, and finally, ejection from the network on Day 36. 
They receive minor attestation penalties every day because they are present on the network but not submitting votes. This all means a coordinated attack would be very costly for the attacker. ## Fork choice {#fork-choice} -When the network performs optimally and honestly, there is only ever one new block at the head of the chain and all validators attest to it. However, it is possible for validators to have different views of the head of the chain due to network latency or because a block proposer has equivocated. Therefore, consensus clients require an algorithm to decide which one to favor. The algorithm used in PoS Ethereum is called LMD-GHOST and it works by identifying the fork that has the greatest weight of attestations in its history. +When the network performs optimally and honestly, there is only ever one new block at the head of the chain, and all validators attest to it. However, it is possible for validators to have different views of the head of the chain due to network latency or because a block proposer has equivocated. Therefore, consensus clients require an algorithm to decide which one to favor. The algorithm used in proof-of-stake Ethereum is called LMD-GHOST, and it works by identifying the fork that has the greatest weight of attestations in its history. ## Proof-of-stake and security {#pos-and-security} -The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists in PoS as it does in PoW, but it's even more risky for the attackers. To do so, a attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to ensure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of PoS over PoW is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork, while aso encouraging apps, exchanges and pools to do the same. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. +The threat of a [51% attack](https://www.investopedia.com/terms/1/51-attack.asp) still exists on proof-of-stake as it does on proof-of-work, but it's even riskier for the attackers. An attacker would need 51% of the staked ETH (about $15,000,000,000 USD). They could then use their own attestations to ensure their preferred fork was the one with the most accumulated attestations. The 'weight' of accumulated attestations is what consensus clients use to determine the correct chain, so this attacker would be able to make their fork the canonical one. However, a strength of proof-of-stake over proof-of-work is that the community has flexibility in mounting a counter-attack. For example, the honest validators could decide to keep building on the minority chain and ignore the attacker's fork while encouraging apps, exchanges, and pools to do the same. They could also decide to forcibly remove the attacker from the network and destroy their staked ether. These are strong economic defenses against a 51% attack. -51% attacks are just one flavour of malicious activity.
Bad actors could attempt long-range attacks (although the finality gadget neutralizes this attack vector), short range 'reorgs' (although proposer boosting and attestation deadlines mitigate this), bouncing and balancing attacks (also mitigated by proposer boosting, and these attacks have anyway only been demonstrated under idealized network conditions) or avalanche attacks (neutralized by the fork choice algorithms rule of only considering the latest message). +51% attacks are just one flavor of malicious activity. Bad actors could attempt long-range attacks (although the finality gadget neutralizes this attack vector), short range 'reorgs' (although proposer boosting and attestation deadlines mitigate this), bouncing and balancing attacks (also mitigated by proposer boosting, and these attacks have anyway only been demonstrated under idealized network conditions) or avalanche attacks (neutralized by the fork choice algorithms rule of only considering the latest message). -Overall, PoS as it is implemented on Ethereum has been demonstrated to be more economically secure than PoW. +Overall, proof-of-stake, as it is implemented on Ethereum, has been demonstrated to be more economically secure than proof-of-work. ## Pros and cons {#pros-and-cons} | Pros | Cons | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | -| Staking makes it easier for individuals to participate in securing the network, promoting decentralization. validator node can be run on a normal laptop. Staking pools allow users to stake without having 32 ETH. | PoS is younger and less battle-tested, compared to PoW | -| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, whereas more hashpower means higher % returns with PoW mining. | PoS is more complex to implement than PoW | -| PoS offers greater crypto-economic security than PoW | Users need to run three pieces of software to participate in Ethereum's PoS compared to one for PoW. | +| Staking makes it easier for individuals to participate in securing the network, promoting decentralization. validator node can be run on a normal laptop. Staking pools allow users to stake without having 32 ETH. | Proof-of-stake is younger and less battle-tested compared to proof-of-work | +| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, whereas more hashpower means higher % returns with proof-of-work mining. | Proof-of-stake is more complex to implement than proof-of-work | +| Proof-of-stake offers greater crypto-economic security than proof-of-work | Users need to run three pieces of software to participate in Ethereum's proof-of-stake compared to one for proof-of-work. | | Less issuance of new ether is required to incentivize network participants | | More information about the PoS design rationale is available in this [blog post by Vitalik](https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51). 
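For readers who want a feel for the fixed tempo described above (12-second slots grouped into 32-slot epochs), here is a small illustrative calculation; the genesis time is treated as zero purely for simplicity:

```python
# Illustrative arithmetic for the fixed block tempo: 12-second slots, 32-slot epochs.

SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

def slot_and_epoch(seconds_since_genesis):
    slot = seconds_since_genesis // SECONDS_PER_SLOT
    epoch = slot // SLOTS_PER_EPOCH
    return slot, epoch

# One epoch lasts 32 * 12 = 384 seconds (6.4 minutes).
print(slot_and_epoch(0))      # (0, 0)   genesis slot
print(slot_and_epoch(384))    # (32, 1)  first slot of epoch 1
print(slot_and_epoch(3_600))  # (300, 9) one hour after genesis
```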
diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 5d64aca6f0d..19acdbd99bd 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -8,29 +8,29 @@ incomplete: false Subjectivity in blockchains refers to reliance upon social information to agree on the current state. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. -## Prerequisites +## Prerequisites {#prerequisites} To understand this page it is necessary to first understand the fundamentals of [proof-of-stake](/developers/docs/consensus-mechanisms/pos/). ## What problems does weak subjectivity solve? {#problems-ws-solves} -Subjectivity is inherent to proof-of-stake blockchains because selecting the correct chain from multiple forks is done by counting historical votes. This exposes the blockchain to several attack vectors, including long range attacks whereby nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical one. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so they could be tricked into following an incorrect chain. These attack vectors can be solved by imposing constraints that diminish the subjective aspects of the mechanism - and therefore trust assumptions - to the bare minimum. +Subjectivity is inherent to proof-of-stake blockchains because selecting the correct chain from multiple forks is done by counting historical votes. This exposes the blockchain to several attack vectors, including long-range attacks whereby nodes that participated very early in the chain maintain an alternative fork that they release much later to their own advantage. Alternatively, if 33% of validators withdraw their stake but continue to attest and produce blocks, they might generate an alternative fork that conflicts with the canonical chain. New nodes or nodes that have been offline for a long time might not be aware that these attacking validators have withdrawn their funds, so attackers could trick them into following an incorrect chain. Ethereum can solve these attack vectors by imposing constraints that diminish the subjective aspects of the mechanism—and therefore trust assumptions—to the bare minimum. ## Weak subjectivity checkpoints {#ws-checkpoints} -The way weak subjectivity is implemented in proof-of-stake Ethereum is by using "weak subjectivity checkpoints". These are state roots that all nodes on the network agree belong in the canonical chain. They serve a similar "universal truth" purpose to genesis blocks except that they do not sit at the genesis position in the blockchain. 
The fork choice algorithm trusts that the blockchain state defined in that checkpoint is correct and independently and objectively verifies the chain from that point onwards. The checkpoints act as "revert-limits" because blocks located before weak-subjectivity checkpoints cannot be changed. This undermines long range attacks simply by defining long range forks to be invalid as part of the mechanism design. Ensuring that the weak subjectivity checkpoints separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they are able to withdraw their stake, and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. +Weak subjectivity is implemented in proof-of-stake Ethereum by using "weak subjectivity checkpoints". These are state roots that all nodes on the network agree belong in the canonical chain. They serve the same "universal truth" purpose to genesis blocks, except that they do not sit at the genesis position in the blockchain. The fork choice algorithm trusts that the blockchain state defined in that checkpoint is correct and that it independently and objectively verifies the chain from that point onwards. The checkpoints act as "revert limits" because blocks located before weak-subjectivity checkpoints cannot be changed. This undermines long-range attacks simply by defining long-range forks to be invalid as part of the mechanism design. Ensuring that the weak subjectivity checkpoints are separated by a smaller distance than the validator withdrawal period ensures that a validator that forks the chain is slashed at least some threshold amount before they can withdraw their stake and that new entrants cannot be tricked onto incorrect forks by validators whose stake has been withdrawn. ## Difference between weak subjectivity checkpoints and finalized blocks {#difference-between-ws-and-finalized-blocks} -Finalized blocks and weak subjectivity checkpoints are treated differently by Ethereum nodes. If a node becomes aware of two competing finalized blocks then it is torn between the two - it has no way to identify automatically which is the canonical fork. This is symptomatic of a consensus failure. In contrast, a node simply rejects any block that conflicts with its weak subjectivity checkpoint. From the node's perspective the weak subjectivity checkpoint is represents an absolute truth that cannot be undermined by any new knowledge arriving from its peers. +Finalized blocks and weak subjectivity checkpoints are treated differently by Ethereum nodes. If a node becomes aware of two competing finalized blocks, then it is torn between the two - it has no way to identify automatically which is the canonical fork. This is symptomatic of a consensus failure. In contrast, a node simply rejects any block that conflicts with its weak subjectivity checkpoint. From the node's perspective, the weak subjectivity checkpoint represents an absolute truth that cannot be undermined by new knowledge from its peers. ## How weak is weak? {#how-weak-is-weak} -The subjective aspect of Ethereum's proof-of-stake is the requirement for a recent state (weak subjectivity checkpoint) from a trusted source to sync from. The risk of getting a bad weak subjectivity checkpoint is very low, partly because they can be checked against several independent public sources such as block explorers or multiple nodes. 
There is always some degree of trust required to run any software application, for example trusting that the software developers have produced honest software. +The subjective aspect of Ethereum's proof-of-stake is the requirement for a recent state (weak subjectivity checkpoint) from a trusted source to sync from. The risk of getting a bad weak subjectivity checkpoint is very low because they can be checked against several independent public sources such as block explorers or multiple nodes. However, there is always some degree of trust required to run any software application, for example, trusting that the software developers have produced honest software. -A weak subjectivity checkpoint may even come as part of the client software. Arguably an attacker can corrupt the checkpoint in the software can just as easily corrupt the software itself. There is no real crypto-economic route around this problem, but the impact of untrustworthy developers is minimized in Ethereum by having multiple independent client teams each building equivalent software in different langages, all with a vested interest in maintaining an honest chain. Block explorers may also provide weak subjectivity checkpoints, or at least provide a way to cross-reference checkpoints obtained from elsewhere against an additional source. +A weak subjectivity checkpoint may even come as part of the client software. Arguably an attacker can corrupt the checkpoint in the software and can just as easily corrupt the software itself. There is no real crypto-economic route around this problem, but the impact of untrustworthy developers is minimized in Ethereum by having multiple independent client teams, each building equivalent software in different languages, all with a vested interest in maintaining an honest chain. Block explorers may also provide weak subjectivity checkpoints or a way to cross-reference checkpoints obtained from elsewhere against an additional source. -Finally, checkpoints can simply be requested from other nodes, perhaps another Etheruem user that runs a full node can provide a checkpoint that can then be verified against data from a block explorer. Overall, trusting the provider of a weak subjectivity checkpoint can be considered about as problematic as trusting the client developers. The overall trust required is low. It is also important to note that these considerations only become important in the very unlikely event where a majority of validators collude to produce an alternate fork of the blockchain. Under any other circumstances there is only one Ethereum chain to choose from. +Finally, checkpoints can be requested from other nodes; perhaps another Ethereum user that runs a full node can provide a checkpoint that validators can then verify against data from a block explorer. Overall, trusting the provider of a weak subjectivity checkpoint can be considered as problematic as trusting the client developers. The overall trust required is low. It is important to note that these considerations only become important in the very unlikely event that a majority of validators conspire to produce an alternate fork of the blockchain. Under any other circumstances, there is only one Ethereum chain to choose from. 
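As a sketch of the cross-checking idea described above, the snippet below only accepts a weak subjectivity checkpoint when several independent providers report the same one. The provider names, the hard-coded values, and the fetch function are placeholders invented for this example, not real APIs:

```python
# Illustrative only: real checkpoint sync is handled by consensus clients.

def fetch_checkpoint(source):
    # Placeholder: in practice this might be a friend's node, a block explorer,
    # or a checkpoint published by a client team.
    published = {
        "friend-node": "0xabc...:epoch 135000",
        "block-explorer": "0xabc...:epoch 135000",
        "client-team": "0xabc...:epoch 135000",
    }
    return published[source]

def checkpoint_agreed(sources):
    checkpoints = {fetch_checkpoint(s) for s in sources}
    # Only sync from the checkpoint if every independent source reports the same one.
    return checkpoints.pop() if len(checkpoints) == 1 else None

print(checkpoint_agreed(["friend-node", "block-explorer", "client-team"]))
```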
## Further Reading {#further-reading} From bd8b9ec55ca147775a887706188c61b61c0bc6de Mon Sep 17 00:00:00 2001 From: Joshua <62268199+minimalsm@users.noreply.github.com> Date: Mon, 2 May 2022 13:10:09 +0100 Subject: [PATCH 18/26] Update src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md --- .../developers/docs/consensus-mechanisms/pos/gasper/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index af6851af1e4..2adb683b3a8 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -18,7 +18,7 @@ Gasper sits on top of a proof-of-stake blockchain where nodes provide ether as a [Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted (unless there has been a critical consensus failure). Finalized blocks can be thought of as information the blockchain is certain about. In order for a block to be finalized it has to pass through a two-step uprgade procedure. First, 2/3 of the total staked ether must have voted in favor of that block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted but technically they could be. The justified block is then upgraded to "finalized" when another block is justified on top of it. This is a commitment to include the block in the canonical chain so that it cannot be reverted unless an attacker destroys millions of ether (billions of $USD). -These block upgrades do not happen in every slot. Instead, only epoch-boundary blocks can be justified and finalized. These blocks are known as "checkpoints". Upgrading considers pairs of checkpoints. A "supermajority link" must exist between two successive checkpoints (i.e. 2/3 of the total staked ether voting that checkpoint B is the correct descendant of checkpoint A) in order to upgrade the less recent checkpoint to finalized and the more recent block to justified. +These block upgrades do not happen in every slot. Instead, only epoch-boundary blocks can be justified and finalized. These blocks are known as "checkpoints". Upgrading considers pairs of checkpoints. A "supermajority link" must exist between two successive checkpoints (i.e. two-thirds of the total staked ether voting that checkpoint B is the correct descendant of checkpoint A) in order to upgrade the less recent checkpoint to finalized and the more recent block to justified.
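As a rough illustration of the supermajority link test described above (a simplification for intuition, not the consensus spec's actual accounting), the check amounts to summing the stake behind a given source and target pair of votes and comparing it with two-thirds of the total stake:

```python
from fractions import Fraction

# Simplified sketch: a "supermajority link" from checkpoint A to checkpoint B
# exists when validators controlling at least 2/3 of the total staked ether
# have voted with source A and target B.
def has_supermajority_link(votes, balances, source, target):
    """votes: {validator_index: (source_checkpoint, target_checkpoint)}
    balances: {validator_index: stake in ether}"""
    total = sum(balances.values())
    in_favor = sum(bal for v, bal in balances.items()
                   if votes.get(v) == (source, target))
    return Fraction(in_favor, total) >= Fraction(2, 3)

# Three equal validators, two of them vote A -> B: exactly 2/3, so the link exists.
print(has_supermajority_link({0: ("A", "B"), 1: ("A", "B"), 2: ("A", "C")},
                             {0: 32, 1: 32, 2: 32}, "A", "B"))  # True
```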
Because finality requires a two-thirds agreement that a block is canonical, an attacker cannot possibly create an alternative finalized chain without: From f7f62418dc7558bc8be3424dfcd8dcd87dd5e04e Mon Sep 17 00:00:00 2001 From: Joshua <62268199+minimalsm@users.noreply.github.com> Date: Mon, 2 May 2022 13:11:33 +0100 Subject: [PATCH 19/26] Update src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md --- .../developers/docs/consensus-mechanisms/pos/gasper/index.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 2adb683b3a8..1928df1d7ed 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -16,7 +16,10 @@ Gasper sits on top of a proof-of-stake blockchain where nodes provide ether as a ## What is finality? {#what-is-finality} -[Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf) is an algorithm that finalizes blocks. This means upgrading certain blocks so that they cannot be reverted (unless there has been a critical consensus failure). Finalized blocks can be thought of as information the blockchain is certain about. In order for a block to be finalized it has to pass through a two-step uprgade procedure. First, 2/3 of the total staked ether must have voted in favor of that block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted but technically they could be. The justified block is then upgraded to "finalized" when another block is justified on top of it. This is a commitment to include the block in the canonical chain so that it cannot be reverted unless an attacker destroys millions of ether (billions of $USD). +[Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf) is an algorithm that finalizes blocks. Finalizing a block means upgrading a block so that it cannot be reverted (unless there has been a critical consensus failure). Finalized blocks can be thought of as information the blockchain is certain about. A block must pass through a two-step upgrade procedure for a block to be finalized: + +1. Two-thirds of the total staked ether must have voted in favor of that block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted, but they can be under certain conditions. +2. When another block is justified on top of a justified block, it is upgraded to "finalized". Finalizing a block is a commitment to include the block in the canonical chain. It cannot be reverted unless an attacker destroys millions of ether (billions of $USD). These block upgrades do not happen in every slot. Instead, only epoch-boundary blocks can be justified and finalized. These blocks are known as "checkpoints". Upgrading considers pairs of checkpoints. A "supermajority link" must exist between two successive checkpoints (i.e. two-thirds of the total staked ether voting that checkpoint B is the correct descendant of checkpoint A) in order to upgrade the less recent checkpoint to finalized and the more recent block to justified.
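A toy model of that two-step procedure might look like the sketch below. It is heavily simplified: it assumes a straight chain of consecutive checkpoints and ignores the additional justification and finalization cases the real protocol handles.

```python
# Toy model only: walk a chain of epoch-boundary checkpoints, justifying a
# checkpoint when a supermajority link points to it and finalizing its parent
# once a justified checkpoint sits on top of it.
def process_checkpoints(checkpoints, link_has_supermajority):
    """checkpoints: epoch-boundary block roots, oldest first.
    link_has_supermajority(a, b): True if a supermajority link a -> b exists."""
    status = {cp: "pending" for cp in checkpoints}
    status[checkpoints[0]] = "justified"  # assume a justified starting point
    for parent, child in zip(checkpoints, checkpoints[1:]):
        if status[parent] == "justified" and link_has_supermajority(parent, child):
            status[child] = "justified"   # step 1: 2/3 of stake voted for child
            status[parent] = "finalized"  # step 2: a justified block now sits on top
    return status

print(process_checkpoints(["cp0", "cp1", "cp2"], lambda a, b: True))
# {'cp0': 'finalized', 'cp1': 'finalized', 'cp2': 'justified'}
```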
From 85309345d6c3d0971e5208a549500cd64a4d4989 Mon Sep 17 00:00:00 2001 From: Joseph Cook <33655003+jmcook1186@users.noreply.github.com> Date: Tue, 3 May 2022 15:50:09 +0100 Subject: [PATCH 20/26] add prerequisites --- .../developers/docs/consensus-mechanisms/pos/gasper/index.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 1928df1d7ed..ef2d913b64d 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -10,6 +10,11 @@ Gasper is a combination of Casper the Friendly Finality Gadget (Casper-FGG) and **note** that the original definition of Casper-FFG was updated slightly for inclusion in Gasper. On this page we consider the updated version. +## Prerequisites + +To understand this material it is necessary to read the introductory page on [proof-of-stake](/developers/docs/consensus-mechanisms/pos/). + + ## The role of Gasper {#role-of-gasper} Gasper sits on top of a proof-of-stake blockchain where nodes provide ether as a security deposit that can be destroyed if they are lazy or dishonest in proposing or validating blocks. Gasper is the mechanism defining how validators get rewarded and punished, decide which blocks to accept and reject, and which fork of the blockchain to build on. From 12973fff2283c82e018e718160ae6f89a30a4ae6 Mon Sep 17 00:00:00 2001 From: Joseph Cook <33655003+jmcook1186@users.noreply.github.com> Date: Tue, 3 May 2022 15:54:09 +0100 Subject: [PATCH 21/26] Apply suggestions from @wackerow code review Co-authored-by: Paul Wackerow <54227730+wackerow@users.noreply.github.com> --- .../developers/docs/consensus-mechanisms/pos/gasper/index.md | 4 ++-- src/content/developers/docs/consensus-mechanisms/pos/index.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index ef2d913b64d..55e4382d897 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -8,7 +8,7 @@ incomplete: false Gasper is a combination of Casper the Friendly Finality Gadget (Casper-FGG) and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that upgrades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork choice algorithm uses accumulated votes to ensure that nodes can easily select the correct one when forks arise in the blockchain. -**note** that the original definition of Casper-FFG was updated slightly for inclusion in Gasper. On this page we consider the updated version. +**Note** that the original definition of Casper-FFG was updated slightly for inclusion in Gasper. On this page we consider the updated version. ## Prerequisites @@ -37,7 +37,7 @@ The first condition arises because two-thirds of the staked ether is required to ### Incentives and Slashing {#incentives-and-slashing} -Validators get rewarded for honestly proposing and validating blocks. The rewards ether added to their stake. 
On the other hand, validators that are absent and fail to act when called upon miss out on these rewards and sometimes lose a small portion of their existing stake. However, the penalties for being offline are small and, in most cases, amount to opportunity costs of missing rewards. However, some validator actions are very difficult to do accidentally and signify some malicious intent, such as proposing multiple blocks for the same slot, attesting to multiple blocks for the same slot, or contradicting previous checkpoint votes. These are "slashable" behaviors that are penalized more harshly—slashing results in some portion of the validator's stake being destroyed and the validator being removed from the network. This process takes 36 days. On Day 1, there is an initial penalty of up to 0.5 ETH. Then the slashed validator's ether slowly drains away across the exit period, but on Day 18, they receive a "correlation penalty", which is larger when more validators are slashed around the same time. The maximum penalty is the entire stake. These rewards and penalties are designed to incentivize honest validators and disincentivize attacks on the network. +Validators get rewarded for honestly proposing and validating blocks. Ether is rewarded and added to their stake. On the other hand, validators that are absent and fail to act when called upon miss out on these rewards and sometimes lose a small portion of their existing stake. However, the penalties for being offline are small and, in most cases, amount to opportunity costs of missing rewards. However, some validator actions are very difficult to do accidentally and signify some malicious intent, such as proposing multiple blocks for the same slot, attesting to multiple blocks for the same slot, or contradicting previous checkpoint votes. These are "slashable" behaviors that are penalized more harshly—slashing results in some portion of the validator's stake being destroyed and the validator being removed from the network of validators. This process takes 36 days. On Day 1, there is an initial penalty of up to 0.5 ETH. Then the slashed validator's ether slowly drains away across the exit period, but on Day 18, they receive a "correlation penalty", which is larger when more validators are slashed around the same time. The maximum penalty is the entire stake. These rewards and penalties are designed to incentivize honest validators and disincentivize attacks on the network. ### Inactivity Leak {#inactivity-leak} diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index 9de9e2c5caf..ada2be74350 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -29,7 +29,7 @@ Proof-of-stake comes with a number of improvements to the proof-of-work system: To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client, and a validator. On depositing their ether, the user joins an activation queue that limits the rate of new validators joining the network. Once activated, validators receive new blocks from peers on the Ethereum network. The transactions delivered in the block are re-executed, and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favor of that block across the network. 
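Returning to the slashing rules described earlier in this patch, the shape of the "correlation penalty" can be seen with a little arithmetic. The sketch below follows the general form used by the consensus layer (a penalty proportional to the fraction of total stake slashed in the same window, capped at the whole balance), but the multiplier and figures are illustrative rather than exact spec constants.

```python
# Illustrative arithmetic only; see the consensus spec for the exact formula.
def correlation_penalty(balance, slashed_in_window, total_staked, multiplier=3):
    """balance: the slashed validator's stake (ETH)
    slashed_in_window: total stake slashed around the same time (ETH)
    total_staked: total stake on the network (ETH)"""
    fraction_slashed = slashed_in_window / total_staked
    return min(balance, balance * multiplier * fraction_slashed)

# A lone validator slashed on their own loses almost nothing extra:
print(correlation_penalty(32, 32, 10_000_000))               # ~0.0003 ETH
# If a third of all stake is slashed together, each loses their whole balance:
print(correlation_penalty(32, 10_000_000 / 3, 10_000_000))   # 32 ETH (the entire stake)
```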
-Whereas under proof-of-work, the timing of blocks is determined by the mining difficulty, in proof-of-stake, the tempo is fixed. Time in proof-of-stake Ethereum is divided into slots (12 seconds) and epochs (32 slots). In every slot, a committee of validators is randomly chosen, whose votes are used to determine the validity of the block proposed in that slot. Also, one validator is randomly selected to be a block proposer in every slot. This validator is responsible for creating a new block and sending it out to other nodes on the network. +Whereas under proof-of-work, the timing of blocks is determined by the mining difficulty, in proof-of-stake, the tempo is fixed. Time in proof-of-stake Ethereum is divided into slots (12 seconds) and epochs (32 slots). One validator is randomly selected to be a block proposer in every slot. This validator is responsible for creating a new block and sending it out to other nodes on the network. Also in every slot, a committee of validators is randomly chosen, whose votes are used to determine the validity of the block being proposed. ## Finality {#finality} From d6ea7025070331c8c5274d3c33c3774c25a5e9e7 Mon Sep 17 00:00:00 2001 From: Joseph Cook <33655003+jmcook1186@users.noreply.github.com> Date: Tue, 3 May 2022 16:02:39 +0100 Subject: [PATCH 22/26] rearrange finality paragraph as @wackerow comment --- .../developers/docs/consensus-mechanisms/pos/gasper/index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index 55e4382d897..e5e6e110d7d 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -21,7 +21,7 @@ Gasper sits on top of a proof-of-stake blockchain where nodes provide ether as a ## What is finality? {#what-is-finality} -[Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf) is an algorithm that finalizes blocks. Finalizing a block means upgrading a block so that it cannot be reverted (unless there has been a critical consensus failure). Finalized blocks can be thought of as information the blockchain is certain about. A block must pass through a two-step upgrade procedure for a block to be finalized: +Finality is a property of certain blocks that means they cannot be reverted unless there has been a critical consensus failure and an attacker has destroyed at least 1/3 of the total staked ether. Finalized blocks can be thought of as information the blockchain is certain about. A block must pass through a two-step upgrade procedure for a block to be finalized: 1. Two-thirds of the total staked ether must have voted in favor of that block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted, but they can be under certain conditions. 2. When another block is justified on top of a justified block, it is upgraded to "finalized". Finalizing a block is a commitment to include the block in the canonical chain. It cannot be reverted unless an attacker destroys millions of ether (billions of $USD). @@ -33,7 +33,7 @@ Because finality requires a two-thirds agreement that a block is canonical, an a 1. Owning or manipulating two-thirds of the total staked ether. 2. Destroying at least one-third of the total staked ether. 
-The first condition arises because two-thirds of the staked ether is required to finalize a chain. The second condition arises because if two-thirds of the total stake has voted in favor of both forks, then one-third must have voted on both. Double-voting is a slashing condition that would be maximally punished, and one-third of the total stake would be destroyed. As of May 2022, this requires an attacker to burn around $10 billion worth of ether. +The first condition arises because two-thirds of the staked ether is required to finalize a chain. The second condition arises because if two-thirds of the total stake has voted in favor of both forks, then one-third must have voted on both. Double-voting is a slashing condition that would be maximally punished, and one-third of the total stake would be destroyed. As of May 2022, this requires an attacker to burn around $10 billion worth of ether. The algorithm that justifies and finalizes blocks in Gasper is a slightly modified form of [Casper the Friendly Finality Gadget (Casper-FFG)](https://arxiv.org/pdf/1710.09437.pdf). ### Incentives and Slashing {#incentives-and-slashing} From 17ddcf741a8467cbdac270a66d00a037e7e0170b Mon Sep 17 00:00:00 2001 From: Joseph Cook <33655003+jmcook1186@users.noreply.github.com> Date: Tue, 3 May 2022 16:10:51 +0100 Subject: [PATCH 23/26] apply updates from code review --- .../developers/docs/consensus-mechanisms/pos/index.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index ada2be74350..36c82092b77 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -6,7 +6,7 @@ sidebar: true incomplete: false --- -Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan because PoS is demonstrably more secure than PoW, uses drastically less energy, and enables new scaling solutions to be implemented. However, proof-of-stake is also more complex than proof-of-work. Refining the proof-of-stake mechanism has taken years of research and development, and the challenge now is implementing it on the live Ethereum network - a process known as ["The Merge"](/upgrades/merge/). +Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan because PoS is thought to be more secure than PoW, uses drastically less energy, and enables new scaling solutions to be implemented. However, proof-of-stake is also more complex than proof-of-work. Refining the proof-of-stake mechanism has taken years of research and development, and the challenge now is to implement it on the live Ethereum network - a process known as ["The Merge"](/upgrades/merge/). ## Prerequisites {#prerequisites} @@ -33,15 +33,15 @@ Whereas under proof-of-work, the timing of blocks is determined by the mining di ## Finality {#finality} -A transaction has "finality" in distributed networks when it's part of a block that can't change without a significant amount of ether getting burned. On proof-of-stake Ethereum, this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that it considers to be valid. 
If a pair of checkpoints attracts votes representing at least two-thirds of the total staked ether, the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". To revert a finalized block, an attacker would commit to losing at least one-third of the total supply of staked ether (currently around $10,000,000,000). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). Since finality requires a two-thirds majority, an attacker could prevent the network from reaching finality by voting with one-third of the total stake. There is a mechanism to defend against this: the inactivity leak. This activates whenever the chain fails to finalize for more than four epochs. The inactivity leak bleeds away the staked ether from validators voting against the majority, allowing the majority to regain a two-thirds majority and finalize the chain. +A transaction has "finality" in distributed networks when it's part of a block that can't change without a significant amount of ether getting burned. On proof-of-stake Ethereum, this is managed using "checkpoint" blocks. The first block in each epoch is a checkpoint. Validators vote for pairs of checkpoints that they consider to be valid. If a pair of checkpoints attracts votes representing at least two-thirds of the total staked ether, the checkpoints are upgraded. The more recent of the two (target) becomes "justified". The earlier of the two is already justified because it was the "target" in the previous epoch. Now it is upgraded to "finalized". To revert a finalized block, an attacker would commit to losing at least one-third of the total supply of staked ether (currently around $10,000,000,000). The exact reason for this is explained [in this Ethereum Foundation blog post](https://blog.ethereum.org/2016/05/09/on-settlement-finality/). Since finality requires a two-thirds majority, an attacker could prevent the network from reaching finality by voting with one-third of the total stake. There is a mechanism to defend against this: the [inactivity leak](https://arxiv.org/pdf/2003.03052.pdf). This activates whenever the chain fails to finalize for more than four epochs. The inactivity leak bleeds away the staked ether from validators voting against the majority, allowing the majority to regain a two-thirds majority and finalize the chain. ## Crypto-economic security {#crypto-economic-security} -Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for users to attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviors that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on how many validators are also being slashed at around the same time.
This is known as the "correlation penalty", and it can be minor (~1% stake for a single validator slashed on their own) or can result in 100% of the validator's stake getting destroyed (mass slashing event). It is imposed halfway through a forced exit period that begins with an immediate penalty (up to 0.5 ETH) on Day 1, the correlation penalty on Day 18, and finally, ejection from the network on Day 36. They receive minor attestation penalties every day because they are present on the network but not submitting votes. This all means a coordinated attack would be very costly for the attacker. +Running a validator is a commitment. The validator is expected to maintain sufficient hardware and connectivity to participate in block validation and proposal. In return, the validator is paid in ether (their staked balance increases). On the other hand, participating as a validator also opens new avenues for users to attack the network for personal gain or sabotage. To prevent this, validators miss out on ether rewards if they fail to participate when called upon, and their existing stake can be destroyed if they behave dishonestly. There are two primary behaviors that can be considered dishonest: proposing multiple blocks in a single slot (equivocating) and submitting contradictory attestations. The amount of ether slashed depends on how many validators are also being slashed at around the same time. This is known as the ["correlation penalty"](https://arxiv.org/pdf/2003.03052.pdf), and it can be minor (~1% stake for a single validator slashed on their own) or can result in 100% of the validator's stake getting destroyed (mass slashing event). It is imposed halfway through a forced exit period that begins with an immediate penalty (up to 0.5 ETH) on Day 1, the correlation penalty on Day 18, and finally, ejection from the network on Day 36. They receive minor attestation penalties every day because they are present on the network but not submitting votes. This all means a coordinated attack would be very costly for the attacker. ## Fork choice {#fork-choice} -When the network performs optimally and honestly, there is only ever one new block at the head of the chain, and all validators attest to it. However, it is possible for validators to have different views of the head of the chain due to network latency or because a block proposer has equivocated. Therefore, consensus clients require an algorithm to decide which one to favor. The algorithm used in proof-of-stake Ethereum is called LMD-GHOST, and it works by identifying the fork that has the greatest weight of attestations in its history. +When the network performs optimally and honestly, there is only ever one new block at the head of the chain, and all validators attest to it. However, it is possible for validators to have different views of the head of the chain due to network latency or because a block proposer has equivocated. Therefore, consensus clients require an algorithm to decide which one to favor. The algorithm used in proof-of-stake Ethereum is called [LMD-GHOST](https://arxiv.org/pdf/2003.03052.pdf), and it works by identifying the fork that has the greatest weight of attestations in its history. 
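The core of that fork choice rule can be shown in miniature. The sketch below performs only the "greatest weight of attestations" walk; the latest-message filtering and other refinements of the real algorithm are omitted, and the block names and weights are invented.

```python
# Miniature GHOST-style head selection: from a starting block, repeatedly
# descend into the child whose subtree carries the most attestation weight.
def ghost_head(children, weight, start):
    """children: {block: [child blocks]}; weight: {block: attestation weight}"""
    def subtree_weight(block):
        return weight.get(block, 0) + sum(subtree_weight(c)
                                          for c in children.get(block, []))
    head = start
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head

# Block A has two competing children; C's subtree carries more weight than B,
# so the head of the chain is found by following C.
children = {"A": ["B", "C"], "C": ["D"]}
weight = {"B": 5, "C": 3, "D": 4}
print(ghost_head(children, weight, "A"))  # -> "D"
```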
## Proof-of-stake and security {#pos-and-security} @@ -56,11 +56,10 @@ Overall, proof-of-stake, as it is implemented on Ethereum, has been demonstrated | Pros | Cons | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | | Staking makes it easier for individuals to participate in securing the network, promoting decentralization. validator node can be run on a normal laptop. Staking pools allow users to stake without having 32 ETH. | Proof-of-stake is younger and less battle-tested compared to proof-of-work | -| Staking is more decentralized. It allows for increased participation, and more nodes doesn't mean increased % returns, whereas more hashpower means higher % returns with proof-of-work mining. | Proof-of-stake is more complex to implement than proof-of-work | +| Staking is more decentralized. Economies of scale do not apply in the same way that they do for PoW mining. | Proof-of-stake is more complex to implement than proof-of-work | | Proof-of-stake offers greater crypto-economic security than proof-of-work | Users need to run three pieces of software to participate in Ethereum's proof-of-stake compared to one for proof-of-work. | | Less issuance of new ether is required to incentivize network participants | | -More information about the PoS design rationale is available in this [blog post by Vitalik](https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51). ## Further reading {#further-reading} From 181ecc05389f28223658be11ef36f9647c3ff5b7 Mon Sep 17 00:00:00 2001 From: Joseph Cook <33655003+jmcook1186@users.noreply.github.com> Date: Mon, 16 May 2022 10:53:28 +0100 Subject: [PATCH 24/26] add info to intro as per @samajammin suggestions --- src/content/developers/docs/consensus-mechanisms/pos/index.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/index.md b/src/content/developers/docs/consensus-mechanisms/pos/index.md index 36c82092b77..544f91aa669 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/index.md @@ -3,10 +3,9 @@ title: Proof-of-stake (PoS) description: An explanation of the proof-of-stake consensus protocol and its role in Ethereum. lang: en sidebar: true -incomplete: false --- -Ethereum is moving to a consensus mechanism called proof-of-stake (PoS) from [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/). This was always the plan because PoS is thought to be more secure than PoW, uses drastically less energy, and enables new scaling solutions to be implemented. However, proof-of-stake is also more complex than proof-of-work. Refining the proof-of-stake mechanism has taken years of research and development, and the challenge now is to implement it on the live Ethereum network - a process known as ["The Merge"](/upgrades/merge/). +A [consensus mechanism](/developers/docs/consensus-mechanisms/) is a set of rules that incentives that enable nodes to come to agreement about the state of the Ethereum network. 
There are several different classes of consensus mechanism that have been implemented on various blockchains, including [proof-of-work (PoW)](/developers/docs/consensus-mechanisms/pow/), proof-of-stake (PoS) and proof-of-authority (PoA). Ethereum has used PoW since its genesis, but is moving to PoS. This was always the plan because PoS is thought to be more secure than PoW, uses drastically less energy, and enables new scaling solutions to be implemented. However, proof-of-stake is also more complex than proof-of-work and refining the mechanism has taken years of research and development. The challenge now is to implement PoS on the live Ethereum network - a process known as ["The Merge"](/upgrades/merge/). ## Prerequisites {#prerequisites} From 824c9a85c589a3b624ff296d18060224bfe8a7d0 Mon Sep 17 00:00:00 2001 From: Joseph Cook <33655003+jmcook1186@users.noreply.github.com> Date: Mon, 16 May 2022 10:55:22 +0100 Subject: [PATCH 25/26] rm `incomplete: false` from header --- .../developers/docs/consensus-mechanisms/pos/gasper/index.md | 1 - 1 file changed, 1 deletion(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md index e5e6e110d7d..fe0a8df5eda 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/gasper/index.md @@ -3,7 +3,6 @@ title: Gasper description: An explanation of the Gasper proof-of-stake mechanism. lang: en sidebar: true -incomplete: false --- Gasper is a combination of Casper the Friendly Finality Gadget (Casper-FGG) and the LMD-GHOST fork choice algorithm. Together these components form the consensus mechanism securing proof-of-stake Ethereum. Casper is the mechanism that upgrades certain blocks to "finalized" so that new entrants into the network can be confident that they are syncing the canonical chain. The fork choice algorithm uses accumulated votes to ensure that nodes can easily select the correct one when forks arise in the blockchain. From 1ea9ebf239e09c88aed753df7e74fd763a4e0ff0 Mon Sep 17 00:00:00 2001 From: Paul Wackerow <54227730+wackerow@users.noreply.github.com> Date: Mon, 16 May 2022 20:05:47 -0700 Subject: [PATCH 26/26] remove incomplete: false --- .../docs/consensus-mechanisms/pos/weak-subjectivity/index.md | 1 - .../pow/mining-algorithms/dagger-hashamoto/index.md | 1 - .../consensus-mechanisms/pow/mining-algorithms/ethash/index.md | 1 - .../docs/consensus-mechanisms/pow/mining-algorithms/index.md | 1 - .../developers/docs/consensus-mechanisms/pow/mining/index.md | 1 - 5 files changed, 5 deletions(-) diff --git a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md index 19acdbd99bd..1340e33558c 100644 --- a/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pos/weak-subjectivity/index.md @@ -3,7 +3,6 @@ title: Weak subjectivity description: An explanation of weak subjectivity and its role in PoS Ethereum. lang: en sidebar: true -incomplete: false --- Subjectivity in blockchains refers to reliance upon social information to agree on the current state. There may be multiple valid forks that are chosen from according to information gathered from other peers on the network. 
The converse is objectivity which refers to chains where there is only one possible valid chain that all nodes will necessarily agree upon by applying their coded rules. There is also a third state, known as weak subjectivity. This refers to a chain that can progress objectively after some initial seed of information is retrieved socially. diff --git a/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/dagger-hashamoto/index.md b/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/dagger-hashamoto/index.md index d227087a977..3a07a0a0060 100644 --- a/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/dagger-hashamoto/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/dagger-hashamoto/index.md @@ -3,7 +3,6 @@ title: Dagger-Hashamoto description: A detailed look at the Dagger-Hashamoto algorithm. lang: en sidebar: true -incomplete: false --- Dagger-Hashimoto was the original research implementation and specification for Ethereum's mining algorithm. Dagger-Hashimoto was superseded by [Ethash](#ethash). diff --git a/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/ethash/index.md b/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/ethash/index.md index 9fa431de23a..57165179ac1 100644 --- a/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/ethash/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/ethash/index.md @@ -3,7 +3,6 @@ title: Ethash description: A detailed look at the Ethash algorithm. lang: en sidebar: true -incomplete: false --- [Ethash](https://github.com/ethereum/wiki/wiki/Ethash) is a modified version of the [Dagger-Hashimoto](/developers/docs/consensus-mechanisms/pow/mining-algorithms/dagger-hashamoto) algorithm. Ethash proof-of-work is [memory hard](https://wikipedia.org/wiki/Memory-hard_function), which was thought to make the algorithm ASIC resistant, but ASIC Ethash-mining has since been shown to be possible. Memory hardness is achieved with a proof of work algorithm that requires choosing subsets of a fixed resource dependent on the nonce and block header. This resource (a few gigabytes in size) is called a DAG. The DAG is changed every 30000 blocks, a 125-hour window called an epoch (roughly 5.2 days) and takes a while to generate. Since the DAG only depends on block height, it can be pre-generated but if it's not, the client needs to wait until the end of this process to produce a block. If clients do not pre-generate and cache DAGs ahead of time the network may experience massive block delay on each epoch transition. Note that the DAG does not need to be generated for verifying the proof-of-work essentially allowing for verification with both low CPU and small memory. diff --git a/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/index.md b/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/index.md index a5c7632d2f6..0f7ef8a6533 100644 --- a/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pow/mining-algorithms/index.md @@ -3,7 +3,6 @@ title: Mining algorithms description: A detailed look at the algorithms used for Ethereum mining. lang: en sidebar: true -incomplete: false --- Ethereum mining has used two mining algorithms, Dagger Hashimoto and Ethash. Dagger Hashimoto was never used to to mine Ethereum, being superseded by Ethash before mainet launched. 
It was an R&D mining algorithm that paved the way for Ethash. However, it has historical significance as an important innovation in Ethereum's development. Proof-of-work mining itself will be deprecated in favor of proof-of-stake during [The Merge](/merge/), which is forecast to happen in Q3-Q4 2022. diff --git a/src/content/developers/docs/consensus-mechanisms/pow/mining/index.md b/src/content/developers/docs/consensus-mechanisms/pow/mining/index.md index da7b25254a1..427f86f6cf7 100644 --- a/src/content/developers/docs/consensus-mechanisms/pow/mining/index.md +++ b/src/content/developers/docs/consensus-mechanisms/pow/mining/index.md @@ -3,7 +3,6 @@ title: Mining description: An explanation of how mining works in Ethereum and how it helps keep Ethereum secure and decentralized. lang: en sidebar: true -incomplete: false --- ## Prerequisites {#prerequisites}
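As a quick sanity check of the Ethash DAG schedule quoted in the section above (assuming an average block time of roughly 15 seconds, which is an approximation rather than a protocol constant), the epoch arithmetic works out as follows:

```python
# Back-of-the-envelope check of the DAG epoch figures; 15 s is an assumed
# average block time, not a protocol constant.
EPOCH_LENGTH = 30_000        # blocks per DAG epoch
AVG_BLOCK_TIME = 15          # seconds (assumed)

def dag_epoch(block_number):
    return block_number // EPOCH_LENGTH

hours = EPOCH_LENGTH * AVG_BLOCK_TIME / 3600
print(dag_epoch(14_000_000))   # block 14,000,000 falls in DAG epoch 466
print(hours, "hours, or about", round(hours / 24, 1), "days per epoch")  # 125.0 hours, ~5.2 days
```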