
Counterfactual Terminology


Introduction to counterfactual terminology

When discussing state channels, one often has to talk and reason about "things that can/could happen but don't/didn't". The terminology used to discuss these scenarios and methods is called "counterfactual terminology". In philosophical parlance, a counterfactual conditional is a statement of the form "if X then Y" where X isn't actually the case, making it "counter to fact". In other words, it is a statement of the form "if X were the case, then Y would be too, even though X isn't". In the context of blockchains, the term "counterfactual" has been adopted and slightly modified to refer to things that could happen on the blockchain but don't. In other words, we use the term "counterfactual" to make statements roughly of the form "if X had happened on the blockchain, then Y would have, even though X didn't". Below are further explanations of a few specific phrases that arise naturally from this paradigm (although it is easy to see how the same concept extends beyond these phrases in a similar fashion).

Counterfactual Verification

Counterfactual verification is a technique used to verify a (possibly quite extensive) computation via a blockchain without the nodes of the blockchain needing to actually perform the computation themselves. Instead, the contract performing the counterfactual verification creates a set of incentives under which certain responses would clearly be submitted if certain conditions held. Based on which responses are actually received, the contract then determines the result of the computation by making reasonable assumptions about which actions the various parties would have taken in response to those incentives.

As a concrete example, suppose "A" wants to know the answer to a question, where the answer can be computed fairly cheaply by a computer with the appropriate hardware, but where the gas costs of performing the calculation on chain would be substantial (for example, verifying that the scrypt hash of some data matches a specific value). A has created a contract that could do the computation for her across many blocks at a cost of about 50 ether, but she would much rather avoid spending such a high amount in transaction fees. So, A offers a reward of 10 ether to anyone who can provide her with the correct answer. In order to accept a proposed answer, she requires that anyone who submits one deposit an amount substantially higher than the gas costs of performing such a verification: perhaps 1000 ether. Finally, A offers a reward of 950 ether to anyone who catches someone else submitting an incorrect answer, and requires such claims to include a 60 ether deposit.

If you have been tracking the amounts carefully, you can see that A only needs to deposit 10 ether in the contract that she publishes. After that, the logic works as follows:

  1. Presuming that 10 ether is worth substantially more than the cost of performing the computation (plus the capital cost of tying up the deposit for the duration of the process), it is very likely that some "bystander" will be willing to submit a claimed result. This claim may be either true or false, but if it is true they are guaranteed that no one can catch them submitting an incorrect answer. Either way, suppose they submit a claim (along with their 1000 ether deposit).

  2. At this point, the contract will follow a simple rule: if no one contests the answer during a specified period, release the deposit and pay the submitter the 10 ether reward. If someone does contest the answer, spend the full 50 ether on computing the result on chain, and then refund only the party the computation proves right. Whichever party loses forfeits their deposit, which pays for this on-chain computation (and, when the submitter loses, the 950 ether challenge reward as well). Finally, pay out the 10 ether reward if the answer was proven correct, or else the 950 ether reward if the computation proved the answer was wrong.

  3. Given that such a set of rules and incentives exists, it is also very likely that another "bystander" will check any claimed result to see whether it is wrong, since the reward for catching an error is much higher than the cost of checking. Knowing this, however, it is very unlikely that the first bystander will risk skipping the computation and submitting a blind or incorrect guess. So the equilibrium result for this situation (assuming our numbers have been set correctly) should be that both the initial submitter and the second "double-checker" perform the computation correctly, giving a contract that always obtains the result at a much lower cost than the raw gas cost of doing it on chain. In actuality, the equilibrium depends on the exact costs, probabilities, and level of "computational liquidity" amongst bystanders; a full game theoretic analysis is beyond the scope of this example, but the rough conditions sketched just after this list give a flavour of what the numbers have to satisfy.
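As one rough way of making those conditions explicit (a sketch only; the symbols c and p are introduced here for illustration and appear nowhere in the original write-up), let c be a bystander's off-chain cost of performing the computation and p the probability that an incorrect submission actually gets checked and challenged. Ignoring the capital cost of the locked deposits, the chosen amounts then need to satisfy roughly:

```latex
% Rough incentive conditions for the example above (illustrative only).
\begin{align*}
  &\text{someone is willing to submit an answer:} && 10 > c \\
  &\text{someone is willing to double-check it:}  && 950 \gg c \\
  &\text{blind guessing is a losing strategy:}    && (1 - p)\cdot 10 - p \cdot 1000 < 10 - c
     \;\Longleftrightarrow\; p \cdot 1010 > c
\end{align*}
```

With a large deposit, even a modest probability of being checked makes guessing unattractive, which is why the 1000 ether stake can be so much larger than the 10 ether reward.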

The end result is that A has verified the computation without actually checking it on chain. From the blockchain's perspective, she has "counterfactually verified" the computation.
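To make the mechanics concrete, here is a small toy model of the challenge game in Python (standing in for what would really be an on-chain contract). The class, the method names, and the exact settlement rules in the dispute branch are assumptions filled in around the figures above, not a description of any deployed contract.

```python
# A self-contained toy model (Python standing in for an on-chain contract) of
# the challenge game described above. Names and settlement details are
# illustrative assumptions built around the figures in the example.

from dataclasses import dataclass, field
from typing import Dict, Optional

ANSWER_REWARD = 10       # A's reward for a correct answer (her only deposit)
SUBMIT_DEPOSIT = 1000    # stake required from anyone claiming an answer
CHALLENGE_REWARD = 950   # reward for catching an incorrect answer
CHALLENGE_DEPOSIT = 60   # stake required from a challenger
ONCHAIN_COST = 50        # what running the computation on chain would cost


@dataclass
class VerificationGame:
    correct_answer: int                       # stands in for the expensive computation
    balances: Dict[str, int] = field(default_factory=dict)
    claimed_answer: Optional[int] = None
    submitter: Optional[str] = None

    def _pay(self, who: str, amount: int) -> None:
        self.balances[who] = self.balances.get(who, 0) + amount

    def submit(self, who: str, answer: int) -> None:
        """A bystander stakes SUBMIT_DEPOSIT behind a claimed answer."""
        assert self.claimed_answer is None, "an answer has already been claimed"
        self._pay(who, -SUBMIT_DEPOSIT)
        self.claimed_answer, self.submitter = answer, who

    def finalize_unchallenged(self) -> None:
        """The challenge period passed quietly: refund the deposit, pay the reward."""
        self._pay(self.submitter, SUBMIT_DEPOSIT + ANSWER_REWARD)

    def challenge(self, who: str) -> None:
        """A challenger stakes CHALLENGE_DEPOSIT, forcing the 50 ether computation.
        The loser forfeits their deposit, which covers that cost (and, when the
        submitter loses, the 950 ether challenge reward as well)."""
        self._pay(who, -CHALLENGE_DEPOSIT)
        if self.claimed_answer == self.correct_answer:
            # Submitter was right: full refund plus the answer reward. The
            # challenger's forfeited 60 absorbs the 50 ether of on-chain work.
            self._pay(self.submitter, SUBMIT_DEPOSIT + ANSWER_REWARD)
        else:
            # Submitter was wrong: the challenger is made whole and rewarded;
            # the submitter's 1000 covers the 950 reward plus the 50 of gas.
            self._pay(who, CHALLENGE_DEPOSIT + CHALLENGE_REWARD)


if __name__ == "__main__":
    game = VerificationGame(correct_answer=42)
    game.submit("bystander_1", answer=41)   # an incorrect claim...
    game.challenge("bystander_2")           # ...is caught by a double-checker
    print(game.balances)                    # {'bystander_1': -1000, 'bystander_2': 950}
```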

Three quick sidenotes. First, this is an intentional abuse of the philosophical terminology: the result was actually computed and verified; it just wasn't verified on chain, which is why we can say it was verified "counterfactually" in blockchain terms. Second, the technique of counterfactual verification sometimes goes under different names, mainly because no clear description of it has been published since the phrase was first coined in 2015. Nonetheless it is well understood among leading blockchain researchers (see here and here for example), sometimes under the name "interactive verification" (although that term doesn't always differentiate between on-chain and off-chain interactions within a verification process; hence the need for a clear description here!). Finally, our example here is not yet heavily optimised. A more advanced version of this contract, for example, would allow multiple submissions and conclude that the incentives have failed if conflicting answers go unchallenged, paying out all unchallenged submissions but reporting the computation as having failed due to improper incentivisation.

Beyond this specific example of counterfactual verification, we can begin to see that state channels operate on a very similar principle at their core. In essence, they create the same kind of incentive structure, except amongst a designated group of participants rather than amongst the more nebulous "bystanders" to the chain. This is one of the reasons that counterfactual terminology proves so helpful in designing and understanding state channels.

Counterfactual Instantiation

Counterfactual instantiation can be thought of as the creation of a smart contract object inside a state channel (rather than on the blockchain). In actuality, no contract is necessarily being created anywhere (e.g. inside some separate EVM shared between the two parties). Rather, a counterfactual instantiation happens when the parties to the channel have signed commitments which would allow a contract with the given properties to be created on chain if those commitments were actually published there. With proper incentives, this is exceedingly unlikely to happen, so the instantiation is "counterfactual". Nonetheless, because the contract could be published to the blockchain at any time, the parties are incentivised to act as if it had already been instantiated there.

As an example, in the context of State Channels it is necessary to use some type of "adjudicator" contract to determine what the latest state is if the parties disagree. In our designs, however, this adjudicator does not actually need to be published unless (and until) such a disagreement occurs. Instead, participants sign irrevocable commitments that could create such a contract at any time, but do not publish these commitments. We can then say that the adjudicator contract has been counterfactually instantiated: although it does not exist on the blockchain yet, the fact that either party could call on the adjudicator at any time to make a guaranteed and accurate ruling should incentivise the participants of the channel to behave honestly in matters that can be adjudicated (assuming that the only outcome of calling on the adjudicator would be to increase costs for the misbehaving party, without changing the outcome).
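As a rough sketch of what such a commitment might look like mechanically (in Python for brevity; the hashed fields, the toy signature scheme, and every name here are illustrative assumptions, not the message format of any implementation), both parties sign a digest that pins down exactly which adjudicator contract either of them could later deploy:

```python
# A minimal sketch of counterfactual instantiation: both parties sign a
# commitment that would let either of them have the adjudicator deployed on
# chain later. The placeholder bytecode and the HMAC "signature" stand in for
# real creation code and real ECDSA signatures.

import hashlib
import hmac
from dataclasses import dataclass
from typing import Dict, List


def commitment_digest(creation_code: bytes, participants: List[str], nonce: int) -> bytes:
    """Hash everything that pins down exactly which contract could be deployed."""
    payload = b"|".join([creation_code, ",".join(sorted(participants)).encode(), str(nonce).encode()])
    return hashlib.sha256(payload).digest()


def sign(private_key: bytes, digest: bytes) -> bytes:
    # Stand-in for an ECDSA signature over the digest; HMAC keeps the sketch
    # self-contained and runnable without extra dependencies.
    return hmac.new(private_key, digest, hashlib.sha256).digest()


@dataclass
class SignedCommitment:
    digest: bytes
    signatures: Dict[str, bytes]   # participant -> signature over the digest

    def is_fully_signed(self, participants: List[str]) -> bool:
        return all(p in self.signatures for p in participants)


if __name__ == "__main__":
    adjudicator_code = b"<adjudicator creation bytecode>"   # placeholder only
    parties = ["alice", "bob"]
    keys = {"alice": b"alice-private-key", "bob": b"bob-private-key"}

    digest = commitment_digest(adjudicator_code, parties, nonce=1)
    commitment = SignedCommitment(digest, {p: sign(keys[p], digest) for p in parties})

    # Nothing has touched the chain, yet either party could now publish the
    # creation code plus both signatures and have the adjudicator instantiated
    # on chain -- which is exactly why neither of them should ever need to.
    assert commitment.is_fully_signed(parties)
```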

In general, almost all contracts used in our state channel designs are counterfactually instantiated. That is, there exist commitments which would allow them to be instantiated on the blockchain at any time, but in normal usage this will never directly happen because it would only increase costs. Rather, they represent game theoretical leverage incentivising the participants to behave as if the contracts were already in place.

Counterfactual Addressing

Because counterfactually instantiated contracts do not necessarily have predictable blockchain addresses, some more advanced method of guaranteeing that such contracts can "find each other" is needed (should they be actually added to the chain for some reason). Such techniques are called "counterfactual addressing".

One of these techniques is to design the contracts so that, if they are added to the blockchain, their addresses are automatically recorded in a registry, and so that by looking in this registry they can obtain the addresses of any contracts they need to call. Following the larger pattern, when such a system is in place we can say that the contracts are "counterfactually registered". In our designs for fully abstracted state channels, counterfactual addressing uses an already existing Bulletin Board and enables advanced possibilities, such as the ability to move a running state channel to a different channel provider without halting its operation.
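The following toy model sketches the registry idea (the Registry interface and the way the counterfactual ID is derived are assumptions made for illustration, not the interface of the Bulletin Board or of any deployed registry contract): contracts agree in advance on a deterministic identifier, and resolve it to a real address only if and when deployment actually happens.

```python
# A toy model of registry-based counterfactual addressing. The Registry class
# and the counterfactual ID derivation are illustrative assumptions.

import hashlib
from typing import Dict


class Registry:
    """Maps a deterministic counterfactual ID to a real on-chain address."""

    def __init__(self) -> None:
        self._deployed: Dict[bytes, str] = {}

    @staticmethod
    def counterfactual_id(creation_code: bytes, salt: bytes) -> bytes:
        # Everyone can compute this identifier before anything is deployed.
        return hashlib.sha256(creation_code + salt).digest()

    def register(self, creation_code: bytes, salt: bytes, address: str) -> bytes:
        """Record the address of a contract once it is actually deployed."""
        cf_id = self.counterfactual_id(creation_code, salt)
        self._deployed[cf_id] = address
        return cf_id

    def resolve(self, cf_id: bytes) -> str:
        """Contracts look each other up here instead of hard-coding addresses."""
        return self._deployed[cf_id]


if __name__ == "__main__":
    registry = Registry()
    adjudicator_code = b"<adjudicator creation bytecode>"   # placeholder only
    salt = b"channel-42"

    # A channel contract can be written against the ID agreed on in advance...
    cf_id = Registry.counterfactual_id(adjudicator_code, salt)

    # ...so that if the adjudicator is ever actually deployed and registered,
    # it is immediately findable at that same ID.
    registry.register(adjudicator_code, salt, address="0xAdjudicatorAddress")
    assert registry.resolve(cf_id) == "0xAdjudicatorAddress"
```

The important property is that the identifier can be computed by everyone before anything exists on chain, so counterfactually instantiated contracts can reference one another without knowing their eventual blockchain addresses.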