Simpler alternative to BLOCKHASH extension (#210) #1218
Comments
Even if the client writes the account storage directly, it would be nice to still have a "precompile" at the address, so that the semantics from a user contract's perspective remain the same as in EIP210.
In this case there are two options:
The main benefit of all this is that, at some future time, the "precompile" could be rewritten in, for example, wasm without any significant protocol change.
So the address would have code that could be called to read any particular storage key? Seems reasonable to me. I suppose we can also add a
In theory yes, but let's say it's not actually created until block 7M, and without 'stuffing' it with old values, at what point in time can it actually be relied upon to provide such data? It will be messy if contracts have to mess about with determining if a certain blocknumber is in the registry or not, by checking against
@holiman To solve the problem of 'stuffing', I suggest making a contract that is able to accept historical blockhashes (backwards, one by one), write them into storage, and send a fixed small payment (let's say, 0.0001 ETH) back to tx.origin if the new entry has been created. The dominant gas cost of such an operation is SSTORE (20k). So 0.0001 ETH (100'000 GWei) can buy 20'000 gas at 5 GWei per unit of gas. And to stuff 7m block hashes, you'd need a bit more than 700 ETH. These 700 ETH can be dropped into the contract and the rest will be done by the miners... (it is literally throwing money at the problem)
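A quick sanity check of those figures, as a sketch (constants taken directly from the comment above):

```python
# Back-of-the-envelope check of the bounty scheme (sketch only).
GWEI = 10**9
ETH = 10**18

bounty_per_hash = ETH // 10_000          # 0.0001 ETH paid per newly stored historical hash
sstore_gas = 20_000                      # dominant cost: one fresh SSTORE
gas_price = 5 * GWEI

gas_covered = bounty_per_hash // gas_price
assert gas_covered == sstore_gas         # 100'000 gwei buys exactly 20'000 gas at 5 gwei/gas

blocks_to_stuff = 7_000_000
total_eth = blocks_to_stuff * bounty_per_hash // ETH
print(total_eth)                         # 700 ETH to backfill ~7M historical block hashes
```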
I have given it some thought and I do not think that "stuffing" 7M old block hashes into storage is worth the effort. I cannot think of a use case where a contract would require proof of a block hash older than its own existence (and any such contract should only exist after the fork which includes this EIP). Proving old hashes might be useful for light clients, but LES (and AFAIK also Parity's version) already uses checkpoints that enable servers to prove any block hash in a single step with a single Merkle proof.
I also have a small extension proposal for this EIP: I think we should add the current TD (total difficulty) value to this contract. The main reason I'd like to champion this EIP is that it enables trustless checkpoint syncing of light clients. This process would look roughly like this:
Unfortunately it is really easy for an attacker to advertise a fake high TD, while clients could only realize after a sufficient amount of random sampling that even though the chain PoWs are valid, the TD was a lie and the chain is probably an invalid fork or an attacker chain. Also, since random sampling is a statistical method, small differences in TD could never be detected, so the client could never be sure about the exact TD value belonging to a given block (which might also be important in some use cases). Having the TD as a part of the consensus would allow them to know the exact value and permit them to detect fraud very early in the second phase of the checkpoint syncing process.
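For illustration, a minimal sketch of how that second-phase check could use an in-consensus TD value (the TD slot and the surrounding names are hypothetical, not part of this proposal):

```python
# Sketch of the second-phase check a light client could do once the TD lives in consensus.
# `proven_td` is assumed to be read from a (hypothetical) TD slot of BLOCKHASH_CONTRACT_ADDR
# and verified against the checkpoint header's state root with a Merkle proof.
def accept_checkpoint(advertised_td: int, proven_td: int, pow_sampling_ok: bool) -> bool:
    # Random PoW sampling cannot detect small TD differences, but the exact in-consensus
    # value can: any mismatch is treated as fraud and the checkpoint is rejected early.
    if proven_td != advertised_td:
        return False
    return pow_sampling_ok
```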
What is the status of this?
@zmitton This came up on a Core Devs call a while back. Here is the agenda [from where it got dropped](https://github.com/ethereum/pm/blob/7d1028a632033e2b10e7ada6a97efa7b4ac20e59/All%20Core%20Devs%20Meetings/Meeting%2045.md), as a few potential implementations had been proposed but no commitment to thoroughly investigating was made. Recently this "Fly Paper" publication using this concept has started making the rounds.
There has been no activity on this issue for two months. It will be closed in a week if no further activity occurs. If you would like to move this EIP forward, please respond to any outstanding feedback or add a comment indicating that you have addressed all required feedback and are ready for a review.
This issue was closed due to inactivity. If you are still pursuing it, feel free to reopen it and respond to any feedback or request a review in a comment. |
Summary
Allows blocks to be directly aware of block hashes much older than the current hash.
Parameters
`BLOCKHASH_CONTRACT_ADDR`: 0xf0 (ie. 240)
Specification
At the start of processing any block, run the following algorithm, where `store(x, y)` stores value y in key x of `BLOCKHASH_CONTRACT_ADDR`:
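A minimal sketch of what such an algorithm could look like, consistent with the Explanation section below (assuming `block` and `parent` are the headers of the new block and its parent, and `store` behaves as described above):

```python
# Sketch only: at the start of processing a new block, write the parent hash into every
# storage key k whose alignment the parent's height satisfies (key k tracks heights
# divisible by 2**k, matching the Explanation below).
def update_blockhash_contract(block, parent, store):
    parent_height = block.number - 1
    for k in range(32):
        if parent_height % (2 ** k) != 0:
            break  # higher keys require stricter alignment, so nothing more to update
        store(k, parent.hash)
```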
Extends the BLOCKHASH opcode so that if a given block height's hash is available in one of these storage keys, then this value is returned (ie. so sometimes block hashes with heights more than 256 blocks ago can be returned). That is, if BLOCKHASH is called with height equal to `block.number - (block.number % 2**k)` for some k < 32, then `sload(k)` is returned.
Explanation
Storage key 0 always stores the last blockhash, storage key 1 stores the last blockhash with an even blockheight, storage key 2 stores the last blockhash with a blockheight of 0 mod 4, etc etc.
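A sketch of the extended lookup implied by the specification above (assumptions: `recent_hashes` stands in for the existing 256-block window, `sload(k)` reads key k of `BLOCKHASH_CONTRACT_ADDR`, and the current block's own hash is treated as unavailable):

```python
# Sketch of the extended BLOCKHASH semantics: heights aligned to a power of two below the
# current block number resolve through the contract's storage keys.
def extended_blockhash(height, block_number, sload, recent_hashes):
    if not (0 <= height < block_number):
        return 0
    # Existing behaviour: the last 256 block hashes remain directly available.
    if block_number - height <= 256:
        return recent_hashes[height]
    # New behaviour: heights of the form block_number - (block_number % 2**k), k < 32,
    # resolve to storage key k, which holds the last hash at a height divisible by 2**k.
    for k in range(32):
        if height == block_number - (block_number % (2 ** k)):
            return sload(k)
    return 0  # anything else stays unavailable, as with BLOCKHASH today
```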
Use cases
`~log2(N2 - N) - 8` Merkle branches. It should not be too hard to use existing libraries to write a utility contract and library that produces and verifies these proofs.
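One way to read the `~log2(N2 - N) - 8` figure, as a rough sketch (assuming each Merkle branch against an intermediate block's copy of the contract storage roughly halves the remaining distance, and that the most recent 256 = 2**8 blocks need no proof at all):

```python
import math

# Rough estimate only: number of Merkle branches needed to prove a hash `distance` blocks back,
# assuming each branch roughly halves the remaining gap and the final 256 blocks are free.
def estimated_branches(distance: int) -> int:
    if distance <= 256:
        return 0  # covered directly by BLOCKHASH
    return max(0, math.ceil(math.log2(distance)) - 8)

print(estimated_branches(6_000_000))  # ~15 branches for a gap of about six million blocks
```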