Spec: Bottom up interaction #899
Conversation
Great to see this. Note, the epic mentioned (#211) has unchecked issues that were all "closed as not planned". Is that correct? There is at least one TODO in the codebase referencing an issue that was closed.
- *Only the first submitter gets a reward.* To achieve redundancy the reward would have to be a multiple of the cost. Depending on how likely it is to beat the fastest relayer (a parent validator might insert their own transaction to steal the rewards), it might make it unprofitable for multiple validators to operate.
- *The first N submitters get rewards.* It is easy for any relayer to submit N transactions in a Sybil attack to reap all rewards, hampering redundancy.
- *All submitters in a fixed time period get an equal share of a fixed reward.* This takes out the competition aspect, and discourages Sybil attacks because the reward doesn't grow. It should lead to a dynamic equilibrium in the number of relayers. However, if the fixed time window is too wide, it encourages freeloaders who just repeat the first submission, which would make it look like there is redundancy where there isn't.
I like this option because it implies that the goal should be that all honest validators share the burden of the cost of submitting a txn, i.e. at equilibrium, shared_reward / num_validators ~= txn_cost.
> it encourages freeloaders who just repeat the first submission
Commit-reveal pattern could be helpful here?
Yes, that's what I thought: at equilibrium the reward covers the cost. Any more joiners and somebody leaves; too many leave and it's profitable to join. So you can target 3x redundancy by giving out 3x the transaction cost.
I hadn't thought of a commit-reveal pattern. It sounds promising; maybe the payload can be turned into a Merkle tree with a sender-specific proof to show they have it, and then anyone can reveal the content a few blocks later?
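The equilibrium argument above can be sketched numerically. A minimal sketch, where the transaction cost and the redundancy target are made-up illustrative numbers, not anything from the spec:

```python
# Sketch of the fixed-reward equilibrium described above.
# TXN_COST and TARGET_REDUNDANCY are hypothetical numbers for illustration.

TXN_COST = 0.01        # cost for one relayer to submit the checkpoint txn
TARGET_REDUNDANCY = 3  # desired number of independent submitters

# To target 3x redundancy, the fixed shared reward is 3x the txn cost.
shared_reward = TARGET_REDUNDANCY * TXN_COST

def profit_per_relayer(num_relayers: int) -> float:
    """Each submitter in the window gets an equal share of the fixed reward."""
    return shared_reward / num_relayers - TXN_COST

# More than 3 relayers -> negative profit, so somebody leaves.
assert profit_per_relayer(4) < 0
# Fewer than 3 relayers -> positive profit, so joining is attractive.
assert profit_per_relayer(2) > 0
# At the target, the reward just covers the cost.
assert abs(profit_per_relayer(3)) < 1e-12
```

Because the total reward is fixed, a Sybil submitter splitting into N identities still collects the same total share, which is what removes the incentive for the Sybil attack in this option.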
Another option:
- Senders compute a hash over the payload and a sender-chosen one-time-use "secret".
- The sender posts a txn with this final hash (the "commitment").
- In the next time window, senders post the actual payload plus their one-time secret (the "reveal").
- The contract validates this second txn by trying to recreate the commitment (hashing the sent payload with the secret and comparing it to the content of the first txn).
The downside is needing some intermediate state (one `bytes32` per sender) and multiple txns, but the first one is relatively inexpensive.
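The flow above can be sketched as a toy model. This is a sketch in Python rather than Solidity, with assumed names (`CommitRevealContract`, SHA-256 standing in for whatever hash the contract would use), not the actual IPC contracts:

```python
import hashlib

def commit(payload: bytes, secret: bytes) -> bytes:
    """Phase 1: the sender posts only this hash (the 'commitment')."""
    return hashlib.sha256(payload + secret).digest()

class CommitRevealContract:
    """Toy stand-in for the on-chain contract: one bytes32-like slot per sender."""

    def __init__(self):
        self.commitments = {}  # sender -> commitment hash

    def submit_commit(self, sender: str, commitment: bytes) -> None:
        self.commitments[sender] = commitment

    def submit_reveal(self, sender: str, payload: bytes, secret: bytes) -> bool:
        """Phase 2: recreate the commitment and compare it to the stored one."""
        expected = self.commitments.get(sender)
        return expected is not None and commit(payload, secret) == expected

# A freeloader who only saw the first reveal cannot have committed to the
# payload in the earlier window, so copying the reveal doesn't pay.
contract = CommitRevealContract()
contract.submit_commit("validator-1", commit(b"checkpoint-payload", b"one-time-secret"))
assert contract.submit_reveal("validator-1", b"checkpoint-payload", b"one-time-secret")
assert not contract.submit_reveal("validator-1", b"checkpoint-payload", b"wrong-secret")
assert not contract.submit_reveal("validator-2", b"checkpoint-payload", b"one-time-secret")
```

The per-sender secret is what prevents copying another sender's commitment: the same payload hashes to a different commitment for every sender.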
I suppose you could make the first transaction also take the "reveal" from the prior time window. So, just one txn per window needed.
spec/bottom up interraction.md
When the checkpoint is submitted to the Lotus rootnet, it is currently expected either to contain all bottom-up messages, or for them to accompany the checkpoint some other way with only a commitment in the checkpoint itself. In both cases, however, the bottom-up messages would be executed, and their gas cost paid, by the relayer.
With a Fendermint parent network, the same thing works if that's how the smart contracts are implemented. There was another way laid it in the epic above, which involved the IPLD Resolver procuring the checkpoint payload from the subnet based on a CID, and executing the messages implicitly when the validators decide that they all have the data available. In this case the relayer would have only paid for the *validation* of the checkpoint, not its *execution*.
> There was another way laid it in the epic above
Typo?
- In the case where a relayer executes the messages, they first have to estimate the gas cost, so at least it is known before the checkpoint-bearing transaction is included in a rootnet block, and thus the block gas limit can be observed. However, if the gas spent by the checkpoint would exceed the block gas limit, the checkpoint will never be included, but at the same time no other checkpoint can be produced by the subnet, and thus checkpointing stalls.
- In the case where the validators execute messages implicitly, they can choose whether to include it in a block or not, but to do so they would need to estimate the cost at some point, and again it might exceed the limits. Implicit execution also makes it difficult to deal with errors; in particular there is no room for retries.
To overcome this issue, ideally cross messages would *not* be executed in the block where the checkpoint is included. Instead either just a commitment would be stored, or messages would be parked in inboxes (e.g. organised by sender account). The senders could come later and initiate their own transactions to kick off the execution of the messages delivered as part of the IPC consensus mechanism, at which point they can pay for the gas and retry as many times as they see fit if they run out of gas.
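The inbox idea above could look roughly like the following. A sketch under assumed names (`BottomUpInbox`, `include_checkpoint`, `execute_for`), not the actual IPC contract API:

```python
from collections import defaultdict

class BottomUpInbox:
    """Toy model: checkpoint inclusion only parks messages; execution is
    initiated later by the senders themselves, who pay the gas and may retry."""

    def __init__(self):
        self.inboxes = defaultdict(list)  # sender account -> pending messages

    def include_checkpoint(self, messages):
        # Cheap step paid by the relayer: store messages without executing them.
        for sender, msg in messages:
            self.inboxes[sender].append(msg)

    def execute_for(self, sender, execute):
        # Sender-initiated step: the sender pays the gas here, and a failed
        # (e.g. out-of-gas) message simply stays in the inbox for a retry.
        remaining, executed = [], []
        for msg in self.inboxes[sender]:
            try:
                execute(msg)
                executed.append(msg)
            except Exception:
                remaining.append(msg)  # left parked for a later retry
        self.inboxes[sender] = remaining
        return executed

inbox = BottomUpInbox()
inbox.include_checkpoint([("alice", "msg-1"), ("alice", "msg-2"), ("bob", "msg-3")])
assert inbox.execute_for("alice", lambda m: None) == ["msg-1", "msg-2"]
assert inbox.inboxes["bob"] == ["msg-3"]  # bob hasn't triggered execution yet
```

The key property is that the checkpoint-bearing transaction stays cheap and bounded regardless of the messages' execution cost, so a heavy message can never stall checkpointing for the whole subnet.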
👍
I think the numerous unticked URLs are a remnant of where this issue was migrated from the
I don't know why that was closed; I think @jsoares did some cleanup and might have closed it. It wasn't implemented because in the end they implemented the submission in pure Solidity, where this CID-based approach would have been a lot of extra work. If the actors were in Rust, we could have just pulled stuff from their state using CIDs, but with Ethereum actors this isn't possible due to how Solidity stores the data. Or at least I don't think it's as straightforward. So it's right to be closed as nobody is working on this or planning to finish it right now, but the code was left in there because it provides some talking points, should it be revived.
Indeed, don't read anything into my closing it -- it was merely the auto-close stale issues policy in action (in line with the linear method). The logic is that if it's relevant, someone will reopen it, and we see that at play here :) That said, also note that the line is pre-monorepo, so the issue wouldn't match current numbering either.
The bottom-up interaction spec.