Spec: Bottom up interaction #899

Merged 2 commits from spec-bottomup into main on Sep 26, 2024

Conversation

cryptoAtwill (Contributor):

The bottom up interaction spec.

@sanderpick (Contributor) left a comment:

Great to see this. Note, the epic mentioned (#211) has unchecked issues that were all "closed as not planned". Is that correct? There is at least one todo in the codebase that is closed.


- *Only the first submitter gets a reward.* To achieve redundancy the reward would have to be a multiple of the cost. Depending on how likely it is to beat the fastest relayer (a parent validator might insert their own transaction to steal the rewards), it might be unprofitable for multiple validators to operate.
- *The first N submitters get rewards.* It is easy for any relayer to submit N transactions in a Sybil attack to reap all rewards, hampering redundancy.
- *All submitters in a fixed time period get an equal share of a fixed reward.* This takes out the competition aspect and discourages Sybil attacks, because the reward doesn’t grow. It should lead to a dynamic equilibrium in the number of relayers. However, if the fixed time window is too wide, it encourages freeloaders who just repeat the first submission, which would make it look like there is redundancy where there isn’t.

Contributor:

I like this option because it implies that the goal should be that all honest validators share the burden of the cost of submitting a txn, i.e., at equilibrium, shared_reward / num_validators ~= txn_cost.

> it encourages freeloaders who just repeat the first submission

Commit-reveal pattern could be helpful here?

Contributor:

Yes, that's what I thought: at equilibrium the reward covers the cost. Any more joiners and somebody leaves; if too many leave, it becomes profitable to join, so you can target 3x redundancy by giving out 3x the transaction cost (see the sketch below).

I hadn't thought of a commit-reveal pattern. It sounds promising; maybe the payload can be turned into a Merkle tree with a sender-specific proof to show they have it, and then anyone can reveal the content a few blocks later?
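
For illustration, a back-of-the-envelope sketch of the equilibrium mentioned above, with made-up numbers (a 3x redundancy target and a unit transaction cost, neither taken from the spec):

```python
# Illustrative numbers only (not from the spec): a fixed per-window reward
# set to 3x the cost of one submission, shared equally by all submitters.
txn_cost = 1.0
fixed_reward = 3.0 * txn_cost

def profit_per_relayer(num_relayers: int) -> float:
    return fixed_reward / num_relayers - txn_cost

for n in range(1, 6):
    print(n, round(profit_per_relayer(n), 2))
# Profit is positive for 1-2 relayers, zero at 3, and negative beyond,
# so the relayer count should hover around the targeted 3x redundancy.
```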

Contributor:

Another option:

- Senders compute a hash over the payload and a sender-chosen one-time-use "secret".
- The sender posts a txn with this final hash (the "commitment").
- In the next time window, senders post the actual payload plus their one-time secret (the "reveal").
- The contract validates this second txn by trying to recreate the commitment (hashing the submitted payload with the secret and comparing the result to the content of the first txn).

The downside is needing some intermediate state (one bytes32 per sender) and multiple txns, but the first one is relatively inexpensive.
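
For illustration, a minimal Python sketch of that commit/reveal check. The names (`commitments`, `commit`, `reveal`) are hypothetical stand-ins for what would be a bytes32-per-sender mapping and two contract methods:

```python
import hashlib
import os

# Per-sender commitment storage; in the contract this would be one bytes32 slot per sender.
commitments: dict[str, bytes] = {}

def commit(sender: str, payload: bytes, secret: bytes) -> None:
    """Window N: the sender posts only the hash of (payload || secret)."""
    commitments[sender] = hashlib.sha256(payload + secret).digest()

def reveal(sender: str, payload: bytes, secret: bytes) -> bool:
    """Window N+1: the sender posts the payload and the secret; the stored
    commitment is recomputed and compared."""
    expected = commitments.get(sender)
    return expected is not None and hashlib.sha256(payload + secret).digest() == expected

# Example: a relayer commits to a checkpoint payload, then reveals it one window later.
secret = os.urandom(32)  # one-time-use secret chosen by the sender
commit("relayer-1", b"checkpoint-payload", secret)
assert reveal("relayer-1", b"checkpoint-payload", secret)
```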

Contributor:

I suppose you could make the first transaction also take the "reveal" from the prior time window. So, just one txn per window needed.
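
A sketch of that pipelining idea, extending the hypothetical helpers above: each submission reveals the previous window's payload and commits to the current one, so only one txn per window is needed.

```python
def submit(sender: str, new_commitment: bytes,
           prev_payload: bytes, prev_secret: bytes) -> bool:
    """One txn per window: verify the reveal for the prior window, then store
    the commitment for the current one. The very first submission would need
    a bootstrap path that only commits."""
    if not reveal(sender, prev_payload, prev_secret):
        return False
    commitments[sender] = new_commitment
    return True
```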


When the checkpoint is submitted to the Lotus rootnet, it is currently expected either to contain all bottom-up messages, or for the messages to accompany the checkpoint in some other way, with only a commitment included in the checkpoint itself. However, in both cases the bottom-up messages would be executed and their gas cost paid for by the relayer.

With a Fendermint parent network, the same thing works if that’s how the smart contracts are implemented. There was another way laid it in the epic above, which involved the IPLD Resolver procuring the checkpoint payload from the subnet based on a CID, and executing the messages implicitly when the validators decide that they all have the data available. In this case the relayer would have only paid for the *validation* of the checkpoint, not its *execution*.
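
To make the two variants concrete, a rough sketch with hypothetical field names (not the actual contract types): either the checkpoint carries the bottom-up messages inline, or it carries only a commitment that is checked against messages delivered separately (e.g. resolved via the IPLD Resolver).

```python
import hashlib
from dataclasses import dataclass

@dataclass
class InlineCheckpoint:
    """Variant 1: bottom-up messages travel inside the checkpoint itself."""
    height: int
    messages: list[bytes]

@dataclass
class CommittedCheckpoint:
    """Variant 2: the checkpoint holds only a commitment; the messages arrive another way."""
    height: int
    messages_commitment: bytes

def verify_messages(cp: CommittedCheckpoint, messages: list[bytes]) -> bool:
    # A flat hash keeps the sketch short; a real commitment would more likely be
    # a Merkle root or CID so that individual messages can be proven.
    return hashlib.sha256(b"".join(messages)).digest() == cp.messages_commitment
```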

Contributor:

> There was another way laid it in the epic above

Typo?

- In the case where a relayer executes the messages, they first have to estimate the gas cost, so at least it is known before the checkpoint-bearing transaction is included in a rootnet block, and the block gas limit can be observed. However, if the gas required by the checkpoint exceeds the block gas limit, the checkpoint will never be included, yet no other checkpoint can be produced by the subnet either, and thus checkpointing stalls.
- In the case where the validators execute the messages implicitly, they can choose whether or not to include them in a block, but to do so they would need to estimate the cost at some point, and again it might exceed the limits. Implicit execution also makes it difficult to deal with errors; in particular, there is no room for retries.

To overcome this issue, ideally cross messages would *not* be executed in the block where the checkpoint is included. Instead, either just a commitment would be stored, or the messages would be parked in inboxes (e.g. organised by sender account). The senders could come later and initiate their own transactions to kick off the execution of the messages delivered as part of the IPC consensus mechanism, at which point they pay for the gas and can retry as many times as they see fit if they run out of gas.
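
A minimal sketch of that inbox approach, under the assumptions above and with hypothetical names (the real contracts would differ): the checkpoint transaction only parks messages per sender, and the sender later pays to execute them.

```python
from collections import defaultdict

# Messages parked per sender account when the checkpoint is accepted.
inboxes: dict[str, list[bytes]] = defaultdict(list)

def accept_checkpoint(messages: list[tuple[str, bytes]]) -> None:
    """Runs when the checkpoint lands: it only stores messages, so the
    checkpoint-bearing transaction does not execute arbitrary user code."""
    for sender, msg in messages:
        inboxes[sender].append(msg)

def execute_inbox(sender: str, limit: int) -> list[bytes]:
    """Called later by the sender, who pays the gas and can retry with a higher
    limit if execution fails; here we simply drain up to `limit` messages."""
    batch, inboxes[sender] = inboxes[sender][:limit], inboxes[sender][limit:]
    return batch
```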

Contributor:

👍

@aakoshh (Contributor) commented on May 10, 2024:

> the epic mentioned (#211) has unchecked issues that were all "closed as not planned". Is that correct?

I think the numerous unticked URLs are a remnant of this issue being migrated from the fendermint repo to the ipc monorepo.

> There is at least one todo in the codebase that is closed.

I don't know why that was closed; I think @jsoares did some cleanup and might have closed it. It wasn't implemented because in the end they implemented the submission in pure Solidity, where this CID-based approach would have been a lot of extra work. If the actors were in Rust, we could have just pulled stuff from their state using CIDs, but with Ethereum actors this isn't possible due to how Solidity stores the data. Or at least I don't think it's as straightforward. So it's right for it to be closed, as nobody is working on this or planning to finish it right now, but the code was left in there because it provides some talking points should it be revived.

@jsoares (Contributor) commented on May 10, 2024:

Indeed, don't read anything into my closing it -- it was merely the auto-close stale issues policy in action (in line with the linear method). The logic is that if it's relevant, someone will reopen it, and we see that at play here :)

That said, also note that the line is pre-monorepo, so the issue wouldn't match current numbering either.

@cryptoAtwill merged commit e93896e into main on Sep 26, 2024. 15 checks passed.
@cryptoAtwill deleted the spec-bottomup branch on September 26, 2024 at 14:45.