
Add claim rewards ica #961

Merged: riley-stride merged 19 commits into main on Dec 8, 2023

Conversation

@riley-stride (Contributor) commented on Oct 25, 2023

Submit a delegation-account claim rewards ICA every epoch.

If you'd like to test this manually, you can run the integration tests, kill them before a second liquid stake is processed, then query the withdrawal account and watch a balance appear (the result of claiming rewards).

For reviewer: ignore the first 7 commits, they are stale git artifacts. Start at aeaf4ef.

Questions for the reviewer

  • Can we withdraw all rewards with a single msg (rather than the current multi-msg approach)? I can't find a way in the SDK. => Batched into sets of 10 (sketched below).
  • Will the current multi-msg fit in a block if it includes many validators? Should we cap the number of validators? Does the validator iteration look good (it's drawn from here)? => Batched into sets of 10.
  • In this implementation we skip withdrawing rewards from 0-delegation validators. Is that the right approach? => I think so.
  • Is executing the ICA at the top of every epoch (not day) the right cadence? => A bit frequent, but why not? It's cheap. Leaving it this way.
  • Is there anything to unit test? Note we don't unit test SetWithdrawalAddressOnHost. => We've added unit tests.
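
For orientation, here is a minimal sketch of how the batched claim could look. It assumes the host zone carries DelegationIcaAddress, WithdrawalIcaAddress, ConnectionId, and Validators fields, that ICAs go out through SubmitTxsStrideEpoch with the same arguments as the snippet quoted later in this thread, and that the batch-size constant is named ClaimRewardsICABatchSize (imports such as fmt, sdk "github.com/cosmos/cosmos-sdk/types", distributiontypes "github.com/cosmos/cosmos-sdk/x/distribution/types", and proto "github.com/cosmos/gogoproto/proto" are assumed). It is illustrative only, not the exact merged code.

// Sketch: withdraw rewards from every validator we are delegated to, in batches of
// ClaimRewardsICABatchSize msgs per ICA so each tx fits in a host block.
// Rewards land in the withdrawal ICA because the withdrawal address was
// previously set via SetWithdrawalAddressOnHost.
const ClaimRewardsICABatchSize = 10

func (k Keeper) ClaimAccruedRewardsForHost(ctx sdk.Context, hostZone types.HostZone) error {
    // Without the ICA addresses there is nothing to claim from (or into)
    if hostZone.DelegationIcaAddress == "" || hostZone.WithdrawalIcaAddress == "" {
        return fmt.Errorf("ICA account not found for %s", hostZone.ChainId)
    }

    msgs := []proto.Message{}
    for _, val := range hostZone.Validators {
        // Skip validators with a zero delegation - no rewards should be accruing there
        if val.Delegation.IsZero() {
            continue
        }
        msgs = append(msgs, &distributiontypes.MsgWithdrawDelegatorReward{
            DelegatorAddress: hostZone.DelegationIcaAddress,
            ValidatorAddress: val.Address,
        })

        // Once a batch is full, submit it as its own ICA tx and start a new batch
        if len(msgs) == ClaimRewardsICABatchSize {
            if _, err := k.SubmitTxsStrideEpoch(ctx, hostZone.ConnectionId, msgs, types.ICAAccountType_DELEGATION, "", nil); err != nil {
                return err
            }
            msgs = []proto.Message{}
        }
    }

    // Submit whatever is left in the final, partial batch
    if len(msgs) > 0 {
        if _, err := k.SubmitTxsStrideEpoch(ctx, hostZone.ConnectionId, msgs, types.ICAAccountType_DELEGATION, "", nil); err != nil {
            return err
        }
    }
    return nil
}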

@riley-stride marked this pull request as ready for review on November 7, 2023
@sampocs (Collaborator) left a comment

Can we withdraw all rewards with a single msg (rather than the current multimsg approach)? I can't find a way in the SDK.

I can't find a message either. We probably have to do it by val


Will the current multimsg fit in a block if it includes many validators? Should we cap the number of validators?

I'm not sure, but my vote would be to play it safe and batch it


Does the validator iteration look good?

LGTM, but I think the main question here is whether the correct validator address is used. I'd imagine it uses the same address as MsgDelegate and other staking txs, and if you verified through the CLI, it should be good!


In this implementation we skip withdrawing rewards from 0-delegation validators. Is that the right approach?

I think it is! By the time they actually get to delegation 0 in our record keeping, the undelegation message should have been processed and I think rewards should stop accruing while it's undelegating.


Is executing the ICA at the top of every epoch (not day) the right cadence?

It probably doesn't matter either way. I don't think we need it that frequent but it's also low cost to just do it each epoch.


Is there anything to unit test? Note we don't unittest SetWithdrawalAddressOnHost

I don't think we really need one; however, it's pretty low cost to add and can't hurt! You can just do it all in one function like:

// (assumes the usual keeper_test imports: fmt, sdkmath "cosmossdk.io/math", epochtypes, types)
func (s *KeeperTestSuite) TestClaimAccruedStakingRewardsOnHost() {
    // Create a delegation ICA channel for the ICA submission
    owner := types.FormatICAAccountOwner(HostChainId, types.ICAAccountType_DELEGATION)
    channelId, portId := s.CreateICAChannel(owner)

    // Create validators - use a count much larger than the batch size, and give
    // a subset of them a zero delegation (here, val-0)
    numValidators := 25
    validators := []*types.Validator{}
    for i := 0; i < numValidators; i++ {
        validators = append(validators, &types.Validator{
            Address:    fmt.Sprintf("val-%d", i),
            Delegation: sdkmath.NewInt(int64(i)),
        })
    }

    // Create host zone
    hostZone := types.HostZone{
        ChainId:              HostChainId,
        DelegationIcaAddress: "delegation",
        WithdrawalIcaAddress: "withdrawal",
        Validators:           validators,
    }

    // Create epoch tracker for the ICA timeout
    strideEpoch := types.EpochTracker{
        EpochIdentifier:    epochtypes.STRIDE_EPOCH,
        NextEpochStartTime: uint64(s.Coordinator.CurrentTime.UnixNano() + 30_000_000_000), // used for timeout
    }
    s.App.StakeibcKeeper.SetEpochTracker(s.Ctx, strideEpoch)

    // Get the starting sequence number to confirm the ICAs were submitted
    startSequence, found := s.App.IBCKeeper.ChannelKeeper.GetNextSequenceSend(s.Ctx, portId, channelId)
    s.Require().True(found)

    // Call claim accrued rewards to submit the ICAs
    err := s.App.StakeibcKeeper.ClaimAccruedRewardsForHost(s.Ctx, hostZone)
    s.Require().NoError(err, "no error expected when claiming accrued rewards")

    // Confirm the sequence number incremented by the number of txs, where the number of
    // txs is ceil((total_validators - validators_with_zero_delegation) / batch_size);
    // with 24 non-zero-delegation validators and a batch size of 10, that's 3
    expectedEndSequence := startSequence + 3
    actualEndSequence, found := s.App.IBCKeeper.ChannelKeeper.GetNextSequenceSend(s.Ctx, portId, channelId)
    s.Require().True(found)
    s.Require().Equal(expectedEndSequence, actualEndSequence, "sequence number should have incremented")

    // Attempt to call it with a host zone that has no delegation ICA address - it should fail
    invalidHostZone := hostZone
    invalidHostZone.DelegationIcaAddress = ""
    err = s.App.StakeibcKeeper.ClaimAccruedRewardsForHost(s.Ctx, invalidHostZone)
    s.Require().ErrorContains(err, "ICA account not found")

    // Same for the withdrawal address

    // Attempt to call claim with an invalid connection ID on the host zone so the ICA submission fails
}

This pull request has been automatically marked as stale because it has not had any recent activity. It will be closed if no further activity occurs. Thank you!

@riley-stride (Contributor, Author) commented on Nov 30, 2023

Will the current multi-msg fit in a block if it includes many validators? Should we cap the number of validators?

I'm not sure, but my vote would be to play it safe and batch it

Does "batch it" mean use a cap?

@sampocs

github-actions bot removed the Stale label on Dec 1, 2023
@shellvish (Contributor) left a comment

Looks great to me! I think there's one small nit (an if statement we should add), otherwise this looks perfect.

// batching txs into groups of ClaimRewardsICABatchSize messages, to ensure they will fit in the host's blockSize

// Calculate the number of batches
numBatches := (len(validators) + ClaimRewardsICABatchSize - 1) / ClaimRewardsICABatchSize
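
As an aside on the quoted line: the + ClaimRewardsICABatchSize - 1 term makes the integer division round up, so a final partial batch still gets its own tx. A tiny standalone illustration, assuming a batch size of 10 as in the rest of this thread:

package main

import "fmt"

func main() {
    const batchSize = 10
    for _, numValidators := range []int{20, 25} {
        ceil := (numValidators + batchSize - 1) / batchSize // rounds up
        floor := numValidators / batchSize                  // rounds down, would drop a partial batch
        fmt.Printf("%d validators -> %d batches (plain division gives %d)\n", numValidators, ceil, floor)
    }
}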
A contributor commented:

maybe a dumb question, can we do numBatches := len(validators) / ClaimRewardsICABatchSize ?

@riley-stride (Contributor, Author) replied:

Haha yes good point, I think so. But I removed this now in favor of the approach Sam recommended above.

msgs = append(msgs, msg)
}

_, err := k.SubmitTxsStrideEpoch(ctx, hostZone.ConnectionId, msgs, types.ICAAccountType_DELEGATION, "", nil)
A contributor commented:

nit: because we skip assembling the msg when val.Delegation.IsZero, should we have a check here that only submits the txs if len(msgs) > 0, and otherwise just continues with the loop?
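
A sketch of what that guard could look like around the quoted SubmitTxsStrideEpoch call; the variable names follow the quoted snippet and the surrounding batch loop is assumed:

    // If every validator in this batch had a zero delegation we never appended
    // a msg, so skip the ICA rather than submitting an empty tx
    if len(msgs) > 0 {
        if _, err := k.SubmitTxsStrideEpoch(ctx, hostZone.ConnectionId, msgs, types.ICAAccountType_DELEGATION, "", nil); err != nil {
            return err
        }
    }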

@riley-stride (Contributor, Author) replied:

Good eye, added here.

@sampocs added the v17 label on Dec 6, 2023
@riley-stride merged commit 32fd245 into main on Dec 8, 2023
10 checks passed