Currently, a transaction signer does not know with certainty whether the transaction they signed will (assuming it does not expire) be executed immediately in the block that includes it, or be delayed for some time before (possibly) being executed in a later block. It would be nice for the transaction signer(s) to know with certainty that a transaction they sign will not be delayed (by more than the time until the transaction expires) unless it ends up never being included in a block because it expires.
More critically, the delay that will be used can only be determined after running through `check_authorization`, which requires the set of signing public keys as an input (in addition to the state of the permission graph at the point at which the transaction was executed). This prevents the signatures from being pruned, since a validator would still need them (even though it trusts the block) just to recover the delay value in order to achieve the same database state transitions as every other node.
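For concreteness, here is a minimal sketch (type and function names are assumptions, not the actual eosio API) of why the delay computation depends on the provided keys: different keys may satisfy the same authorization through permission levels that carry different delays, so the result cannot be recomputed once the signatures, and hence the keys, are gone.

```cpp
// Illustrative only: names and types are assumptions, not the eosio API.
#include <algorithm>
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

// One way an authorization can be satisfied: a particular signing key and the
// delay that applies when that key is the one used.
struct permission_option { std::string key; uint32_t delay_sec; };

// Stand-in for the on-chain permission graph, keyed by "actor@permission".
using permission_graph = std::map<std::string, std::vector<permission_option>>;

uint32_t compute_transaction_delay(const std::vector<std::string>& authorizations,
                                   const std::set<std::string>&    provided_keys,
                                   const permission_graph&         graph) {
   uint32_t total = 0;
   for (const auto& auth : authorizations) {
      // Smallest delay that any of the *provided* keys can satisfy for this
      // authorization; a different key set can yield a different delay.
      uint32_t best = UINT32_MAX;
      for (const auto& opt : graph.at(auth))
         if (provided_keys.count(opt.key))
            best = std::min(best, opt.delay_sec);
      total = std::max(total, best); // overall delay is the max over all authorizations
   }
   return total;
}
```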
This particular requirement of being able to apply blocks one trusts without needing pruned data could be resolved (with respect to the new delayed-transaction changes) by requiring the delay value to be placed into the `transaction_receipt` (at the cost of a few extra bytes per delayed transaction in the `signed_block_summary`). However, both this requirement and the earlier one of providing some delay certainty to transaction signers could be satisfied by requiring the delay value to be included in the transaction header. If a varint is used, then transactions which are not delayed (which we expect to be the vast majority) require only one extra byte in the header.
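As a rough illustration of the header change (field names and the `unsigned_int` varint stand-in are assumptions, not the exact eosio definitions):

```cpp
// Field names and layout are assumptions for illustration, not the exact
// eosio transaction_header definition.
#include <cstdint>

// Stand-in for a varint type (e.g. fc::unsigned_int): encodes to a single
// byte on the wire when the value is small, such as a delay of zero.
struct unsigned_int { uint32_t value = 0; };

struct transaction_header {
   uint32_t     expiration;        // expiration time (seconds since epoch)
   uint16_t     ref_block_num;     // TaPoS reference block
   uint32_t     ref_block_prefix;
   unsigned_int delay_sec;         // proposed field: committed delay; 1 byte when zero
};
```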
With this change, the `check_authorization` computation would still need to calculate a delay value. For the transaction to be valid, that calculated delay value would need to be no greater than the delay value in the transaction header; however, the delayed transaction would be scheduled using the delay from the transaction header. This also means that the current approach of increasing a transaction's delay beyond the minimum required value via the special `mindelay` context-free action would no longer be necessary.
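A sketch of the resulting validity rule, with assumed names:

```cpp
// Sketch of the validity rule; function and parameter names are assumed.
#include <cstdint>
#include <stdexcept>

uint32_t effective_delay_sec(uint32_t header_delay_sec, uint32_t required_delay_sec) {
   // required_delay_sec is what check_authorization computes from the provided keys.
   if (required_delay_sec > header_delay_sec)
      throw std::runtime_error("transaction declares a smaller delay than its "
                               "authorizations require");
   return header_delay_sec; // the header value, not the computed one, governs scheduling
}
```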
Replaying nodes can then simply use the delay value in the transaction header without needing to run the `check_authorization` computation (which they could not do anyway if the signatures had been pruned by then).
With the original delay model, there was an open question regarding who to bill for the additional database memory used to store the delayed transaction until it was either scheduled and executed or dropped. The answer we were most likely going to go with was to bill the authorizer with the highest delay. But once again, finding out which account that actually is requires running through the `check_authorization` computation, which nodes cannot be expected to do on replay of irreversible blocks since the necessary signatures may have already been pruned. Now, we can simply follow a model similar to the one described in #1999 and #2000 (for figuring out which account to bill for CPU and network bandwidth, respectively): by default, bill the actor of the first authorization of the first context-aware action of the delayed transaction, but allow this default account to bill to be overridden with a special context-aware action (that is a task left for another issue).
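A sketch of the proposed default billing rule (types and names are assumed for illustration):

```cpp
// Sketch only: structs mirror the shape of a transaction for illustration.
#include <stdexcept>
#include <string>
#include <vector>

struct permission_level { std::string actor, permission; };
struct action            { std::string account, name; std::vector<permission_level> authorization; };
struct transaction       { std::vector<action> context_free_actions, actions; };

// Default account billed for the memory a delayed transaction occupies:
// the actor of the first authorization of the first context-aware action.
std::string default_memory_payer(const transaction& trx) {
   if (trx.actions.empty() || trx.actions.front().authorization.empty())
      throw std::runtime_error("no context-aware action with an authorization to bill");
   return trx.actions.front().authorization.front().actor;
}
```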
Open question: How does this approach affect deferred transactions generated in contract code? Is it too much to demand that contract code pick a delay that will be no smaller than the minimum delay required by `check_authorization`? Or would deferred transactions be a special case where the delay is automatically calculated? And in that case, could the enforced delay be the maximum of the automatically calculated one and the one specified in the transaction header of the deferred transaction? Could/should contracts have a way to prevent dispatch of a deferred transaction they want to generate if the required delay would be too large? (It is important to note that deferred transactions generated in a contract do not need to commit to a delay in order to allow replaying without pruned data; this is because `check_authorization` does not use signing public keys for a deferred transaction generated in contract code.)
PR #2084 resolves most of this issue. For generated transactions, the enforced effective delay is currently the maximum of the delays imposed by the various mechanisms (`delay_sec`, `execute_after`, `check_authorization`). We may want to change this behavior and/or simplify it later.
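A sketch of that effective-delay computation as described (names assumed, not the actual implementation):

```cpp
// Names assumed; illustrates taking the maximum of the delay sources mentioned above.
#include <algorithm>
#include <cstdint>

uint32_t enforced_delay_sec(uint32_t delay_sec,                 // from the transaction header
                            uint32_t execute_after_delay_sec,   // implied by execute_after
                            uint32_t authorization_delay_sec) { // required by check_authorization
   return std::max({delay_sec, execute_after_delay_sec, authorization_delay_sec});
}
```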
@wanderingbort just needs to make changes to bill the memory for delayed transactions to the actor of the first authorization of the transaction, which is already included in PR #2042.
There were already unit tests of the delay feature, which were adapted in PR #2084 to work with the change to `delay_sec`, but because of the changed semantics it would be good to have additional tests of this feature. (ATC TBD)
Related to #1022.