Replies: 11 comments 21 replies
-
Would be useful for enabling soul-bound tokens, e.g. chaining together a CryptoTransfer and a TokenFreeze transaction.
-
hip: <HIP number (this is determined by the HIP editor)>
-
Looking at abstracts in other HIPs and trying to mirror them, what do you think about the following for an Abstract? "This HIP defines a mechanism to execute atomic transaction chains such that a series of transactions depending on each other can be rolled into one transaction that passes the ACID test (atomicity, consistency, isolation, and durability)." Something like this.
-
I like the idea behind this. I have been thinking of something similar, which I was calling "BatchTransaction". All transactions are atomic, and in the name I wanted to capture the notion of a batch of transactions all being submitted together. Not sure if I like "Batch" better than "Chain", but "BatchTransaction" is short and keeps to our convention of ending transaction message names with Transaction. What do you think?
-
Another challenge to think through is that there is a 6K limit on the size of any gRPC request. This limit can be raised, but only with careful consideration and not very far (i.e. it will never be in the megabytes). There are a few reasons for this.

First, we need to protect against a DOS attack. If a legitimate gRPC request can be 1MB, then a bad gRPC request can also be 1MB, and we won't know the difference between the two until we have gone through the effort of decoding the protobuf. It might even be valid protobuf up until the very last byte and then turn out to be bogus. An attacker can attempt a DDOS attack by sending many smaller requests, but those will be interleaved with legitimate requests, whereas if they can send fewer massive requests, they can squeeze out legitimate requests more easily.

Second, these transactions are batched up into events and gossiped. Big events mean higher latency in gossip, and if we support transactions (and by extension events) that are too big, then our finality and e2e latency times may go past our SLA.

Keys make up a significant portion of the size of any given transaction. If each tx in the batch has its own sigmap, and the sigmap is the same for many transactions, then we will hit the size limit very quickly -- maybe after only a few transactions per batch. The payer is also repeated, and each "inner" transaction carries other metadata that is repeated. Is it all necessary?

Which brings me to another point: we need to think about what the maximum number of transactions per batch will be. This is a difficult number to arrive at objectively, unless we work backwards from whatever our tx size limit should be and from whether we want each tx in the batch to be independent or not.
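A back-of-envelope sketch of the size pressure described above. The byte counts here are illustrative assumptions, not Hedera's actual protobuf sizes: the point is only that a per-transaction sigmap eats the 6 KB budget quickly, while sharing one sigmap across the batch buys noticeably more room.

```python
# Back-of-envelope sketch (illustrative byte counts, NOT Hedera's
# actual protobuf sizes): how many inner transactions fit in one
# 6 KB gRPC request under different signature-map layouts.
GRPC_LIMIT_BYTES = 6 * 1024

def max_inner_txs(body_bytes, sigmap_bytes, metadata_bytes, envelope_bytes=200):
    """How many inner transactions of the given sizes fit in one request."""
    per_tx = body_bytes + sigmap_bytes + metadata_bytes
    return (GRPC_LIMIT_BYTES - envelope_bytes) // per_tx

# A 150-byte body, a 100-byte Ed25519 sigmap repeated per inner tx,
# and 50 bytes of repeated payer/metadata: only 19 transactions fit.
print(max_inner_txs(150, 100, 50))   # -> 19
# Hoisting one shared sigmap out of the inner transactions leaves
# room for noticeably more of them.
print(max_inner_txs(150, 0, 50))     # -> 29
```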
-
There is another challenge, which was mentioned by @rocketmay: thinking through the failure scenario. We have three components of the fees: node, network, and service. The node fee is collected by the node that processed your tx request and submitted it for consensus. The network fee is paid to all nodes for having done the work of consensus. And the service fee is paid to all nodes for handling your transaction.

To prevent DOS attacks where an attacker causes the network to do work it is not compensated for, the HIP needs to include language for how to calculate the service fee in the event that the batch of transactions fails. And they can fail for several reasons. The transactions in the batch may fail because they are internally inconsistent: they may have the wrong signatures, or invalid protobuf bytes, or other problems with their own structure or internal state. They may also fail because, by the time we get to handling those transactions, the throttle limits for one or more of them have been exceeded. If one or more of the inner transactions are smart contract calls, they may fail by running out of gas, which we cannot determine ahead of time. They may also fail because, by the time they execute, the working state on the node makes them invalid. For example, at the time the batch transaction was submitted some account had plenty of hbars to perform the transfers and work denoted in the batch transactions, but by the time we actually get to processing those transactions, the account no longer has enough hbars.

In all these cases, the nodes are going to do a lot of work only to find out partway through the list of transactions that one of them fails. We then need to determine how to charge the user. Do we charge them full price for transactions that we didn't actually commit to state, or do we charge them a smaller price for the prep work we had to do?

For example, if a token mint is $1, and the batch has 20 token mints followed by a crypto transfer, and all the token mints are done (but not committed to state) and the crypto transfer fails, then do we charge $20 for those token mints that we're going to roll back, or do we charge some smaller nominal fee for having prepared to do the token mints, but not having actually done them? I think we should do the latter, because the crypto transfer may have failed due to a throttle or something else completely outside the hands of the person who submitted the batch transaction, and they'd be irate if they lost $20 because a throttle was exceeded. Yet we have to charge something equivalent to the work the nodes actually did, or it is an attack vector.
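The charging question can be sketched numerically. The fee amounts and the 10% "prep" fraction below are hypothetical policy choices for illustration, not Hedera's fee schedule: the idea is to always collect the node and network fees, but charge only a nominal fraction of the service fee for inner transactions that were executed and then rolled back.

```python
# Sketch of one possible failure-fee policy (hypothetical amounts and
# prep_fraction, NOT Hedera's actual fee schedule): node and network
# fees are always charged; rolled-back inner transactions are charged
# only a nominal fraction of their service fee.
def batch_fee(inner_service_fees, failed_index, node_fee, network_fee,
              prep_fraction=0.1):
    """Fee for a batch that failed at failed_index (all work rolled back)."""
    executed = inner_service_fees[:failed_index]   # done, but rolled back
    prep_charge = sum(fee * prep_fraction for fee in executed)
    return node_fee + network_fee + prep_charge

# 20 token mints at $1 each, then the final $0.50 transfer fails:
# a $2 prep charge instead of $20 for work that was rolled back.
print(round(batch_fee([1.0] * 20 + [0.5], failed_index=20,
                      node_fee=0.01, network_fee=0.02), 2))   # -> 2.03
```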
-
Another thing we should be really explicit about in the HIP is what we are not going to do, and decide up front how this feature would grow and take shape over time. Right now, it is proposed as a simple list of transactions (yay!). But somebody, somewhere, is going to say "I'd like to have this set of transactions succeed together or fail together, and this other set to execute only if the first failed for some reason". In other words, somebody is going to propose arrays of transactions with different semantics, like "run the first batch of transactions that actually succeeds and then stop processing the other batches". We should be clear that this feature is limited in nature: it will never be as flexible as a smart contract, and it shouldn't be. It is a simple mechanism to enable some additional use cases, but it will never be capable of doing everything a smart contract can do.
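The "simple list" semantics being proposed can be sketched as plain all-or-nothing execution. The types here are hypothetical stand-ins, and the point is what is deliberately absent: no conditional branches, no fallback batches, just one list that commits in full or rolls back in full.

```python
# Minimal sketch of simple-list batch semantics (hypothetical types):
# every inner transaction succeeds, or the whole batch rolls back.
# Deliberately NO conditional branches or fallback batches.
def execute_batch(transactions, state):
    """Apply all transactions to a copy of state; commit only if all succeed."""
    working = dict(state)            # work on a copy so failure rolls back
    for tx in transactions:
        ok = tx(working)             # each tx mutates `working`, returns success
        if not ok:
            return state, False      # discard the copy: full rollback
    return working, True             # commit the whole batch atomically

# Example: the second transaction would overdraw account "a", so the
# earlier credit to "b" is rolled back too.
def credit_b(s):
    s["b"] = s.get("b", 0) + 5
    return True

def debit_a(s):
    if s["a"] < 10:
        return False
    s["a"] -= 10
    return True

state, committed = execute_batch([credit_b, debit_a], {"a": 3, "b": 0})
print(committed, state)              # -> False {'a': 3, 'b': 0}
```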
-
I broke all that feedback into separate comments so it would be easy to have threads on each individual area. Given all the different things we need to think through, we should also ask ourselves whether this is a feature we want to add, or whether what we really need are more system contracts that can be called through a smart contract and let that be the mechanism for this kind of thing. It is far more powerful, so it will scale from simple use cases to very complex ones, whereas this feature will not scale in such a way. To be honest, I'm not sure.
-
Folks, this is still not ready. The protobuf is unworkable. If you don't want to hurt the service devs and the SDK devs, you need to create an
-
So, how does this affect the rate limit on HAPI? If I submit the maximum allowed number of create-account transactions, what does that do to the rest of the network?
-
Just bumping this excellent HIP and looking forward to it coming to fruition; is there any ETA? BTW, I can imagine a dynamic market in layer 2 services building on this, e.g. forecasting transaction execution status, tracking the reputation of signers, etc.
-
Opening up the discussion around atomic transaction chains on Hedera.
From @se7enarianelabs:
"Example:
TokenUnfreezeTransaction -> TransferTransaction -> TokenFreezeTransaction
The whole transaction chain should be executed atomically.
This would allow the creation of more complex flows that must occur in sequence, without using smart contracts, and listening to mirror nodes."
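A hypothetical sketch of what submitting that chain might look like. The `BatchTransaction` class below does not exist in the Hedera SDKs at the time of this discussion; the names are illustrative placeholders for the real transaction types quoted above.

```python
# Hypothetical sketch only: no BatchTransaction exists in the Hedera
# SDKs at the time of this discussion. Inner transaction names mirror
# the real SDK transaction types quoted in the example above.
class BatchTransaction:
    """Illustrative all-or-nothing container for a chain of transactions."""
    def __init__(self):
        self.inner = []

    def add(self, tx_name):
        """Append an inner transaction; fluent style like the SDK builders."""
        self.inner.append(tx_name)
        return self

batch = (BatchTransaction()
         .add("TokenUnfreezeTransaction")   # make the frozen token movable
         .add("TransferTransaction")        # move it
         .add("TokenFreezeTransaction"))    # lock it again, atomically
print(batch.inner)
```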