CIP-0128? | Preserving Order of Transaction Inputs #758
Conversation
Co-authored-by: Adam Dean <63186174+Crypto2099@users.noreply.github.com>
This proposal would benefit from comments from the IOG Plutus and Ledger teams 🙏
I don't fully understand the use-case. Why is the list of indices necessary? Can you not filter the set of inputs by their redeemer? Is that too slow?

Also, how does having a list of inputs fix this issue? If your script expects all relevant inputs at the beginning of the list, it still needs to check that this is in fact the case, which should be almost as expensive as filtering them. So if you need a list of indices without duplicates, the easiest way to check this is by also requiring the list to be sorted (failing if it isn't); then it can be done easily in linear time. Is that maybe the problem? Do you want a list of indices that selects a sublist in an arbitrary ordering?

Like @michaelpj, I also don't understand the two points in the Alternatives section. The first looks like it may relate to some of the things I mentioned, but I'm not sure about that.
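The linear-time check mentioned above could look like the following plain-Haskell sketch (not on-chain Plutus code; `strictlyAscending` and `selectAscending` are illustrative names, not anything from the CIP): a strictly ascending index list is both sorted and duplicate-free, and the selection can then be done in a single pass.

```haskell
-- Sketch (plain Haskell, not on-chain code): a strictly ascending index
-- list is sorted and has no duplicates, checkable in linear time.
strictlyAscending :: [Integer] -> Bool
strictlyAscending xs = and (zipWith (<) xs (drop 1 xs))

-- Select the elements at the given strictly ascending indices in one
-- linear pass over the list.
selectAscending :: [Integer] -> [a] -> [a]
selectAscending indices xs = go indices (zip [0 ..] xs)
  where
    go [] _ = []
    go _ [] = []
    go (i : is) ((j, x) : rest)
      | i == j    = x : go is rest
      | otherwise = go (i : is) rest
```

A redeemer carrying such an index list could be validated and consumed without ever re-sorting or re-scanning the inputs.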
In general, I am in favor of this change for reasons unrelated to how Plutus sees those inputs. I haven't gone through the CIP in detail, since that is unfortunately not going to be on our priority list until the next era. However, here is some quick feedback. First of all, it should not be a list of inputs but an ordered set of inputs, because we can't allow duplicates in that field. Furthermore, this is a fairly complicated topic in general and we'd have to be very careful in order to get it right, because:

So, as the CIP currently stands, it is definitely far from how it should be addressed.
The benefit of being able to control the ordering of the inputs is not to move everything to the beginning of the inputs list. It is to be able to order the elements with respect to how they will be processed, so that a validator does not need to traverse the tx inputs multiple times to find all the inputs it needs (and sort them into the order that the validator needs them in). Additionally, having full control over the order of the inputs vastly simplifies the following design pattern and makes it accessible to DApps that do not have extremely specialized (proprietary) offchain tooling:
The above script is extremely inefficient. For every input and output we are searching for, we apply expensive checks to each element in the inputs/outputs until we find the elements that pass the checks. It can be vastly improved to:

```haskell
validatorB :: AssetClass -> BuiltinData -> (Integer, Integer, Integer) -> ScriptContext -> Bool
validatorB stateToken _datum (inputIdx, outputIdx, authIdx) ctx =
  let ownInput  = elemAt inputIdx inputs
      authInput = elemAt authIdx inputs
      ownOutput = elemAt outputIdx outputs
  in
       (assetClassValueOf stateToken (txOutValue (txInInfoResolved authInput)) == 1)
         -- check that the element at authIdx does indeed hold the auth token
    && (ownOutRef == txInInfoOutRef ownInput)
         -- check that the element at inputIdx is indeed the input being unlocked
    && (criteria ownOutput)
         -- check that the output at outputIdx does indeed satisfy the criteria
         -- required to unlock ownInput
  where
    txInfo  = scriptContextTxInfo ctx
    inputs  = txInfoInputs txInfo
    outputs = txInfoOutputs txInfo
    Spending ownOutRef = scriptContextPurpose ctx
```

```haskell
findOutputWithCriteria :: ScriptContext -> Maybe TxOut
findOutputWithCriteria ScriptContext{scriptContextTxInfo = TxInfo{txInfoOutputs}} =
  find criteria txInfoOutputs
```

You can read more about the above:

This whole design pattern is designed to take advantage of the

The issue is that without this CIP, this design pattern is extremely difficult to implement because of the complexity it introduces to offchain code. There is currently no open-source off-chain transaction framework that is capable of taking advantage of this design pattern. The issue that you run into when you attempt to use this pattern with existing offchain frameworks is as follows: when building the tx offchain, you search through the inputs and add the indices of the inputs you are looking for to the redeemer (in the above example we pass the redeemer as `(inputIdx, outputIdx, authIdx)`).

Filtering by redeemer is far too expensive. Right now, nearly every smart contract protocol on Cardano has a RequestValidator: the validator that users interact with. A user sends a UTxO to the request validator, and the datum of that UTxO describes the action that the user wants to perform (and thus the action that the protocol is allowed to perform with that UTxO). Technical users run bots to continuously process these UTxOs in bulk (in some protocols this action can only be performed by a permissioned actor, whereas in others anyone can perform it). For these protocols, the number of

The criteria required to process a request depends on the content of the datum of the request. For instance, for a request UTxO for any of the major DEXs, the most common request type is swap.
The criteria to process a swap request is: for each request UTxO in the inputs (i.e. being processed by the transaction), there must be a corresponding output that pays out the requested amount of tokens (or an amount within some acceptable slippage percentage of that amount, in which case the allowed slippage percentage is also present in the request UTxO) to the address provided in the request UTxO's datum.

A corresponding output that fulfills the request is not in and of itself sufficient. To process a request UTxO, you must also modify some global state (global in the sense that it is shared across all request UTxOs in the transaction, i.e. a pool UTxO) correctly based on the content of the request.

In general, the criteria for processing any request UTxO (of any type, on any DApp on Cardano) typically requires that there be one or more outputs (often just one) that correspond to the request UTxO; these outputs "fulfill" the request and are commonly referred to as "destination outputs" or "payouts". When processing requests in bulk, this means each request UTxO in the transaction inputs needs to be matched with one or more "destination outputs" / "payouts" in the transaction outputs. Because each request has some impact on a shared state (i.e. the pool UTxO), the order in which requests are processed is important, and DApps need to be able to control this efficiently. The common way to "match" these request UTxOs with the outputs that fulfill them is to create two lists:
Then we traverse the lists together and check that the conditions required to fulfill each request are indeed fulfilled by the corresponding UTxO in the payouts list, and that the pool state is adjusted properly with respect to each request.

Currently, locating this list of payouts is easy: we provide an index in the redeemer to the location of the first payout and then grab n elements starting at that index in the tx outputs list, where n is the number of requests. This is linear time.

The problem of creating the request UTxOs list is much harder. We cannot control the order of the tx inputs, so we cannot enforce that the first request UTxOs to be processed come before the other request UTxOs in the list. This means we either have to traverse the tx inputs list once for each request UTxO, making the time complexity quadratic, or we have to collect all the request inputs and then sort them, which is also quadratic. This problem (of matching inputs to corresponding outputs) is broadly described in the following CPS:
Yes, that would be great to have on top of this. But this will still be more efficient, since we can group the related elements together, provide the index of the first element, and then grab the N elements starting from there.
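The group-and-slice approach described above can be sketched in plain Haskell (not on-chain code); `fulfills` is a hypothetical application-specific predicate standing in for the real per-request criteria, and the payouts are assumed to be contiguous starting at a single redeemer-supplied index:

```haskell
-- Hypothetical per-pair criterion; a real DApp would compare datums,
-- addresses, and values here. Here: the payout must cover the request.
fulfills :: Integer -> Integer -> Bool
fulfills requested paid = paid >= requested

-- Grab n contiguous payouts starting at the indexed position and check
-- each request against its corresponding payout in a single linear pass.
processBatch :: Int -> Int -> [Integer] -> [Integer] -> Bool
processBatch firstPayoutIdx n requests outputs =
  let payouts = take n (drop firstPayoutIdx outputs)
  in  length payouts == n && and (zipWith fulfills requests payouts)
```

With order preserved, the quadratic matching step collapses into this single `zipWith` traversal.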
I'm sorry, I struggle to see what problem this CIP solves. The inputs are ordered on the ledger, but don't have to be in the transaction. The Plutus script context is constructed based on the order of the inputs in the CBOR, not on the ledger. If a transaction is built with a certain order, the order will be preserved in the script context, and will be ordered on the ledger (but that doesn't matter for Plutus), so the property you are looking for is already there. Perhaps the tools you are using to build transactions will order the inputs, but that is not necessary; plu-ts-offchain preserves the inputs order as they are specified, and I have never had problems with the transaction ordering.

P.S. Redeemer indexing is handled by the ledger, so the redeemer index will need to be the one of the sorted set, not the one of the order of the transaction CBOR.
That is actually not true. The order in which inputs or redeemers are placed into the transaction is not preserved; it will always be sorted by the ledger. This is a problem that we would like to fix in the ledger, and a way to solve it would be to preserve the order of inputs in which they were placed on the wire.
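The point above can be illustrated with a small sketch: the ledger decodes the inputs into a set keyed by output reference, so scripts see them in canonical (sorted) order regardless of wire order. `TxIn` below is a stand-in pair, not the real ledger type:

```haskell
import Data.List (sort)

-- Stand-in for an output reference: (transaction id, output index).
type TxIn = (String, Integer)

-- Set semantics: inputs end up sorted lexicographically by
-- (txId, index); the order they had on the wire is lost.
canonicalize :: [TxIn] -> [TxIn]
canonicalize = sort
```

Whatever order the builder chose, `canonicalize` yields the same result, which is why redeemer indices must refer to the sorted set.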
I'll run some test transactions to make sure my understanding is correct
@lehins that is in fact true, I was wrong; thank you for your correction. At this point, though, it is strange that a transaction whose inputs are not ordered succeeds phase 1. I believe either this CIP should be implemented, or else some restriction should be added in the ledger for consistency.
Ok, so from the above discussion it seems to me like this would really solve the following two problems:
I think these are somewhat good points, but for the sake of completeness, here are some counterarguments to those points:
Now maybe this is one of those cases where a specific use-case trumps somewhat abstract arguments. I still consider the case for this somewhat weak, but I think I'm now at the point where I'd regard it as a nuisance rather than an anti-feature.

For completeness, let me repeat my main argument against preserving the order: it will likely make future scripts completely uninteroperable. It is likely that every script will simply assume that the inputs it cares about will be at the beginning, and two such scripts will be unable to run at the same time, even if it would have been perfectly fine to do so otherwise. For example, I know that (at least some versions of) the MuesliSwap script don't care at all about other inputs, so any user can do swaps more efficiently by combining them with some other transaction if they want to. I think that implementing this CIP closes the door on that opportunity for future scripts.
I would add that it could turn out to be incredibly useful to extend the CIP to reference inputs, which are also a set in babbage.cddl and hence ordered in the script context.
I think this argument proves too much, since the same argument applies today to outputs. Very many scripts care about outputs; if your argument held, they would all just assume today that their outputs come first, rendering them non-interoperable. I think this means that making inputs a list can't make things much worse.
I think it makes sense, I'll update the cddl accordingly |
I just wanted to chime in on this proposal and mention that the Hydra protocol would benefit from this too, as we have a quite constrained validator that is not expected to compose. Furthermore, I remember we too were quite puzzled and annoyed by inputs being "re-ordered". @lehins I see the
I've updated the CIP as we are getting close to the Conway transition, addressing the concerns from @lehins @michaelpj @michele-nuzzi @rphair
thanks @solidsnakedev ... have put on CIP meeting agenda for Review next time; hope you can make it (cc @michele-nuzzi @MicroProofs): https://hackmd.io/@cip-editors/93 |
@WhatisRT I don't think this is true. I think this misses the economic utility that is possible with DApp composability.

Imagine there was an options contract you wanted to buy, but the sale price was in WMT while you only had DJED. You have your assets in DJED to protect yourself from market volatility. If you had to first convert your DJED to WMT in one transaction and then buy the options contract in another, you are exposing yourself to the market volatility of WMT while you wait to see if you can actually get the options contract (before someone else buys it). What if someone beats you to the options contract after you converted your DJED to WMT? You now need to convert it back to DJED, and you likely lost money (due to the tx fees + DApp fees + market volatility).

The above scenario is entirely avoidable if you just compose converting DJED to WMT with buying the options contract. If the options contract is bought before your transaction is processed, your composed transaction will fail due to the options contract UTxO being missing (no collateral is lost either, since no scripts need to be run). This also means your DJED wasn't unnecessarily converted to WMT. Composing the actions guarantees that your DJED will be converted to WMT if, and only if, you successfully buy the options contract. The risk of loss in the case where you don't get the options contract is entirely eliminated by DApp composability.

Risk management plays a huge role in economics (and regulations), and I think DeFi DApps that compose will out-compete those that do not. If corporations are also going to eventually use DeFi, they need to be able to manage their economic risk as much as possible. For the big players (governments, corporations, etc.), throughput is nowhere near as important as risk management.

Another example that doesn't deal very much with risk management is the ability to unify the liquidity across all stablecoins.
Currently, if you have DJED but need USDC to buy something, you need to take the same two-step approach as in the previous example. The reason is that, despite both DJED and USDC effectively being USD, smart contracts cannot securely know this and therefore must treat them differently. As a consequence, the USD liquidity in DeFi is fractured across all stablecoins.

But what if you could convert DJED to USDC (for a slight conversion fee) in the same transaction where you buy the item? If you could, the liquidity would no longer be fractured across all stablecoins; the ability to compose converting them with other DApps means DeFi would effectively have a single "meta" stablecoin. And again, the conversion would only happen if you successfully buy the item at the end of the composed chain of actions. DApps that sacrifice composability in the name of throughput will be cutting themselves off from this "meta" stablecoin liquidity.

IMO, DApp composability is the killer feature for eUTxO. You can compose 5-10 separate DApps on Cardano right now; AFAIU this would be prohibitively expensive on an account-style blockchain. I think this composability will make things possible in DeFi that aren't even possible in TradFi. I'm sure some DApps will sacrifice composability for throughput in the short term (like they are currently doing), but I think the economic utility strongly favors composable DApps, to the point where future DApps will prioritize composability.

I am personally in favor of this CIP because I think it will actually help composability in certain scenarios. For example, for some of my protocols, the order of the protocol's required outputs depends on the order of the protocol's inputs. The only thing that matters is the order of the protocol's inputs/outputs: other inputs/outputs can be interspersed, and the required inputs/outputs can appear at any point in their respective lists.
(This is for throughput reasons, since it allows me to traverse the inputs and outputs lists only once each.) Right now, I can't control the order of the inputs, which means I can't control the order of the outputs. This doesn't stop me from being able to compose my protocols, but if there was another DApp whose input/output requirements were more strict (for whatever reason), that DApp would be more likely to compose with my protocols if the order of the inputs could be controlled (and therefore the order of the outputs could also be controlled). Even if the customized ordering is less efficient overall, composing the actions in a single transaction can actually save the end-user money on net, and decrease their overall economic risk.
@fallen-icarus I fully agree that composability is a fantastic feature, but that's the point I was making: the average script would be less composable with this feature. The cheapest way to get the inputs for your script is just to require that they are at the beginning of the list. So if you want your script to be composable, you now have to argue that it's better to linearly search through the list of inputs (or maybe include that information in the redeemer), both of which increase execution cost. So now the cheapest way to implement your script is non-composable.
This is what I was trying to argue is false. I think you are thinking about a specific script in isolation which I do not think is realistic. If Alice is considering buying an options contract with an asset she doesn't have, what matters to her is the total cost of the high-level action (DEX conversion + options purchase). Whether this requires one transaction or two transactions is (mostly) irrelevant to her. In an extreme case, if Alice chooses to sacrifice composability, she could possibly save 1 ADA in execution costs for this high-level action. But since she couldn't compose, she is now subject to market volatility which could easily be 5%. If 1000 ADA was involved, the total extra cost to Alice is the 50 ADA volatility loss. However, if she chooses to compose, she pays the extra 1 ADA in execution costs, but doesn't have to deal with the market volatility so the total extra cost to her is just the 1 ADA in extra tx fees. Alice saves 49 ADA by composing! To break even would require market volatility of 0.1% which is extremely likely to be exceeded by pretty much all trading pairs... The math seems to clearly favor composable DApps. The cheapest way for end-users to use DApps is to compose them, even if individually the DApps are slightly more expensive.
I don't think you need to argue anything. The market will naturally punish (through increased costs) those who choose not to compose DApps. The same goes for DApp developers who sacrifice composability. Users will gravitate towards DApps that allow them to accomplish their high-level actions more cheaply, which likely means composable DApps.
Well, you're making the argument that users want composable scripts, but the question is whether authors will make them. Writing and auditing a script is very expensive, and script authors might not care much about composability themselves. Also, users might simply not have the choice here: if somebody wants to execute a particular script, and it happens to be non-composable, then they only have the choice of not using it. Maybe they'll complain to the script author, but what's the realistic chance that the author will spend a bunch of extra money and effort on making a composable version? And will it be adopted properly? There are still massive amounts of Plutus V1 transactions being made.

So I think it's unlikely that market forces are strong enough to ensure composability. Companies really like to make walled gardens for all sorts of things, and lots of people dislike it, but the dislike is clearly not strong enough to put pressure on them. I don't see why it would be different here.
The last CIP meeting was in favour of giving this a CIP number sooner rather than later... as I recall, changes (like this one) requiring a hard fork are generally considered around the time of the previous hard fork so it seems a good time to robustly discuss this idea.
@solidsnakedev one of the things that should be discussed & resolved early on is the question not only of the now oppositely-sensed title but also of the changes in the text that would be required to resolve this crucial ambiguity (#758 (comment)).
Please change the directory name to CIP-0128
and update the link that points to the rendered version of your proposal with the new pathname. 🎉
Yes, the author of this CIP wants composability. This CIP in general actually helps facilitate composability, and I don't think that any developers in the ecosystem willing to invest money and development hours to get an application to mainnet and pay for an audit would sacrifice composability (a powerful feature that attracts liquidity and users) to save an extremely small amount of ex-units by enforcing that the expected inputs must be at the beginning of the list. If there was any demand to make that trade-off, then the DApps that are currently on mainnet today would already be doing this for the outputs. The amount of ex-units you save by enforcing that all inputs relevant to validation must appear at the beginning of the inputs list is completely negligible.

What developers actually care about is that the ordering of inputs can be preserved. In practice, what you will see is that developers will require that relevant inputs form a contiguous sub-list within the tx inputs list, and it will not matter where this sub-list starts, because the start of the sub-list will be indexed via the redeemer. (This is a design pattern that is already in practice today in nearly every major DApp protocol, except it is a thousand times more error-prone because the redeemer must contain a list of indices representing the position of each relevant input in the canonical ordering, instead of just the start of the contiguous, already-ordered sub-list.) The cost of indexing the start of the relevant inputs is extremely trivial.
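The contiguous sub-list pattern described above could be sketched as follows (plain Haskell, not on-chain code; all names are illustrative): the redeemer carries one start index, and the script verifies that the slice is complete and begins with the input being spent:

```haskell
-- Slice n contiguous inputs starting at `start` and verify the slice is
-- complete and begins with the input being spent (`ownRef`). Returns the
-- slice for further per-element checks, or Nothing on failure.
contiguousInputs :: Eq a => Int -> Int -> a -> [a] -> Maybe [a]
contiguousInputs start n ownRef inputs =
  case take n (drop start inputs) of
    slice@(first : _)
      | length slice == n && first == ownRef -> Just slice
    _ -> Nothing
```

A single start index replaces the error-prone list of per-input indices, because the sub-list's internal order is already the order the validator expects.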
Co-authored-by: Alexey Kuleshevich <lehins@yandex.ru>
Co-authored-by: Alexey Kuleshevich <lehins@yandex.ru>
Co-authored-by: Alexey Kuleshevich <lehins@yandex.ru>
Co-authored-by: Robert Phair <rphair@cosd.com>
@rphair I’ve made the updates based on the feedback. Please let me know if there’s anything else that needs attention.
thanks @solidsnakedev - I think all the points have been addressed; we don't have much CIP bandwidth now so I've put it on the agenda again with updated title & assigned CIP number, so hopefully can move this to cc @Ryun1 @Crypto2099
Pending categorisation as per #758 (comment) the CIP meeting has decided this should be Last Check
for the next meeting (https://hackmd.io/@cip-editors/95): since the construction & validity of the proposal itself were considered satisfactory. I'll ✅ this as soon as the pending conversations are resolved.
Co-authored-by: Ryan <44342099+Ryun1@users.noreply.github.com>
Co-authored-by: Robert Phair <rphair@cosd.com>
Co-authored-by: Ryan <44342099+Ryun1@users.noreply.github.com>
Just went back and checked that all previously raised issues have been resolved... mainly the confirmation of the Ledger
category. Looking forward to seeing this merged at next CIP meeting unless there are any further reservations.
This CIP is desired by Plutus developers to improve the validation efficiency of transaction inputs.
We propose the introduction of a new structure for transaction inputs aimed at significantly enhancing the execution efficiency of Plutus contracts.
This CIP facilitates explicit ordering of transaction inputs, diverging from the current behavior in which the ledger sorts them canonically. This explicit ordering enables script inputs to be arranged in the order in which the application's business logic will use them.
A similar idea was previously discussed in an older CIP:
#231
Rendered Version