Better communication/clarity on phase-1 validation and serialization criteria #2941

Open · 6 tasks
peter-mlabs opened this issue Jul 28, 2022 · 20 comments

@peter-mlabs

Overview

A perennial issue on a team I manage has been understanding what exactly comprises phase-1 validation. I would like to see a clear, plain-English listing of the criteria that a script context or transaction must satisfy in order to actually function on mainnet.

Full disclaimer: I've not fully read the specs or the implementation myself, and this issue is, in part, a request for a resource to which I can direct developers in lieu of a code-dive or spec-read.

Current Understanding

The consensus among my coworkers is that:

  • Phase-1 validation is "everything besides script execution" -- which is too vague to actually build off of.
  • Phase-1 validation is specified in the formal specifications -- which is too much of a deep dive.
  • Certain invariants/constraints are ensured by serialization, and these are neither explicitly specified nor readily apparent in the code base. These also appear to be defined on a Tx type rather than a TxInfo, meaning there are a lot of extra things that would be unsuitable for simply validating a TxInfo or ScriptContext.
  • Alternate implementations (even if the behavior is technically "specification compliant") would be considered invalid by IO, meaning there may be some de facto "standard" behavior that is not specified.

Our use case

The reason we need this is to be able to construct script contexts in Haskell and know whether they are realistic relative to what we would see in practice. My understanding is that tools like EmulatorTrace have failed to do this sufficiently -- failing to completely normalize Values, for example. This leads to situations where developers are unsure whether they need to check for phase-1 validity in phase-2.

We would, in particular, like to have a set of checks like isValueNormalized, isTransactionBalanced, areOutputsSorted, and so forth, that we can combine into something that corresponds to isPhase1Valid (see the sketch after this list). The final property should be that isPhase1Valid is true if and only if a given transaction/script context can

  • A.) Be constructed by the actual implementation
  • B.) Pass through to phase 2 validation
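
To give a flavor of what we mean (not a proposal for actual API names), here is a minimal sketch of one such check, assuming the PlutusLedgerApi.V1 module layout and that Value wraps a PlutusTx.AssocMap of AssocMaps:

import qualified PlutusTx.AssocMap as AssocMap
import PlutusLedgerApi.V1.Value (Value (..))

-- One possible reading of "normalized": the outer and inner maps are
-- strictly ascending (sorted, no duplicate keys) and carry no zero
-- quantities. (Whether a zero ADA entry should instead be required is
-- part of what this issue is asking to pin down.)
isValueNormalized :: Value -> Bool
isValueNormalized (Value outer) =
    strictlyAscending (map fst entries) && all innerOk entries
  where
    entries = AssocMap.toList outer
    innerOk (_, inner) =
      let inner' = AssocMap.toList inner
      in  strictlyAscending (map fst inner') && all ((/= 0) . snd) inner'
    strictlyAscending xs = and (zipWith (<) xs (drop 1 xs))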

We'd be happy to write these if needed, but they would ideally be provided for us. It's insufficient to fully serialize a transaction using the various utilities available, because we want to be able to define partially-invalid script contexts at certain stages in the development process, but also understand which validation checks would fail on a given object.

Finally, we'd want to partially auto-generate script contexts, possibly exhaustively within certain bounds, so we'd want these checks to be fast. Full serialization would ostensibly be much slower than a pure isPhase1Valid function, and partial validation would certainly be faster than either if we only cared about a subset of checks.

Ideal Outcome

  • Detail exactly what checks are performed during phase-1 validation, in plain English, with back-references to implementation/spec if applicable, and add this to the "Ledger Explanations" webpage
  • Detail exactly what invariants emerge during serialization, and to what extent these are considered mandatory -- i.e., whether alternate implementations SHOULD or MUST uphold these conditions -- and add this to the "Ledger Explanations".
    • If alternate implementations are required to uphold these invariants and if they are not listed in the formal spec, add them.
  • Provide helper utilities and tests that developers can utilize to build libraries such as those described above
  • Add these checks to EmulatorTrace and other IO utilities.
  • Test these functions/findings against the actual implementation.
@JaredCorduan
Contributor

Hi @peter-mlabs , thanks for bringing this up; this is a great topic.


What is "phase 1 validation"

The definition really is "everything besides script execution", but that's not vague at all. If you look at the Alonzo specification, Figure 9, the line:

evalScripts tx sLst

is what we mean by phase 2. The rest of the spec is phase 1. The specs contain a lot of plain English prose describing everything, but please let us know if you think something is missing. You say that the specs are too much of a deep dive, but they are our best effort at explaining the ledger validation. For folks creating transactions (as opposed to blocks), the topmost rule for validation is the LEDGER rule from the specs. Any failure in this rule, except for evalScripts tx sLst, is a phase 1 failure. Most Plutus developers can probably ignore the stake delegation and governance rules, leaving just UTXOW, UTXO, and UTXOS (see Figure 78 of the Shelley spec for the rule hierarchy, but note that the Shelley spec has no UTXOS rule). Moreover, you may not care much about UTXOW (the checks surrounding the authorizations of the transactions), and since UTXOS is mostly just the phase 2 check (it also calls a governance sub-rule), perhaps the UTXO rule is nearly all you care about.
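
Roughly sketched (per Figure 78 of the Shelley spec, with the Alonzo-era UTXOS rule added), the relevant slice of the hierarchy is:

LEDGER
  ├─ DELEGS  (stake delegation / governance rules; often ignorable here)
  └─ UTXOW   (authorization and witness checks)
       └─ UTXO   (most of the phase-1 checks discussed in this thread)
            └─ UTXOS  (mostly the phase-2 evalScripts check)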

I am myself unfamiliar with the "Ledger Explanations" webpage and EmulatorTrace. Given the time, I've dreamed of writing a ledger wiki myself, with high-level explanations that I could give to, for example, a new ledger team member. Could you provide me a link to what you mean by the "Ledger Explanations" webpage?

Invariants/constraints ensured by serialization

This is a very interesting topic, and one that comes up a lot when we make the rules. We do have a wire specification for every ledger era (see the table at the top of this repo's readme; the CDDL column provides links to all the wire specs), but sometimes our schema cannot be fully captured by CDDL. We try to list these constraints as comments in the CDDL file, but I'm sure there are huge gaps. Filling in these gaps sounds like a great idea to me.

I'm confused, though, why you mention validating TxInfo or ScriptContext. Those are not really validated in any sense I can figure; they are constructed by the ledger. Perhaps you are after "the serialization constraints which affect the script context"?

Alternate implementations considered invalid by IO due to behavior that is not specified.

Our goal has always been that the formal ledger specs, together with the wire specification, provide all the details needed for an alternate implementation. We may fall short of this goal, but that's always been the motivation. If you find gaps, I would be thrilled to get github issues for them. Having an alternate implementation would be extremely healthy, as we'd have more confidence in how well we've achieved our goal.

Set of checks

Since phase 1 validation is the bulk of this codebase, this is quite a task. That said, we do have a lot of re-usable code, and maybe in the future we can try to steer things towards re-usable checks for Plutus developers. For example, here is a hunk of the Alonzo UTXO rule, which probably includes several of the "phase 1 checks" that you care about:

https://github.com/input-output-hk/cardano-ledger/blob/dd4b6e31a05e3c02c6f00f0d53cf20e3b8b74468/eras/alonzo/impl/src/Cardano/Ledger/Alonzo/Rules/Utxo.hs#L533-L553

@asutherlandus

Regarding EmulatorTrace: there has been significant effort in the last couple of months to make it more faithful to the actual ledger rules. Previously it had a separate implementation that had diverged from the ledger, and we know how much trouble this caused developers. We have changed it to use the code directly from the ledger to make sure it is faithful to the behavior on chain. It's probably not perfect yet, but hopefully much better than before.

I realize the response to github issues has been spotty, but we really do want feedback on problems with the emulator. We have increased staffing and are trying to stay on top of the github issues.

@peter-mlabs
Author

Thank you both for your prompt responses.


The specs contain a lot of plain English prose describing everything, but please let us know if you think something is missing. You say that the specs are too much of a deep dive, but they are our best effort at explaining the ledger validation.

Understood. I'll be taking a closer look at these myself, but it's been a perennial re-hashing: any time someone's been tasked with "figuring out what phase-1 is", it's been a struggle. Perhaps after I've had a look myself, I'll have more direct insight into what, if anything, could be improved.


Could you provide me a link to what you mean by the "Ledger Explanations" webpage?

Just the Read The Docs page here (linked in the README of this repo). I think it would be a logical place for saying things like "Values are normalized" and explaining exactly what that means -- with respect to non-duplicate entries, sorted entries, always containing an ADA entry, etc.


I'm confused, though, why you mention validating TxInfo or ScriptContext.

I am aware -- and fully agree -- that this terminology is imprecise, but I don't have a great vocabulary to distinguish my intended meaning. What I mean by "validating a TxInfo" or "validating a ScriptContext" is:

  • partitioning the set of inhabitants of those types into ones that could actually be seen on-chain during script execution and ones that couldn't,
  • having a function (for example) isPhase1Valid :: TxInfo -> Bool that returns true if it can be seen during phase 2 validation, and false otherwise.

For instance, including an invalid currency symbol in a transaction would produce a TxInfo that is invalid. A txInfoMint that doesn't actually reflect the delta between inputs and outputs would be invalid. A TxInInfo that contains a Value with a negative amount would be invalid.

The crux of the issue, as I see it, is that there are inhabitants of the types that (in a perfect world) wouldn't be possible to construct; but we don't live in that world, and thus there are certain values of those types that are "invalid" for a given use case.

And to be explicit, the reason this is difficult to develop around is because it's misleading about what we do need to check for in our scripts. If it is legal for non-normalized Values -- which are totally possible to construct via PlutusLedgerApi.V1 -- to reach the chain, then we end up needing to verify within our scripts that the assumed normalization holds. Otherwise we end up with issues like this, where testing environments diverge from the actual ledger in ways that we may not be able to feasibly detect without a full dive into the ledger spec and CDDL wire format.

I was under the impression that doing so wasn't intended to be a prerequisite for contributing to production-worthy script development. If you disagree, and do think that the specification and CDDL should be required reading for all smart contract developers, I won't take offense and will adjust my expectations accordingly.

This extends beyond just the fully-implemented validator or minting policy level, too -- there's no reason to be quickcheck-gening invalid Values, TxInfos, or ScriptContexts in most cases, since it (ideally) shouldn't be possible to construct such values in the first place, and there's correspondingly no reason to be writing logic to guard against these cases that are impossible in practice.

Does this all make sense?

Another relevant issue from a coworker is here.

Set of checks

Continuing on the above, and forgive my vague understanding of all of this: the "phase 1 checks I care about" seem to extend beyond things that are represented in the code base as actual checks, and do indeed get into the vaguer territory of serialization. It's not so much an is-this-transaction-valid "validation" that I'm concerned with, as it is a "is this value of this type something that my plutus script could possibly ever encounter".

So I suppose that what I'm looking for may not be what is traditionally considered phase 1 validation, but still falls under the definition of "everything before phase 2". What vocabulary would you use to describe the situation here?


EmulatorTrace updates

Very glad to hear that more effort is being put into this. We've been working with (and on) various tools that were meant to fill in some of the gaps, but none have quite lived up to what we hoped EmulatorTrace could be.

We're working on PCB, and this is the area where the issues above have bitten us most recently. We're finding it useful to partially specify incomplete, obviously invalid script contexts to feed to scripts at certain stages in development. For instance, if a script should validate based only on what is contained in txInfoMint, then we don't want to force users to fully specify a valid set of TxInInfo and TxOut; and if a validator delegates its logic to another script, we don't even necessarily need the Spending script purpose to refer to a TxOutRef present as an input. But sometimes we do want all of these sorts of checks, to have a guarantee that a known-invalid ScriptContext is invalid for the particular reasons we think it is.

@JaredCorduan
Contributor

any time someone's been tasked with "figuring out what phase-1 is", it's been a struggle

It's probably true that we never explicitly state that "phase-1 failures" are all the failures except for the single phase-2 failure. It's not something that needed a formal definition in the spec, but it is language that we use a lot and I agree that that point should be explicitly stated somewhere.


Read The Docs page

Ah! That documentation is very incomplete and really only explains the multi-assets. The point about "Values are normalized" is mentioned in the spec (search the Alonzo spec for "CAUTION" to find it), or at least the ADA part. Perhaps we can add more details there.


partitioning the set of inhabitants of those types into ones that could actually be seen on-chain during script execution and ones that couldn't

So you would like to know which parts of the transaction context are in the ledger state vs those that are in the transaction? (And those in the middle, like how transaction inputs are "resolved" against the UTxO to include the corresponding outputs inside the context, for example.)


function (for example) isPhase1Valid :: TxInfo -> Bool that returns true if it can be seen during phase 2 validation

I think I am following, and I'm sorry about all your struggles. I don't think such a function is possible without changing the signature to something like isPhase1Valid :: Tx -> UTxO -> StuffImForgetting -> TxInfo -> Bool, and applying many of the ledger rules. I think the heart of the issue is:

where testing environments diverge from the actual ledger

When developing the ledger code for the Alonzo era, we were assuming that the Plutus tooling would save folks from the nuances of things like worrying about Value being in canonical form, etc.

I was under the impression that doing so wasn't intended to be a prerequisite for contributing to production-worthy script development.

I agree with you; as I said above, the plan was for the tooling to be a shield from worrying about these details.


Does this all make sense?

It does; I fully agree that there is an incredible amount of technical detail surrounding the transaction context. I'm sorry it's still not made easy by the tooling.


"phase 1 checks I care about" seem like they extend beyond things that are represented in the code base as actual checks, and do indeed get into a more vague territory of serialization.

You very well might be correct. A couple of examples, if you have them easily at hand, would be nice.


may not be what is traditionally considered phase 1 validation, but still falls under the definition of "everything before phase 2". What vocabulary would you use to describe the situation here?

I myself probably say "phase 1 validation" somewhat informally to mean any of the ledger rules except running the Plutus evaluator. And I say things like "that is guarded on the wire" referring to things you cannot do because of the logic in the deserialization. I think the Plutus team uses the same language. It's very enlightening to see the troubles you are having, and concepts that were not obviously name-worthy to us when developing the ledger for Alonzo. I myself have only written pretty trivial Plutus scripts, but I would love to do more and gain more empathy for the hard parts.

@peter-mlabs
Author

I think we're getting on the same page here!


So you would like to know which parts of the transaction context are in the ledger state vs those that are in the transaction? (and those in the middle, like how transaction inputs are "resolved" but the UtxO to include the outputs inside the context, for example).

I don't think that is what I mean.

I want to know which values/inhabitants of certain types or in certain contexts are "invalid" from either the perspective of (using your terminology) "phase 1 validation" or "guarded on the wire", and specifically why they are invalid.

For instance, (pseudo-code)

v :: Value
v = { "ADA" : 
         { "" : 0 }
    , "FOO" :
         { "BAR" : 100
         , "BAZ" : -100
         }
     }

would be a "valid" inhabitant of the PlutusLedgerApi.V1 Value type if it appeared in the txInfoMint, representing the burning of 100 tokens with currency symbol FOO and token name BAZ, and the minting of 100 tokens with the currency symbol FOO and token name BAR.

But this would be "invalid" if it was used as the value of a TxOut, since a TxOut can't carry a negative value.

This means that when we're writing scripts, and doing something akin to v' = ctx.txInfo.txInfoInputs.txInInfoResolved.txOutValue on-chain, we're never going to have v == v'; but if we have v'' = ctx.txInfo.txInfoMint, we could have v == v''.

And I want to know this, because if I'm writing a function that operates on Values that was something like

if 
  v has a negative value
then
  launch the nukes
else 
  don't

this function would be valid to include in scripts if it's called only on values like v'' (i.e., as derived from txInfoMint); but generating arbitrary TxInfos and running them through a test suite might flag such a usage as world-ending.

So your comment that

I don't think such a function is possible without changing the signature to something like isPhase1Valid :: Tx -> UTxO -> StuffImForgetting -> TxInfo -> Bool, and applying many of the ledger rules

makes sense, but I don't know if we're talking about the exact same thing here. I want to know whether, given any possible ledger state and any possible spec-compliant implementation, I'd ever launch the nukes; and I can tell, just from the fact that t :: TxOut, whether or not I will ever have negative Value entries from t.txOutValue.

I'll try to keep this issue updated as I read through the specs and possibly get some more concrete examples.

@JaredCorduan
Contributor

I think we're getting on the same page here!

yes indeed, it's becoming clearer and clearer!

I think the following might help (and I can try to get to it early next week): go through each type inside TxInfo and think about the space of values in each type, and which ones can be ruled out without all the context of the ledger state. So things like 1) blanket rules (no negatives unless minting, etc.), and 2) things forbidden on the wire. I can list this out in this issue, and maybe that could be the start of a helpful document or a helper function.

@JaredCorduan
Contributor

For reference, the Plutus V2 context looks like:

https://github.com/input-output-hk/plutus/blob/9ef6a65067893b4f9099215ff7947da00c5cd7ac/plutus-ledger-api/src/PlutusLedgerApi/V2/Contexts.hs#L78-L92

data TxInfo = TxInfo
    { txInfoInputs          :: [TxInInfo]
    , txInfoReferenceInputs :: [TxInInfo]
    , txInfoOutputs         :: [TxOut]
    , txInfoFee             :: Value
    , txInfoMint            :: Value
    , txInfoDCert           :: [DCert]
    , txInfoWdrl            :: Map StakingCredential Integer
    , txInfoValidRange      :: POSIXTimeRange
    , txInfoSignatories     :: [PubKeyHash]
    , txInfoRedeemers       :: Map ScriptPurpose Redeemer
    , txInfoData            :: Map DatumHash Datum
    , txInfoId              :: TxId
    }

These types are from the Plutus library, and the ledger translation into them. The Plutus types do protect the users from some of the details mentioned in this issue (such as whether or not a Value is in canonical form or not). I'm less familiar with all the protections in the Plutus library, but I can give you the perspective from the ledger code.

  • TxInInfo - This type is what I (and maybe this is just me) like to call a "resolved" input, ie a transaction input together with the corresponding transaction output from the UTxO. I will ignore the transaction outputs for the moment, since we will be addressing them when we get to txInfoOutputs. A transaction input is a transaction ID and an index. A priori (in other words, without looking at the surrounding ledger state, only looking at the values in the type) there are no values to rule out. The ID is a hash, anything is possible, and the index can in theory be anything.
  • TxOut - A transaction output is an address, a Value, possibly a datum or a datum hash, and possibly a script. Addresses use a custom binary format (ie not describable in CBOR) defined here. A priori, I cannot think of any values that meet the custom binary representation that we could rule out without the ledger state. We go over Value below. A priori, I also do not see any values in Datum, DatumHash, or Script that we can rule out. We can rule out, however, that an output contains both a datum hash and a datum (enforced by the wire spec).
    • Values in an output - We can rule out any negative values.
  • txInfoFee :: Value - The fee must contain only lovelace.
  • txInfoMint :: Value - The lovelace field must be zero.
  • [DCert] - I'll skip this one for now, since 1) it's rather involved, 2) I'm not sure anyone is interested in it, and 3) I suspect the answer is "there isn't anything interesting to rule out a priori" (but we can come back to this if it turns out it is relevant).
  • Map StakingCredential Integer - The integer must be non-negative, and would certainly never be bigger than the total number of lovelace. The stake credential can a priori be any value in the type.
  • POSIXTimeRange - The lower bound could be anything at or below the upper bound. If you knew the current POSIX time, you could throw out any value above the current time plus the time horizon (I'd need to look up what that is; I think it is something like 3k/f-many slots).
  • [PubKeyHash] - We can't rule out anything here without seeing the transaction.
  • Map ScriptPurpose Redeemer - We can't rule out anything here without seeing the transaction, except for the empty map.
  • Map DatumHash Datum - This one is perhaps a bit tricky. We do require that the datum hash be correct (ie if you hash the datum, you get the corresponding key in the map), but this depends on how the datum is serialized (CBOR is not unique).
  • TxId - anything goes (as mentioned above in TxInInfo).

I think the TLDR here is that Value is the type that contains nearly all of the checks that y'all want.
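
To make the flavor of these a-priori checks concrete, here is a sketch of two of them over the V2 context types (the function names are hypothetical, and I'm assuming the plutus-ledger-api module layout):

import PlutusLedgerApi.V1.Value (adaSymbol, adaToken, flattenValue, valueOf)
import PlutusLedgerApi.V2 (TxInfo (..))

-- txInfoFee must contain only lovelace.
feeIsLovelaceOnly :: TxInfo -> Bool
feeIsLovelaceOnly info =
  all (\(cs, tn, _) -> cs == adaSymbol && tn == adaToken)
      (flattenValue (txInfoFee info))

-- txInfoMint must have a zero lovelace field (no minting or burning ADA).
mintHasZeroLovelace :: TxInfo -> Bool
mintHasZeroLovelace info = valueOf (txInfoMint info) adaSymbol adaToken == 0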

Did this help?

@peter-mlabs
Author

Thanks, Jared -- this is a very good start to what we're looking for.

A few comments follow below. I'm making some assumptions, so please flag if anything seems out of place -- it's entirely possible that things that I'm inferring as "today's convention" were intentionally left open-ended.


These types are from the Plutus library, and the ledger translation into them. The Plutus types do protect the users from some of the details mentioned in this issue (such as whether or not a Value is in canonical form or not). I'm less familiar with all the protections in the Plutus library, but I can give you the perspective from the ledger code.

Just to clarify, the Value type from PlutusLedgerApi.V2 does not ensure canonical form (or at least not how I understand it). This may be a better issue to file against that library, but a few notable things:

-- Non-hex currency symbols are partial; they can be constructed, but lead to 
-- exceptions when evaluated
ghci> Value $ fromList [("Not Hex", fromList [("",100)])]
Value (Map [(*** Exception: NotHexit 'N'

-- Currency symbols must have an even number of digits, or lead to a runtime exception:
ghci> Value $ fromList [("ABC", fromList [("",100)])]
Value (Map [(*** Exception: UnpairedDigit

-- Duplicate currency symbol entries are not combined
ghci> Value $ fromList [("BCDE", fromList [("",100)]),("BCDE", fromList [("",10)])]
Value (Map [(bcde,Map [("",10)]),(bcde,Map [("",100)])])

-- Duplicate token name entries are not combined
ghci> Value $ fromList [("BCDE", fromList [("",10), ("",100)])]
Value (Map [(bcde,Map [("",10),("",100)])])

Also:

  • I'm sure there are other bounds on currency symbol/token name lengths, but I don't know the details off the top of my head
  • 0 ADA entries should be present in all values, but are not (although this might be fixed already in the issue linked above)

Are you saying that there is another Plutus library with a Value type that does ensure these sorts of things? In Plutarch (which is what we use for most of our on-chain code), there are phantom types for sortedness and negative/non-negative/positive guarantees, but nothing at the PlutusLedgerApi.V* level AFAIK.


TxInInfo

Should the txOutRefId have the format of a SHA256 hash? Its type is just BuiltinByteString. This, to me, would indicate that the byte string must be of a certain length to be valid. If this is handled somewhere under the hood, what happens if a too-long or too-short byte string is passed? (Note: I could be totally misunderstanding how BuiltinByteString works.)

Shouldn't the txOutRefIdx be non-negative? Its type is Integer.
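
A sketch of the kind of structural check I mean (isPlausibleTxOutRef is hypothetical; this assumes TxOutRef wraps a TxId newtype over BuiltinByteString, and a 32-byte hash, which is confirmed later in this thread, though as blake2b-256 rather than SHA256):

import PlutusLedgerApi.V1 (TxId (..), TxOutRef (..))
import PlutusTx.Builtins (lengthOfByteString)

-- A TxOutRef is structurally plausible if its ID is a 32-byte hash and
-- its index is non-negative.
isPlausibleTxOutRef :: TxOutRef -> Bool
isPlausibleTxOutRef (TxOutRef (TxId bytes) idx) =
  lengthOfByteString bytes == 32 && idx >= 0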

Additionally, the txInfoInputs :: [TxInInfo] are sorted lexically. This is important to note, because

  • A.) It's not documented anywhere as far as I've seen, except in a github comment
  • B.) There's a CIP trying to remove this restriction
  • C.) This doesn't hold for outputs

Also, the txInfoInputs shouldn't contain duplicates.

This also can't be empty -- this was noted in one of the 30-something CIPs, because unless this is said explicitly, there wouldn't be anything preventing someone from paying fees via rewards withdrawals.


TxOut

As above, it seems like there are restrictions on "valid" Addresses, but its type is, essentially, BuiltinByteString. I don't know of any isValidAddress function, and my local hoogle searches don't turn anything like this up.

WRT datums, the OutputDatum type seems to cover the cases fine.

WRT scripts, ScriptHash is again a BuiltinByteString; the same comments as for CurrencySymbol apply.

Also, (in contrast with the txInfoInputs field), the txInfoOutputs field is not sorted.

This probably can't be empty, right? Is there some weird situations with staking, perhaps, where a UTxO containing the exact amount needed for the fee could be passed as the input and not need to list an output?

Values in an output

Must contain minAda.


txInfoFee

Also must be positive


[DCert]

Agreed that this is lower priority for now, but part of the reason nobody cares about these is because nobody understands them 😂. Some immediate notes:

  • The various data constructors end up wrapping bytestrings. I think most of these are addresses, and end up having the same considerations with SHA-256 hashes as above.
  • Staking pointers (which lack even basic haddocks, but that's another story) are a product type of three Integers. I'm pretty sure these all have to be either non-negative or strictly positive.
  • DCertPoolRetire has an Integer referring to an epoch. This should likely be positive.
  • Ostensibly, we'd never see duplicates in the list on-chain?

Map StakingCredential Integer

I don't think there's much new here that hasn't been covered above, but to be explicit:

  • There are ostensibly restrictions on shape, duplicates, and perhaps ordering of these. These may be more of a criticism of the Map type, however.

POSIXTimeRange

I've not dealt with this type much, but it is defined as type POSIXTimeRange = Interval POSIXTime, and it appears that it doesn't actually enforce that the lower bound is indeed earlier than the upper bound -- I assume this is guarded by the wire on-chain?

ghci> Interval (LowerBound (Finite $ POSIXTime 100) True ) (UpperBound (Finite $ POSIXTime 10) False)
Interval {ivFrom = LowerBound (Finite (POSIXTime {getPOSIXTime = 100})) True, ivTo = UpperBound (Finite (POSIXTime {getPOSIXTime = 10})) False}

[PubKeyHash]

Aside from the comments about hashes vs. any BuiltinByteString above, these should probably not contain duplicates. Are they ordered? Must this list be NonEmpty?


Map ScriptPurpose Redeemer

I disagree that we can't rule things out.

For ScriptPurpose, generally:

  • The Minting currency symbol has to appear in the txInfoFee value.
  • The Spending TxOutRef has to appear in the transaction inputs
  • Does the Rewarding staking credential need to appear in txInfoWdrl?
  • Does the Certifying DCert need to appear in the txInfoDCert list?

For the Redeemers, I assume that this is indeed unrestricted (assuming the constructor(s) for BuiltinData do not allow invalid terms to be constructed).

But for the map in general, I'm assuming there are additional restrictions. Off the top of my head:

  • I'm assuming that including one or more redeemers is up to the discretion of the transaction submitter, but this map should not include more entries than there are transaction inputs.
  • Do Reward and Certifying scripts actually take redeemers?
  • Is there sorting?
  • No duplicates?

Map DatumHash Datum

This does sound tricky. I'm not sure of the details, but perhaps a validation function could take a witness function or phantom type to ensure DatumHash <-> Datum coherence?
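
Something like this sketch, with the hashing function passed in as a witness and the map given as its entry list (both names are hypothetical):

import PlutusLedgerApi.V1 (Datum, DatumHash)

-- Coherence holds when every key is the hash of its datum, under
-- whichever hashing/serialization scheme the caller supplies.
datumMapCoherent :: (Datum -> DatumHash) -> [(DatumHash, Datum)] -> Bool
datumMapCoherent hashDatum = all (\(h, d) -> hashDatum d == h)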

I think I also read in the inline-datum CIP that "extra datums" could be passed. There's been some recurring folklore for newcomers that this means

"The transaction submitter can include extra data that is not associated with any UTxO in `txInfoInputs"

but what it really means (as I've been told) is that a datum that is not strictly necessary for validation can be included provided it is attached to a transaction input or output.

So the DatumHash must match something in txInfoInputs, txInfoReferenceInputs, or txInfoOutputs; and thus, there is also a restriction on the maximum number of entries in this map.

Is this map sorted?

Duplicates are probably not allowed, right?


The actual rectification of this issue might be more in the domain of PlutusLedgerApi, and I'd be happy to open an issue there as well -- but first we'd need to narrow down:

  1. Which of these are "forbidden on the wire"
  2. Which of these are blanket rules
  3. Which of these are just not having expressive enough types (i.e., BuiltinByteString)
  4. Which of these are genuinely validated (in the form of "there exists a function in the ledger that checks if a given value is valid, and rejects it if not")

And regardless of whether the PlutusLedgerApi library implements these sorts of things or not, it'd be extremely helpful to have a straight-forward list of these somewhere for alternate implementations of these types/terms. It seems like most of these should be in the spec, but my impression is that some really are undocumented and down to the implementation (like [TxInInfo] being sorted, although I could be wrong.)

It seems, to me, that the overall solution here is probably:

  • Better documentation (both from the ledger side and in the haddocks of PlutusLedgerApi)
  • Newtypes and smart constructors where we can easily check things
  • validation functions where newtypes and smart constructors won't work

Thoughts?

@JaredCorduan
Contributor

the Value type from PlutusLedgerApi.V2 does not ensure canonical form

I was vague, sorry. By canonical form, I meant handling the zero-valued fields. I thought that's what normalizeValue was for, but poking around I don't see it being used. I see master is already different than the release branch, so yea, maybe bring that up with the Plutus folks.


Should the txOutRefId have the format of a SHA256 hash?

The transaction ID is a blake2b 256 hash, so 32 bytes.


Shouldn't the txOutRefIdx be non-negative? Its type is Integer.

You are correct. Apologies, I'm more familiar with the ledger types (where it is a Word64) and was mostly going from memory above.


This also can't be empty -- this was noted in one of the 30-something CIPs, because unless this is said explicitly, there wouldn't be anything preventing someone from paying fees via rewards withdrawals.

That's true; I was focusing on the TxIn and forgot about the container. This is a ledger rule.

It's not enforced to prevent someone from paying fees via rewards withdrawals; it's the replay protection for everything inside a transaction.


Additionally, the txInfoInputs :: [TxInInfo] are sorted lexically. This is important to note, because

A.) It's not documented anywhere as far as I've seen, except in a github comment
B.) There's a CIP trying to remove this restriction
C.) This doesn't hold for outputs

Also, the txInfoInputs shouldn't contain duplicates.

That's true, in the ledger it is a set, making these properties much more obvious. But this information is lost in the translation to the script context.


it seems like there are restrictions on "valid" Addresses, but its type is, essentially, BuiltinByteString. I don't know of any isValidAddress function, and my local hoogle searches don't turn anything like this up.

The ledger may not (unless you count fromCBOR, which would be wonky). Such a function probably exists in many other places (the wallet, for instance), but we could add something to ledger and give it a clear name like you suggest.


Also, (in contrast with the txInfoInputs field), the txInfoOutputs field is not sorted.

indeed, it is a list in the ledger


This probably can't be empty, right? Is there some weird situations with staking, perhaps, where a UTxO containing the exact amount needed for the fee could be passed as the input and not need to list an output?

It can indeed be empty, for the reason you mention. But it's not too weird: you can always pay more in fees than you are required.


Must contain minAda.

That's true, but I thought that this was too much toward the "ledger rule" side of the spectrum to be considered here. It requires knowing the current protocol parameters; you can't just look at the value to make the determination.


Also must be positive

true.


[DCert]

The various data constructors end up wrapping bytestrings. I think most of these are addresses, and end up having the same considerations with SHA-256 hashes as above

If you replace SHA-256 with blake2b 224 (28 bytes), then yes.

Staking pointers (which lack even basic haddocks, but that's another story) are a product type of three Integers. I'm pretty sure these all have to be either non-negative or strictly positive

non-negative, yes. Pointer addresses are nearly un-used, and will almost certainly be deprecated soon.

DCertPoolRetire has an Integer referring to an epoch. This should likely be positive.

yep

Ostensibly, we'd never see duplicates in the list on-chain?

unfortunately not, which is arguably a bug. we currently allow you to de-register a stake credential, re-register it, then deregister it again, all in the same transaction. which leads to duplicates. maybe we will disallow this in a future era.


There are ostensibly restrictions on shape, duplicates, and perhaps ordering of these. These may be more of a criticism of the Map type, however.

yep, most of this comes from Map.


POSIXTimeRange

I've not dealt with this type much, but it is defined as type POSIXTimeRange = Interval POSIXTime, and it appears that it doesn't actually enforce that the lower bound is indeed earlier than the upper bound -- I assume this is guarded by the wire on-chain?

this is guarded by the ledger rules, the transaction must fall between the (slot) interval.


[PubKeyHash]

Aside from the comments about hashes vs. any BuiltinByteString above, these should probably not contain duplicates. Are they ordered? Must this list be NonEmpty?

It's a set in the ledger, so correct, no dups and ordered lexicographically. it can be empty.


For ScriptPurpose, generally:

The Minting currency symbol has to appear in the txInfoFee value.

did you mean txInfoFee? the fee is just lovelace. did you mean an output? maybe you mean that the policy ID must show up in a resolved input or an output?

The Spending TxOutRef has to appear in the transaction inputs

true!

Does the Rewarding staking credential need to appear in txInfoWdrl

yes

Does the Certifying DCert need to appear in the txInfoDCert list?

yes


But for the map in general, I'm assuming there are additional restrictions. Off the top of my head:

I'm assuming that including one or more redeemers is up to the discretion of the transaction submitter, but this map should not include more entries than there are transaction inputs

I'm not sure I follow. The number of redeemers is dictated by the number of plutus script hashes in the transaction. But note that you can't tell from looking at a script hash if it is a plutus script or not, you need the transaction witnesses to figure that out. There's a CIP right now to deal with the awkwardness of this fact. But you'd need exactly one redeemer for each of: plutus input, plutus mint, plutus cert, plutus withdrawal.


Map DatumHash Datum

This does sound tricky. I'm not sure of the details, but perhaps a validation function could take a witness function or phantom type to ensure DatumHash <-> Datum coherence?

The ledger does guarantee that the hash matches, but I assume you want to make a context yourself and know that it is valid. I don't know what the best thing would be to do, you need to cache the serialization somehow.


I think I also read in the inline-datum CIP that "extra datums" could be passed. There's been some recurring folklore for newcomers that this means

The only "supplementary" datums that are allowed are those that correspond to newly created output, and those that are in an output corresponding (in the UTxO) to a reference input used in the transaction.


Is this map sorted?

Duplicates are probably not allowed, right?

it's a map in the ledger, so yes, it is sorted, and no, dups are not allowed (remember, the ledger caches the serialization of the datums and would never reserialize).


The actual rectification of this issue might be more in the domain of PlutusLedgerApi, and I'd be happy to open an issue there as well -- but first we'd need to narrow down:

Which of these are "forbidden on the wire"
Which of these are blanket rules
Which of these are just not having expressive enough types (i.e., BuiltinByteString)
Which of these are genuinely validated (in the form of "there exists a function in the ledger that checks if a given value is valid, and rejects it if not")

I think once our list settles down, we could go through each one and answer these questions. It's a fun exercise, but it definitely requires me to look at everything from a perspective that I am not myself used to.


And regardless of whether the PlutusLedgerApi library implements these sorts of things or not, it'd be extremely helpful to have a straight-forward list of these somewhere for alternate implementations of these types/terms. It seems like most of these should be in the spec, but my impression is that some really are undocumented and down to the implementation (like [TxInInfo] being sorted, although I could be wrong.)

Most of the things we have discussed can be derived from the formal spec and the wire spec. But, as I mention above, the perspective here is very different, and so things are not stated in the way that is helpful for you and the other folks looking for the possible space of values. Exacerbating the problem is the fact that the ledger spec is written operationally, and the denotational semantics can sometimes be obscured (though it makes it much easier to compare the spec with the code).

It seems, to me, that the overall solution here is probably:

Better documentation (both from the ledger side and in the haddocks of PlutusLedgerApi)
Newtypes and smart constructors where we can easily check things
validation functions where newtypes and smart constructors won't work

Thoughts?

The ledger team has never yet had the breathing room to build out a proper, user-friendly API. Our biggest "consumer" is the consensus layer, and the bulk of our effort has gone to that. But I realize that many folks have struggled to use the ledger code as an API. I think your suggestions are good, and would be a part of the API. The cardano-api (in the node repository) does have an API for some of the ledger functionality, but it also has not yet been able to spend the time to add in all the features that are really needed.

I guess I (probably naively) thought that most of these issues that we've talked about were not really ledger issues, but issues for the Plutus tooling.

My biggest take-away from your last post is: the ledger has more restrictive types than those in the context, and this information is lost during the conversion.

I understand why you would want all this validation, since you want to make arbitrary contexts so that you can test Plutus scripts. There's another conversation about what is and is not a good idea to assume inside the context of a Plutus script. This came up in the CIP that you mentioned, about using lists instead of sets for inputs, but all the same concerns apply here.

@peter-mlabs
Author

Thanks again Jared! I'll try to keep this response shorter to not take up too much more of your time.


Can you re-confirm this statement for me?

The transaction ID is a blake2b 256 hash, so 32 bytes.

The haddocks say SHA256. Is this incorrect? If so, I'll open a ticket on that repo.


RE: minAda

Of course, your comments make sense. But I think my point was more that we know some ADA has to be present.


RE: DCert, s/sha256/blake2b

As above, going down and unwrapping the types leads to a validator hash being listed as a SHA256. Can you confirm that this is incorrect? If so, I'll open a ticket on that repo.


Did you mean txInfoFee?

Sorry, no -- I meant txInfoMint and also that the txInfoMint must reflect the delta between inputs and outputs.


RE: Redeemers

I think my point was that if we had r redeemer entries in the map, and x1, x2, x3, x4 plutus inputs, plutus mints, plutus certs, and plutus withdrawals, one of the two following situations would probably hold (but I'm not sure which, off the top of my head -- I'm sure this is explained adequately elsewhere)

  • Either r = x1 + x2 + x3 + x4, or
  • 0 <= r <= x1 + x2 + x3 + x4

I.e., a TxInfo would be invalid if r didn't obey whichever of these actually holds.


RE: Closing thoughts

Agreed with everything you said. I think the line gets blurred between ledger vs. tooling, because when the official tooling is inadequate and can't be relied on as a ground-truth, we have to turn back to the ledger. And, as you've mentioned, it seems like the ledger team has not only a different set of priorities due to different consumers, but also a different set of types they're working with (sets vs lists, etc.)

But yes, I think we've now got a solid understanding of what each other are working with. I'm now wondering whether this repo was the wrong place for this, though -- this was where I was pointed when I asked the question "what is phase-1 validation", but it seems like most of my issues are actually more related to the plutus-ledger-api types being too permissive and lacking utility functions where specific types would be unfeasible.

In your opinion:

  • Is there still something worthwhile for the cardano-ledger team to do with respect to the above?
    • If so, what do you think the steps are, and is there any help needed?
    • If not, whose door should I knock on next?
  • Is it realistic to expect the plutus-ledger-api team to make these changes? I realize this might be out of your wheelhouse, but perhaps you'd know better who to ask (short of me opening up a generic issue and linking to this on that repo). My concern is that simply changing lists to Set where needed would break quite a bit.

@JaredCorduan
Contributor

I'll try to keep this response shorter to not take up too much more of your time.

Conversations like this are my favorite part of my job. And improving life for Plutus developers is arguably the most important thing we can do for Cardano right now. So don't slow down on my account! :)


Can you re-confirm this statement for me?

Absolutely. The only hash that we use in the cardano ledger is blake2b. The link that you shared points to incorrect documentation; thank you very much for letting us know (opening an issue would be grand; if you don't, I will). Note that the Shelley ledger spec specifies the hashes in appendix A.1 (sometimes we use 224, and sometimes we use 256).


RE: minAda

ah, yes, it is safe to assume it is not zero (unless you are on some testnet where the protocol parameter dictating the cost was set to zero).


I meant txInfoMint and also that the txInfoMint must reflect the delta between inputs and outputs.

ah yes, that is another good check. relatedly, you could do the whole "preservation of ADA" check as well, if you are okay with the validation function taking the protocol parameters that determine the deposit amounts (stake cert registration and stake pool registration). This is the "consumed == produced" check in the specs.


Either r = x1 + x2 + x3 + x4, or

Yes, that is another good check. The trouble, though, will be determining which script hashes correspond to Plutus scripts (in the absence of having the scripts in the context). And there is one gotcha to be aware of, though it's a rather odd case. If there is a repeated certificate inside a transaction (such as the situation I mentioned earlier, where someone de-registers, re-registers, and de-registers again), only the first such certificate needs the redeemer.


My mind is currently wondering whether this repo was the wrong place for this, though

I'm very appreciative of seeing y'all's perspective, I'm glad you raised this here so that I was able to gain insight. It's definitely also a good topic for plutus-ledger-api.


Is there still something worthwhile for the cardano-ledger team to do with respect to the above?

If so, what do you think the steps are, and is there any help needed?

Thank you for the offer! I think we are definitely homing in on a helper validation function that someone could write without too much effort. After we've settled on what it should look like (maybe we're there now), I can make a clear list of what to check. I can also check with the Plutus team and see if they have an opinion on where this should live. I love the idea of gaining new contributors, though, so if you want to do it, that would be fun too.

My concern is that simply changing lists to Set where needed would break quite a bit.

indeed, we can only do that with a new plutus language version.

@peter-mlabs
Author

Opened the issue, thanks for confirming.

But yes, in terms of the helper validation function, the dream would be having a suite of such functions that can cover all of the different bits. I mentioned this above briefly, but the reason why we'd want a suite rather than (only) one big function is because the checks we're after are broadly split into:

  • Things that don't make sense under any circumstances (e.g., negative value entries in the txInfoFee, 0 entries in the txInfoMint, etc.)
  • Things that only make sense in certain contexts, but that context might not be fully specified (e.g., your txInfoMint has to actually reflect the value minted in the TxInfo, but if you've only specified the txInfoMint and not inputs or outputs, there's not any a priori reason why a non-zero txInfoMint would necessarily be invalid.)

When our testing utilities are building a TxInfo for the purpose of testing a script, we'd want to avoid the nonsensical cases in the former, but not always the latter (as opposed to say, tooling, that might need to detect those nonsense cases itself).

So something like (pseudo-code) txInfoMint == [("ABCD", "FOO", 0)] would be rejected in all cases, but we'd have some nuance as to whether

TxInfo {
  txInfoMint = [("ABCD", "FOO", 100)],
  txInfoInputs = [],
  txInfoOutputs = [],
  (...)
}

would be rejected, since we might not want to bother fully specifying the entire TxInfo if our script only looks at txInfoMint.

This extends to your comment about protocol parameters and ledger state, as well. Sometimes we might want to pass those explicitly, sometimes we might want to rely on sensible defaults, or sometimes we might want to turn off all checks that can't be conclusively determined without the extra info.
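
For instance (entirely hypothetical names), the suite could take a small configuration record:

-- Which context-dependent checks to run, and what ambient info is available.
data CheckConfig = CheckConfig
  { checkBalance :: Bool           -- requires inputs and outputs to be specified
  , checkMinAda  :: Maybe Integer  -- Just the min-UTxO parameter, or Nothing to skip
  }

defaultConfig :: CheckConfig
defaultConfig = CheckConfig { checkBalance = True, checkMinAda = Nothing }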

In terms of our own contributions, I'd have to check with the client and see where priorities lie. I think there's a strong case to be made that many of our testing functions could be made much better with these sorts of checks -- 10000 quickcheck cases don't pack as big of a punch if half of them are nonsense! :)

Let me discuss with my team and get back to you; but collaboration with the plutus team would be good, having it totally handled for us would be great!

Thanks again, Jared

@peter-mlabs
Author

Related discussion.

@michaelpj
Contributor

This is a very long thread; I'm going to attempt to respond to it below. Please correct me if I've misrepresented anyone.

I think Peter's primary desires are:

  1. To know whether certain ScriptContexts cannot occur in practice, because the transactions that would give rise to them are invalid, or for other reasons. This is in order to simplify his validators by not handling the impossible cases.
  2. To test validator scripts by generating (manually or automatically) ScriptContexts that match what will be produced on-chain.

I think these are quite different desires so I'll address them separately.


In order to tell people what ScriptContexts can't exist, I think we have three options:

  1. Enforce properties in the types, e.g. use NonEmpty if a list can't be empty.
    • This is somewhat useful; we largely don't do it because we are trying to keep the types relatively simple. But maybe it's something we could look into more. We'd definitely be open to suggestions on this front! (A small sketch follows after this list.)
    • However, this doesn't help people (much) who're using something like Plutarch where you're doing everything in an untyped way anyway (see the below discussion of Value for an example).
    • It's also difficult to do this for more involved properties like "this value must appear in this other map", at least not without some type gymnastics that probably complicate things much more than is warranted.
  2. Document any invariants in the Haddock.
    • This is good to do. We should probably do this more. The reason we have not done this more is partly because TxInfo is "just" a translation of what's in the ledger. So if you know that, i.e. you know "how transactions work", then you know how TxInfo works (mostly). We have generally assumed that dapp developers will know how transactions work.
    • I think what this conversation has shown me is that this assumption is too strong. Yes, you need to know something about how transactions work in order to do anything useful. But finding that information (particularly if you need precision) is difficult and amounts to option 3 below. It's probably worth duplicating the information to make it easier to access if nothing else.
  3. Tell people to read the spec.
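
As a small sketch of option 1 (a hypothetical variant, not a proposed change):

import Data.List.NonEmpty (NonEmpty)
import PlutusLedgerApi.V2 (TxInInfo)

-- Non-emptiness of the inputs is carried in the type, so "empty inputs"
-- becomes unrepresentable rather than a case to validate.
newtype TxInfoInputs = TxInfoInputs (NonEmpty TxInInfo)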

I would definitely be in favour of doing more of 1 and 2. However, I think we can't cover everything. The ledger spec is big, and almost every aspect of it bears on what you can see in a TxInfo, because the question of "what is a valid TxInfo?" is pretty much "what is a valid transaction?" and that's the subject of a large part of the ledger spec! "What kinds of withdrawal can coexist?", "What kinds of reference input are legal?", there are a very large number of such questions, depending on what you care about, and ultimately the ledger spec is the source of truth. I don't think there's a way to summarise that easily without losing precision (which you might need!).

I don't really know what to do here. I think many of the specific questions asked by Peter are low-enough-hanging fruit that it's worth including the answers in more places, but ultimately if you want to know the details of how some corner of transaction validation works, you're going to have to go to the spec.


I don't really know what to do about testing. I think the approach of generating ScriptContexts is reasonable, but it's not the approach that we thought people would take (or that we've taken ourselves), largely because it is difficult to generate a correct ScriptContext, as you've observed. The attitude we have historically taken is more like this:

  • Generating a ScriptContext is hard, let the ledger do it.
  • How do we make the ledger do it? By submitting transactions.
  • How do we generate valid transactions? Well, we have to generate valid transactions as part of our application, so we can generate actions and test that the resulting transactions work.

i.e. test via submitting transactions that actually look like the transactions your application will submit.

This is unsatisfying in a number of ways. It would be nice to be able to test the validator script more in isolation, and it would be nice to be able to test variants and malicious interactions. But I don't think anyone has an easy answer for that today, although people are thinking about it.

To be honest, I am not optimistic about the idea of generating valid TxInfos. However, the ledger does have some support for generating random transactions. It's possible that you might be able to use that, I know that e.g. the Hydra team is.


Finally, a special note on Value. Value is an interesting type, because it is supposed to be treated as a finitely-supported function. The implementation as a map is an implementation detail, and Values with or without 0 entries are equivalent.

The right answer here IMO is to write your operations at the correct level of abstraction. The plutus helpers for Value do this: they are "finitely-supported-function operations" implemented for the map representation, and hence they don't care about 0 entries. And in that spirit, the question of whether or not the representation contains them is a free choice for the ledger.
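
Concretely, a small sketch using the plutus-ledger-api Value helpers (the string literals lean on the hex-parsing IsString instances, hence OverloadedStrings):

{-# LANGUAGE OverloadedStrings #-}
import PlutusLedgerApi.V1.Value (Value, adaSymbol, adaToken, singleton, valueOf)

-- Two representations of the "same" finitely-supported function; w merely
-- carries an explicit zero ADA entry.
v, w :: Value
v = singleton "c0ffee" "FOO" 100
w = singleton "c0ffee" "FOO" 100 <> singleton adaSymbol adaToken 0

-- Querying through valueOf cannot observe the difference: missing entries
-- and zero entries both read back as 0.
sameUnderQuery :: Bool
sameUnderQuery = valueOf v adaSymbol adaToken == valueOf w adaSymbol adaToken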

It's unfortunate that tools such as Plutarch force people to break the abstraction and operate on the representation. Ironically, this is a case where we did try to present people with types that enforce the correct usage.

@michaelpj
Contributor

Another thing: I think it is a real awkward point that this sits between the ledger and plutus. It really is the case that if you want to write a dapp, you need to know quite a lot about the ledger, more than any other kind of user, and the only way to get that information is by reading the spec. I have been arguing for some time that I would like to have some kind of medium-detail ledger conceptual documentation that at least covered some of the concepts, even if it didn't go into the full detail that you need the spec for.

@JaredCorduan
Contributor

I have been arguing for some time that I would like to have some kind of medium-detail ledger conceptual documentation that at least covered some of the concepts, even if it didn't go into the full detail that you need the spec for.

indeed! @michaelpj knows I am also very fond of this idea. (Search this long conversation for "dreamed of" to see me expressing the same thing.)

@peter-mlabs
Author

Thanks for your response @michaelpj; I'm very appreciative that you've obviously taken the time to read through the extended discussion between Jared and me.

I think that your first three points are spot on, and I would say that they're basically a spectrum from "a perfect world" to "what we'll suffer through if we must" from the perspective of developers. One of the things that drew me to Haskell, which I am admittedly newer to than most of my colleagues, is that illegal states can be made unrepresentable. Doing so at the type level is the holy grail, because then you know a total function is only operating on exactly what it needs to. You don't need any error calls or Left "Error: Impossible state" branches in this world.

Newtypes with smart constructors can help at the value level, but then you're still littering the code with Maybes or Eithers that you wouldn't need in a perfect world (see the sketch below).
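
For instance, a hypothetical sketch of that pattern (SortedValue and mkSortedValue are made-up names, not an existing API):

```haskell
import Data.List (sortOn)

-- Hypothetical newtype with a smart constructor; the bare constructor would
-- not be exported, so the invariant (sorted keys, no zero entries) holds for
-- every value of the type.
newtype SortedValue = SortedValue [(String, Integer)]

mkSortedValue :: [(String, Integer)] -> Maybe SortedValue
mkSortedValue entries
  | entries == sortOn fst entries -- keys already sorted
  , all ((/= 0) . snd) entries    -- no zero quantities
  = Just (SortedValue entries)
  | otherwise = Nothing

-- Every caller now has to thread a Maybe, which is exactly the cost
-- described above.
```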

Reading the spec is, IMO, probably something that every Cardano dApp developer should do once, but it's hard to find the time when we're struggling against so many other things. And it's very difficult to keep all of that in our heads -- which is another reason I was drawn to Haskell. I don't have a great memory, and having types around -- expressive, precise types -- meant that I no longer had to worry about whether I needed to check that my strings were a certain length before passing them to a certain function.

I totally agree that there are some things -- particularly the interdependencies in the TxInfo -- that would be difficult to express in Haskell without resorting to a bunch of tricks for hacking in pseudo-dependent types, causing an unfriendly API in the process. But where this happens, having validation functions and smart constructors would go a long way toward making sure we're not forgetting anything.

Regarding your comment:

However, this doesn't help people (much) who're using something like Plutarch where you're doing everything in an untyped way anyway (see the below discussion of Value for an example).

I wouldn't say Plutarch does anything in an untyped way at all. Plutarch now carries information at the type level for Values, indicating whether they're sorted, positive, non-zero, or neither, and this makes things much easier to work with than the last time I used plutus-tx.

And regardless, the abstractions in plutus-tx are leaky: it is perfectly legal to construct illegal Values, since the newtype data constructors are exported, and perfectly legal to construct a TxInfo with, say, a fee field containing an unsorted, negative Value. In Plutarch, this is prohibited at the type level. I don't want to have to check during code reviews whether every usage of a Value is properly sorted or normalized when the type system can do that for me.
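
For example (a sketch only: the module and function names here follow recent plutus-ledger-api and have moved around between versions):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import PlutusLedgerApi.V1.Value (CurrencySymbol (..), TokenName (..), Value (..))
import qualified PlutusTx.AssocMap as AssocMap

-- This typechecks even though the inner map is unsorted and contains a zero
-- entry; nothing in the exported types rules it out.
illegalValue :: Value
illegalValue = Value $ AssocMap.unsafeFromList
  [ ( CurrencySymbol "feed"
    , AssocMap.unsafeFromList
        [ (TokenName "tokenB", 0) -- zero entry
        , (TokenName "tokenA", 5) -- out of order relative to tokenB
        ]
    )
  ]
```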


The issue I have with your comments regarding generating script contexts and TxInfos is that this is something we need. We have an inordinate number of utility functions that operate on script contexts and TxInfos, don't qualify as complete scripts, and wouldn't make sense to run on-chain. And the constraints of the actual chain (e.g., using something like plutip) mean that we're now writing double or triple the code, because we've got to write on-chain code, off-chain code, infrastructure, and wrappers to bundle everything into validators -- and we can't get nearly the same level of expressiveness in error messages.

I don't have actual measurements, but I am positive that applying parameters to a plutus function and evaluating it will be much faster than building an entire transaction just to test one helper function. And even more so, we don't want to generate fully valid transactions. We want to generate just the parts of the TxInfo that we care about. We want to generate Values and CurrencySymbols and values for the txInfoMint field that we know make sense.

And just to be clear -- testing validator scripts and minting policies is a very small percentage of what we want to do. It's important, for sure, but testing the utility functions is more critical. It's very difficult to compose a program with confidence when all of your requisite parts are totally untested.


And for your final comment:

I don't disagree that dApp development teams need SMEs on the spec, but it is a waste of time for every developer to need to have a copy of the spec sitting on their desk at all times just to write a legal inhabitant of a type. We're putting in more and more time working around this and demonstrating that it is possible to represent a large swath of this low-hanging fruit computationally.

Of course, the spec will always be the source of truth. A conceptual document will help, but it's not as useful as wrapping that knowledge up and expressing it computationally.

@L-as

L-as commented Aug 9, 2022

For testing, I can recommend looking into https://github.com/mlabs-haskell/plutus-simple-model, it uses cardano-ledger directly but is otherwise really just like EmulatorTrace AFAICT.

@L-as

L-as commented Aug 9, 2022

It's unfortunate that tools such as Plutarch force people to break the abstraction and operate on the representation. Ironically, this is a case where we did try to present people with types that enforce the correct usage.

There's a very good reason for it: Substantial efficiency improvements.
Dealing with Value with no guarantees is much less efficient than if you have guarantees. E.g. IntersectMBO/plutus#4710 happened because there was no guarantee about 0s in the Value. In Plutarch, we (ab)use this information to give you efficient operations over Values. E.g. checking real equality is trivial if you know both are sorted, similar things are true for comparisons.
The operations in plutus-ledger-api use union on the underlying Maps before operating on them, but this is obviously quite inefficient.
If you have unnormalised maps, you can generally do the comparisons in O(n * log n) time for n entries.
If you have normalised maps, it's generally O(n).
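
As a sketch, with plain sorted association lists standing in for the on-chain map (illustrative, not Plutarch's actual code):

```haskell
import Data.List (sortOn)

-- If both values are known to be sorted and zero-free, equality is a single
-- O(n) structural walk:
eqNormalised :: [(String, Integer)] -> [(String, Integer)] -> Bool
eqNormalised = (==)

-- Without those guarantees you must normalise first (drop zeros, then sort),
-- paying O(n log n) before you can compare:
eqUnnormalised :: [(String, Integer)] -> [(String, Integer)] -> Bool
eqUnnormalised xs ys = normalise xs == normalise ys
  where
    normalise = sortOn fst . filter ((/= 0) . snd)
```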

@michaelpj
Contributor

michaelpj commented Aug 10, 2022

EDIT: damn, accidentally posted before I was done, some more added afterwards.

I think it would be helpful to try and extract from this thread a list of specific places where we can adopt strategy 1 or 2.

But its very difficult to keep all of that in our heads

I don't disagree that dApp development teams need SMEs on the spec, but it is a waste of time for every developer to need to have a copy of the spec sitting on their desk at all times just to write a legal inhabitant of a type.

I mean, you only need to remember the pieces you actually work with. A developer with zero domain knowledge can't realistically do anything with transactions. And you don't need much to get the basics down. Inputs, outputs, values: none of this is that weird or requires you to read the spec.

But also it is simply true that full Cardano transactions are pretty complicated, and if you are in the game of making Cardano transactions you can't really avoid that, although it depends how much of the complexity you use.

We want to generate just the parts of the TxInfo that we care about. We want to generate Values and CurrencySymbols and values for the txInfoMint field that we know make sense.

I think this is perhaps a more promising angle. The local conditions on specific fields are more likely to be documentable or enforceable. The non-local conditions on the transaction as a whole would be quite difficult to replicate without duplicating the entirety of the ledger rules. Examples:

  • A type/smart constructors for valid CurrencySymbols: easy enough (see the sketch after this list)
  • A smart constructor for TxInfo that checks that the transaction balances etc. etc.: difficult without replicating the ledger rules entirely
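
To make the first bullet concrete, the "easy" case might look like this (mkCurrencySymbol is a hypothetical helper, not an existing API; on mainnet a non-empty currency symbol is a 28-byte blake2b-224 script hash, and the empty symbol is reserved for Ada):

```haskell
import qualified Data.ByteString as BS

-- Hypothetical smart constructor enforcing the local invariant on currency
-- symbols: either empty (Ada) or exactly 28 bytes (a minting policy hash).
newtype CurrencySymbol = CurrencySymbol BS.ByteString
  deriving (Eq, Ord, Show)

mkCurrencySymbol :: BS.ByteString -> Maybe CurrencySymbol
mkCurrencySymbol bs
  | BS.null bs         = Just (CurrencySymbol bs) -- Ada's empty symbol
  | BS.length bs == 28 = Just (CurrencySymbol bs) -- script hash
  | otherwise          = Nothing
```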

We're putting in more and more time working around this and demonstrating that it is possible to represent a large swath of this low-hanging fruit computationally.

To be clear, there are two sources of pain in this duplication of the ledger logic:

  1. Working it out and writing it
  2. Maintaining it

I think 2 is the worse problem over time, and is why I'm very reluctant to get into this.

And regardless, the abstractions in plutus-tx are leaky: it is perfectly legal to construct illegal Values, since the newtype data constructors are exported

I guess we thought people would use that power responsibly. Perhaps it just needs a bigger warning.


Also on the subject of generation... I think a collection of quickcheck/hedgehog generators for the types in plutus-ledger-api would be very useful, and is something we'd be happy to maintain upstream. I mention this because I suspect you might already have written such a thing :)
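
For reference, a sketch of the kind of generator I mean, producing entries already in normalised form (the names are illustrative, not an existing API):

```haskell
import Data.Function (on)
import Data.List (nubBy, sortOn)
import Test.QuickCheck

-- Generate the inner (token name, quantity) entries of a Value directly in
-- normalised form: distinct keys, sorted, no zero quantities.
genTokenEntries :: Gen [(String, Integer)]
genTokenEntries = do
  entries <- listOf ((,) <$> arbitrary <*> arbitrary `suchThat` (/= 0))
  pure (sortOn fst (nubBy ((==) `on` fst) entries))
```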


I'm not up-to-date on the latest in Plutarch.

Plutarch now carries information at the type level for Values, indicating whether they're sorted, positive, non-zero, or neither, and this makes things much easier to work with than the last time I used plutus-tx.

My question is: why do you care about such things? My guess is: because you're operating on the underlying representation, and you want to write code that is faster by making use of those properties, rather than writing the more generic code that would work regardless.

I am fundamentally torn here. I think it's better software engineering to not write code that's so exposed to the tiny implementation details, but also I understand that performance is a key concern.

Also... those are some pretty ferocious types in Plutarch. For better or worse, we have tried to keep the types relatively simple. As ever in Haskell, that's a judgement call.

There's a very good reason for it: Substantial efficiency improvements.

I'm not saying there's no reason for it; I'm just pointing out that you're breaking the abstraction, and that has consequences.
