Cost models for LookupCoin, ValueContains, ValueData, UnValueData builtins
#7344
Conversation
Here are some initial comments. I'll come back and add some more later. I need to look at the benchmarks properly though.
-- | Generate random key as ByteString (for lookup arguments)
generateKeyBS :: (StatefulGen g m) => g -> m ByteString
generateKeyBS = uniformByteStringM Value.maxKeyLen
If the keys are completely random, then lookupCoin will probably never hit an existing entry, right?
lookupCoin will probably never hit an existing entry,
Maybe that's what we want? Do we know whether finding out that something's not in the map is the worst case? Naively you might think that the time taken to discover that some key is not in the map is always greater than or equal to the time taken to find a key that is in the map.
I don't agree. I think we should actively include both the case when the map contains the key and the case when it doesn't. Otherwise we're not really measuring this case, and that's the whole point of benchmarking, right? Otherwise we would just use the analytically discovered worst-case time complexity of the algorithm and pick a function from that category for its cost, right?
Also, as I mentioned above, you won't have a good idea of the actual size of the Value if you don't enforce uniqueness of the keys.
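A minimal sketch of how a benchmark could cover both the hit and the miss case raised above, assuming the keys already inserted into the Value are available as a list (this is not the PR's code; `generateSearchKey` is a hypothetical helper, and 32 stands in for `Value.maxKeyLen`):

```haskell
import Data.ByteString (ByteString)
import System.Random.Stateful (StatefulGen, uniformByteStringM, uniformM, uniformRM)

-- | Pick a search key that hits an existing entry roughly half of the time
-- and misses (a fresh random key) the rest of the time.
generateSearchKey :: StatefulGen g m => [ByteString] -> g -> m ByteString
generateSearchKey existingKeys g = do
  hit <- uniformM g
  if hit && not (null existingKeys)
    then do
      i <- uniformRM (0, length existingKeys - 1) g
      pure (existingKeys !! i)   -- reuse a key known to be in the map
    else uniformByteStringM 32 g -- 32 = Value.maxKeyLen (assumed)
```

Either branch could also be benchmarked separately if we want to compare the hit and miss costs directly.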
I have simplified the generators (fewer fixed values, more randomly generated samples; quantities are all …). After that I've re-benchmarked and re-generated the cost models. This is how I view them: LookupCoin
ValueContains
ValueData
UnValueData
CC: @kwxm
In order to benchmark the worst case, I think you should also ensure that lookupCoin always hits the largest inner map (or at least, such cases should be well-represented).
Also, we'll need to re-run benchmarking for unValueData after adding the enforcement of integer range.
@@ -12094,203 +12094,710 @@
IndexArray/42/1,1.075506579052359e-6,1.0748433439930302e-6,1.0762684407023462e-6
IndexArray/46/1,1.0697135554442532e-6,1.0690902192698813e-6,1.0704133377013816e-6,2.2124820728450233e-9,1.8581237858977844e-9,2.6526943923047553e-9
IndexArray/98/1,1.0700747499373992e-6,1.0693842628239684e-6,1.070727062396803e-6,2.2506114869928674e-9,1.9376849028666025e-9,2.7564941558204088e-9
IndexArray/82/1,1.0755056682976695e-6,1.0750405368241111e-6,1.076102212770973e-6,1.8355219893844098e-9,1.5161640335164335e-9,2.4443625958006994e-9
Bls12_381_G1_multiScalarMul/1/1,8.232134704712041e-5,8.228195390475752e-5,8.23582682466318e-5,1.224261187989977e-7,9.011720721178711e-8,1.843107342917502e-7
GitHub seems to think that the data for all of the BLS functions has changed, but I don't think it has.
The file on master contains Windows-style line terminators (\r\n) for the BLS lines:

git show master:plutus-core/cost-model/data/benching-conway.csv | grep "Bls12_381_G1_multiScalarMul/1/1" | od -c | grep -C1 "\r"

0000000 B l s 1 2 _ 3 8 1 _ G 1 _ m u l
0000020 t i S c a l a r M u l / 1 / 1 ,
0000040 8 . 2 3 2 1 3 4 7 0 4 7 1 2 0 4
--
0000200 8 7 1 1 e - 8 , 1 . 8 4 3 1 0 7
0000220 3 4 2 9 1 7 5 0 2 e - 7 \r \n

This PR changes \r\n to \n.
Extends the cost modeling framework to support lookupCoin, valueContains, valueData, and unValueData builtins. Adds parameter definitions, arity specifications, and integrates with the cost model generation system. Establishes foundation for accurate costing of Value operations in Plutus Core execution.
Creates Values.hs benchmark module with systematic test generation for lookupCoin, valueContains, valueData, and unValueData operations. Includes value generation utilities, individual benchmark functions, and edge case testing with empty values. Enables data collection for accurate cost model parameter fitting.
Implements optimal statistical models for Value operations based on performance characteristics: linear models for lookupCoin and valueContains (size-dependent), constant model for valueData (uniform performance), and linear model for unValueData. Provides accurate cost parameters across all builtin cost model configurations and updates test expectations.
Removes unimplementedCostingFun placeholders for Value builtins and connects them to their respective cost model parameters (paramLookupCoin, paramValueContains, paramValueData, paramUnValueData). Enables accurate execution cost calculation for Value operations in Plutus Core scripts.
Includes extensive benchmark results covering various input sizes and edge cases for lookupCoin, valueContains, valueData, and unValueData. Data validates the chosen statistical models and cost parameters. Provides empirical foundation confirming model accuracy across different operation profiles.
Add a new Logarithmic newtype wrapper in ExMemoryUsage that transforms size measures logarithmically. This enables linear cost models to effectively capture O(log n) runtime behavior by measuring log(size) instead of size directly. The wrapper computes max(1, floor(log2(size) + 1)) from any wrapped ExMemoryUsage instance, making it composable with existing size measures like ValueOuterOrMaxInner for operations with logarithmic complexity. This infrastructure supports proper costing of Value builtins like lookupCoin which has O(log max(m, k)) complexity.
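A rough sketch of the size transformation this commit describes (the real instance lives in ExMemoryUsage.hs and plugs into the plutus-core cost types; here a bare Integer stands in for the wrapped size measure, and `logarithmicSize` is an illustrative name):

```haskell
import GHC.Num.Integer (integerLog2)

-- Wrapper marking that the wrapped size measure should be reported on a log scale.
newtype Logarithmic a = Logarithmic a

-- | max(1, floor(log2 size) + 1), i.e. the number of bits needed to write the size.
-- An ExMemoryUsage instance for Logarithmic would apply this to the wrapped measure.
logarithmicSize :: Integer -> Integer
logarithmicSize size
  | size <= 1 = 1
  | otherwise = fromIntegral (integerLog2 size) + 1
```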
Refactor the Value benchmarking suite to use Cardano-compliant key sizes (32-byte max) and leverage the new Logarithmic wrapper for accurate modeling of logarithmic operations. Key changes:
- Apply Logarithmic wrapper to lookupCoin and valueContains benchmarks for proper O(log n) cost modeling
- Consolidate key generators from 4 functions to 2, eliminating duplication
- Remove obsolete key size parameters throughout (keys are always maxKeyLen)
- Extract the withSearchKeys pattern to eliminate repetitive code
- Simplify test generation by removing arbitrary key size variations
- Clean up the lookupCoinArgs structure for better readability
The refactoring reduces the module from 359 to 298 lines while improving clarity and ensuring all generated Values comply with Cardano's 32-byte key length limit.
Simplify the R model definitions for Value-related builtins by replacing custom linear model implementation with standard linearInY wrapper for valueContains. This maintains the same statistical behavior while improving code maintainability. Add inline comments documenting the parameter wrapping strategy used for each model (Logarithmic wrapping for lookupCoin/valueContains, ValueTotalSize for contains operand, unwrapped for valueData/unValueData). Clean up formatting inconsistencies in model definitions.
Refreshed benchmarking data for lookupCoin, valueContains, valueData, and unValueData with improved statistical coverage and sampling. This data serves as the foundation for the refined cost model parameters applied in the subsequent commit.
Updated cost parameters based on fresh benchmark data analysis:
- lookupCoin: Adjusted intercept (284421→179661) and slope (1→7151) to better reflect actual performance with varying currency counts
- valueContains: Changed from added_sizes to a linear_in_y model with refined parameters (intercept 42125119→1000, slope 30→130383)
- valueData: Reduced constant cost (205465→153844) based on updated profiling results
- unValueData: Switched to a linear_in_x model with refined parameters (intercept 10532326261→1000, slope 431→33094)
All three cost model variants (A, B, C) updated for consistency.
Modernize the logarithm calculation in the Logarithmic ExMemoryUsage instance by switching from the compatibility module GHC.Integer.Logarithms to the modern GHC.Num.Integer API. Changes:
- Replace integerLog2# (unboxed, from GHC.Integer.Logarithms) with integerLog2 (boxed, from GHC.Num.Integer)
- Simplify code by removing unboxing boilerplate: I# (integerLog2# x) becomes integerLog2 x
- Keep the other imports (GHC.Integer.Logarithms, GHC.Exts) as they are still used elsewhere in the file (the memoryUsageInteger function)
This addresses code review feedback to use the modern ghc-bignum API instead of the legacy compatibility module, while maintaining the same computational semantics. Cost model regeneration verified no regression in the derived parameters.
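For illustration, the old and new ways of computing floor(log2 n) side by side (a standalone sketch, not the actual ExMemoryUsage.hs code):

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (Int (I#))
import qualified GHC.Integer.Logarithms as Old (integerLog2#)
import qualified GHC.Num.Integer as New (integerLog2)

-- Both compute floor(log2 n) for n > 0; the ghc-bignum API avoids the
-- MagicHash/unboxing boilerplate.
oldLog2 :: Integer -> Int
oldLog2 n = I# (Old.integerLog2# n)

newLog2 :: Integer -> Int
newLog2 n = fromIntegral (New.integerLog2 n)
```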
Address Kenneth's review comment by ensuring builtins use the same size measure wrappers as their budgeting benchmarks. Changes:
- Add a LogValueOuterOrMaxInner newtype combining the logarithmic transformation with the outer/max-inner size measurement
- Update lookupCoin and valueContains to use the size measure wrappers
- Add KnownTypeAst instances for ValueTotalSize and LogValueOuterOrMaxInner
- Update the benchmarks to use the new combined wrapper type
This ensures the cost model accurately reflects runtime behavior by using identical size measures in both denotations and benchmarks.
Regenerate cost model parameters based on fresh benchmark runs for the four Value-related built-in functions: lookupCoin, valueContains, valueData, and unValueData. New cost models:
- lookupCoin: linear_in_z (intercept: 209937, slope: 7181)
- valueContains: linear_in_y (intercept: 1000, slope: 131959)
- valueData: constant_cost (182815)
- unValueData: linear_in_x (intercept: 1000, slope: 33361)
The benchmark data includes 350 measurement points across varying input sizes to ensure accurate cost estimation. All three cost model variants (A, B, C) have been updated consistently with identical parameters.
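For reference, this is roughly how such an entry looks in the builtinCostModel*.json files, using the lookupCoin numbers above; the structure is inferred from the existing entries quoted in this PR, and the memory model shown here is illustrative rather than taken from the commit:

```json
"lookupCoin": {
    "cpu": {
        "arguments": {
            "intercept": 209937,
            "slope": 7181
        },
        "type": "linear_in_z"
    },
    "memory": {
        "arguments": 1,
        "type": "constant_cost"
    }
}
```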
Document the regeneration of benchmark data and cost model parameters for the four Value-related built-in functions following fresh benchmark measurements.
…verhead
Regenerate cost model parameters based on fresh benchmark runs after rebasing on master. This accounts for the negative amount validation added to valueContains in commit 531f1b8. Updated cost models:
- lookupCoin: linear_in_z (intercept: 203599, slope: 7256)
- valueContains: linear_in_y (intercept: 1000, slope: 130720)
- valueData: constant_cost (156990)
- unValueData: linear_in_x (intercept: 1000, slope: 36194)
The benchmark data includes 350 measurement points across varying input sizes. All three cost model variants (A, B, C) have been updated consistently with identical parameters.
Replace local benchmark data with results from GitHub Actions remote execution and regenerate cost model parameters for the four Value-related builtins: lookupCoin, valueContains, valueData, and unValueData. Remote benchmarking provides more consistent and reliable measurements by running on standardized infrastructure, eliminating local environment variations that could affect cost model accuracy. Updated parameters across all cost model versions (A, B, C):
- lookupCoin: intercept 203599→210606, slope 7256→8019
- valueContains: slope 130720→94161
- valueData: constant 156990→162241
- unValueData: slope 36194→15417
Reformat the builtin cost model JSON files to use consistent 4-space indentation instead of 2-space indentation. This improves readability and aligns with common JSON formatting conventions for configuration files. No semantic changes; only whitespace formatting is updated. Files affected:
- builtinCostModelA.json
- builtinCostModelB.json
- builtinCostModelC.json
Co-authored-by: Kenneth MacKenzie <kenneth.mackenzie@iohk.io>
| "cpu": { | ||
| "arguments": { | ||
| "intercept": 107878, | ||
| "intercept": 107878, |
This was a 3-space indent, inconsistent with 4-space used in all other lines.
Apply the same optimization used in the Logarithmic instance to memoryUsageInteger, using integerLog2 directly instead of the unboxed integerLog2# and quotInt# operations. This allows us to remove:
- the MagicHash language extension
- the GHC.Exts imports (Int (I#), quotInt#)
- the GHC.Integer and GHC.Integer.Logarithms imports
The refactoring maintains identical functionality while making the code more consistent and simpler.
Simplifies the memory usage measurement by consolidating three separate types (Logarithmic, ValueOuterOrMaxInner, LogValueOuterOrMaxInner) into a single ValueLogOuterOrMaxInner type. This reduces complexity while maintaining the same functionality for measuring logarithmic Value sizes. The new type directly encodes the intended semantics: size = log(max(outer, maxInner)), making the code more maintainable and producing clearer type signatures in builtin function definitions.
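A sketch of the intended size measure, assuming Value is a nested map from currency symbols to token-name/quantity maps; the type alias and the function name are stand-ins for illustration, not the library's actual definitions:

```haskell
import qualified Data.ByteString as BS
import qualified Data.Map.Strict as Map
import GHC.Num.Integer (integerLog2)

-- Stand-in for the builtin Value type: currency symbol -> (token name -> quantity).
type Value = Map.Map BS.ByteString (Map.Map BS.ByteString Integer)

-- | size = max(1, floor(log2 (max outerSize maxInnerSize)) + 1)
valueLogOuterOrMaxInner :: Value -> Integer
valueLogOuterOrMaxInner v =
  let outer    = toInteger (Map.size v)
      maxInner = maximum (1 : map (toInteger . Map.size) (Map.elems v))
      n        = max outer maxInner
   in if n <= 1 then 1 else fromIntegral (integerLog2 n) + 1
```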
Replaces unsafe fromJust usage with explicit error messages and HasCallStack constraint in costModelParamsForTesting. This provides better debugging context when cost model parameter extraction fails, including stack traces that pinpoint the exact call site.
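A minimal sketch of that pattern; `fromJustOrError` is a hypothetical name, not the function used in the PR:

```haskell
import GHC.Stack (HasCallStack)

-- | Like fromJust, but with a descriptive message and a call stack pointing
-- at the call site when the value is Nothing.
fromJustOrError :: HasCallStack => String -> Maybe a -> a
fromJustOrError _   (Just a) = a
fromJustOrError msg Nothing  = error msg

-- Usage sketch:
-- costModelParamsForTesting =
--   fromJustOrError "failed to extract cost model parameters" extractedParams
```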
Adds cost model parameter names for LookupCoin, ValueContains, ValueData, and UnValueData builtins (11 new parameters per ledger API version). Updates parameter count expectations to reflect the expanded parameter set. Updates golden type signatures and conformance test budget expectations to reflect the refined ValueLogOuterOrMaxInner type signature, ensuring accurate cost accounting for Value-based operations.
I'm only just getting to grips with the philosophy behind costing, so I might be wrong, but I think it's important that we generate both random inputs and inputs which hit the algorithm's various edge cases. The benchmarking data we generate should be a sample which describes the algorithm's behavior as completely as possible.
commonWithKeys <- mapM (withSearchKeys g . pure) testValues

-- Additional tests specific to lookupCoin
let valueSizes = [(100, 10), (500, 20), (1_000, 50), (2_000, 100)]
What is the reasoning behind picking these specific sizes?
Most importantly, however, you're not actually generating Values of these sizes, because you're not checking whether the keys you generate are unique per map or not.
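One way to address the uniqueness concern would be to build the key set explicitly and stop only once the requested number of distinct keys has been reached. A sketch (hypothetical helper, not the PR's generator; 32 stands in for `Value.maxKeyLen`):

```haskell
import Data.ByteString (ByteString)
import qualified Data.Set as Set
import System.Random.Stateful (StatefulGen, uniformByteStringM)

-- | Generate exactly n distinct random keys, retrying on collisions,
-- so a map built from them really has n entries.
generateUniqueKeys :: StatefulGen g m => Int -> g -> m [ByteString]
generateUniqueKeys n g = go Set.empty
  where
    go acc
      | Set.size acc == n = pure (Set.toList acc)
      | otherwise = do
          k <- uniformByteStringM 32 g  -- 32 = Value.maxKeyLen (assumed)
          go (Set.insert k acc)         -- duplicates are absorbed by the set
```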
In general we want the costing function to describe the worst-case behaviour of the builtin, which means that the benchmarks should be run with inputs which produce worst-case behaviour. If we feed them totally random inputs then we'll fit a costing function which describes the average-case behaviour, and this could be dangerous. For example, if a function takes 1ms on average but there are particular inputs which cause it to take 20ms, then the cost model has to charge 20ms for all inputs so that we don't undercharge for scripts that do actually exercise the worst case (perhaps repeatedly). [An example of this kind of behaviour would be for …]

In the case of …

Addendum. It's often necessary to do a lot of exploratory benchmarking to check that the builtin behaves as you're expecting, and this isn't generally reflected in the final costing benchmarks and costing function (for example, see the costing branches for …).
Like what I said earlier, for …
What about the case when the outer map is really big compared to the inner ones? Also, we could make all of the inner maps the same size. We probably don't want to be benchmarking anything that isn't the worst case.
Isn't the worst case in that case still performing a lookup in both the outer map and the inner map? If you are concerned that the outer map key is not at the leaf, you can use either the smallest key or the largest key. By the way, to make key comparisons more expensive, it would also be useful to fix the first 30 or 31 bytes and only vary the last byte.
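A sketch of that suggestion: keys sharing a fixed 31-byte prefix and differing only in the final byte, so that every comparison has to walk the whole key. The prefix byte 0x41 and the length 31 are arbitrary choices here, and 32 is assumed to be Value.maxKeyLen:

```haskell
import qualified Data.ByteString as BS
import Data.Word (Word8)

-- | A 32-byte key whose first 31 bytes are fixed; only the last byte varies.
keyWithFixedPrefix :: Word8 -> BS.ByteString
keyWithFixedPrefix lastByte = BS.replicate 31 0x41 `BS.snoc` lastByte

-- 256 maximally-similar keys for a worst-case comparison benchmark.
similarKeys :: [BS.ByteString]
similarKeys = map keyWithFixedPrefix [0 .. 255]
```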
Summary
This PR implements cost modeling for Value-related builtins:
lookupCoin, valueContains, valueData, and unValueData.
Implementation
Complete cost modeling pipeline:
Cost models:
- valueData: Uses a constant cost model based on uniform performance analysis
- lookupCoin: Linear cost model with dimension reduction for 3+ parameters
- valueContains: Linear cost model for the container/contained size dependency
- unValueData: Linear cost model for size-dependent deserialization
All functions now have proper cost models instead of unimplemented placeholders.