```
CAP: 0046-07 (formerly 0055)
Title: Fee and resource model in smart contracts
Working Group:
Owner: MonsieurNicolas
Authors: dmkozh
Consulted:
Status: Implemented
Created: 2022-06-03
Discussion: TBD
Protocol version: 20
```
## Simple Summary
This CAP defines the mechanism used to determine fees when using smart contracts on the Stellar network.
## Motivation
With the introduction of smart contracts on the network, the existing fee model of the "classic" transaction system is too simplistic: it requires careful design of the code that runs "on chain" as to ensure that all operations have a similar cost and performance profile, which is not possible with arbitrary code running in contracts.
### Goals Alignment
Goals of the updated fee model are to:
* ensure fairness between users and use cases.
* promote scalable patterns on the network, doing more with the same amount of overall resources.
* ensure that the network operates in a sustainable way: network operators should be in control of their operating costs.
## Abstract
This CAP proposes various network-level parameters (voted on by validators) and a fee structure for the different kinds of resources involved on the network.
The fee structure is designed to discourage "spam" traffic and overall waste of infrastructure capacity.
## Specification
### XDR changes
See the full XDR diffs in the Soroban overview CAP.
Fee and resource limit configuration is specified via the following network parameters (in some cases increments are used to mitigate for rounding errors):
```
// General “Soroban execution lane” settings
struct ConfigSettingContractExecutionLanesV0
{
// maximum number of Soroban transactions per ledger
uint32 ledgerMaxTxCount;
};
// "Compute" settings for contracts (instructions and memory).
struct ConfigSettingContractComputeV0
{
// Maximum instructions per ledger
int64 ledgerMaxInstructions;
// Maximum instructions per transaction
int64 txMaxInstructions;
// Cost of 10000 instructions
int64 feeRatePerInstructionsIncrement;
// Memory limit per transaction. Unlike instructions, there is no fee
// for memory, just the limit.
uint32 txMemoryLimit;
};
// Ledger access settings for contracts.
struct ConfigSettingContractLedgerCostV0
{
// Maximum number of ledger entry read operations per ledger
uint32 ledgerMaxReadLedgerEntries;
// Maximum number of bytes that can be read per ledger
uint32 ledgerMaxReadBytes;
// Maximum number of ledger entry write operations per ledger
uint32 ledgerMaxWriteLedgerEntries;
// Maximum number of bytes that can be written per ledger
uint32 ledgerMaxWriteBytes;
// Maximum number of ledger entry read operations per transaction
uint32 txMaxReadLedgerEntries;
// Maximum number of bytes that can be read per transaction
uint32 txMaxReadBytes;
// Maximum number of ledger entry write operations per transaction
uint32 txMaxWriteLedgerEntries;
// Maximum number of bytes that can be written per transaction
uint32 txMaxWriteBytes;
int64 feeReadLedgerEntry; // Fee per ledger entry read
int64 feeWriteLedgerEntry; // Fee per ledger entry write
int64 feeRead1KB; // Fee for reading 1KB
// The following parameters determine the write fee per 1KB.
// Write fee grows linearly until bucket list reaches this size
int64 bucketListTargetSizeBytes;
// Fee per 1KB write when the bucket list is empty
int64 writeFee1KBBucketListLow;
// Fee per 1KB write when the bucket list has reached `bucketListTargetSizeBytes`
int64 writeFee1KBBucketListHigh;
// Write fee multiplier for any additional data past the first `bucketListTargetSizeBytes`
uint32 bucketListWriteFeeGrowthFactor;
};
// Historical data (pushed to core archives) settings for contracts.
struct ConfigSettingContractHistoricalDataV0
{
int64 feeHistorical1KB; // Fee for storing 1KB in archives
};
// Contract event-related settings.
struct ConfigSettingContractEventsV0
{
// Maximum size of events that a contract call can emit.
uint32 txMaxContractEventsSizeBytes;
// Fee for generating 1KB of contract events.
int64 feeContractEvents1KB;
};
// Bandwidth related data settings for contracts.
// We consider bandwidth to only be consumed by the transaction envelopes, hence
// this concerns only transaction sizes.
struct ConfigSettingContractBandwidthV0
{
// Maximum sum of all transaction sizes in the ledger in bytes
uint32 ledgerMaxTxsSizeBytes;
// Maximum size in bytes for a transaction
uint32 txMaxSizeBytes;
// Fee for 1 KB of transaction size
int64 feeTxSize1KB;
};
```
Soroban resources are provided in a `SorobanTransactionData` extension of the
transaction:
```
// Resource limits for a Soroban transaction.
// The transaction will fail if it exceeds any of these limits.
struct SorobanResources
{
// The ledger footprint of the transaction.
LedgerFootprint footprint;
// The maximum number of instructions this transaction can use
uint32 instructions;
// The maximum number of bytes this transaction can read from ledger
uint32 readBytes;
// The maximum number of bytes this transaction can write to ledger
uint32 writeBytes;
};
// The transaction extension for Soroban.
struct SorobanTransactionData
{
ExtensionPoint ext;
SorobanResources resources;
// Portion of transaction `fee` allocated to resource fees.
int64 resourceFee;
};
```
### Semantics
#### Fee model overview
The approach taken in this proposal is to decompose the total transaction fee into the following additive components:
* `competitiveResourcesFee` - the fee for 'competitive' network resources (defined below) and non-refundable resources, based on the values *declared* in the transaction and network-defined fee rates.
* `refundableResourcesFee` - the maximum fee for resources that don't need to be strictly restricted per ledger and thus are charged based on the actual usage.
* `inclusionFeeBid` - this is the "social value" part of the fee, it represents the intrinsic value that the submitter puts on that transaction.
The 'competitive' resources are resources that have to be limited per ledger in order to ensure reasonable close times and prevent the network from overloading. These resources are bounded on different dimensions, i.e. there is no single 'proxy' resource that could be used to restrict them. At a high level, these resources are:
* instructions (virtual CPU instructions to execute)
* ledger data access (ledger IO metrics)
* network propagation (bandwidth usage)
A Soroban transaction's fee has to cover all three components, but only `inclusionFeeBid` is used for transaction prioritization.
#### TransactionSet semantics
All Soroban transactions must be present in phase `1` of `GeneralizedTransactionSet` (all the remaining 'classic' transactions must be in phase `0`). The Soroban phase must contain only a single `TXSET_COMP_TXS_MAYBE_DISCOUNTED_FEE` component. Refer to [`CAP-0042`](./cap-0042.md) for details on `GeneralizedTransactionSet` and phases.
While transactions bid a specific `inclusionFeeBid`, the effective bid may be lowered within a transaction set component by setting `baseFee` in the `txsMaybeDiscountedFee` component.
When set:
* all transactions within the component must bid not less than `baseFee`, i.e. for each transaction `inclusionFeeBid >= baseFee`
* the effective inclusion bid for transactions in that group is `baseFee`
The total resource consumption for every one of the 'competitive' resources must not exceed the ledger-wide limits. The specific limits are specified in the sections below on a per-resource basis.
The usual `GeneralizedTransactionSet` validity and comparison rules also apply to Soroban, following the semantics described in [CAP-0042](./cap-0042.md).
#### Transaction validation
All Soroban transactions must have `ext.sorobanData()` extension present and populated.
`resources` contains the declared values of the resources that the transaction is paying the fee for. These values must not exceed the limits specified by the network settings.
`resourceFee` is computed based on the `resources` declared in `tx` and transaction envelope size:
`resourceFee(tx) = Instructions_fee(resources.instructions) + LedgerDataAccess_fee(resources) + NetworkData_fee(size(txEnvelope)) + Historical_flat_fee(size(txEnvelope))`
Note that `Historical_flat_fee` is not a fee for a 'competitive' resource, but it is constant for any transaction execution result and is thus part of the non-refundable fee (as its refund is always 0).
`sorobanData.resourceFee` corresponds to the sum of the `competitiveResourcesFee` and `refundableResourcesFee` components.
The rules for limits and fee computation per-resource are specified in dedicated sections below.
At validation time, the total transaction fee (`tx.fee`) has to cover the fee components based only on the values declared in the transaction:
`tx.fee = sorobanData.resourceFee + inclusionFeeBid`
The minimum valid `inclusionFeeBid` value is 100 stroops, so the following condition has to hold:
`tx.fee >= sorobanData.resourceFee + 100`
The `sorobanData.resourceFee` value has to cover the 'competitive' resource fee computed from the declared resource values specified in `sorobanData` and the transaction envelope size:
`sorobanData.resourceFee >= resourceFee(tx)`
The remaining value, `sorobanData.resourceFee - resourceFee(tx)`, is considered the refundable part of the resource fee and has to cover the refundable resources consumed at apply time.
Similarly to 'classic' transactions, the source account must be able to pay the total fee (`tx.fee`) for the transaction.
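To make the validation-time conditions above concrete, here is a minimal Python sketch of the fee checks. The function and variable names are illustrative, not part of the protocol; only the relations between `tx.fee`, `sorobanData.resourceFee`, and `resourceFee(tx)` come from the CAP.

```python
MIN_INCLUSION_FEE_BID = 100  # minimum inclusion fee bid, in stroops

def is_fee_valid(tx_fee: int, declared_resource_fee: int,
                 computed_resource_fee: int) -> bool:
    # tx_fee: tx.fee; declared_resource_fee: sorobanData.resourceFee;
    # computed_resource_fee: resourceFee(tx) derived from the declared resources.
    if declared_resource_fee < computed_resource_fee:
        return False  # declared resource fee doesn't cover the non-refundable fee
    # The total fee must leave room for at least the minimum inclusion bid.
    return tx_fee >= declared_resource_fee + MIN_INCLUSION_FEE_BID

def inclusion_fee_bid(tx_fee: int, declared_resource_fee: int) -> int:
    # Whatever is not allocated to resources is the inclusion fee bid.
    return tx_fee - declared_resource_fee
```

For example, a transaction with `tx.fee = 1100` and `sorobanData.resourceFee = 1000` bids exactly the minimum 100 stroops for inclusion.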
#### Fee computation while applying transactions
As in 'classic', the total fee is taken from the source account balance before applying transactions.
Total fee charged is equal to `tx.fee` if `baseFee` is not set in the transaction set component, and `tx.fee - inclusionFeeBid + baseFee` if `baseFee` is set in the transaction set component.
During transaction execution the resource limits declared by transaction are enforced and exceeding any one of the limits leads to transaction failure with `<OP_NAME>_RESOURCE_LIMIT_EXCEEDED` operation error code (every Soroban operation defines a separate error for this, such as `INVOKE_HOST_FUNCTION_RESOURCE_LIMIT_EXCEEDED`).
The per-resource failure conditions are specified in the sections below.
At the end of the transaction execution, the final refundable fee for a successful transaction is computed as follows:
`effectiveRefundableFee = Events_fee(emittedContractEventsSizeBytes) + Rent_fee`
where `emittedContractEventsSizeBytes` is the total size of the emitted contract events and the invocation return value, and `Rent_fee` is the fee for the rent bumps performed by the transaction (if any). If `effectiveRefundableFee > sorobanData.resourceFee - resourceFee(tx)` (i.e. if the actual required refundable fee is greater than the `refundableResourcesFee` component defined above), the transaction fails.
If the transaction fails, `effectiveRefundableFee` is set to `0`.
After executing the transaction, the refund amount is computed as `sorobanData.resourceFee - resourceFee(tx) - effectiveRefundableFee`. The protocol refunds that amount (when non-zero) to the transaction source account. The ledger modification due to the refund is reflected under `txChangesAfter` in the meta.
Note that the refund happens for failed transactions as well.
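The refund logic can be sketched as follows. This is illustrative only (names are hypothetical); it assumes the transaction has already failed earlier if the refundable budget was exceeded, so the refund is never negative.

```python
def refund_amount(declared_resource_fee: int, nonrefundable_fee: int,
                  effective_refundable_fee: int, success: bool) -> int:
    # declared_resource_fee: sorobanData.resourceFee
    # nonrefundable_fee: resourceFee(tx) computed from the declared values
    if not success:
        effective_refundable_fee = 0  # failed txs get the full refundable part back
    refundable_budget = declared_resource_fee - nonrefundable_fee
    # The tx fails earlier if effective_refundable_fee exceeds the budget.
    return refundable_budget - effective_refundable_fee
```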
#### Per-resource specifications
This section describes the fee contributions, per-transaction/per-ledger maximum limits and apply-time enforcement for all the transaction resources.
#### Instructions
Instructions bound the execution time of the transactions in the ledger.
A transaction contains:
* the maximum number of CPU instructions that the transaction may use: `sorobanData.resources.instructions`
All the configuration values come from `ConfigSettingContractComputeV0`.
Fee: `Instructions_fee(instructions) = round_up(instructions * feeRatePerInstructionsIncrement / 10000)`
Validity constraints:
* per transaction
* `resources.instructions <= txMaxInstructions`.
* ledger wide (`GeneralizedTransactionSet`)
* sum of all `resources.instructions` <= `ledgerMaxInstructions`.
Apply-time enforcement: instructions metered during the contract execution may not exceed `instructions` declared in the transaction. Refer to [CAP-0046-10](./cap-0046-10.md) for metering details.
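As an illustration, the instruction fee with round-up division might be computed like this (a sketch; the fee rate value used in the comment is made up, only the `feeRatePerInstructionsIncrement` semantics come from the CAP):

```python
def instructions_fee(instructions: int, fee_rate_per_10k_insns: int) -> int:
    # round_up(instructions * feeRatePerInstructionsIncrement / 10000)
    return (instructions * fee_rate_per_10k_insns + 9999) // 10000

# e.g. 2_000_000 declared instructions at a (hypothetical) rate of 25 per
# 10000-instruction increment costs 5000 stroops; even a single instruction
# is rounded up to a non-zero fee.
```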
#### Ledger data
Ledger data resources bound the number and size of ledger reads and writes.
A transaction contains:
* the read-only `sorobanData.resources.footprint.readOnly` and read/write `sorobanData.resources.footprint.readWrite` sets of ledger keys.
* the maximum total amount of data that can be read from the ledger in bytes: `sorobanData.resources.readBytes`
* the maximum total amount of data that can be written to the ledger in bytes: `sorobanData.resources.writeBytes`
All the configuration values come from `ConfigSettingContractLedgerCostV0`.
Fee:
```
LedgerDataAccess_fee(resources) =
(length(resources.footprint.readOnly)+length(resources.footprint.readWrite))*feeReadLedgerEntry + // cost of reading ledger entries
length(resources.footprint.readWrite)*feeWriteLedgerEntry + // cost of writing ledger entries
round_up(resources.readBytes * feeRead1KB / 1024) + // cost of processing reads
round_up(write_fee_per_1kb(BucketListSize)* resources.writeBytes / 1024) // cost of adding to the bucket list
```
where `BucketListSize` is the average size of the bucket list over the moving window (refer to the [State Archival CAP](cap-0046-12.md) for details), and `write_fee_per_1kb` is a function that determines the ledger write fee per 1024 bytes based on the bucket list size, defined as follows:
```
// this is the fee rate slope
// feeRate1KB = (writeFee1KBBucketListHigh - writeFee1KBBucketListLow)/bucketListTargetSizeBytes
// in all cases, rate is clamped as to not fall under MINIMUM_WRITE_FEE_PER_1KB in case
// writeFee1KBBucketListLow or writeFee1KBBucketListHigh are too low
// if s < bucketListTargetSizeBytes,
// grow by feeRate1KB until we reach writeFee1KBBucketListHigh
write_fee_per_1kb(s) = max(MINIMUM_WRITE_FEE_PER_1KB,
    writeFee1KBBucketListLow +
    (writeFee1KBBucketListHigh - writeFee1KBBucketListLow)*s/bucketListTargetSizeBytes)
// else (s >= bucketListTargetSizeBytes),
// grow by bucketListWriteFeeGrowthFactor*feeRate1KB from writeFee1KBBucketListHigh
write_fee_per_1kb(s) = max(MINIMUM_WRITE_FEE_PER_1KB,
writeFee1KBBucketListHigh +
bucketListWriteFeeGrowthFactor*(writeFee1KBBucketListHigh - writeFee1KBBucketListLow)*
(s-bucketListTargetSizeBytes)/bucketListTargetSizeBytes)
```
Validity constraints:
* per transaction
* `length(resources.footprint.readOnly) + length(resources.footprint.readWrite) <= txMaxReadLedgerEntries`.
* `resources.readBytes <= txMaxReadBytes`.
* `length(resources.footprint.readWrite) <= txMaxWriteLedgerEntries`.
* `resources.writeBytes <= txMaxWriteBytes`.
* ledger wide (`GeneralizedTransactionSet`)
* `sum(length(resources.footprint.readOnly) + length(resources.footprint.readWrite)) <= ledgerMaxReadLedgerEntries`.
* `sum(length(resources.footprint.readWrite)) <= ledgerMaxWriteLedgerEntries`.
* `sum(resources.readBytes) <= ledgerMaxReadBytes`.
* `sum(resources.writeBytes) <= ledgerMaxWriteBytes`.
Apply-time enforcement:
* Before executing the transaction logic, all the entries in the footprint (both read-only and read-write) are read from the ledger and the total read size is computed by adding the size of the key and the size of the entry read (if any) to the total. If the total read size exceeds `resources.readBytes`, the transaction fails.
* During the host function execution, any read/write of a ledger key outside of the footprint (or a write to a read-only entry) immediately leads to a transaction failure.
* After the execution, the total write size is computed by adding the sizes of the keys and values of the non-removed entries. If the total write size exceeds `resources.writeBytes`, the transaction fails. Entry deletion is 'free' and not counted towards the total write size.
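Putting the pieces together, `LedgerDataAccess_fee` can be sketched as below. The write fee per 1KB is passed in as an argument (it is computed from the bucket list size as shown above); all names and the example fee rates are illustrative.

```python
def ceil_div(a: int, b: int) -> int:
    # round_up(a / b) for non-negative integers
    return -(-a // b)

def ledger_data_access_fee(n_read_only: int, n_read_write: int,
                           read_bytes: int, write_bytes: int,
                           fee_read_entry: int, fee_write_entry: int,
                           fee_read_1kb: int, write_fee_1kb: int) -> int:
    # Every footprint entry (read-only and read-write) is read...
    entry_read_fee = (n_read_only + n_read_write) * fee_read_entry
    # ...but only read-write entries are written.
    entry_write_fee = n_read_write * fee_write_entry
    read_bytes_fee = ceil_div(read_bytes * fee_read_1kb, 1024)
    write_bytes_fee = ceil_div(write_fee_1kb * write_bytes, 1024)
    return entry_read_fee + entry_write_fee + read_bytes_fee + write_bytes_fee
```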
#### Bandwidth related
Bandwidth utilization is bounded by the total size of the transactions flooded and included in the ledger.
All the configuration values come from `ConfigSettingContractBandwidthV0`.
A transaction contains:
* implicitly, its impact in terms of bandwidth utilization: the size (in bytes) of the `TransactionEnvelope`
Fee: `NetworkData_fee(txEnvelope) = round_up(size(txEnvelope) * feeTxSize1KB / 1024)`
Validity constraints:
* per transaction
* `size(txEnvelope) <= txMaxSizeBytes`
* ledger wide
* sum of all `size(txEnvelope)` <= `ledgerMaxTxsSizeBytes`.
Apply-time enforcement: _None_
#### Historical storage
Historical storage is utilized for any transaction result and hence the fee has to be paid unconditionally. The fee depends on `TransactionEnvelope` size.
All the configuration values come from `ConfigSettingContractHistoricalDataV0`.
Fee: `Historical_flat_fee(txEnvelope) = round_up((size(txEnvelope)+TX_BASE_RESULT_SIZE) * feeHistorical1KB / 1024)`
Where `TX_BASE_RESULT_SIZE` is a constant approximating the size in bytes of transaction results published to archives and is set to `300`.
Validity constraints: _None_
Apply-time enforcement: _None_
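Both the bandwidth and historical storage fees depend only on the envelope size, so they can be sketched together (the fee rates used in the tests are made up; only `TX_BASE_RESULT_SIZE = 300` and the formulas come from the CAP):

```python
TX_BASE_RESULT_SIZE = 300  # approximate archived result size in bytes, per the CAP

def ceil_div(a: int, b: int) -> int:
    return -(-a // b)

def network_data_fee(envelope_size: int, fee_tx_size_1kb: int) -> int:
    # NetworkData_fee = round_up(size(txEnvelope) * feeTxSize1KB / 1024)
    return ceil_div(envelope_size * fee_tx_size_1kb, 1024)

def historical_flat_fee(envelope_size: int, fee_historical_1kb: int) -> int:
    # Historical_flat_fee covers the envelope plus an approximated result size.
    return ceil_div((envelope_size + TX_BASE_RESULT_SIZE) * fee_historical_1kb, 1024)
```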
#### Contract events and return value
Contract events are a 'side' output of the transaction that is written to the metadata and not to the ledger. The invocation return value has the same properties and is thus included here as well.
Note that ledger changes are also emitted in the transaction metadata, but their size is bounded by proxy via the ledger access limits, so the write fees can be considered to cover metadata writes as well.
All the configuration values come from `ConfigSettingContractEventsV0`.
Fee: `Events_fee(eventsBytes) = round_up(eventsBytes * feeContractEvents1KB / 1024)`
Validity constraints: _None_
Apply-time enforcement:
* compute the consumed events size as the sum of the sizes of the events emitted during the host function invocation and of its return value. If the total size exceeds `ConfigSettingContractEventsV0.txMaxContractEventsSizeBytes`, the transaction fails
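The apply-time check and the fee computation can be sketched as follows (hypothetical names; raising an exception stands in for the transaction failing):

```python
def events_fee(events_and_return_size: int, fee_contract_events_1kb: int,
               tx_max_events_size: int) -> int:
    # events_and_return_size: emitted events plus the invocation return value
    if events_and_return_size > tx_max_events_size:
        raise ValueError("resource limit exceeded")  # the transaction fails
    # Events_fee = round_up(eventsBytes * feeContractEvents1KB / 1024)
    return -(-events_and_return_size * fee_contract_events_1kb // 1024)
```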
#### Rent fee
A rent fee has to be paid when an operation increases the lifetime of ledger entries and/or increases an entry's size.
The rent fee is computed only at transaction application time and depends on the state of the ledger entries before and after the transaction has been applied.
Fee: `Rent_fee = sum(rent_fee_per_entry_change(entry_before, entry_after)) + ttl_write_fee` for all the ledger entry changes.
The entry rent fee consists of two components: a fee for renting new ledgers at the new entry size, and a fee for renting the already-paid-for ledgers at the increased size. If `entry_before` does not exist, we treat its size as `0` and its `live_until_ledger` as `0` for the sake of this formula.
```
rent_fee_per_entry_change(entry_before, entry_after) =
if (entry_after.live_until_ledger > entry_before.live_until_ledger,
rent_fee_for_size_and_ledgers(
entry_after.is_persistent,
size(entry_after),
entry_after.live_until_ledger - max(entry_before.live_until_ledger, current_ledger - 1)),
0) +
if (exists(entry_before) && size(entry_after) > size(entry_before),
rent_fee_for_size_and_ledgers(
entry_after.is_persistent,
size(entry_after) - size(entry_before),
entry_before.live_until_ledger - current_ledger + 1),
0)
```
`rent_fee_for_size_and_ledgers` is the main rent primitive that computes the fee for renting `S` bytes of ledger space for the period of `L` ledgers:
```
rent_fee_for_size_and_ledgers(is_persistent, S, L) = round_up(
S * L * write_fee_per_1kb(BucketListSize) /
(1024 *
if (is_persistent, persistentRentRateDenominator, tempRentRateDenominator))
)
```
Settings values come from `StateArchivalSettings`.
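A sketch of the rent primitive with integer round-up division follows. The denominators correspond to `persistentRentRateDenominator` and `tempRentRateDenominator`; the concrete values in the test are made up.

```python
def rent_fee_for_size_and_ledgers(is_persistent: bool, size_bytes: int,
                                  n_ledgers: int, write_fee_1kb: int,
                                  persistent_denom: int, temp_denom: int) -> int:
    # S * L * write_fee_per_1kb(BucketListSize) / (1024 * rentRateDenominator),
    # rounded up; temp entries use a larger denominator, i.e. cheaper rent.
    denom = persistent_denom if is_persistent else temp_denom
    numerator = size_bytes * n_ledgers * write_fee_1kb
    return -(-numerator // (1024 * denom))
```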
Additionally, we charge for the `TTLEntry` writes of entries that had `liveUntilLedgerSeq` changed using the same rate as for any other entry write:
```
ttl_write_fee =
num_ttl_updates * feeWriteLedgerEntry +
round_up(write_fee_per_1kb(BucketListSize) * num_ttl_updates * TTL_ENTRY_SIZE / 1024)
```
where `num_ttl_updates` is the number of ledger entries that had `live_until_ledger` updated and `TTL_ENTRY_SIZE` is the size of a `TTLEntry` together with its key, set to `68` bytes.
Validity constraints: _None_
Apply-time enforcement: _None_
#### Operations
Every Soroban transaction must contain exactly 1 operation. There is no fee for operations, but there is a ledger-wide limit on transactions (and thus operations) defined by `ConfigSettingContractExecutionLanesV0.ledgerMaxTxCount`.
## 'Fee bump' semantics
Soroban transactions are compatible with the 'fee bump' mechanism via `FeeBumpTransactionEnvelope`. The total transaction fee can be increased in this way in order to account for higher network contention. However, fee bump transactions can only modify the overall fee of a transaction, and their semantics is independent of the inner ('bumped') transaction. This leads to the following properties of Soroban 'fee bumps':
* `sorobanData.resourceFee` cannot be increased via `FeeBumpTransactionEnvelope`, so only the inclusion fee can be raised
* `sorobanData.resources` cannot be modified either, which is why the fee bump envelope is transparent for resource accounting, i.e. it is not accounted for when computing the transaction size for the sake of enforcing limits/charging fees
* The previous point also applies to `TransactionSet` validation: `ledgerMaxTxsSizeBytes` limit enforcement only includes the sizes of the inner envelopes of fee bump transactions
The relation between the resource and inclusion fees for Soroban 'fee bumps' is defined in the same fashion as for regular Soroban transactions:
`feeBumpTx.fee = feeBumpTx.innerTx.sorobanData.resourceFee + fullInclusionFee`
The protocol treats a 'fee bump' as an additional operation. Thus the effective inclusion fee bid used for transaction prioritization is defined as follows:
`inclusionFeeBid = fullInclusionFee / 2 = (feeBumpTx.fee - feeBumpTx.innerTx.sorobanData.resourceFee) / 2`
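For instance, the effective bid of a fee bump can be computed as below (an illustrative sketch; the numbers in the comment are made up):

```python
def fee_bump_inclusion_bid(fee_bump_fee: int, inner_resource_fee: int) -> int:
    # The fee bump is treated as one extra operation, halving the effective bid.
    full_inclusion_fee = fee_bump_fee - inner_resource_fee
    return full_inclusion_fee // 2

# e.g. a fee bump paying 10_000_400 stroops over an inner resource fee of
# 10_000_000 stroops has an effective inclusion bid of 200 stroops.
```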
Soroban transactions might fail at apply time due to too-low declared resource values or a too-low refundable fee. We don't provide any built-in way of re-using failed transactions in the first version of Soroban. However, the user experience can be significantly improved by decoupling the transaction signature from the signatures used for the host function invocation itself, specifically by using the Soroban Authorization Framework ([CAP-0046-11](./cap-0046-11.md)). If all the signatures are decoupled, then any party can pay the transaction fees and sign new transactions in case of failure, and there is no need to use `FeeBumpTransactionEnvelope` at all (which is cheaper). Soroban nonces are only consumed on transaction success, so the signatures can be re-used as many times as needed until the transaction succeeds.
### Future work
Initial implementation of 'fee bumps' follows the 'classic' rules, which simplifies the protocol design, but comes with a number of shortcomings:
* It's not possible to increase the resource fee
* It's not possible to increase the declared resources
* The inclusion fee has to be 2x the inclusion fee of a regular transaction
Future protocol versions may address these shortcomings by introducing a new type of 'fee bump' transaction envelope. The envelope will need to have a `SorobanData` extension that overrides the `SorobanData` of the inner transaction, so that every relevant value can be increased. The new envelope may also have different inclusion fee semantics that would not count the 'fee bump' as an additional operation.
## Design Rationale
### Fee estimation
This proposal relies heavily on the existence of a "preflight" mechanism to determine all parameters needed to compute fees.
Additional logic (not covered in this CAP) will be needed to determine the market rate of resources, based for example on historical data (see below).
### Resources
Fees are used to ensure fair and balanced utilization of resources.
For each resource type, we're assuming a model where we can define:
* the maximum resource consumption for a transaction, as to protect the network.
* a reasonable price for any given transaction, as to ensure that there are no broken markets
* additional constraints may include
* a "ledger wide" maximum as to protect the network and downstream systems when producing blocks.
* an "execution lane" maximum, as to ensure that execution lanes (executed in parallel) are balanced. This CAP does not attempt to define actual semantics or fee models related to parallel execution; it is mentioned here only for context.
We’re also assuming that resource allocation is done independently of “classic” transactions (ie: the amount of resources allocated to smart contract execution is independent of other traffic). This points to “smart contract transactions” being managed as their own “phase” (in `GeneralizedTransactionSet` terminology) and having its own dedicated capacity expressed in terms of transactions (`ledgerMaxTxCount`).
Reasonable fees should be more than some minimum (on top of "on chain market dynamics") both to combat "spam" transactions and ensure that there is no strange incentive to perform certain operations on chain instead of performing them on other systems with worse properties (like centralized cloud infrastructure).
Validators are expected to vote regularly (once a quarter for example) to ensure that fees are set correctly for the broader ecosystem. The exact way fee parameters are established is outside the scope of this document.
#### Compute
[CAP-0046: WebAssembly Smart Contract Runtime Environment](https://github.com/stellar/stellar-protocol/blob/master/core/cap-0046-01.md) introduces the notion of virtual instructions. In the context of this CAP, the only thing that matters is that an "instruction" represents an arbitrary base unit for "execution time".
As a consequence, the "goal" for validators is to construct a `GeneralizedTransactionSet` that uses up to `lcl.ConfigSettingContractComputeV0.ledgerMaxInstructions`.
#### Ledger data
##### Read traffic
Reads are logically performed *before* transaction execution.
When performing reads of a ledger entry:
* The ledger entry needs to be located via some index in the ledger and the entry loaded. Depending on the underlying database technology, this translates to at least 1 disk operation.
* The bucket entry needs to be xdr decoded.
The resources to allocate in this context are therefore:
* a maximum number of ledger entry read operations in a ledger `ledgerMaxReadLedgerEntries`.
* a maximum number of bytes that can be read in a ledger `ledgerMaxReadBytes`.
The cost of a "ledger entry read" is fairly open ended, and depends on many variables. In this proposal, we give it a "base cost" for simplicity even if it translates to multiple disk operations (which is typically the case when using B-Trees for example, or if the ledger entry is retrieved by lookup over multiple buckets).
That "base cost" is defined by validators as `feeReadLedgerEntry`. This proposal does not let transactions compete directly on the number of ledger entry read operations, therefore the cost of a read operation is `feeReadLedgerEntry` (validators must still construct transaction sets that keep the number of reads below a maximum).
Transactions contain the total number of bytes that they will read from the bucket list, as well as a fee bid for reading those bytes.
The number of bytes read corresponds to the size of the latest `BucketEntry` for that ledger entry (and does not take into account the possibility that an implementation may read stale entries in buckets or may have to read other entries from a bucket).
The fee is determined based on the rate `feeRead1KB` expressed for reading 1 KB (1024 bytes) worth of data.
As transactions compete for the total read capacity `ledgerMaxReadBytes` for a given ledger, the inclusion fee goes up.
##### Write traffic and ledger size
Writes are performed *after* transaction execution, and they block the actual closing of a ledger.
When writing a ledger entry:
* The bucket entry is marshaled to binary.
* The bucket entry is appended to the topmost bucket serially.
* The bucket entry is read, hashed and written back with every level merge operation.
In this proposal, we're modeling "worst case": a bucket entry gets added to the bucket list and has to travel all the way to the bottom bucket, contributing as many bytes as the bucket entry itself.
In that case, the overhead is dominated by the size of buckets and bucket entries, and the number of bucket entries is not really a factor when merging.
Consequently, we can model the cost of a write as an append to the overall bucket list and charge a "base rate" for adding a bucket entry.
For allocating ledger entry writes, the model is analogous to "reads": a ledger is constructed so as to not exceed `ledgerMaxWriteLedgerEntries` writes, and each write contributes `feeWriteLedgerEntry` to the overall fee for that transaction (no market dynamics here).
As for "bytes written", the model that was chosen is:
* use the total bucket list size as the main resource to track.
* a cost function prices the expansion of the ledger size.
* ledger size, and therefore the price of storage, goes down as bucket entries get merged/deleted.
The cost function that was selected is similar to what was proposed in Ethereum's [make EIP 1559 more like an AMM curve](https://ethresear.ch/t/make-eip-1559-more-like-an-amm-curve/9082).
The main point being that the fee for adding `b` bytes to a bucket list of size `s` is calculated as `fee(b,s) = lfee(s + b) - lfee(s)`, where `lfee` is the "total cost to build a bucket list of a given size".
When designing for specific properties of that function, it's useful to look at the "fee rate": `fee_rate(s) = lim_{b->0} fee(b, s)/b = lim_{b->0} (lfee(s+b) - lfee(s))/b`, which is the derivative of `lfee`, i.e. `fee_rate(s) = lfee'(s)`.
Properties that we're looking for:
* validators should be able to pick parameters such that total bucket list size can grow to size `M_base` (that is deemed manageable by the ecosystem), but puts up a lot of resistance to grow to size `M_base+M_buffer` and beyond.
* `fee_rate(s)` should provide enough feedback for users and use cases to self-correct. It would not be desirable at the extreme to have very low fees up to `M_base` and suddenly "hit a wall" where fees shoot up to extremely high numbers after that.
Given those, the choice for `fee_rate` is constructed as the superposition of the following two functions (integrating yields the respective `lfee` component):
* `(feeRateM - feeRate)*s/M_base + feeRate` --> `(feeRateM - feeRate)*s^2/(2*M_base) + feeRate*s`
* `if s > M_base, exp(K*(s-M_base)/M_buffer)` --> `exp(K*(s-M_base)/M_buffer)*M_buffer/K`
Where `feeRate` and `feeRateM` are the fee rate at size 0 and `M_base` respectively.
Which together yields:
`lfee(s) = (feeRateM - feeRate)*s^2/(2*M_base) + feeRate*s + (if s > M_base, exp(K*(s-M_base)/M_buffer)*M_buffer/K, 0)`.
With `K` picked such that `fee(1, M_base+M_buffer)` is orders of magnitude larger than what the market would be willing to pay.
We simplify those functions further by charging fees linearly to the number of bytes within a ledger (see rationale below).
As a consequence the final formula looks like this:
`fee(b) = round_up(b*fee_rate(s))`
With
`fee_rate(s) = (feeRateM - feeRate)*s/M_base + feeRate + if (s > M_base, exp(K*(s-M_base)/M_buffer), 0)`
We can simplify this even further by replacing the exponential component with a steep linear slope that causes fees to be "extremely high" at `M_base+M_buffer`, which turns the formula into what is specified above:
`fee_rate(s) = (feeRateM - feeRate)*s/M_base + feeRate + if (s > M_base, K*(s-M_base)/M_buffer, 0)`
where `K >= 1`.
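A minimal sketch of this final fee curve (Python; all parameter values below are hypothetical placeholders, the real ones being network configuration settings):

```python
import math

# Hypothetical parameter values; the real ones are network settings.
FEE_RATE_0 = 10           # feeRate: per-byte fee rate at bucket list size 0
FEE_RATE_M = 1000         # feeRateM: per-byte fee rate at size M_base
M_BASE = 10 * 1024**3     # "manageable" bucket list size (10 GiB)
M_BUFFER = 1024**3        # buffer beyond which fees become prohibitive
K = 1000                  # steepness of the penalty slope (K >= 1)

def fee_rate(s):
    # Linear ramp from FEE_RATE_0 at size 0 to FEE_RATE_M at M_BASE...
    rate = (FEE_RATE_M - FEE_RATE_0) * s / M_BASE + FEE_RATE_0
    # ...plus a steep linear penalty once the size exceeds M_BASE.
    if s > M_BASE:
        rate += K * (s - M_BASE) / M_BUFFER
    return rate

def write_fee(b, s):
    # fee(b) = round_up(b * fee_rate(s)), with s the averaged ledger size.
    return math.ceil(b * fee_rate(s))

assert fee_rate(0) == FEE_RATE_0
assert fee_rate(M_BASE) == FEE_RATE_M
assert fee_rate(M_BASE + M_BUFFER) > fee_rate(M_BASE) + K  # penalty kicks in
```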
##### Ledger size averaging
Tracking the ledger size for every ledger introduces unnecessary noise that leads to the following issues:
* flooding might be somewhat imprecise due to fees changing every ledger, with a risk of transactions becoming invalid
* wrong incentives, such as trying to pay the rent for a long time period right after the bucket list merge ledger
* fee estimation is harder for clients
To alleviate all of these issues, instead of using the current ledger size, this proposal uses the average of the ledger size over a sliding window that is large enough to smooth out most of the noise coming from short-term merges, thus representing ledger size trends rather than the actual size at any given moment.
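A sliding-window average of this kind could be sketched as follows (Python; the window length is a hypothetical placeholder for the corresponding network setting):

```python
from collections import deque

WINDOW_SIZE = 30  # hypothetical; the real window size is a network setting

class LedgerSizeWindow:
    def __init__(self):
        # deque with maxlen drops the oldest sample automatically.
        self.sizes = deque(maxlen=WINDOW_SIZE)

    def record(self, bucket_list_size):
        # Called once per closed ledger with the current bucket list size.
        self.sizes.append(bucket_list_size)

    def averaged_size(self):
        # Fees are computed against this smoothed size, so a short-lived
        # dip right after a bucket merge does not translate into a fee dip.
        return sum(self.sizes) // len(self.sizes)

w = LedgerSizeWindow()
for size in [100, 120, 80]:
    w.record(size)
assert w.averaged_size() == 100
```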
##### Putting it together
"read/write" operations need to first read data before writing it. The amount of data written back can be larger or smaller than what was read, as consequence:
* The number of ledger entry reads is the number of ledger entries referenced in the ledger footprints (both read and read/write).
* The number of bytes to read is the size of bucket entries from both the read and read/write footprints.
* The number of bytes to write is the number of bytes associated with bucket entries referenced by the readWrite footprint.
* The number of ledger entries to write is the size of the read/write footprint.
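The accounting above can be sketched as follows (Python; the data model, where each footprint maps a ledger key to the size in bytes of its bucket entry, is a simplification for illustration):

```python
def resource_usage(read_only, read_write):
    # read_only / read_write: dicts of {ledger_key: bucket_entry_bytes}.
    return {
        # Entries from both footprints must be read before execution.
        "ledger_entry_reads": len(read_only) + len(read_write),
        "bytes_read": sum(read_only.values()) + sum(read_write.values()),
        # Only entries in the read/write footprint are written back.
        "ledger_entry_writes": len(read_write),
        # In practice the bytes written back may differ from the bytes
        # read; the declared entry sizes are used here for simplicity.
        "bytes_written": sum(read_write.values()),
    }

usage = resource_usage({"a": 100, "b": 50}, {"c": 200})
assert usage["ledger_entry_reads"] == 3
assert usage["bytes_read"] == 350
assert usage["ledger_entry_writes"] == 1
assert usage["bytes_written"] == 200
```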
##### Ledger size reduction
So far we've established a model for deriving fees based on the bucket list size, but there needs to be a mechanism to ensure that the cost of storage does not grow indefinitely, hurting usability of the network.
Core ideas and principles:
* Ledger space is a shared public resource, policies should be set to ensure fair use.
* cost of using ledger space should converge towards market rate over time
* in particular creating spam ledger entries should cost market rate over the long term.
* abandoned entries should not cost anything to network participants over the long term.
This proposal therefore depends on a solution with the following high level properties:
* ledger entries have to periodically pay for "rent", where the rent amount is adjusted on a per period basis (as to approximate "market rate")
* ledger entries that do not want to pay for rent anymore should be purged from the ledger, freeing up space for other entries (and lowering the overall price of storage)
* purged entries may be recoverable by relying on external recovery nodes that can reconstruct proofs that validators can verify.
#### Historical storage
Historical storage corresponds to data that needs to be persisted by full validators outside of the bucket list.
This includes transactions and their result.
As the data is stored only once but for "eternity", it has to be priced accordingly (at a minimum, this data has to be made available as to allow validators to catch up to the network).
The model retained in the context of this CAP is to just have the validators set a flat rate per byte for this kind of data (updated on a regular basis as to track cost of storage over time).
##### Transaction Result
In order to reduce the base cost of transactions, the "result" published to archive is fixed size and the actual detailed transaction result is emitted in the meta and accounted for in the same way as contract events. See [CAP-0046: Smart Contract Events](https://github.com/stellar/stellar-protocol/blob/master/core/cap-0046-08.md) for more details.
#### Extended meta data
Extended meta data here refers to parts of the meta data (produced when closing ledgers) that are not related to ledger changes:
* Smart contracts generate "events"
* `TransactionResult`
See [CAP-0046: Smart Contract Events](https://github.com/stellar/stellar-protocol/blob/master/core/cap-0046-08.md) for more details.
Fees are needed to control for the overhead in those systems.
The model retained in this CAP is a flat rate per byte model for simplicity. It is expected that this fee would be orders of magnitude smaller than what is needed to persist data on chain.
#### Bandwidth
Transactions need to be propagated to peers on the network.
At the networking layer, transactions compete for bandwidth on a per ledger basis (`ledgerMaxPropagateSizeBytes`).
Note that validators may apply additional market dynamics due to implementation constraints, especially when trying to balance propagating large transactions vs smaller ones. See [CAP-0042: Multi-Part Transaction Sets](https://github.com/stellar/stellar-protocol/blob/master/core/cap-0042.md).
##### Ephemeral payload
In the future, it may be possible to attach an `ephemeralPayload` (hash + size) that gets cleared before applying transactions (used in the context of proofs of availability).
Further reading: [blob transactions in Ethereum](https://notes.ethereum.org/@vbuterin/blob_transactions).
#### Refunds on “flat rate” resources
Some resources are priced at a rate determined on a per-ledger basis, independently of the transaction set composition.
For such resources, a transaction gets charged the “worst case” utilization at the beginning of the transaction execution, and gets refunded based on actual usage at the end of the execution.
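A sketch of this charge-then-refund flow (Python; the function names and rates are hypothetical):

```python
def charge_flat_rate(declared_units, rate):
    # Charged up front, before execution, assuming worst-case usage.
    return declared_units * rate

def refund(declared_units, actual_units, rate):
    # Refunded after execution based on actual usage.
    assert actual_units <= declared_units
    return (declared_units - actual_units) * rate

charged = charge_flat_rate(1000, 5)
refunded = refund(1000, 400, 5)
# The net fee ends up reflecting actual usage, not the declaration.
assert charged - refunded == 400 * 5
```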
#### No refund for unused capacity on market based resources
If a transaction declares that it wants to use up to X units of a given resource, nominators assemble a transaction set with that information, potentially excluding other transactions because of this.
As a consequence, there should not be any refund for unused capacity. Specifically, if a resource was priced at a given rate by validators, the fee charged will be for the entire capacity (note that this still lets validators provide discounts on the rate).
#### Transaction fees and prioritization
This proposal assumes that fees charged for resources based on network settings are "fair", and that market dynamics should be shifted towards the "intent" of any given transaction (also called "social value" of a transaction).
This implies that:
* transactions are flooded/included purely based on their social value.
* additional throttling at the overlay layer may occur when some resources are scarce (similar to how in classic, the rate of operations that can be flooded is capped).
Note that the inclusion fee is *not* related to the amount of work that a transaction does. In other words, a transaction performing twice as much work as another, but with the same inclusion fee bid, is considered to have the same priority.
This simplification entirely removes the need to model on chain a "synthetic universal resource" that could represent the amount of work a given transaction performs (such as "gas" in Ethereum, for example).
The following notable properties are expected with this model:
* adjustment to fee rates can be done using arbitrary models based on historical data, outside of the network
* in the future, additional logic can be added to have some price adjustment based on historical usage (similar to what is done for ledger space)
* validators (via CAP-0042 components) can still group similar transactions together.
#### Alternate fee model considered: multidimensional and uniform fees
Another approach considered at some point was to try to dynamically price resources so as to reach some sort of market rate as quickly as possible. This section goes over the approaches to implement such "resource markets".
Note that we’re excluding “flat rate” resources where there is no competition from this section.
There are two ways to do it:
* have a separate market for each dimension. Transactions need to explicitly bid on each dimension.
* This allows accurate price discovery for all resources. For example, if there is a lot of contention on "Instructions", this allows discovering the price of an instruction.
* Relative priority between transactions is flexible, this is good (more room for innovation by nominators) and bad (harder for clients to know what to do to “get ahead”).
* transactions just specify a "fee bid" for multiple dimensions at once (potentially all markets at once)
* there needs to be a function that computes the "minimum fee" for a given transaction, mixing all dimensions somehow (a polynomial of sorts, for example), effectively creating a "synthetic universal resource".
* comparing transactions can be done by comparing the ratio between the fee bid and the minimum fee, which is simple.
* There is no price discovery of individual dimensions as people automatically bid more on all dimensions at once. That said, nominators can just pick "market prices" for each dimension that fits recent network conditions.
Both solutions require nominators to price resources (in much the same way that CAP-0042 allows nominators to price operations in the classic protocol).
The bidding is more complicated with the first approach. In order to come up with a reasonable bid, clients not only need 'market prices' for every resource, but also need to take into account the comparison algorithm used during transaction set building. For example, validators may consider ordering transactions by a tuple of bid-to-min-fee ratios for every resource (e.g. (instructions, IO, bandwidth)), and in order to prevent abuse of a fixed order, they would dynamically come up with that order depending on the current contents of the transaction queue. It's not obvious how to bid optimally against such an algorithm, as priorities might change several times every ledger.
For the second approach, bidding is comparable to that for classic transactions: there is just a single 'market rate' for smart contract transactions, which can be used both as part of the bidding strategy and for comparison. The downside is that it requires maintaining parameters that give different weights to the various resources so as to come up with a "synthetic universal resource" that the network can reason about.
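The second approach can be sketched as follows (Python; the weights are hypothetical placeholders for validator-maintained parameters):

```python
# Per-resource weights collapsing all dimensions into one synthetic
# minimum fee (hypothetical values; validators would maintain these).
WEIGHTS = {"instructions": 0.01, "io_bytes": 0.5, "bandwidth_bytes": 0.2}

def min_fee(resources):
    # The weighted sum acts as a "synthetic universal resource".
    return sum(WEIGHTS[r] * v for r, v in resources.items())

def priority(fee_bid, resources):
    # Comparing transactions reduces to comparing this single ratio.
    return fee_bid / min_fee(resources)

tx_a = {"instructions": 10_000, "io_bytes": 100, "bandwidth_bytes": 500}
tx_b = {"instructions": 20_000, "io_bytes": 200, "bandwidth_bytes": 1000}
# Twice the resources with twice the bid yields the same priority.
assert priority(500, tx_a) == priority(1000, tx_b)
```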
Related work:
* Ethereum [Multidimensional EIP-1559](https://ethresear.ch/t/multidimensional-eip-1559/11651).
## Protocol Upgrade Transition
None, this fee model will only apply to smart contract transactions.
A subsequent CAP may update the fee model for the existing classic transaction subsystem so as to be more consistent with this CAP.
### Resource Utilization
There are no significant resource utilization changes compared to the classic fee model.
## Security Concerns
The resource fees and limits are introduced to maintain network health, and therefore all the risks are around network liveness and the possibility of DoS, not necessarily security.
Incorrect configuration or incorrect enforcement calibration might lead to high ledger close times or spam.
## Test Cases
The fees are covered in most of the Soroban-related test cases.
## Implementation
[TransactionFrame::validateSorobanResources](https://github.com/stellar/stellar-core/blob/0df2e0c6f80d2c461870e837fbe50fa16f9048f3/src/transactions/TransactionFrame.cpp#L588) enforces the limits at transaction validation time.
[InvokeHostFunctionOpFrame::doApply](https://github.com/stellar/stellar-core/blob/0df2e0c6f80d2c461870e837fbe50fa16f9048f3/src/transactions/InvokeHostFunctionOpFrame.cpp#L379) performs most of the apply-time resource limit enforcement.
The [`fees.rs`](https://github.com/stellar/rs-soroban-env/blob/d92944576e2301c9866215efcdc4bbd24a5f3981/soroban-env-host/src/fees.rs) file of the Soroban host contains all of the fee computation logic specified here.