Update "surge pricing" to be more fair/limit based on operations #75
I'm not sure that
Oh I see - I missed that part. It makes the change even simpler. As part of this change we should ditch the fee ratio and implement a proper comparison between transactions; converting to a ratio (a double) is suboptimal. I am editing the proposal to reflect that.
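A minimal sketch of what such an integer comparison could look like (hypothetical names and types; not the actual stellar-core code):

```cpp
#include <cstdint>

// Compare fee-per-operation without converting to a double:
//   left.fee / left.nb_ops > right.fee / right.nb_ops
//     <=>  left.fee * right.nb_ops > right.fee * left.nb_ops
// The products are widened to 128 bits (a GCC/Clang extension) so they
// cannot overflow for 64-bit fees and 32-bit operation counts.
bool feeRateGreater(uint64_t leftFee, uint32_t leftOps,
                    uint64_t rightFee, uint32_t rightOps)
{
    return static_cast<unsigned __int128>(leftFee) * rightOps >
           static_cast<unsigned __int128>(rightFee) * leftOps;
}
```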
Does this mean that for a given account, you will include only the transaction with the lowest sequence number (i.e. the sequence number that is the account's current sequence number plus one)? Or (as the next step leads me to believe) will you include any transaction whose sequence number is part of a continuous range starting with the account's current sequence number plus one?
I'm not sure lexicographic sort works here. If I understand this correctly, it would result in Alice's transaction 6 being ordered before Alice's transaction 5 if the former has a higher fee-per-operation. You could skip over such out-of-order transactions, but then you'll have to do another sweep to include them, leading to the counterintuitive property that setting a higher fee on transaction 6 would mean that it gets included later than if it had the same fee as transaction 5.

I think the best sorting algorithm might be what Geth does: for each account, construct a queue of transactions for that account, sorted with the lowest sequence number first. Then construct a heap of those queues, sorted based on the fee rate of the first transaction in each queue (so that the queue whose first transaction has the highest fee rate is on top of the heap). After each transaction, shift that transaction out of the queue, then fix the heap.

This suggests another possible tiebreaker: the fee rate of the second transaction in the queue for that account (and then the third, and so on). But this wouldn't even get you to complete optimization (which is a pretty complex knapsack problem), and you would still need to ultimately resort to a final pseudo-random tiebreaker like the XORed source ID.
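A rough sketch of that heap-of-queues approach (a toy model with hypothetical types; not Geth's or stellar-core's actual code):

```cpp
#include <cstdint>
#include <deque>
#include <queue>
#include <vector>

struct Tx
{
    uint64_t fee;    // total fee offered by the transaction
    uint32_t nbOps;  // number of operations in the transaction
    uint64_t seqNum; // account sequence number
};

// One queue per account, lowest sequence number first (assumed pre-sorted
// and non-empty).
using AccountQueue = std::deque<Tx>;

// Orders the heap so the queue whose *front* transaction pays the highest
// fee per operation ends up on top.
struct ByFrontFeeRate
{
    bool operator()(AccountQueue const& a, AccountQueue const& b) const
    {
        Tx const& x = a.front();
        Tx const& y = b.front();
        return static_cast<unsigned __int128>(x.fee) * y.nbOps <
               static_cast<unsigned __int128>(y.fee) * x.nbOps;
    }
};

// Pop transactions in fee-rate order while respecting per-account sequence
// order and a total operation budget.
std::vector<Tx>
buildTxSet(std::vector<AccountQueue> queues, uint32_t maxOps)
{
    std::priority_queue<AccountQueue, std::vector<AccountQueue>,
                        ByFrontFeeRate>
        heap(ByFrontFeeRate{}, std::move(queues));

    std::vector<Tx> result;
    uint32_t opCount = 0;
    while (!heap.empty())
    {
        AccountQueue q = heap.top(); // copied for simplicity; a real
        heap.pop();                  // implementation would use indices
        Tx tx = q.front();
        if (opCount + tx.nbOps > maxOps)
            break; // budget exhausted (could also skip just this account)
        result.push_back(tx);
        opCount += tx.nbOps;
        q.pop_front();
        if (!q.empty())
            heap.push(std::move(q)); // "fix the heap" for this account
    }
    return result;
}
```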
Yes, you are correct that if somebody submits a transaction with a higher fee, it may cause an artificial delay in their account's queue. The reason I kept the simple sort (we do a similar sort today), as opposed to a more complex scheme, is that the complex scheme seems to be the wrong trade-off: we can avoid making our implementation more complex and less efficient by not accommodating something that clients can easily deal with (in this case, by submitting batches of transactions at the same price).
Something I forgot to mention in the changes we should make: with this change, we should keep track of the operation limit as we add transactions to the txset (instead of post-processing).
I have been thinking about this a bit more. As we're looking at fairness, I think that it's actually important to preserve the "round robin" property that we have when applying transactions (for transactions with the same fee ratio); otherwise we may end up allocating too many transactions from the same account in the same ledger. This ends up being pretty much what @robdenison described (reworded here to summarize): build a queue of transactions per account, lowest sequence number first, and a heap of those queues keyed on the fee rate of each queue's front transaction. I would suggest not using the next transaction's fee ratio to break ties, but instead using pseudo-randomness.
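To make the pseudo-random part concrete, the tie could be broken on a value like the one below (a hypothetical sketch; `ledgerSeed` stands in for something like the previous ledger hash):

```cpp
#include <cstdint>

// When two queues' front transactions pay the same fee per operation,
// order the accounts by sourceID XOR'd with a per-ledger seed: the order
// is stable within one ledger but effectively shuffled between ledgers.
uint64_t accountTieBreakKey(uint64_t sourceId, uint64_t ledgerSeed)
{
    return sourceId ^ ledgerSeed;
}
```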
Would it be detrimental to order the queues in the heap randomly after sorting by fee rate, that is, without referring to sourceID, to eliminate predictability in the filter? I was thinking of a hypothetical spammer with many (tens of thousands of) accounts "racing" the sort by choosing the particular account that results in the lowest `sourceID XOR hash_ofPreviousLedger`.
Ah yes, when I first wrote the proposal I thought we would need surge pricing at the consensus layer (we don't). Instead, we can use any random seed for sorting accounts (i.e., replace `hash_ofPreviousLedger` with any random value).

The problem that is left is to design a good function to pick between two transaction sets during consensus (which txset to pick when there are several nominated): right now we pick the one with the highest number of transactions first, then the highest hash.
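The current selection rule, as described above, amounts to something like this (a sketch only; the actual stellar-core code compares XDR structures):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

using Hash = std::array<uint8_t, 32>;

// Between nominated transaction sets, prefer the one with more
// transactions; break ties with the lexicographically highest hash.
bool isBetterTxSet(size_t txCountA, Hash const& hashA,
                   size_t txCountB, Hash const& hashB)
{
    if (txCountA != txCountB)
        return txCountA > txCountB;
    return hashA > hashB; // std::array compares lexicographically
}
```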
I put together a first draft of https://github.com/stellar/stellar-protocol/blob/master/core/cap-0005.md that should address this particular issue. I had to add a policy to disallow arbitrary increase of
I thought of an issue in transaction submission that needs to be addressed. With the current proposal, the replace-by-fee mechanism is actually not a real deterrent against DoSing the network. The problem can be mitigated in two ways:
Closing this issue, as CAP-0005 has been accepted. |
I think that the implementation to enforce a maximum number of transactions predated the addition of operations.
How this works today:
the "surge pricing" logic kicks in when a validator encounters more transaction candidates than
maxTxPerLedger
(defined in the ledger header). The first pass at filtering is done by sorting transaction candidates by fee ratio (defined asfee/min_fee
wheremin_fee
isbase_fee*nb_operations
).The idea is that in a situation where too many transactions are pending, we prioritize the ones with higher average fees.
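For example, assuming purely for illustration a `base_fee` of 100: a 2-operation transaction paying a total fee of 400 has `min_fee = 200` and a fee ratio of 2.0, so it sorts ahead of a 1-operation transaction paying 150 (fee ratio 1.5).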
If fees are equal, the `sourceID` of the transactions is used as a tie breaker.

Proposal for new behavior:
1. limit ledgers by number of operations, with a maximum of `maxOperationsPerLedger` defined by the network
2. order transactions by fee per operation, using integer math (compare `left.fee * right.nb_ops` with `right.fee * left.nb_ops`)
3. break ties pseudo-randomly: `sourceID XOR hash_ofPreviousLedger` seems to fit the bill.

In order to implement 1 and 2, the candidate transaction set is built as follows (see the sketch after this list):

- sort transactions by fee per operation (descending), `source ID` (ascending), sequence number (ascending)
- iterate over the sorted candidates; as long as the transaction still fits (`newTxSetOpCount <= maxOperationsPerLedger`), add it to the set
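A sketch of that build loop under assumed types (`Tx` and its field names are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Tx
{
    uint64_t fee;
    uint32_t nbOps;
    uint64_t sourceId;
    uint64_t seqNum;
};

// Sort by fee per operation (descending), then sourceID and sequence
// number (ascending), and greedily add while the operation budget allows.
// (Item 3 above would XOR sourceId with a per-ledger seed first.)
std::vector<Tx>
buildCandidateSet(std::vector<Tx> txs, uint32_t maxOpsPerLedger)
{
    std::sort(txs.begin(), txs.end(), [](Tx const& l, Tx const& r) {
        auto lk = static_cast<unsigned __int128>(l.fee) * r.nbOps;
        auto rk = static_cast<unsigned __int128>(r.fee) * l.nbOps;
        if (lk != rk)
            return lk > rk; // higher fee per operation first
        if (l.sourceId != r.sourceId)
            return l.sourceId < r.sourceId;
        return l.seqNum < r.seqNum;
    });

    std::vector<Tx> set;
    uint32_t opCount = 0;
    for (auto const& tx : txs)
    {
        // Track the limit while adding (no post-processing pass).
        if (opCount + tx.nbOps <= maxOpsPerLedger)
        {
            set.push_back(tx);
            opCount += tx.nbOps;
        }
    }
    return set;
}
```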
Something else to discuss here is what to do with the ledger header.
Right now the XDR is the `LedgerHeader` definition; in particular, the line declaring `maxTxSetSize` is the one of interest (see the abbreviated sketch below).
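For context, the relevant portion of stellar-core's `LedgerHeader` XDR looks roughly like this (abbreviated; unrelated fields omitted):

```
struct LedgerHeader
{
    // ... other fields elided ...
    uint32 baseFee;      // base fee per operation in stroops
    uint32 baseReserve;  // account base reserve in stroops
    uint32 maxTxSetSize; // maximum size a transaction set is allowed to be

    // reserved for future use
    union switch (int v)
    {
    case 0:
        void;
    }
    ext;
};
```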
There are two options on how we would change the ledger header:
1. rename `maxTxSetSize` to be more generic, something like `txSetSize`, that maps to "number of transactions" in older versions of the protocol and "number of operations" in newer versions of the protocol (binary format stays the same).
2. use the `ext` union: to support `v = 1`, we would add the new field for the operation limit, and `maxTxSize` would be set to `numeric_limits<uint32>::max()` (a sketch follows this list).
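Option 2 might look something like this (a hypothetical sketch of the `ext` extension; `maxOperationsPerLedger` is the field name used earlier in this proposal):

```
union switch (int v)
{
case 0:
    void;
case 1:
    uint32 maxOperationsPerLedger; // new limit; the legacy maxTxSetSize
                                   // above would be set to UINT32_MAX
}
ext;
```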
I prefer option 1 as it keeps the ledger header tidy.
There are very few consumers of this field, and for the most part they only use it for reporting. Code that actually cares about it can, just like core's implementation, interpret it based on the protocol version.
This issue was first opened as a core issue stellar/stellar-core#1030