feat: add eviction ordering for blob transactions #5335
Conversation
Force-pushed from 462f3f0 to d99a3e5
Ok, I sort of underestimated how hard the specific geth logic would be to implement, but at least I understand it much better now. Putting this up for review. This PR implements the same fee-jump-based priority, but the reordering is much simpler than geth's because it just resorts the entire subpool. I plan on implementing eviction logic next, based on this ordering. Then we can implement the geth logic, which should be more efficient. Summary todo list:
Optional tasks / other optimizations:
```go
// evictionPriority calculates the eviction priority based on the algorithm
// described in the BlobPool docs for both fee components.
//
// This method takes about 8ns on a very recent laptop CPU, recalculating about
// 125 million transaction priority values per second.
func evictionPriority(basefeeJumps float64, txBasefeeJumps, blobfeeJumps, txBlobfeeJumps float64) int {
```

```go
// dynamicFeeJumps calculates the log1.125(fee), namely the number of fee jumps
// needed to reach the requested one. We only use it when calculating the jumps
// between 2 fees, so it doesn't matter from what exact number it returns;
// it returns the result from (0, 1, 1.125).
//
// This method is very expensive, taking about 75ns on a very recent laptop CPU,
// but the result does not change with the lifetime of a transaction, so it can
// be cached.
func dynamicFeeJumps(fee *uint256.Int) float64 {
```
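For orientation, here is a minimal Rust sketch of the same fee-jump math — an illustration only, not this PR's code; a `u128` stands in for the 256-bit fee type. A "jump" is one 12.5% fee step, so `log1.125(fee) = log2(fee) / log2(1.125)`, and only the difference between two such values is meaningful:

```rust
/// log2(1.125); dividing a log2 by this converts it to log1.125.
const LOG2_1_125: f64 = 0.16992500144231237;

/// Number of 12.5% fee jumps needed to reach `fee`. Only the difference
/// between two of these values is meaningful, mirroring dynamicFeeJumps above.
fn dynamic_fee_jumps(fee: u128) -> f64 {
    if fee == 0 {
        return 0.0; // log2(0) is undefined; mirror geth's zero-fee guard
    }
    (fee as f64).log2() / LOG2_1_125
}

fn main() {
    let base = dynamic_fee_jumps(1_000_000_000); // 1 gwei
    // a 12.5% higher fee is exactly one jump away
    let bumped = dynamic_fee_jumps(1_125_000_000);
    assert!((bumped - base - 1.0).abs() < 1e-9);
}
```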
```rust
/// Identifier that tags when transaction was submitted in the pool.
pub(crate) submission_id: u64,
```
looks like we no longer need this because we only have a single dimension with f64
right, makes sense, removed the field from the subpool altogether
```rust
/// Resorts the transactions in the pool based on the pool's current [PendingFees].
pub(crate) fn reprioritize(&mut self) {
    for tx in self.by_id.values_mut() {
        tx.update_priority(&self.pending_fees);
    }
}
```
the issue here is that this only updates the `BlobOrd` in

```rust
struct BlobTransaction<T: PoolTransaction> {
    /// Actual blob transaction.
    transaction: Arc<ValidPoolTransaction<T>>,
    /// The value that determines the order of this transaction.
    ord: BlobOrd,
}
```

but the sorted list

```rust
/// _All_ transactions sorted by blob priority.
all: BTreeSet<BlobTransaction<T>>,
```

is unaffected by this.

do we need the `BlobOrd` as a value in `by_id`? if not, we should just drain the `all` set, update the priorities, and recollect
we need them to be consistent, but this is a good point, changed this to drain, update, and recollect. `by_id` is also updated now
```rust
let mut new_all_transactions: Vec<_> = Default::default();
while let Some(mut tx) = self.all.pop_last() {
    tx.update_priority(&self.pending_fees);
    new_all_transactions.push(tx);
}

// we need to update `by_id` as well because removal from `all` can only happen if the
// `BlobTransaction`s in each struct are consistent
for tx in self.by_id.values_mut() {
    tx.update_priority(&self.pending_fees);
}

self.all = new_all_transactions.into_iter().collect();
```
this does two allocations; we could instead do a `mem::take` and push directly back into `all`, so we don't need the temp vec here
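A sketch of that suggestion, reusing the method and fields shown above (not necessarily the final code): take `all` out of `self`, leaving an empty set behind, then re-insert each transaction with its refreshed priority, skipping the temporary `Vec`:

```rust
/// Resorts the transactions based on the pool's current [PendingFees],
/// without the intermediate Vec.
pub(crate) fn reprioritize(&mut self) {
    // `mem::take` swaps in an empty BTreeSet, giving us ownership of the old
    // one so each transaction can be mutated and inserted straight back.
    for mut tx in std::mem::take(&mut self.all) {
        tx.update_priority(&self.pending_fees);
        self.all.insert(tx);
    }
    // keep `by_id` consistent with `all`
    for tx in self.by_id.values_mut() {
        tx.update_priority(&self.pending_fees);
    }
}
```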
```rust
// we need to update `by_id` as well because removal from `all` can only happen if the
// `BlobTransaction`s in each struct are consistent
for tx in self.by_id.values_mut() {
    tx.update_priority(&self.pending_fees);
}
```
I believe we could potentially optimize this by wrapping the `BlobOrd` in a `RefCell`, if we're careful to always resort the map after changing the priority.

for now this is fine
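A rough, self-contained sketch of that idea with stand-in types (not the pool's real ones): the ordering value sits behind a `RefCell`, so it can be updated through the shared references a set hands out — with the caveat (the "if we're careful" part) that `BTreeSet` does not notice keys changing underneath it, so the set must be rebuilt afterwards:

```rust
use std::cell::RefCell;
use std::cmp::Ordering;
use std::collections::BTreeSet;

// Stand-in for BlobTransaction: the ordering value is interiorly mutable.
#[derive(Debug)]
struct Entry {
    id: u64,
    ord: RefCell<u64>,
}

impl PartialEq for Entry {
    fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == Ordering::Equal
    }
}
impl Eq for Entry {}
impl PartialOrd for Entry {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Entry {
    fn cmp(&self, other: &Self) -> Ordering {
        // compare by priority, tie-break on the unique id
        (*self.ord.borrow())
            .cmp(&*other.ord.borrow())
            .then_with(|| self.id.cmp(&other.id))
    }
}

fn reprioritize(all: &mut BTreeSet<Entry>) {
    // Update priorities in place through shared references; the set's
    // internal order is now stale...
    for entry in all.iter() {
        *entry.ord.borrow_mut() += 1;
    }
    // ...so rebuild it to restore the ordering invariant.
    *all = std::mem::take(all).into_iter().collect();
}

fn main() {
    let mut all = BTreeSet::new();
    all.insert(Entry { id: 1, ord: RefCell::new(5) });
    all.insert(Entry { id: 2, ord: RefCell::new(7) });
    reprioritize(&mut all);
    assert_eq!(*all.first().unwrap().ord.borrow(), 6);
}
```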
```diff
@@ -237,6 +314,6 @@ impl PartialOrd<Self> for BlobOrd {

 impl Ord for BlobOrd {
     fn cmp(&self, other: &Self) -> Ordering {
-        other.submission_id.cmp(&self.submission_id)
+        other.priority.total_cmp(&self.priority)
     }
 }
```
oh, I now remember why we had the submission id: it's used as a guarantee that all values are unique, because now we lose entries if a tx has the same priority...

we could also take a look at `BinaryHeap`, maybe that's a better solution
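A tiny self-contained illustration of the loss (plain integers stand in for the real types): `BTreeSet` keeps only one of any two values whose `cmp` returns `Equal`, so ordering on priority alone silently drops same-priority transactions, while a unique `submission_id` tie-break keeps both:

```rust
use std::collections::BTreeSet;

fn main() {
    // Ordered by priority alone: tx B compares equal to tx A and is dropped.
    let mut by_priority: BTreeSet<u64> = BTreeSet::new();
    by_priority.insert(10); // priority of tx A
    by_priority.insert(10); // priority of tx B: silently lost
    assert_eq!(by_priority.len(), 1);

    // Tie-breaking on a unique submission id keeps both entries.
    let mut with_tiebreak: BTreeSet<(u64, u64)> = BTreeSet::new();
    with_tiebreak.insert((10, 1)); // (priority, submission_id) of tx A
    with_tiebreak.insert((10, 2)); // (priority, submission_id) of tx B
    assert_eq!(with_tiebreak.len(), 2);
}
```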
ohh right, yeah we would lose elements now with the btreeset. BinaryHeap does seem like it would work
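For reference, a tiny illustration of why `BinaryHeap` sidesteps the problem: it is a priority queue rather than a set, so entries that compare equal are all retained:

```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap = BinaryHeap::new();
    heap.push(10); // priority of tx A
    heap.push(10); // priority of tx B: not deduplicated
    assert_eq!(heap.len(), 2);
    assert_eq!(heap.pop(), Some(10)); // max-heap: highest priority first
    assert_eq!(heap.pop(), Some(10));
}
```

One trade-off to weigh: `BinaryHeap` offers no cheap lookup or removal of arbitrary elements, which a pool needs when transactions are replaced or mined.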
lgtm, we should consider replacing the `BTreeSet` with a `BinaryHeap`

will do in a followup, also added some tests from geth
Implements blob subpool ordering, based on geth's. This PR resorts the entire subpool when fees are updated, which is simpler but less efficient than geth's reordering logic. Reordering is necessary because there are now two fee types (each with its own cap) that can move independently of each other.
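For context, a hypothetical sketch of the two fee dimensions the description refers to — only the `PendingFees` name appears in the snippets above; the field names and types here are assumptions:

```rust
/// Hypothetical shape of the pool-wide fee state referenced as
/// `self.pending_fees` above. Each dimension follows its own update rule
/// (EIP-1559 base fee, EIP-4844 blob fee), so either can rise while the
/// other falls, which is why a blob transaction's eviction priority must be
/// recomputed whenever they change.
pub struct PendingFees {
    /// Current pending base fee (execution gas).
    pub base_fee: u128,
    /// Current pending blob fee (data gas).
    pub blob_fee: u128,
}
```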