p2p: Increase tx relay rate #27630
Conversation
In the presence of smaller transactions on the network, blocks can sustain a higher relay rate than 7 tx/second. In this event, the per-peer inventory queues can grow too large. Using 11 tx/second as an estimate for the max network throughput, this commit bumps the rate up to 22 tx/s (for inbound peers), providing a factor of 2 safety margin. Outbound peers continue to receive relayed transactions at 2.5x the rate of inbound peers, for a rate of 55 tx/second.
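For context, here is a rough sketch of where these rates live, using the constant names from Bitcoin Core's net_processing.cpp; the exact diff in this PR may differ, so treat the values and derivations below as illustrative:

```cpp
#include <chrono>

using namespace std::chrono_literals;

// Per-peer announcement rate: previously 7 tx/s; this PR's proposal is
// 22 tx/s, i.e. the 11 tx/s network-throughput estimate with a 2x margin.
static constexpr unsigned int INVENTORY_BROADCAST_PER_SECOND{22};

// Inbound peers are flushed on a ~5s average Poisson timer and outbound
// peers on a ~2s one, which is where the 2.5x outbound multiplier
// (2.5 * 22 = 55 tx/s) comes from.
static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL{5s};
static constexpr auto OUTBOUND_INVENTORY_BROADCAST_INTERVAL{2s};

// Cap on invs per flush, derived from the per-second rate and the
// inbound flush interval (22 tx/s * 5s = 110 invs per inbound flush).
static constexpr unsigned int INVENTORY_BROADCAST_MAX{
    INVENTORY_BROADCAST_PER_SECOND *
    static_cast<unsigned int>(
        std::chrono::duration_cast<std::chrono::seconds>(
            INBOUND_INVENTORY_BROADCAST_INTERVAL).count())};
```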
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Reviews
See the guideline for information on the review process. If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts
Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Concept ACK - 7 tx/s seems outdated.

It would be nice if we had a proper simulation framework for stuff like this. For example, what would happen during a mempool storm like we had this past week if only a portion (e.g. 50%) of the network upgrades to this patch? Would that increase the load on the nodes that are still clearing their relay queues at only 7 tx/s (ignoring #27610 here)?
When this limit was introduced in f2d3ba7, it was pointed out that the per-peer rate doesn't have to be that high:
An alternative - or complementary; it's not incompatible - approach I had in mind was to randomly drop half of the queue.

What worries me about just increasing the tx relay rate is that it increases the total worst-case bandwidth on the network. On the other hand, not increasing it could mean that your mempool can't keep up with the next block if it only has a single peer. That in turn would hurt (compact) block relay. But for outbound we already allow double the rate, and we always try to have 8+ of those. So I'm not sure if that's really a problem.
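As a concrete illustration of the "randomly drop half" idea, here is a minimal sketch; `TxId` and `TrimInvQueueRandomly` are hypothetical names standing in for whatever net_processing actually uses, not Bitcoin Core code:

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

using TxId = unsigned long long; // stand-in for uint256

// When a peer's pending inv queue grows past max_size, keep a uniformly
// random subset and discard the rest. Each entry survives with probability
// max_size / queue.size(), independent of feerate or topology.
void TrimInvQueueRandomly(std::vector<TxId>& queue, std::size_t max_size, std::mt19937& rng)
{
    if (queue.size() <= max_size) return;
    std::shuffle(queue.begin(), queue.end(), rng);
    queue.resize(max_size);
}
```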
I just realized this would act as a shredder on clusters. Some peers would only get the parent (that's fine), but (for n > 2 clusters) most would get children for which they don't have the parent yet. So any kind of load distribution between peers would have to be done per cluster, perhaps by randomly dropping clusters until the queue is at an acceptable size, or by coordinating the distribution globally for all peers. This sounds a lot more complicated.

We could also drop the lower half of the queue. This gives preferential relay treatment to things at the top of the mempool; I don't know if that's bad or not.
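A minimal sketch of the "drop the lower half" alternative, again with hypothetical names (`QueuedTx`, `DropLowerHalf`) and a simplified feerate field standing in for the mempool's ancestor-feerate ordering:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct QueuedTx {
    double fee_rate;         // sat/vB; stand-in for the mempool's ancestor feerate
    unsigned long long txid; // stand-in for uint256
};

// Keep only the higher-feerate half of the queue, giving preferential
// relay treatment to transactions near the top of the mempool.
void DropLowerHalf(std::vector<QueuedTx>& queue)
{
    if (queue.size() < 2) return;
    const std::size_t keep = queue.size() / 2;
    // Partition so the `keep` highest-feerate entries come first, then
    // discard everything below that cut. O(n), no full sort needed.
    std::nth_element(queue.begin(), queue.begin() + keep, queue.end(),
                     [](const QueuedTx& a, const QueuedTx& b) { return a.fee_rate > b.fee_rate; });
    queue.resize(keep);
}
```

Note that this per-peer, per-transaction trimming is exactly what the comment above warns would shred clusters; a cluster-aware variant would drop whole clusters rather than individual entries.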
I agree this limit is too small after segwit. I think plausible datapoints for the rate at which we can sustain confirmable txs across the network might be:
(The 1-in/1-out case seems the closest justification for the original 7 tx/s, and 7:18.5 is pretty similar to 14:16.5, so I'd have just doubled what we have.) RBF increases these rates by an arbitrary multiple, depending on the difference between the feerate at the top vs bottom of the mempool.
Post #27610 I'm not seeing queues grow excessively large. During bursts where there is a queue (ie I'm sending out invs with 35+ txs), I'm seeing queues of >1000 txs about 20% of the time, and I'm sending out invs in batches of 55 or more (ie >= 11 tx/s) about 3% of the time. I think that data probably also suggests a bump would be worthwhile.
I'm not sure this actually provides a safety margin though? I think the queues grow large because of the difference between your inbound tx rate and your outbound tx rate; but if you're operating a listening node, and have other honest peers running the same version of Bitcoin Core, your inbound rate will be up to 2.5x this rate (ie 55 tx/s), which means (some of) your queues might grow at a rate of 33 tx/s instead of 11 tx/s, due to the difference between the rate at which you receive txs from one set of inbound peers and the rate at which you send txs to another set of inbound peers. And if you're connected to peers which don't limit the rate they send txs to you, that can again be arbitrarily larger.

If we have a good idea of what "too large" means for these queues, a better approach to getting a safety margin might be changing the mechanism in #27610 to instead target a specific maximum queue size, and send something along the lines of

So I guess I'd suggest dropping the factor of 2, and just setting it to 11 or 14 tx/s (a 60% or 100% increase on what we currently have), with the corresponding bump for recent invs.
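Since the exact formula is elided in the comment above, here is one purely illustrative way a queue-size target could translate into a per-flush batch size; the function and parameter names are hypothetical and this is not a reconstruction of the elided proposal:

```cpp
#include <algorithm>
#include <cstddef>

// Size an inv batch so the queue drains back toward a target maximum,
// rather than at a fixed rate. Illustrative only.
std::size_t InvBatchSize(std::size_t queue_size, std::size_t target_max_queue,
                         std::size_t base_per_flush)
{
    if (queue_size <= target_max_queue) {
        return std::min(queue_size, base_per_flush);
    }
    // Above target: spread the excess over roughly four flushes so the
    // queue converges to the target instead of growing without bound.
    const std::size_t excess = queue_size - target_max_queue;
    return base_per_flush + (excess + 3) / 4;
}
```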
Concept ACK on the increase.
I don't feel confident enough to judge the exact numbers proposed.
Is the proposed 11 tx/second the max network throughput from a full node with a default, random transaction-relay topology to the miners, or something else? And what is the size of the small transactions that makes a higher relay rate sustainable? I think we can adjust rebroadcast frequency as a function of those parameters on the Lightning side.
🐙 This pull request conflicts with the target branch and needs rebase.

Are you still working on this?

Closing as up for grabs due to lack of activity.
In the presence of smaller transactions on the network, blocks can sustain a higher relay rate than 7 tx/second. In this event, the per-peer inventory queues can grow too large.
Using 11 tx/second as an estimate for the max network throughput, this commit bumps the rate up to 22 tx/s (for inbound peers), providing a factor of 2 safety margin.
Outbound peers continue to receive relayed transactions at 2.5x the rate of inbound peers, for a rate of 55 tx/second.
Note that this triples the per-peer memory needed for the rolling bloom filter m_recently_announced_invs.
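To see where the ~3x memory figure comes from: m_recently_announced_invs must remember every inv announced within the window during which a peer may still request the transaction, so its capacity scales linearly with the announcement rate. A sketch, with the window length as a rough assumption and the new value illustrative rather than quoted from the patch:

```cpp
// INVENTORY_MAX_RECENT_RELAY sizes the m_recently_announced_invs rolling
// bloom filter. At the old 7 tx/s its value of 3500 covered a window of
// 3500 / 7 = 500 seconds (~8.3 minutes); keeping the same window at
// 22 tx/s roughly triples it.
static constexpr unsigned int INVENTORY_BROADCAST_PER_SECOND{22};
static constexpr unsigned int RECENT_RELAY_WINDOW_SECONDS{500}; // assumption
static constexpr unsigned int INVENTORY_MAX_RECENT_RELAY{
    INVENTORY_BROADCAST_PER_SECOND * RECENT_RELAY_WINDOW_SECONDS}; // 11000 vs. old 3500
```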