
p2p: Increase tx relay rate #27630

Closed
wants to merge 1 commit into from

Conversation

sdaftuar
Member

In the presence of smaller transactions on the network, blocks can sustain a higher relay rate than 7 tx/second. In this event, the per-peer inventory queues can grow too large.

Using 11 tx/second as an estimate for the max network throughput, this commit bumps the rate up to 22 tx/s (for inbound peers), providing a factor of 2 safety margin.

Outbound peers continue to receive relayed transactions at 2.5x the rate of inbound peers, for a rate of 55 tx/second.

Note that this triples the per-peer memory needed for the rolling bloom filter m_recently_announced_invs.
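
For reference, a rough back-of-the-envelope of the numbers above (the constant being bumped is INVENTORY_BROADCAST_PER_SECOND in net_processing.cpp; the sketch below is illustrative arithmetic, not a copy of the patch):

```cpp
// Back-of-the-envelope for the figures in the PR description.
#include <cstdio>

int main()
{
    const double old_rate = 7.0;            // tx/s, current inbound announce rate
    const double new_rate = 22.0;           // tx/s, proposed (2x the ~11 tx/s estimate)
    const double outbound_multiplier = 2.5; // outbound peers are trickled to more often

    printf("outbound rate: %.0f tx/s\n", new_rate * outbound_multiplier); // 55
    printf("filter growth: ~%.1fx\n", new_rate / old_rate);               // ~3x for m_recently_announced_invs
}
```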

@DrahtBot
Contributor

DrahtBot commented May 11, 2023

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Reviews

See the guideline for information on the review process.

| Type | Reviewers |
| --- | --- |
| Concept ACK | dergoegge, vasild |

If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #27675 (p2p: Drop m_recently_announced_invs bloom filter by ajtowns)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@DrahtBot DrahtBot added the P2P label May 11, 2023
@dergoegge
Member

Concept ACK - 7 tx/s seems outdated

It would be nice if we had a proper simulation framework for stuff like this. For example, what would happen during a mempool storm like we had this past week, if only a portion (e.g. 50%) of the network upgrades to this patch? Would that increase the load on the nodes that are still clearing their relay queues at only 7 tx/s (ignoring #27610 here)?
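
To make the concern concrete, a toy backlog calculation under the assumptions in this question (an upgraded peer offering 22 tx/s, an un-upgraded node still announcing at 7 tx/s) looks like this; it is only illustrative arithmetic, not the simulation framework being asked for:

```cpp
// Toy model: during a sustained burst, the un-upgraded node's announcement
// backlog toward its own peers grows at the difference between the two rates.
#include <cstdio>

int main()
{
    const double offered = 22.0; // tx/s offered by an upgraded peer (this patch)
    const double drained = 7.0;  // tx/s an un-upgraded node still announces
    for (int t = 0; t <= 600; t += 120) {
        printf("after %3d s: backlog ~ %4.0f tx\n", t, (offered - drained) * t);
    }
}
```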

@Sjors
Member

Sjors commented May 12, 2023

When this limit was introduced in f2d3ba7 it was pointed out that the per-peer rate doesn't have to be that high:

Limits the amount of transactions sent in a single INV to
7tx/sec (and twice that for outbound); this limits the
harm of low fee transaction floods, gives faster relay
service to higher fee transactions. The 7 sounds lower
than it really is because received advertisements need
not be sent, and because the aggregate rate is multiplied
by the number of peers.

An alternative - or complementary; the two aren't incompatible - approach I had in mind was to randomly drop half of m_recently_announced_invs if it exceeds some maximum size (e.g. 30 minutes at max throughput). This essentially splits the load between peers during moments of peak throughput, without permanently dropping much. This could be done after we construct the next INV (which already filters out some stuff we don't need anymore).
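
A minimal sketch of that idea, applied to a peer's pending announcement queue (types and names here are illustrative, not Bitcoin Core's):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

using Txid = uint64_t; // placeholder for a real 256-bit txid

void MaybeShedHalf(std::vector<Txid>& queue, size_t max_size, std::mt19937& rng)
{
    if (queue.size() <= max_size) return;  // e.g. ~30 minutes at max throughput
    std::shuffle(queue.begin(), queue.end(), rng);
    queue.resize(queue.size() / 2);        // drop a random half; other peers likely still announce them
}
```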

What worries me about just increasing the tx relay rate is that it increases the total worst-case bandwidth on the network.

On the other hand, not increasing it could mean that your mempool can't keep up with the next block if it only has a single peer. That in turn would hurt (compact) block relay. But for outbound we already allow double the rate, and we always try to have 8+ of those. So I'm not sure if that's really a problem.

@Sjors
Member

Sjors commented May 13, 2023

randomly drop half of m_recently_announced_invs if it exceeds some maximum size

I just realized this would act as a shredder on clusters. Some peers would only get the parent (that's fine), but (for clusters of n > 2 transactions) most would get children for which they don't have the parent yet.

So any kind of load distribution between peers would have to be done per cluster. Perhaps by randomly dropping clusters until the queue is at an acceptable size. Or by coordinating the distribution globally for all peers. This sounds a lot more complicated.

We could also drop the lower half of the queue. This gives preferential relay treatment to things at the top of the mempool; I don't know if that's bad or not.
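
A sketch of the per-cluster variant mentioned above, dropping whole clusters at random until the queue fits; the cluster_id tag is hypothetical, since the current announcement queue does not track clusters:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <unordered_set>
#include <vector>

struct QueuedTx { uint64_t txid; int cluster_id; }; // illustrative

void ShedWholeClusters(std::vector<QueuedTx>& queue, size_t max_size, std::mt19937& rng)
{
    // Collect the distinct clusters present in the queue.
    std::vector<int> clusters;
    std::unordered_set<int> seen;
    for (const auto& tx : queue) {
        if (seen.insert(tx.cluster_id).second) clusters.push_back(tx.cluster_id);
    }
    std::shuffle(clusters.begin(), clusters.end(), rng);

    // Mark whole clusters for removal until the queue would fit.
    std::unordered_set<int> dropped;
    size_t remaining = queue.size();
    for (int cid : clusters) {
        if (remaining <= max_size) break;
        dropped.insert(cid);
        remaining -= std::count_if(queue.begin(), queue.end(),
                                   [&](const QueuedTx& tx) { return tx.cluster_id == cid; });
    }
    queue.erase(std::remove_if(queue.begin(), queue.end(),
                               [&](const QueuedTx& tx) { return dropped.count(tx.cluster_id) > 0; }),
                queue.end());
}
```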

@ajtowns
Contributor

ajtowns commented May 15, 2023

In the presence of smaller transactions on the network, blocks can sustain a higher relay rate than 7tx/second.

I agree this limit is too small after segwit. I think plausible datapoints for the rate at which we can sustain confirmable txs across the network might be:

  • pre-segwit, 2-in, 2-out p2pkh: 374 vb, ~4.5 tx/s

  • 2-in, 2-out p2wpkh: 208.5 vb, ~8 tx/s

  • pre-segwit, 1-in, 1-out p2pkh: 192 vb, ~8.5 tx/s

  • 1-in, 1-out p2wpkh: 109.5 vb, ~15 tx/s

  • 1-in p2tr, 1-out p2wpkh: 99 vb, ~16.5 tx/s

(The 1-in/1-out cases seem the closest justification for the original 7 tx/s, and 7:8.5 is pretty similar to 14:16.5, so I'd have just doubled what we have)
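
For reference, the rates above follow from dividing the network's sustained capacity of 1,000,000 vbytes per ~600 seconds by each transaction's vsize:

```cpp
// Sustained confirmable tx rate for a given vsize, assuming full 1,000,000 vbyte
// blocks (4,000,000 weight units / 4) every 600 seconds on average.
#include <cstdio>

int main()
{
    const double vbytes_per_second = 1'000'000.0 / 600.0; // ~1667 vb/s network-wide
    const double vsizes[] = {374.0, 208.5, 192.0, 109.5, 99.0};
    for (double vsize : vsizes) {
        printf("%6.1f vb -> %4.1f tx/s\n", vsize, vbytes_per_second / vsize);
    }
}
```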

RBF increases these rates by an arbitrary multiple, depending on the difference between the feerate at the top vs bottom of the mempool.

In this event, the per-peer inventory queues can grow too large.

Post #27610 I'm not seeing queues grow excessively large. During bursts where there is a queue (ie I'm sending out invs with 35+ txs), I'm seeing queues of >1000 txs about 20% of the time, and I'm sending out invs in batches of 55 or more (ie >= 11tx/s) about 3% of the time. I think that data probably also suggests a bump would be worthwhile.

Using 11 tx/second as an estimate for the max network throughput, this commit bumps the rate up to 22 tx/s (for inbound peers), providing a factor of 2 safety margin.

I'm not sure this actually provides a safety margin though? I think the queues grow large because of the difference between your inbound tx rate and your outbound tx rate; but if you're operating a listening node, and have other honest peers running the same version of Bitcoin Core, your inbound rate will be up to 2.5x this rate (ie 55 tx/s), which means (some of) your queues might grow at a rate of 33 tx/s instead of 11 tx/s due to the difference between the rate at which you receive txs from one set of inbound peers and the rate at which you send txs to another set of inbound peers. And if you're connected to peers which don't limit the rate they send txs to you, that can again be arbitrarily larger.

If we have a good idea of what "too large" means for these queues, a better approach to getting a safety margin might be changing the mechanism in #27610 to instead target a specific maximum queue size, and send something along the lines of 35 + q**2/(10m) txs per inv, where q is the queue size and m is our specified maximum.
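
Written out, that suggestion is simply (the function name is made up here; this is not code from #27610):

```cpp
#include <cstddef>

// q: current per-peer announcement queue size, m: chosen maximum (assumed > 0).
size_t InvBatchSize(size_t q, size_t m)
{
    return 35 + (q * q) / (10 * m); // 35 + m/10 when q == m, growing quadratically beyond that
}
```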

So I guess I'd suggest dropping the factor of 2, and just setting it to 11 or 14 tx/s (a 60% or 100% increase on what we currently have), with the corresponding bump for recent invs.

Contributor

@vasild vasild left a comment


Concept ACK on the increase.

I don't feel confident enough to judge the exact numbers proposed.

@ariard
Contributor

ariard commented May 16, 2023

Using 11 tx/second as an estimate for the max network throughput, this commit bumps the rate up to 22 tx/s (for inbound peers), providing a factor of 2 safety margin.

Is the proposed 11 tx/second the max network throughput from a full node (with default settings and a random transaction-relay topology) to the miners, or something else? And what size of small transactions makes the higher relay rate sustainable? I think we can adjust rebroadcast frequency as a function of those parameters on the Lightning side.

@DrahtBot
Contributor

🐙 This pull request conflicts with the target branch and needs rebase.

@achow101
Member

Are you still working on this?

@achow101
Member

Closing as up for grabs due to lack of activity.

@achow101 achow101 closed this Sep 20, 2023
@bitcoin bitcoin locked and limited conversation to collaborators Oct 4, 2024