swarm/connection: Enforce limit on inbound substreams via StreamMuxer
#2861
Conversation
This is no longer needed because we process the substreams as they come in.
Wonderful to see. I think there is more work needed before we can land this though.
Looking at the many simplifications that @thomaseizinger and @elenaf9 landed in the last months, I already foresee the day where rust-libp2p is a single …
Thanks for the follow-ups on this, Thomas!
if this.inbound_stream_buffer.len() >= MAX_BUFFERED_INBOUND_STREAMS {
    log::warn!("dropping {inbound_stream} because buffer is full");
    drop(inbound_stream);
}
Just want to make sure we are making an informed decision here. Instead of tail dropping, we could just as well do head dropping.
I would have to put more thought into this before having an opinion. This does maintain the status quo, so we can consider it a non-change.
With this change, the timeout for opening a new stream already begins as the request is queued. I believe head dropping would negatively affect the number of successfully negotiated streams because we might drop a stream that was about to be processed by the local node. A dropped stream will trigger a ConnectionHandler::inject_dial_upgrade_error with the provided "open info" on the remote node.
In a scenario where a node opens 3 streams that are all pending in this buffer, I think it would be confusing if stream 1 fails but 2 and 3 succeed. It feels more natural to process them in FIFO order.
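To make the trade-off concrete, here is a minimal sketch (not code from this PR) contrasting the two strategies on a plain `VecDeque`. Only the constant name and the warning message mirror the diff above; `InboundStream` and the two helper functions are hypothetical stand-ins.

```rust
use std::collections::VecDeque;

const MAX_BUFFERED_INBOUND_STREAMS: usize = 25;

/// Hypothetical stand-in for a newly accepted inbound substream.
struct InboundStream(u64);

/// Tail dropping (what the diff above does): when the buffer is full,
/// the newly arrived stream is dropped and everything already queued
/// keeps its position, so queued streams are processed in FIFO order.
fn buffer_tail_drop(buffer: &mut VecDeque<InboundStream>, new: InboundStream) {
    if buffer.len() >= MAX_BUFFERED_INBOUND_STREAMS {
        log::warn!("dropping inbound stream {} because buffer is full", new.0);
        drop(new);
    } else {
        buffer.push_back(new);
    }
}

/// Head dropping (the alternative raised here): evict the oldest queued
/// stream to make room for the new one, which may reset a stream the
/// local node was just about to pick up.
fn buffer_head_drop(buffer: &mut VecDeque<InboundStream>, new: InboundStream) {
    if buffer.len() >= MAX_BUFFERED_INBOUND_STREAMS {
        if let Some(oldest) = buffer.pop_front() {
            log::warn!("dropping inbound stream {} to make room", oldest.0);
        }
    }
    buffer.push_back(new);
}
```

Either way, the dropped stream surfaces as an upgrade error on the remote side; the difference is mainly which of the remote's requests ends up failing.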
const MAX_BUFFERED_INBOUND_STREAMS: usize = 25;
If I am not mistaken, the additional buffer of 25 streams increases the overall number of possible in-flight inbound streams, correct? In other words, while we previously only supported max_negotiating_inbound_streams before dropping streams, we now allow up to max_negotiating_inbound_streams + 25. Is this correct?
Yep, that sounds correct to me! Like I said, this number is completely arbitrary and we can change it to something else :)
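As a back-of-the-envelope illustration of that accounting (a sketch, not code from the PR; the 128 below is only an assumed example value for max_negotiating_inbound_streams):

```rust
const MAX_BUFFERED_INBOUND_STREAMS: usize = 25;

/// Upper bound on in-flight inbound streams before further ones get dropped:
/// streams currently negotiating their upgrade plus streams parked in the
/// new buffer, waiting to be handed to the handler.
fn effective_inbound_limit(max_negotiating_inbound_streams: usize) -> usize {
    max_negotiating_inbound_streams + MAX_BUFFERED_INBOUND_STREAMS
}

fn main() {
    // Assuming a limit of 128 negotiating streams (example value only),
    // up to 153 inbound streams can be in flight before dropping begins.
    assert_eq!(effective_inbound_limit(128), 153);
}
```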
Co-authored-by: Max Inden <mail@max-inden.de>
…p into inline-handler-wrapper
Great work!
I had to adjust the implementation of Connection::poll for the autonat (???) tests to work again. I've debugged it for a bit but I can't explain why. Any pointers are welcome.
I can look into that (I am the author of those tests). Those tests expect a strict order in which events should occur, which is maybe not ideal. On the other hand, they prove that it does make a difference in which order we poll our state machines, and this can help decide which order makes the most sense.
That is outdated and has since been fixed. No adjustment was needed, I just had a stupid bug :D
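For readers wondering why poll order is observable at all, here is a toy sketch (not the actual Connection::poll; all names here are hypothetical) of a composite state machine: when both inner sources are ready, whichever is polled first gets its event reported first, which is exactly what order-sensitive tests end up asserting on.

```rust
use std::task::{Context, Poll};

/// Hypothetical event sources standing in for e.g. the handler and the muxer.
trait EventSource {
    type Event;
    fn poll_event(&mut self, cx: &mut Context<'_>) -> Poll<Self::Event>;
}

enum Event<H, M> {
    Handler(H),
    Muxer(M),
}

struct Composite<A, B> {
    handler: A,
    muxer: B,
}

impl<A: EventSource, B: EventSource> Composite<A, B> {
    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<Event<A::Event, B::Event>> {
        // Polling the handler before the muxer means handler events win
        // whenever both are ready; swapping these two blocks flips the
        // observable event order without changing what eventually happens.
        if let Poll::Ready(e) = self.handler.poll_event(cx) {
            return Poll::Ready(Event::Handler(e));
        }
        if let Poll::Ready(e) = self.muxer.poll_event(cx) {
            return Poll::Ready(Event::Muxer(e));
        }
        Poll::Pending
    }
}
```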
🙏
Interoperability failures are due to libp2p/test-plans#41. Merging here.
Description
Inlines the HandlerWrapper component into Connection, which allows us to enforce the limit of inbound streams directly down at the StreamMuxer level by not calling poll_inbound if we would exceed the limit. Depending on the muxer implementation, this may enforce backpressure all the way to the other node.
Due to this backpressure mechanism, a remote may now be slow in accepting new outbound streams. To prevent unbounded growth of "pending" streams, we change the implementation such that the "upgrade timeout" for outbound streams starts ticking from the moment the stream is requested, instead of only from when the stream is opened and we start the upgrade.
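As a rough sketch of the gating idea described above (illustrative only: poll_inbound is modeled after the StreamMuxer trait, but the surrounding struct, trait, and field names are simplified stand-ins, not the actual swarm internals):

```rust
use std::collections::VecDeque;
use std::pin::Pin;
use std::task::{Context, Poll};

const MAX_BUFFERED_INBOUND_STREAMS: usize = 25;

struct Substream; // simplified stand-in for a negotiated substream

/// Modeled loosely after `StreamMuxer::poll_inbound`; signature simplified.
trait Muxer {
    fn poll_inbound(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<std::io::Result<Substream>>;
}

struct Connection<M> {
    muxer: Pin<Box<M>>,
    inbound_stream_buffer: VecDeque<Substream>,
}

impl<M: Muxer> Connection<M> {
    /// Core of the backpressure idea: only ask the muxer for another inbound
    /// stream while the local buffer has room. A muxer that then leaves new
    /// streams unaccepted (e.g. by withholding acknowledgement) propagates
    /// the pressure back to the remote.
    fn poll_next_inbound(&mut self, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        if self.inbound_stream_buffer.len() < MAX_BUFFERED_INBOUND_STREAMS {
            match self.muxer.as_mut().poll_inbound(cx) {
                Poll::Ready(Ok(stream)) => {
                    self.inbound_stream_buffer.push_back(stream);
                    return Poll::Ready(Ok(()));
                }
                Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
                Poll::Pending => {}
            }
        }
        // Buffer full (or muxer pending): deliberately do not poll for more
        // inbound streams, leaving them queued at the muxer / remote side.
        Poll::Pending
    }
}
```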
Fixes #2796.
Links to any relevant issues
Open tasks
Connection::poll implementation
Open Questions
libp2p-yamux::Yamux?
Should the changes to libp2p-yamux be in a separate PR?
Change checklist