-
What lib do you plan to use for QUIC?
-
@gterzian a basis for discussion
-
As a general note, I think we should:
Regarding the top diagram: I think we should not spend too much time on the internals of the "networking module" in this discussion, because that is likely to be influenced by, and differ based on, the underlying protocol used. I propose we instead focus on the interface it would offer to other modules, such as:
Regarding the bottom diagram:
In conclusion: I propose switching to a more "pipeline-centric" approach, consisting of two separate components, like:
(I could add a diagram but the above bullet points seem simple enough). Regarding the (not shared) data:
Please let me know what you think @damip @massalabs/core-team
-
First of all, my apologies for such a late reply on this. I have some questions concerning choices and inconsistencies with what I have seen so far in the code. This is written taking into account the initial spec, the above comment, and the first associated PR (#3272).
I agree that some of the modules (such as the connector) do not require a thread; however, for modules such as the one handling an established connection, I find a thread to be necessary. I also think that the argument "same shared state = no extra thread required" is not something that can be applied as a general rule in every case. I do see that you chose to keep a separate thread for blocks, and was wondering why it is the only one deserving of it, and whether you changed your mind since you wrote this or I misunderstood something, as there is a thread for handshake management in your PR.

When you write "they probably should run on the same thread, simply as different modules", it is unclear to me how this translates into code: does running as different modules without separate threads refer to blocking async tasks such as this one?

I also do not understand your concept of a pipeline. You describe it as an alternative to the approach using multiple threads:

But after reading this I see no real difference (or maybe I don't understand it) between the current behaviour and what you are proposing, and what has been done in the PR seems to follow a different idea as well.

I also disagree that we do not need to share connections on the grounds that protocol refuses data coming from network once it has banned a node: that node could simply keep sending data we do not wish to read at all, and network would treat it no differently.

Regarding the first iteration that was made of this (I am treating the subject here as well since I feel it is intimately related to the more general matters; it could be moved to the PR discussion): I can understand why some asynchronous code had to be used even though the base premise was to remove it entirely. But is this the right approach? The asynchronous tasks (for handshakes in this context) are condemned to be blocked at some point, and the workflow of protocol & network did not become more straightforward. As a third evaluation point, I am unable to determine how optimised your approach is compared to the initial proposal.

In conclusion, I find that this solution only partially solves the debugging issue and does not make the workflow more understandable. What could justify it is comparative performance, which I cannot estimate since I lack specific knowledge of the subject and of the implementation details of the other proposal.
-
@damip After reading about rate limiting, I saw that there are two main basic algorithms: leaky bucket and token bucket. The difference (with simple explanations) is well explained here: https://qr.ae/pr94Tx and a basic implementation of the token bucket algorithm in Python is here: https://dev.to/satrobit/rate-limiting-using-the-token-bucket-algorithm-3cjh The crate
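For concreteness, here is a minimal sketch of the token bucket idea in Rust; the struct, field names, and numbers are illustrative only, not taken from any crate or from the codebase:

```rust
use std::time::Instant;

struct TokenBucket {
    capacity: f64,       // maximum burst size, in tokens (e.g. bytes)
    tokens: f64,         // tokens currently available
    refill_rate: f64,    // tokens added per second
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_rate: f64) -> Self {
        Self { capacity, tokens: capacity, refill_rate, last_refill: Instant::now() }
    }

    /// Try to consume `amount` tokens (e.g. the size of a message about to be sent).
    fn try_consume(&mut self, amount: f64) -> bool {
        // Refill proportionally to the elapsed time, capped at the bucket capacity.
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_rate).min(self.capacity);
        self.last_refill = now;

        if self.tokens >= amount {
            self.tokens -= amount;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Allow bursts of 10 kB, sustained rate of 1 kB/s (illustrative values).
    let mut bucket = TokenBucket::new(10_000.0, 1_000.0);
    assert!(bucket.try_consume(512.0));
}
```

The leaky bucket variant is the same idea seen from the other side: instead of accumulating tokens, it drains a queue at a fixed rate and drops (or delays) whatever overflows.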
-
I think it is essential to write down the scope of the networking stack for our use case:
For these reasons:
I'm basing my analysis on the fact that creating a parallel thread is slow, so it is only worth it if the thread lives for a long time, or allows us to parallelize work such that the overall "time cost" is in our favor. Summing up these thoughts about the architecture and implementation, I personally think:
Related to the code organization:
Each of these layers should have its own internal state, and communicate the necessary data (i.e. events) to the other layers so they can update their state.
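As a rough illustration of that layering (the event type, layer split, and channel choice below are assumptions for the sake of example, not the actual design), two layers could exchange events over a channel like this:

```rust
use std::sync::mpsc::channel;
use std::thread;

/// Events flowing from the network layer up to the protocol layer.
enum NetworkEvent {
    PeerConnected(String),
    MessageReceived(Vec<u8>),
}

fn main() {
    let (event_tx, event_rx) = channel::<NetworkEvent>();

    // Network layer thread: owns the sockets, keeps its own internal state,
    // and only communicates by sending events upward.
    let network = thread::spawn(move || {
        event_tx
            .send(NetworkEvent::PeerConnected("1.2.3.4:31244".to_string()))
            .expect("protocol layer is gone");
        event_tx
            .send(NetworkEvent::MessageReceived(vec![0x01, 0x02]))
            .expect("protocol layer is gone");
        // event_tx is dropped here, which closes the channel and ends the loop below
    });

    // Protocol layer: owns the protocol state and updates it from incoming events.
    for event in event_rx {
        match event {
            NetworkEvent::PeerConnected(addr) => println!("peer connected: {addr}"),
            NetworkEvent::MessageReceived(data) => println!("received {} bytes", data.len()),
        }
    }
    network.join().unwrap();
}
```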
-
Network refactoring
Rationale
The current Network/Protocol stack is suboptimal and has accumulated legacy:
In this proposal, we describe a new design for Massa's network stack.
General design
Key points:
Layers
Endpoint
To connect to an endpoint, we use the OutEndpoint structure containing:
When receiving inbound connections, we should be able to know the InEndpoint containing:
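As a hedged sketch of what these two structures could contain (the fields below are assumptions, roughly the minimum one would expect, not the proposal's actual field lists):

```rust
use std::net::{IpAddr, SocketAddr};

// Which underlying transport the endpoint refers to (illustrative variants).
enum TransportKind {
    Tcp,
    Quic,
}

// Assumed sketch of an outbound endpoint: what we dial and with which transport.
struct OutEndpoint {
    transport: TransportKind,
    addr: SocketAddr,
}

// Assumed sketch of an inbound endpoint: which listener accepted the connection
// and the remote peer's IP (the remote port is usually ephemeral).
struct InEndpoint {
    transport: TransportKind,
    remote_ip: IpAddr,
}
```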
Transport
A transport has the following structure:
Transports include framing, and guarantee the integrity and ordering of sent data.
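At the interface level, "framing + integrity + ordering" could translate into something like the following sketch (the trait name and methods are assumptions, not the proposal's actual API):

```rust
use std::io;

// Illustrative interface for a framed, ordered, integrity-checked transport:
// callers deal in whole messages, never in raw byte streams.
trait Transport {
    /// Send one complete message; the transport handles framing internally.
    fn send_frame(&mut self, frame: &[u8]) -> io::Result<()>;

    /// Receive the next complete message, in the order it was sent.
    fn recv_frame(&mut self) -> io::Result<Vec<u8>>;
}
```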
Transports need to include rate limiting. A simple way to rate-limit in a sync thread is the following:
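(Sketch only: the snippet below illustrates the idea with a blocking read loop and a byte budget; the function name and buffer size are assumptions, and this is not the original snippet.)

```rust
use std::io::Read;
use std::time::{Duration, Instant};

/// Read from `reader` in a blocking loop while keeping throughput under
/// `max_bytes_per_sec`, by sleeping whenever we are ahead of the byte budget.
fn rate_limited_read<R: Read>(mut reader: R, max_bytes_per_sec: u64) -> std::io::Result<()> {
    let mut buf = [0u8; 4096];
    let start = Instant::now();
    let mut total: u64 = 0;

    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            return Ok(()); // EOF: the peer closed the connection
        }
        total += n as u64;

        // Earliest time at which `total` bytes are allowed to have been read.
        let allowed_at = Duration::from_secs_f64(total as f64 / max_bytes_per_sec as f64);
        let elapsed = start.elapsed();
        if allowed_at > elapsed {
            std::thread::sleep(allowed_at - elapsed);
        }
        // ... hand `buf[..n]` to the framing / message-decoding code here
    }
}
```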
Thread organization
Separate each aspect (e.g. operation propagation, block retrieval, etc.) into its own thread.
DNS support
DNS resolution should be supported out of the box (e.g. https://doc.rust-lang.org/std/net/trait.ToSocketAddrs.html).
This will help with Docker containers, local networks, dynDNS, etc.
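For example, with the standard library's ToSocketAddrs (the hostname and port below are placeholders):

```rust
use std::net::ToSocketAddrs;

fn main() -> std::io::Result<()> {
    // to_socket_addrs() goes through the system resolver and yields every
    // SocketAddr the "host:port" string resolves to (IPv4 and IPv6).
    for addr in "node.example.com:31244".to_socket_addrs()? {
        println!("resolved: {addr}");
    }
    Ok(())
}
```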
Peer announcements
HashMap<NodeId, NodeInfo>
where NodeInfo would contain the peer info as well as a copy of the latest signed PeerAnnouncement for that peer.
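A minimal sketch of that mapping, with illustrative field choices (only the HashMap<NodeId, NodeInfo> shape comes from the text above; the NodeId alias and the fields are assumptions):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

type NodeId = String; // in practice, likely the node's public key

// Assumed content of a signed peer announcement.
struct PeerAnnouncement {
    listen_addr: SocketAddr, // address the peer announces it listens on
    signature: Vec<u8>,      // signed by the announcing peer itself
}

// Assumed peer info kept per node.
struct NodeInfo {
    banned: bool,
    latest_announcement: Option<PeerAnnouncement>, // copy of the latest signed announcement
}

fn main() {
    let peers: HashMap<NodeId, NodeInfo> = HashMap::new();
    assert!(peers.is_empty());
}
```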