Peer Sharing - how is the reply to a share request calculated #3958
After a node receives a Peer Sharing request from another peer, there are some things it needs to consider before responding:
After considering these items, the node needs to calculate its response. There are two ways one could go about this: either the requesting end specifies an upper bound on the number of peers to fetch, or the replying end decides how many peers to give. I think the better way is for the requesting end to ask for an upper limit, since otherwise the replying end would have to prune the response anyway to avoid huge dumps of information and possible resource-usage exploits. Given this, a possible algorithm for computing the response is sketched below.
The to-share set should not include known-to-be-ledger peers or peers that have expressed their unwillingness to participate in Peer Sharing, either via the configuration file or via the handshake. The policy should also manage the amount of "entropy" in the response (e.g. pick a random percentage of cold/warm peers; maybe even some fake peers). Questions: Should we share Established peers, or are any Known Peers enough? How can we introduce more entropy in our response?
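Purely to make the idea concrete, here is a rough Haskell sketch of what computing the reply could look like: filter out ledger peers and unwilling peers, then take a random sample up to the requester's amount. All the names here (`PeerInfo`, `computeReply`, the record fields) are made up for illustration and are not the actual ouroboros-network API:

```haskell
import System.Random (StdGen, randomR)

-- Hypothetical record describing what we know about a peer.
data PeerInfo addr = PeerInfo
  { peerAddr           :: addr
  , peerIsLedgerPeer   :: Bool   -- known to come from the ledger
  , peerWillingToShare :: Bool   -- advertised via config / handshake
  }

-- Given the requester's upper bound and our known peers, pick a random
-- subset of the shareable, non-ledger peers.
computeReply :: StdGen -> Int -> [PeerInfo addr] -> ([addr], StdGen)
computeReply gen requested known =
    sampleN gen requested (map peerAddr shareable)
  where
    shareable = [ p | p <- known
                    , not (peerIsLedgerPeer p)
                    , peerWillingToShare p ]

-- Naive random sampling without replacement (fine for small lists).
sampleN :: StdGen -> Int -> [a] -> ([a], StdGen)
sampleN gen n xs
  | n <= 0 || null xs = ([], gen)
  | otherwise =
      let (i, gen')     = randomR (0, length xs - 1) gen
          (ys, z : zs)  = splitAt i xs
          (rest, gen'') = sampleN gen' (n - 1) (ys ++ zs)
      in  (z : rest, gen'')
```

The random sampling is what provides the basic entropy; the cold/warm percentages and fake-peer ideas above could be layered on top of the `shareable` filter.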
What if a node restarts and we don't have a cache of known peers implemented? If that information does not persist over the lifetime of a connection, then we won't have any problems. Q: Should this time delay be a protocol constant?
If we also have
This won't be an issue if we set a minimum time between share requests.
Yes, this makes sense. We also need to specify an upper bound on the amount of data sent in a response. I think the simplest way to do that is to have another protocol-level limit, which would correspond to something like
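For illustration only, such a limit could be a simple constant that the replying side clamps against; the name and value below are assumptions, not the real protocol constant:

```haskell
-- Hypothetical protocol-level cap on the number of peers in a single reply
-- (name and value are assumptions).
maxPeerShareAmount :: Int
maxPeerShareAmount = 24

-- Clamp whatever the requester asked for to the protocol-level cap, so a
-- single request can never make us dump our whole known-peer set.
effectiveAmount :: Int -> Int
effectiveAmount requested = max 0 (min requested maxPeerShareAmount)
```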
If somebody asks us for
Established peers are a subset of known peers 😄.
Isn't a random choice from our shareable non-ledger known peers enough?
Not sure I understand what you're trying to say here. Are you saying what I said is okay even if we restart the node without any caching?
Don't think this is necessary. Since we'll discourage such cases by returning nothing, I don't think this can be considered bad behaviour. It could also be that the requesting node restarted for some reason, and we don't want to disconnect from a possibly good peer just because of that.
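A rough sketch of that "return nothing" behaviour, with made-up types and a made-up minimum interval, could look like:

```haskell
import           Data.Time (NominalDiffTime, UTCTime, diffUTCTime)
import qualified Data.Map.Strict as Map

-- Hypothetical minimum time we expect between share requests from the
-- same peer (the value is an assumption, not a real protocol constant).
minShareRequestInterval :: NominalDiffTime
minShareRequestInterval = 60

-- If the same peer asks again too soon, answer with an empty set instead
-- of treating it as bad behaviour and disconnecting.
replyOrNothing :: Ord peeraddr
               => UTCTime                    -- current time
               -> Map.Map peeraddr UTCTime   -- when each peer last asked us
               -> peeraddr                   -- the requesting peer
               -> [peeraddr]                 -- reply we would otherwise send
               -> [peeraddr]
replyOrNothing now lastAsked peer reply =
  case Map.lookup peer lastAsked of
    Just t | diffUTCTime now t < minShareRequestInterval -> []
    _                                                    -> reply
```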
Good idea, so this TTL mechanism churns asked and shared peers.
Agree! So this should be yet another requirement: having a delay between requests. This should probably be a random value each time to increase entropy.
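For example (with made-up bounds), the requesting side could just sleep for a random interval before each request:

```haskell
import Control.Concurrent (threadDelay)
import System.Random (randomRIO)

-- Sleep for a random amount of time before issuing the next share request;
-- the 60-120 second range is an assumption, not a protocol value.
waitBeforeNextShareRequest :: IO ()
waitBeforeNextShareRequest = do
  delaySeconds <- randomRIO (60, 120 :: Int)
  threadDelay (delaySeconds * 1000000)  -- threadDelay takes microseconds
```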
Yes, sounds fair. I wasn't sure if randomizing the upper limit would be the kind of entropy one wants.