
Implement Kademlia discovery #501

Merged: 9 commits, Jun 16, 2020

Conversation

@austinabell (Contributor)

Summary of changes
Changes introduced in this pull request:

  • Implements Kademlia bootstrapping and discovery
    • There may be some tweaks in the future to make it match the Go impls as closely as possible, but there isn't a strict need to
  • Refactors libp2p crate in various places to allow for cleaner interaction
    • Updates logging levels to be more readable by default

Reference issue to close (if applicable)

Closes

Other information and links

@ec2 (Member) commented Jun 16, 2020

I don't see where you call get_closest_peers periodically. Isn't that how you discover new peers through kad?

@austinabell (Contributor Author)

> I don't see where you call get_closest_peers periodically. Isn't that how you discover new peers through kad?

Yeah, the way it is already bootstraps finding new peers (it gets up to ~50 in 10 seconds on the testnet; we should probably also look into limiting this), but this is what I was alluding to when I mentioned tweaks to match behaviour. I can be more specific and file an issue to come back to after this PR lands, or if you have an idea of how this could be polled periodically in a clean way, I'm open to hearing it.

The reason I went without periodic polling of closest peers is that it wasn't clear it was necessary, or even how the Go impls do it (from what I could see). Since that logic was being refactored, I also didn't want to cause unnecessary conflicts. The other thing is that it wasn't clear which peer ID the periodic poll should pass to get_closest_peers, so I didn't want to introduce technical debt by guessing.
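For reference, the periodic-poll idea under discussion could be sketched with a plain timer loop. Everything here is a hypothetical, std-only stand-in: `run_discovery_loop` and its intervals are illustrative, and libp2p-kad's actual entry point (`get_closest_peers`, typically given a random PeerId to walk toward) appears only in a comment.

```rust
use std::time::{Duration, Instant};

// Sketch of a fixed-interval discovery loop. In a real node each tick would
// call something like `kademlia.get_closest_peers(PeerId::random())`; here the
// tick just increments a counter so the timing logic stands on its own.
fn run_discovery_loop(run_for: Duration, interval: Duration) -> u32 {
    let start = Instant::now();
    let mut next_tick = start;
    let mut rounds = 0u32;
    while start.elapsed() < run_for {
        if Instant::now() >= next_tick {
            rounds += 1; // a real node would trigger a discovery query here
            next_tick += interval;
        }
        std::thread::sleep(Duration::from_millis(2));
    }
    rounds
}

fn main() {
    // Illustrative numbers; a real node would likely poll on the order of minutes.
    let rounds = run_discovery_loop(Duration::from_millis(100), Duration::from_millis(20));
    assert!(rounds >= 2);
    println!("triggered {} discovery rounds", rounds);
}
```

The open question in the thread (which key to search toward) is orthogonal to the scheduling shown here, which is one reason deferring it to a follow-up issue is reasonable.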

@@ -982,6 +971,18 @@ where
    pub fn set_state(&mut self, new_state: SyncState) {
        self.state = new_state
    }

    async fn get_peer(&self) -> PeerId {
        while self.peer_manager.is_empty().await {
Contributor

If this can't find a peer, it will loop indefinitely, is it worth it to have a time out mechanism?

Contributor

Or maybe implement a try_get_peer() that takes in a timeout variable as well?

Contributor Author

Somewhat intentional, because it can't do anything else until it has a peer to sync with. The logic hasn't changed; I just pulled it into a function because there was another place it should be used.

The logic is admittedly a bit broken now, since it has been refactored since this was set up, but I don't want to cause conflicts with the work Erlich is doing in a bit. This shouldn't be an issue once the syncing setup is improved, but maybe something to keep in mind @ec2

Contributor Author

But yeah, definitely agree a timeout is worthwhile to have, just probably after the setup is refactored (unless you guys think otherwise)

Contributor

That is totally fine to me!
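The `try_get_peer()`-with-timeout idea agreed on above could look roughly like the following. This is a hypothetical, std-only sketch, not the crate's API: the peer manager is modeled as a closure, the peer is a plain `String` rather than a `PeerId`, and the blocking `sleep` stands in for async waiting.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of `try_get_peer`: poll a peer source until a peer is
// available or the deadline passes, instead of looping indefinitely.
fn try_get_peer<F>(mut poll_peer: F, timeout: Duration) -> Option<String>
where
    F: FnMut() -> Option<String>,
{
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(peer) = poll_peer() {
            return Some(peer);
        }
        if Instant::now() >= deadline {
            return None; // timed out with no peer; the caller decides what to do
        }
        std::thread::sleep(Duration::from_millis(5));
    }
}

fn main() {
    // Peer-manager stub that yields a peer on the third poll.
    let mut calls = 0;
    let found = try_get_peer(
        || {
            calls += 1;
            if calls >= 3 { Some("peer-1".to_string()) } else { None }
        },
        Duration::from_millis(500),
    );
    assert!(found.is_some());

    // A source that never yields a peer returns None once the timeout elapses.
    let none = try_get_peer(|| None, Duration::from_millis(30));
    assert!(none.is_none());
    println!("ok");
}
```

Returning `Option` forces each call site to handle the no-peer case explicitly, which is the main behavioural difference from the current unbounded loop.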


// Subscribe to gossipsub topics with the network name suffix
for topic in PUBSUB_TOPICS.iter() {
    swarm.subscribe(Topic::new(format!("{}/{}", topic, network_name)));
Contributor

Not needed, but it could also be a one-liner by changing it to

PUBSUB_TOPICS.iter().for_each(|topic| swarm.subscribe(Topic::new(format!("{}/{}", topic, network_name))));

@austinabell (Contributor Author) Jun 16, 2020

Ends up being the same (not a one-liner), because subscribe returns a boolean indicating whether it was successful (ignored here, because it should always succeed).
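The point about the ignored boolean can be illustrated with a stub; `subscribe`, `PUBSUB_TOPICS`, and the topic strings below are hypothetical stand-ins for the swarm API, not the project's actual values.

```rust
// Hypothetical stand-in for swarm.subscribe: returns whether the
// subscription was newly added (true) or was already present.
fn subscribe(topic: &str) -> bool {
    !topic.is_empty()
}

fn main() {
    // Hypothetical topic list and network name, for illustration only.
    const PUBSUB_TOPICS: [&str; 2] = ["/example/blocks", "/example/messages"];
    let network_name = "testnet";

    // `Iterator::for_each` takes a closure returning (), so the bool from
    // subscribe must be discarded explicitly, just as a for loop discards it.
    PUBSUB_TOPICS.iter().for_each(|topic| {
        let _ = subscribe(&format!("{}/{}", topic, network_name));
    });
    println!("subscribed to {} topics", PUBSUB_TOPICS.len());
}
```

Since the closure body needs the `let _ =` (or braces with a trailing `;`) either way, the `for_each` form ends up no shorter than the loop, which is the author's point.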

Labels
Network Libp2p and PubSub stuff
3 participants