Reset peer store after stop #617
Conversation
Add test coverage of existing and new surface.
Previously, dirty peers from the last run interfered with discovery and new connections.
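A minimal self-contained sketch of the PR's idea (the classes here are stand-ins, not the actual libp2p internals):

```js
// Hypothetical sketch: clear the in-memory peer book when the node stops,
// so a restart does not inherit stale ("dirty") peers from the last run.
class PeerStore {
  constructor () {
    this.peers = new Map()
  }
}

class Node {
  constructor () {
    this.peerStore = new PeerStore()
  }

  async stop () {
    // ...shut down transports, discovery, etc...
    this.peerStore.peers.clear() // reset the store after stop
  }
}

const node = new Node()
node.peerStore.peers.set('QmPeerB', { multiaddrs: ['/ip4/127.0.0.1/tcp/4001'] })
node.stop()
console.log(node.peerStore.peers.size) // → 0
```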
What's the reasoning for clearing peers here? We are currently working on a large update to the PeerStore, #582, which is planned to land in the next few weeks in 0.28. Part of this work includes PeerStore persistence, as it can greatly reduce the time to reconnect to new peers. I'd like to understand why you don't want to keep a record of peers on restart.
Oh I see, interesting. I use the TCP and mdns modules, and had issues with peers not being rediscovered properly upon reconnection. As I understand it, libp2p peer discovery reuses the existing peer records. One particular pain point was that if peer B goes offline and comes back online, peer A was notified of an old address (in my case the old port number, which fails to connect). When the working address arrived seconds later, the second discovery event necessary to establish a connection was not emitted. My workaround was to manipulate the peer records directly. I guess there are some high-level goals I'm trying to achieve in my application.
There's definitely room for a design that works for both cases -- I don't know exactly how to fit it with your ongoing plans yet. Do you think my goals are general enough to be supported either automatically or with some manual logic around?
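The stale-address pain point described above can be illustrated with a toy address book (names are illustrative, not libp2p's API): replacing, rather than merging, a peer's known addresses on each discovery means the old port can never win a dial.

```js
// Illustrative only: a stand-in address book showing the stale-address issue.
// If peer B restarts on a new port, the previous address is stale and dialing
// it fails; replacing the entry keeps only the latest observation.
const addressBook = new Map()

function onDiscovery (peerId, multiaddrs) {
  addressBook.set(peerId, multiaddrs) // replace, don't merge
}

onDiscovery('QmPeerB', ['/ip4/192.168.1.7/tcp/4001']) // before restart
onDiscovery('QmPeerB', ['/ip4/192.168.1.7/tcp/4002']) // after restart, new port

console.log(addressBook.get('QmPeerB')) // → [ '/ip4/192.168.1.7/tcp/4002' ]
```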
Yes I do. I think this aligns with the Topology registration system we are creating. It still needs better integration with the connection manager but you can see the interface and general usage at https://github.com/libp2p/js-libp2p-interfaces/tree/master/src/topology.
The PeerStore emits a `change:multiaddrs` event:

```js
libp2p.peerStore.on('change:multiaddrs', ({ peerInfo, multiaddrs }) => {
  // ...
})
```
Are you hitting edge cases? A single connection should be all that's needed. We try to avoid creating multiple connections to a peer, but we currently don't have any protocol for converging on a single connection if two are simultaneously created, as often happens with mdns.
I think this could be valuable to add to Connection Manager once the priority of Topology peers is being handled. We could automatically ping priority peers we have connections to in order to maintain them and track disconnects better.
I am not sure we should go this way. If the peer was already discovered and in the PeerStore, I would not expect to listen for a new `peer:discovery` event.
This would be nice to add. We should create an issue tracking all the things we would like to achieve with the { ConnectionManager, Registrar, Topology }, so that someone can work on this in the future.
I wasn't aware, so I should definitely try that before moving forward. But nevertheless there's still one piece of the puzzle that isn't solved: I still need to connect to the old peers when the libp2p bundle is restarted. Either
If it helps, in my case the network interface may have changed in-between, yielding all previous peers irrelevant. So (2) or (3) is going to be more reliable.
Ooh interesting, then you might be interested in this workaround that I've been using:

```js
libp2p.on('peer:discovery', async (other) => {
  if (other.id.toB58String() > me.id.toB58String()) {
    return // Other has right of way
  }
  // ...
})
```

This has been very reliable for me and I've tested on all major OSes. However, I'm using an
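The ID-comparison tie-break in that workaround can be demonstrated standalone (plain strings stand in for real PeerIds):

```js
// Standalone demo of the "right of way" rule: both peers compare their
// base58-encoded IDs, and only the peer with the larger ID initiates the
// dial, so at most one of two simultaneously created connections survives.
function shouldDial (myId, otherId) {
  // Mirrors `if (other.id > me.id) return`: I dial only if my ID is larger.
  return myId > otherId
}

const peerA = 'QmAAA'
const peerB = 'QmZZZ'

console.log(shouldDial(peerA, peerB)) // → false (A yields; B has right of way)
console.log(shouldDial(peerB, peerA)) // → true  (B initiates the dial)
```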
It looks promising, but an example would really help. I'm not completely sure I'm affected by it (which wouldn't necessarily be a bad thing).
Do you mean when you do
Yes.
When a node is started, it will try to dial the peers from the peerstore according to the autodial configuration: https://github.com/libp2p/js-libp2p/blob/master/src/index.js#L425-L428 The same should happen in
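For reference, a hedged sketch of where that setting lived in libp2p around this era (the exact config shape varies between versions, and the module list is omitted here; check your version's README before relying on this):

```js
// Hedged sketch, not verified against a specific release: the autoDial flag
// is assumed to live under config.peerDiscovery in libp2p ~0.27/0.28.
const Libp2p = require('libp2p')

const node = await Libp2p.create({
  modules: { /* transport, streamMuxer, connEncryption, peerDiscovery ... */ },
  config: {
    peerDiscovery: {
      autoDial: true // dial known peers from the PeerStore automatically
    }
  }
})
```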
Maybe I should reconsider
@betamos sorry totally missed this! Also, the new PeerStore work is available in
Closing this as we iterated on the Peerstore in