Miraclx/p2p exec #5
Conversation
LGTM; I've added a few comments from my perspective that could be addressed in future improvements of this app.
```rust
pub const DEFAULT_BOOTSTRAP_NODES: &[&str] = &[
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
];
```
It might make sense to create simple nodes that advertise themselves as boot nodes, so we can have our own bootstrap nodes without relying on the libp2p public nodes.
```rust
pub mdns: bool,

#[clap(long, hide = true)]
#[clap(overrides_with("mdns"))]
pub no_mdns: bool,
```
What is the other discovery method if mdns is set to false?
I don't particularly like the mDNS solution, as it is not designed to work over the internet. Although you can achieve that with static bootstrap servers, in future improvements I'd like to go in the direction of supporting other discovery protocols, like DHT discovery.
I've already implemented DHT discovery, but it requires at least one address to begin discovery from.
If mDNS is enabled, that seeds the DHT and helps discover other nodes.
When it's turned off, the only option left is boot nodes.
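A rough illustration of that fallback logic (the function and names here are hypothetical, not this app's actual API): if mDNS is on, locally discovered peers can seed the DHT; with mDNS off, the configured boot nodes are the only seeds.

```rust
// Hypothetical sketch of the DHT-seeding fallback described above;
// the function and its types are illustrative, not the app's real API.
fn dht_seeds(mdns_enabled: bool, mdns_peers: &[String], boot_nodes: &[String]) -> Vec<String> {
    if mdns_enabled && !mdns_peers.is_empty() {
        // mDNS-discovered peers seed the DHT on the local network.
        mdns_peers.to_vec()
    } else {
        // With mDNS off (or nothing found yet), only the boot nodes remain.
        boot_nodes.to_vec()
    }
}

fn main() {
    let boot = vec!["/ip4/127.0.0.1/tcp/2429/p2p/{PeerId}".to_string()];
    // mDNS disabled: discovery has to start from the boot nodes.
    println!("{:?}", dht_seeds(false, &[], &boot));
}
```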
```rust
use color_eyre::owo_colors::OwoColorize;
use libp2p::gossipsub;
use tracing::{debug, error};

use super::{Event, EventHandler, EventLoop};

impl EventHandler<gossipsub::Event> for EventLoop {
    async fn handle(&mut self, event: gossipsub::Event) {
        debug!("{}: {:?}", "gossipsub".yellow(), event);

        match event {
            gossipsub::Event::Message {
                message_id: id,
                message,
                ..
            } => {
                if let Err(err) = self.event_sender.send(Event::Message { id, message }).await {
                    error!("Failed to send message event: {:?}", err);
                }
            }
            gossipsub::Event::Subscribed { peer_id, topic } => {
                if self
                    .event_sender
                    .send(Event::Subscribed { peer_id, topic })
                    .await
                    .is_err()
                {
                    error!("Failed to send subscribed event");
                }
            }
            _ => {}
        }
    }
}
```
Do we know what the defaults are for full connections and metadata connections in rust-libp2p?
Also, does it make sense to limit the number of full connections and metadata connections in gossipsub here?
If we go with full connections everywhere, it comes with certain tradeoffs (in terms of scalability, resource usage, latency, etc.).
It is OK to have full connections everywhere in a small network, but the problem grows quickly as new peers are added.
IMO, this is more a comment to flag a possible issue, not something we should solve immediately for this POC app.
Yes, good point. Let's address it later
Made the use of IPFS nodes conditional; they take a while to sync anyway, so it's better to set up a local intermediary node as a boot node.
This will set up 3 new nodes with differing identities in their respective directories, with zero bootstrap nodes and mDNS turned on for local discovery.
You can directly edit the config files in
data/*/config.toml
to specify the boot nodes, for example.
Once everything is good, you can start the network.
In separate terminals, you can run:
By default, each of these will use mDNS to discover the others over the local network, and everything should just work.
Each node prints a log of active peers every 5 seconds, and you can send messages in real time from one node to another.
If you want, you can disable mDNS and opt instead to use node2 as the bootstrap node (for example).
Add
/ip4/127.0.0.1/tcp/2429/p2p/{PeerId}
to the config files of both node1 and node3, where PeerId is the value of Identity.PeerId of node2.
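For illustration, the resulting entry might look something like the following sketch (the table and key names here are assumptions; match them against the schema your generated data/*/config.toml actually uses):

```toml
# Hypothetical sketch: key names depend on the actual config.toml schema.
# {PeerId} stands for the Identity.PeerId value from node2's config.
[network]
boot_nodes = [
    "/ip4/127.0.0.1/tcp/2429/p2p/{PeerId}",
]
```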
Start node2, then proceed to start nodes 1 and 3, and they should all connect.
Fixed the DHT issue