Miraclx/p2p exec #5

Merged
merged 18 commits into master from miraclx/p2p-exec on Feb 6, 2024
Conversation

@xilosada (Member) commented Feb 6, 2024

Made the use of the public IPFS bootstrap nodes conditional; they take a while to sync anyway, so you're better off setting up a local intermediary node as a boot node.

$ cargo run -- --home data/node1 init
$ cargo run -- --home data/node2 init --port 2429
$ cargo run -- --home data/node3 init --port 2430

This will set up 3 new nodes with distinct identities in their respective directories, with zero bootstrap nodes and mDNS turned on for local discovery.

You can directly edit the config files in data/*/config.toml, for example to specify boot nodes.
Once everything looks good, you can start the network.
In separate terminals, run:

$ cargo run -- --home data/node1
$ cargo run -- --home data/node2
$ cargo run -- --home data/node3

By default, each of these will use mDNS to discover the others over the local network, and everything should just work.
Each node prints a log of active peers every 5 seconds, and you can send messages in real time from one node to another.

If you want, you can disable mDNS and instead use node2 as the bootstrap node (for example):
add /ip4/127.0.0.1/tcp/2429/p2p/{PeerId} to the config files of both node1 and node3,
where PeerId is the value of Identity.PeerId in node2's config.
Start node2 first, then start nodes 1 and 3, and they should all connect.
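
As a rough sketch of what that edit could look like (the table and key names below are guesses, not the actual schema; check the generated config.toml for the real layout, and substitute node2's Identity.PeerId for {PeerId}):

# data/node1/config.toml and data/node3/config.toml -- illustrative key names only
[bootstrap]
nodes = ["/ip4/127.0.0.1/tcp/2429/p2p/{PeerId}"]

[discovery]
mdns = false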

Fixed the DHT issue


@jknezevic left a comment


LGTM; I've added a few comments from my perspective that could be addressed in future improvements of this app.

Comment on lines +14 to +21
pub const DEFAULT_BOOTSTRAP_NODES: &[&str] = &[
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
];


It might make sense to create simple nodes that advertise themselves as boot nodes, so we have our own bootstrap nodes instead of relying on the public libp2p ones.

Comment on lines +56 to +60
    pub mdns: bool,

    #[clap(long, hide = true)]
    #[clap(overrides_with("mdns"))]
    pub no_mdns: bool,


What is the other discovery method if mdns is set to false?

I don't particularly like the mDNS solution, as it is not designed to work over the internet. Although you can work around that with static bootstrap servers, for future improvements I'd like to move toward supporting other discovery protocols, such as DHT discovery.

Member


I've already implemented DHT discovery, but it requires at least one known address to begin discovery from.

If mDNS is activated, it seeds the DHT and helps discover other nodes.

And when it's turned off, the only option left is boot nodes.
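
Roughly, the boot-node path looks like this; a minimal sketch assuming a recent rust-libp2p where the Kademlia behaviour is libp2p::kad::Behaviour (the seed_dht helper name is made up for illustration):

use libp2p::{kad, Multiaddr, PeerId};

// Hypothetical helper: seed the Kademlia routing table from the configured
// boot nodes, then kick off the iterative bootstrap query. Without mDNS,
// this is the only way the DHT learns about its first peers.
fn seed_dht(
    kademlia: &mut kad::Behaviour<kad::store::MemoryStore>,
    boot_nodes: &[(PeerId, Multiaddr)],
) {
    for (peer_id, addr) in boot_nodes {
        kademlia.add_address(peer_id, addr.clone());
    }
    if let Err(err) = kademlia.bootstrap() {
        // bootstrap() only fails if the routing table is still empty.
        tracing::warn!("DHT bootstrap skipped: {err}");
    }
}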

Comment on lines +1 to +31
use color_eyre::owo_colors::OwoColorize;
use libp2p::gossipsub;
use tracing::{debug, error};

use super::{Event, EventHandler, EventLoop};

impl EventHandler<gossipsub::Event> for EventLoop {
    async fn handle(&mut self, event: gossipsub::Event) {
        debug!("{}: {:?}", "gossipsub".yellow(), event);

        match event {
            gossipsub::Event::Message {
                message_id: id,
                message,
                ..
            } => {
                if let Err(err) = self.event_sender.send(Event::Message { id, message }).await {
                    error!("Failed to send message event: {:?}", err);
                }
            }
            gossipsub::Event::Subscribed { peer_id, topic } => {
                if let Err(_) = self
                    .event_sender
                    .send(Event::Subscribed { peer_id, topic })
                    .await
                {
                    error!("Failed to send subscribed event");
                }
            }
            _ => {}
        }


Do we know what the defaults for full connections and metadata connections are in rust-libp2p?

Also, does it make sense to limit the number of full connections and metadata connections in gossipsub here?

Going with full connections everywhere comes with certain trade-offs (in terms of scalability, resource usage, latency, etc.).

It is fine to have full connections everywhere in a small network, but the cost grows quickly with every new peer added.

IMO, this is more a comment to flag a possible issue, not something we should solve immediately for this POC app.
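
For later reference, the number of full-message (mesh) peers can be bounded through gossipsub's config; a sketch with illustrative numbers (not the crate's defaults), assuming a recent rust-libp2p where the config type is gossipsub::Config:

use libp2p::gossipsub;

// Illustrative sketch: cap how many peers stay in the full-message mesh per
// topic; peers outside the mesh only exchange gossip metadata (IHAVE/IWANT).
fn bounded_gossipsub_config() -> gossipsub::Config {
    gossipsub::ConfigBuilder::default()
        .mesh_n(6)        // target number of mesh (full-message) peers
        .mesh_n_low(4)    // below this, the mesh is topped up at the next heartbeat
        .mesh_n_high(12)  // above this, excess mesh peers are pruned
        .gossip_lazy(6)   // metadata-only peers gossiped to each heartbeat
        .build()
        .expect("valid gossipsub config")
}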

Member Author


Yes, good point. Let's address it later.

@chefsale chefsale merged commit b8cfc17 into master Feb 6, 2024
@miraclx miraclx deleted the miraclx/p2p-exec branch February 7, 2024 04:17
@MatejVukosav (Member) commented

I followed the instructions to set up 3 nodes, but only 2 of them see each other; the 3rd cannot find any peers.

[Screenshot: 2024-02-07 at 21:04:15]

The config was created using gen_localnet_configs.sh. What is missing here, or is this a known issue?

frdomovic added a commit that referenced this pull request Dec 5, 2024