floodsub compatibility with go-ipfs #132
Ok, I managed to reproduce this and investigate a little. Still not sure where the connection gets closed on the rust side. This is the log line and logic; for dead peers the connection state is checked, and if the peer is still connected then the pubsub RPC is done again: Here the peer is marked as dead, so when EOF is reached (e.g. the stream is closed on the rust side): Not sure what the correct behavior is, so this has to be clarified first. |
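To illustrate the behaviour described in the comment above, here is a minimal Rust sketch of that dead-peer handling. The actual logic lives in go-libp2p-pubsub; the type and function names here are hypothetical and not taken from either codebase.

```rust
// Hypothetical sketch of the dead-peer handling described above; names are
// made up for illustration, not taken from go-libp2p-pubsub or rust-ipfs.
#[derive(Debug, PartialEq)]
enum PeerState {
    Live,
    Dead,
}

struct PubsubPeer {
    state: PeerState,
    // Connection-level state, independent of the pubsub substream.
    still_connected: bool,
}

impl PubsubPeer {
    // When the pubsub substream hits EOF (e.g. the rust side closed it),
    // the peer is marked as dead.
    fn on_stream_eof(&mut self) {
        self.state = PeerState::Dead;
    }

    // Dead-peer handling: if the connection is still up, reopen the stream
    // and redo the pubsub RPC instead of forgetting the peer.
    fn handle_if_dead(&mut self) -> &'static str {
        if self.state == PeerState::Dead && self.still_connected {
            self.state = PeerState::Live;
            "still connected: reopen stream and resend pubsub RPC"
        } else if self.state == PeerState::Dead {
            "disconnected: drop peer"
        } else {
            "live: nothing to do"
        }
    }
}

fn main() {
    let mut peer = PubsubPeer { state: PeerState::Live, still_connected: true };
    peer.on_stream_eof();
    println!("{}", peer.handle_if_dead());
}
```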
I came across a potentially related issue, using the following code:

```rust
use async_std::task;
use futures::StreamExt;
use ipfs::{IpfsOptions, Types, UninitializedIpfs};

fn main() {
    env_logger::init();
    let options = IpfsOptions::<Types>::default();

    task::block_on(async move {
        println!("IPFS options: {:?}", options);
        let (ipfs, future) = UninitializedIpfs::new(options).await.start().await.unwrap();
        task::spawn(future);

        // Subscribe
        let topic = "test1234".to_owned();
        let mut subscription = ipfs.pubsub_subscribe(topic.clone()).await.unwrap();
        ipfs.pubsub_publish(topic.clone(), vec![41, 41]).await.unwrap();

        while let Some(message) = subscription.next().await {
            println!("Got message: {:?}", message)
        }

        // Exit
        ipfs.exit_daemon().await;
    })
}
```

This will not connect with itself (i.e. running this twice), or with the go-ipfs client.

Rust Client A
Rust Client B
go-ipfs
There is a lot of logging in the daemon with |
The real solution to this is to upgrade to gossipsub. |
In #186 you mentioned:
Is this also the case for go-ipfs 0.6? And js-ipfs? Does this mean the pubsub functionality currently only works among rs-ipfs nodes? |
As far as I know, yes, but I haven't looked into this for a while! While it is possible that the floodsub has been changed in the meantime (if so, I have missed those PRs), the gossipsub implementation is, as far as I remember, still on track to support both gossipsub and floodsub.
I don't think I ever tested js-ipfs for floodsub, nor am I sure about go-ipfs 0.x where x > 5. |
When connecting with `go-ipfs daemon --enable-pubsub-experiment` (0.4.23 at least), both processes will start consuming a lot of CPU time. Looking at logs at go-ipfs at `logall=debug,mdns=error,bootstrap=error`:

With rust-ipfs the logs are similar, mostly about yamux and multistream_select opening and closing a substream. I suspect that the issue is that the go-ipfs side wants to have a long-running substream and the rust-ipfs side wants to have a substream only when there is something to send.
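To make the suspected mismatch concrete, here is a minimal Rust sketch of the two substream strategies; the names are hypothetical and this is not code from either implementation.

```rust
// Hypothetical illustration of the two substream lifecycles suspected above.
enum StreamStrategy {
    // go-ipfs style: open one pubsub substream per peer and keep it alive.
    LongLived,
    // rust-ipfs (floodsub) style: open a substream only when there is an RPC
    // to send, then close it again.
    PerMessage,
}

// How many substream open/close cycles publishing `messages` RPCs would cost
// under each strategy.
fn substream_cycles(strategy: StreamStrategy, messages: u32) -> u32 {
    match strategy {
        StreamStrategy::LongLived => 1,
        StreamStrategy::PerMessage => messages,
    }
}

fn main() {
    let messages = 100;
    println!(
        "long-lived: {} cycles, per-message: {} cycles",
        substream_cycles(StreamStrategy::LongLived, messages),
        substream_cycles(StreamStrategy::PerMessage, messages)
    );
}
```

If one side expects a long-lived stream and redials every time the other side closes its per-message stream, both ends spin on open/close, which would match the CPU usage and the yamux/multistream_select log noise.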