Support split routable networks - v0.5.0 specific #7169
Comments
If node A is a DHT server, it will try to route nodes in the public DHT to your Yggdrasil nodes, and will try to route Yggdrasil nodes to the public DHT. This will degrade both DHTs.

Note: this is not new in 0.5. A single DHT won't work across multiple split networks. In CJDNS this is less of an issue because all nodes on the CJDNS network will form a LAN DHT, while all nodes also connected to the public internet will additionally join the WAN DHT. However, as you've noted, peers in the WAN DHT won't publish records to the LAN DHT, so it's far from perfect.

You are right, the long-term solution would be to allow peers either to define their own custom DHT structure or to try to learn the network topology. The former is easier (but still complicated) and the latter sounds like a potential research project.
Thanks for clarifying, and yeah, we've definitely experienced this DHT issue before. It'd be a nice thing to be able to avoid in the future. Do you think either of the features I mentioned could make it into IPFS?
In situations like CJDNS / Yggdrasil we do want to do the right thing. The main "failure case" I'm worried about is that if I'm on a small LAN (in the extreme case, I visit a pinning service or the Internet Archive or some such), a busy node can pretty easily saturate the rest of the LAN. The fewer peers there are on that LAN network, the more likely there's a potential for imbalance, so perhaps we can use a heuristic of how large we believe our non-public network is.

I'll need to think more about whether there are heuristics that would let us differentiate the Yggdrasil overlay from the rest of the internet. How is that overlay structured? Presumably it uses a TUN device, but from the perspective of a given node, what does routing look like to other overlay nodes? Do I see them as direct routes, or is there a gateway? If they're direct routes, the
In terms of priority, this is very low on the core team's list. We landed on the current solution to cover 99% of use-cases. I'd love to have a general solution that works in all cases, but we're not going to be able to dedicate any real time towards solving this particular problem. However, I would encourage you to experiment yourself and to try to patch go-ipfs to handle this problem. We're unlikely to accept any first-pass attempts (added complexity always adds a maintenance burden) but any exploration in this space will help.

Thinking longer-term, if we can generalize this problem to automatically dealing with more common partially partitioned networks (e.g., China), it becomes a higher priority. That is, I'd love to be able to automatically detect mostly disconnected clusters of highly connected peers, then treat them as separate networks. However, this would be a long-term research project and a new DHT protocol. It's definitely not something we're likely to tackle in the short term.
If I understand correctly, the only IPv6 address range that has been allocated to global unicast (i.e. "the internet") is `2000::/3`.
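As an aside, checking that range is a one-liner with the Go standard library. A minimal sketch (not from the thread; the sample addresses are just illustrations):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// 2000::/3 is the IPv6 Global Unicast range allocated by IANA;
	// addresses outside it (e.g. Yggdrasil's 0200::/7) are not "the internet".
	_, globalUnicast, _ := net.ParseCIDR("2000::/3")

	for _, s := range []string{
		"2606:4700::1111",  // an ordinary public internet address
		"200:1234:5678::1", // an address inside Yggdrasil's 0200::/7 space
	} {
		ip := net.ParseIP(s)
		fmt.Printf("%-20s in 2000::/3: %v\n", s, globalUnicast.Contains(ip))
	}
}
```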
I believe they see them as direct routes. @Arceliar please correct me if I'm wrong on that.

@willscott can you explain more about what

@Stebalien I understand, and I hope that some easier solution can be included in the future. I agree with what @Arceliar said above though, it seems to me that only
If they're direct routes, then we should consider them as LAN nodes for the purpose of the DHT, and things should 🤞 work out. https://github.com/libp2p/go-libp2p-kad-dht/blob/master/dht_filters.go#L142-L150

The public/private balance code right now is doing an inverse: https://github.com/multiformats/go-multiaddr-net/blob/master/private.go#L69

We have 3 states an address can be in (public, private, unroutable), and thinking about yet another shade of grey there would potentially confuse existing uses of it that are expecting one of those 3 cases to be true.
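To make the three buckets concrete, here is a small sketch using the go-multiaddr-net package linked above (the sample addresses are made up; the Yggdrasil line is the contested case from this issue, and the program simply prints whatever bucket the library currently assigns it):

```go
package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
	manet "github.com/multiformats/go-multiaddr-net"
)

// classify maps a multiaddr onto the three states discussed above.
func classify(a ma.Multiaddr) string {
	switch {
	case manet.IsPublicAddr(a):
		return "public"
	case manet.IsPrivateAddr(a):
		return "private"
	default:
		return "unroutable"
	}
}

func main() {
	for _, s := range []string{
		"/ip4/203.0.113.7/tcp/4001",      // public internet
		"/ip4/192.168.1.10/tcp/4001",     // private LAN
		"/ip6/200:1234:5678::1/tcp/4001", // Yggdrasil overlay (0200::/7)
	} {
		a := ma.StringCast(s)
		fmt.Printf("%-35s -> %s\n", s, classify(a))
	}
}
```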
@willscott That's very interesting, thanks. Definitely something I should test.

Edit: Although I wonder what will happen with peers in the Yggdrasil network that aren't direct, like a peer that's two hops away. Maybe it would look for it in the WAN DHT then? Things could get messy.
Most users won't bother with or understand this kind of configuration, so a simple default is mandatory. But for those more advanced users running multi-homed nodes it would be great to have finer-grained control.

I would suggest removing auto-assigned tags like "public" and "private" and replacing them with user-defined "network segments", so I can specify which IP/multiaddr ranges belong to which segment ("LAN", "CJDNS", "YGGDRASIL", "I2P", etc.) and then enable/disable certain functionality for each segment: for example, disable provider broadcasts on CJDNS and Yggdrasil, disable AutoNAT on the LAN, and so on. And perhaps even grant/deny specific bridging of content between network segments.
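Nothing like this exists in go-ipfs; purely as a hypothetical sketch of what such a segment classifier could look like (every name below is invented for illustration):

```go
package main

import (
	"fmt"
	"net"
)

// Segment is a hypothetical user-defined network segment with per-segment policy.
type Segment struct {
	Name           string
	CIDRs          []*net.IPNet
	ProvideRecords bool // e.g. disable provider broadcasts on overlay networks
	EnableAutoNAT  bool // e.g. disable AutoNAT on the LAN
}

func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

// classify returns the first segment whose CIDRs contain ip, or nil if none match.
func classify(segments []Segment, ip net.IP) *Segment {
	for i := range segments {
		for _, c := range segments[i].CIDRs {
			if c.Contains(ip) {
				return &segments[i]
			}
		}
	}
	return nil
}

func main() {
	segments := []Segment{
		{Name: "LAN", CIDRs: []*net.IPNet{mustCIDR("192.168.0.0/16")}, ProvideRecords: true},
		{Name: "YGGDRASIL", CIDRs: []*net.IPNet{mustCIDR("200::/7")}},
		{Name: "CJDNS", CIDRs: []*net.IPNet{mustCIDR("fc00::/8")}},
	}

	ip := net.ParseIP("200:1234:5678::1")
	if seg := classify(segments, ip); seg != nil {
		fmt.Printf("%s belongs to segment %q (provide=%v autonat=%v)\n",
			ip, seg.Name, seg.ProvideRecords, seg.EnableAutoNAT)
	}
}
```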
Issue for filtering non-`2000::/3` addresses out of the public DHT: libp2p/go-libp2p-kad-dht#595

@makeworld-the-better-one the DHT will look at your computer's "routes". If there is no "gateway" for the destination IP address, it'll be considered to be a part of your LAN.
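A minimal sketch of that route lookup, assuming the go-netroute package used by the DHT filter linked above (the destination address is made up; on a machine without Yggdrasil running, the lookup will simply report that there is no route):

```go
package main

import (
	"fmt"
	"net"

	netroute "github.com/libp2p/go-netroute"
)

func main() {
	r, err := netroute.New()
	if err != nil {
		panic(err)
	}

	// A hypothetical Yggdrasil peer address inside 0200::/7.
	dst := net.ParseIP("200:1234:5678::1")

	iface, gateway, src, err := r.Route(dst)
	if err != nil {
		// No route at all, e.g. Yggdrasil is not running on this machine.
		fmt.Println("no route:", err)
		return
	}

	name := "(none)"
	if iface != nil {
		name = iface.Name
	}

	// A nil gateway means the destination is directly routable (no hop through
	// a router), which is the heuristic used to treat a peer as part of the LAN.
	fmt.Printf("iface=%s gateway=%v src=%v direct=%v\n", name, gateway, src, gateway == nil)
}
```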
Thanks! I saw that it's been merged today. When will that update make it into IPFS? I'm unsure how these parts all fit together, and on what timescales.
v0.5.0 was released today, which supports only `2000::/3` addresses in the public DHT.
@makew0rld I am working on classifying tasks for #10000. AFAICT this precise issue has been solved?
@Jorropo in general it looks like this should be fine. I haven't worked with Yggdrasil and IPFS together in a long time though, so I can't speak to the current state of things. Feel free to close.
As I mentioned in #7109, I am part of @tomeshnet, and we use IPFS on our experimental mesh network nodes. There are some new changes in the upcoming v0.5.0 release that will affect us, which I discussed with @Stebalien and @willscott in that issue, eventually leading me to create this feature request. I've created a somewhat related issue in #7168.
--
Yggdrasil is a mesh routing software that @tomeshnet uses, similar to CJDNS if you know that. The main difference that applies here is that Yggdrasil uses the `0200::/7` address space, which, as @willscott pointed out above, will be considered part of the WAN by IPFS.

This seems like it could lead to some complex situations, where I'm unsure what would happen. If node A is connected to the Internet and a Yggdrasil mesh network, and node B is connected only to that same mesh network, how would things work?
Through the WAN DHT, B would receive Internet records from A that it couldn't access, and would become aware of many nodes on the Internet that it can't connect to. As far as I can tell, A and B would still be able to share files with each other, since they are on the same DHT, but there might be some inefficiencies in B's case.
Where I become more unsure is what would happen with AutoNAT enabled. B and A can talk back and forth freely, so there wouldn't be NAT issues there. But would B's AutoNAT (or some other part of IPFS) see Internet nodes on the DHT, try to talk to them, and eventually switch B to DHT client mode because all of those communications failed?
I think a possible long-term solution to this problem would be to allow users to define their own DHT address spaces, beyond just the built-in LAN and WAN. Another, simpler solution would be to have a config option where users could choose what's part of the LAN address space, so one could just add `0200::/7` to that list. This simpler solution would run into problems on a network that has a regular home LAN as well as a mesh network, or two mesh networks, however.
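Neither option exists in go-ipfs today. As a rough illustration of the simpler one, a filter that treats user-configured extra ranges as "LAN" could look something like the sketch below (the `extraLANRanges` list and the `isLANAddr` helper are invented for the example; only the go-multiaddr-net calls are real):

```go
package main

import (
	"fmt"
	"net"

	ma "github.com/multiformats/go-multiaddr"
	manet "github.com/multiformats/go-multiaddr-net"
)

// extraLANRanges is a hypothetical user-configured list of address ranges that
// should also be treated as part of the LAN DHT (e.g. mesh overlays).
var extraLANRanges = []string{"200::/7"}

// isLANAddr reports whether addr should go to the LAN DHT: either the library
// already considers it private, or it falls inside a configured extra range.
func isLANAddr(addr ma.Multiaddr) bool {
	if manet.IsPrivateAddr(addr) {
		return true
	}
	ip, err := manet.ToIP(addr)
	if err != nil {
		return false
	}
	for _, cidr := range extraLANRanges {
		if _, n, err := net.ParseCIDR(cidr); err == nil && n.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	a := ma.StringCast("/ip6/200:1234:5678::1/tcp/4001")
	fmt.Println(isLANAddr(a)) // true, because the Yggdrasil range was added to the list
}
```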