Deployment plan for AutoNAT v2 #10091
Comments
This sounds incrementally testable.
Part of #10091 We include a flag that allows shutting down V2 in case there are issues with it.
* feat: libp2p.EnableAutoNATv2 Part of #10091 We include a flag that allows shutting down V2 in case there are issues with it. * docs: EnableAutoNATv2
For anyone following this rollout: #10468 is scheduled to ship with 0.30.0-rc1 (#10436). By default, a publicly dialable node will expose both v1 and v2:
My understanding of the plan:
I haven't looked at the code yet, just a quick question: is there a blocklist of addresses that won't be tested? I was thinking of RFC1918 as well as RFC4193 addresses. Otherwise, offering an AutoNAT server may cause a lot of connection attempts into a private network the server also has access to. The related bug tracking this would be libp2p/go-libp2p#436. I just see an issue with deploying a new version of AutoNAT without fixing a potential DoS vector from a server into a private network architecture not accessible from the internet. Sure, the potential for harm is slim, but we have had issues in the past with plastic routers at home hanging up because of this.
The client (provided in a future release) that uses AutoNAT v2 for verifying reachability will only test public IP / DNS addresses. The client will not query for private IPs, and the server will reply with an error for private IPs. libp2p/go-libp2p#436 is about the node sharing private, unroutable IPs with the rest of the network. That's a separate issue, unrelated to the service AutoNAT provides. If you're interested in fixing that, a comment explaining your use case would be very helpful!
Can you elaborate on the attack vector here?
Can you share more info about this?
There is none - thanks for your explanation!
Yeah sure. So a client would announce all their IPs, including say a 10.x.x.x address, to the DHT. Another client would then try to find this client and connect to those addresses. This usually is not a problem, as there's simply no route to those networks in your NAT router, so it can respond with an ICMP message that there's no route. But this changes as soon as we talk about carrier-grade NAT. In this case your router has a route, but seems to get no ICMP back from the ISP infrastructure due to ICMP firewalling. This leads to a lot of half-open NAT connections which never get a response, until the plastic router at home randomly starts to drop actually useful connections as its routing table overflows. There have been a couple of issues around this, like #3320 and #9998, and the only solution is to manually create a connection filter for all RFC1918 subnets in the config, except your own, and to move your own subnet to something less commonly used.
@RubenKelevra some code landed since last year, and Kubo (and other clients running the DualDHT setup) now should only announce public IPs on the public DHT, and private ones on the LAN DHT: @sukunrt fysa Kubo 0.30 should be >5% of the network. Any preference on when we want to enable the v2 client and remove the v1 one?
@lidel which would explain why my router hasn't crashed since I've installed IPFS a week ago 🥲 Good job, very much appreciated :)
Description
We plan to introduce AutoNAT v2 in the next go-libp2p release (v0.31.0) (PR). AutoNAT v2 will allow nodes to test individual addresses for reachability. This fixes issues with address handling within libp2p and allows the node to advertise an accurate set of reachable addresses to its peers. This would also allow nodes to determine IPv6 vs IPv4 connectivity.
When initially deployed, a node won't be able to find any AutoNATv2 servers, as there will be very few of them, so it cannot use AutoNAT v2 to verify its addresses. So initially we will need to enable the server and not use the client.
There are two considerations for a subsequent deployment that uses the v2 client to verify its addresses.
As adoption of the version with the AutoNATv2 server increases, nodes will be able to find AutoNATv2 servers. The adoption fraction at which nodes can find at least 3 servers is some small multiple of 3 * 1/(dht routing table size). For the IPFS DHT the routing table size is roughly 250 nodes, which makes this number 1.2%. So once 5% of nodes have adopted the version with the AutoNATv2 server, most nodes should be able to find enough AutoNAT v2 servers to verify their addresses.

If the ratio of clients to servers is too high, AutoNATv2 will not be useful: clients won't be able to verify their addresses because of server rate limiting.
In the steady state, when a large chunk (10%?) of the network runs AutoNATv2 servers, this is not a problem. The ratio of DHT nodes in client mode to server mode is roughly 10:1, so we should have a ratio of 10 AutoNATv2 clients to 1 server. If all the clients have 10 addresses each and verify those 10 addresses every 30 minutes (a very naive implementation), the server needs to verify 100 addresses in 30 minutes, which is roughly 3.3 addresses a minute.
But until we reach that state, there is a possibility that clients update more quickly than servers. This seems likely to me, as about 50% of the IPFS network is still on v0.18. I'm not sure how to factor this in.
Is there any prior art to such rollouts? I'm especially interested in understanding how to bound the client to server ratio.
Note: I'm ignoring the alternative of gracefully handling the case where AutoNAT v2 peers are not available as the development time for that will be very high. It seems better to just switch over to AutoNATv2 after we have enough servers in the network.
cc @marten-seemann