Kubo Causes Network Issues on Shared Public IP Connections #10616

Open
Griss168 opened this issue Dec 6, 2024 · 5 comments
Labels
kind/bug A bug in existing code (including security flaws) need/triage Needs initial labeling and prioritization

Comments

@Griss168

Griss168 commented Dec 6, 2024

Installation method

dist.ipfs.tech or ipfs-update

Version

I have seen this issue on versions 0.27 through 0.32.1.

Kubo version: 0.31.0
Repo version: 16
System version: amd64/linux
Golang version: go1.23.2

Config

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5002",
    "Announce": [],
    "AppendAnnounce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/webrtc-direct",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/webrtc-direct",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "child": {
        "path": "pebbleds",
        "type": "pebbleds"
      },
      "prefix": "pebble.datastore",
      "type": "measure"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false
    }
  },
  "Experimental": {
    "FilestoreEnabled": true,
    "Libp2pStreamMounting": false,
    "OptimisticProvide": false,
    "OptimisticProvideJobsPoolSize": 0,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": true
  },
  "Gateway": {
    "DeserializedResponses": null,
    "DisableHTMLErrors": null,
    "ExposeRoutingAPI": null,
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PublicGateways": null,
    "RootRedirect": ""
  },
  "Identity": {
    "PeerID": "12D3KooWDgQNimVrtJyggCUpjamydDL7rtEYpLZ1nyCPkYYuYRLJ"
  },
  "Import": {
    "CidVersion": 1,
    "HashFunction": "sha2-256",
    "UnixFSChunker": "size-1048576",
    "UnixFSRawLeaves": true
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": []
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "Methods": null,
    "Routers": null
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {},
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": true,
    "RelayClient": {},
    "RelayService": {},
    "ResourceMgr": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  },
  "Version": {}
}

Description

When Kubo is used on an internet connection with a shared public IP address, it can cause random issues for the entire network.

Nowadays, most internet connections use shared public IPs because IPv4 addresses have run out. On such networks, Kubo opens numerous connections and churns through them frequently. This behavior can trigger connection limits on the gateway of the ISP (Internet Service Provider), leading to packet drops. The issue is very common; I have reproduced it with multiple ISPs and on various types of connections (wired, fiber optic, mobile 3G, LTE, 5G).

Since ISPs often have multiple customers sharing a single public IP, and each IP only has 65,535 ports per transport protocol, they implement connection limits per customer. For example, my ISP enforces a limit of 1,000 connections per customer (at that rate, roughly 65 customers can share one address), which is insufficient for an entire home network. I observed that Kubo sometimes uses hundreds of connections simultaneously, which quickly exceeds this limit.

I have also tried adjusting the configuration for Swarm.ConnMgr.LowWater and Swarm.ConnMgr.HighWater, but this did not resolve the issue.
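For reference, the kind of change I tried looked like this (values are illustrative, not a recommendation):

"Swarm": {
  "ConnMgr": {
    "Type": "basic",
    "LowWater": 50,
    "HighWater": 100,
    "GracePeriod": "20s"
  }
}

Even with watermarks this low, the connection count still spiked past the ISP limit, since the connection manager only trims connections periodically rather than enforcing a hard cap.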

One of the most valuable features of IPFS is the ability to punch through NAT and relay nodes without public IPs. However, this connection behavior severely impacts the usability of IPFS in such environments.

@Griss168 added the kind/bug and need/triage labels on Dec 6, 2024
@hsanjuan
Contributor

hsanjuan commented Dec 6, 2024

You can gate the approximate number of connections via ResourceMgr settings, I think. How do you notice the connection limits? Do they affect TCP and UDP?
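Something along these lines (a sketch; the resource manager derives its connection limits from these caps, so the exact numbers would need tuning):

"Swarm": {
  "ResourceMgr": {
    "Enabled": true,
    "MaxMemory": "256MB",
    "MaxFileDescriptors": 512
  }
}

Lowering MaxMemory scales down the computed inbound connection limits.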

I may suffer from it too, but I haven't tried a number of things yet, and I think I'm seeing issues with IPv6, so I'm not sure if it's the same thing.

In theory, limiting listeners to UDP transports might help avoid triggering TCP connection limits?

@Griss168
Author

Griss168 commented Dec 6, 2024

How do you notice the connection limits?

Webpages fail to load on the first try, and web content doesn't load properly. Sometimes, when I run Kubo, the internet connection stops working within a few seconds. On connections where I have my own public IP, it works properly. When I called my ISP's tech support, they told me there are connection limits in place.

Do they affect TCP and UDP?

I tried running Kubo with the configuration modified to listen only on UDP:

"Swarm": [
  "/ip4/0.0.0.0/udp/4001/webrtc-direct",
  "/ip4/0.0.0.0/udp/4001/quic-v1",
  "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport"
]
with the same result. However, it seems UDP connections are less affected. (It’s hard to test properly because most services run on TCP.)

I may suffer from it too, but I haven't tried a number of things yet, and I think I'm seeing issues with IPv6, so I'm not sure if it's the same thing.

I don’t have IPv6. All my tests were conducted on IPv4-only networks.

In theory, limiting listeners to UDP transports might help avoid triggering TCP connection limits?

As I mentioned above, it doesn’t work. Maybe the ISP doesn’t distinguish between protocols and just counts the total number of connections.

@lidel
Member

lidel commented Dec 17, 2024

[..] Kubo opens numerous connections and frequently changes them

It could be that the connections were not triggered by your Kubo, but by random peers interacting with your DHT server?

Your config runs with implicit Routing.Type=auto.

If your node is publicly dialable (thanks to UPnP, static port forwarding, or a DMZ), Kubo will automatically act as a DHT server, and you will have a lot of peers connecting to you.

If you set Routing.Type to autoclient, your node will no longer act as a DHT server, and you should see much lower connection churn. Note that if your PeerID already acts as a DHT server, peers will have that information cached, and setting this will not help immediately (I don't remember how long it takes for cached libp2p identify info to expire, but it's somewhere between 15 minutes and 48 hours).

You could either wait, or change both the port (from 4001 to something else) and the PeerID (ipfs key rotate -o old-self -t ed25519) to force an identity change.
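Concretely, that would look something like this (a sketch; run these with the daemon stopped, and 4002 is just an example port):

# switch the DHT to client-only mode
ipfs config Routing.Type autoclient

# move the swarm listeners off port 4001 (4002 is arbitrary)
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002", "/ip4/0.0.0.0/udp/4002/quic-v1", "/ip6/::/udp/4002/quic-v1"]'

# rotate the peer identity, keeping the old key around as "old-self"
ipfs key rotate -o old-self -t ed25519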

@Griss168
Author

It could be that the connections were not triggered by your Kubo, but by random peers interacting with your DHT server?

This is not possible, because my Kubo is behind a symmetric NAT and the DHT is not reachable from the internet. The DHT always runs in client mode.

I have also tried setting Routing.Type to autoclient in the configuration for a long time, without any change.

The only way to use Kubo on an internet connection with a shared IP is to enforce a hard connection limit in libp2p. However, this cannot be achieved through Kubo's regular configuration, and an average user won't be able to solve this issue on their own.
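To illustrate what I mean by a hard libp2p limit: the closest mechanism I'm aware of is the resource manager override file (libp2p-resource-limit-overrides.json in the repo directory on recent Kubo versions), which lives outside the normal config. A sketch with illustrative numbers:

{
  "System": {
    "Conns": 200,
    "ConnsInbound": 100,
    "ConnsOutbound": 100
  }
}

This is exactly the kind of knob an average user will never find on their own.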

@RobQuistNL

I'm having the same experience here. I have a dedicated IP, though, and can easily cap the bandwidth usage, but everything starts to slow down and drag if I don't.

I use a pretty basic config, except for the accelerated DHT client:

	"Routing": {
		"AcceleratedDHTClient": true,
		"Methods": null,
		"Routers": null
	},
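(For anyone comparing: the accelerated client crawls the DHT and holds far more connections than the default client, so toggling it off is a quick way to check whether it is the culprit:

ipfs config --json Routing.AcceleratedDHTClient false

followed by a daemon restart.)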

I have a gigabit fibre connection, and it's nowhere near saturated in bandwidth: it's using about 100 Mbit/s at the very most. But still everything is grinding to a halt, so it's probably the connection count too. High latency, packet loss, and other random issues.
