
Illegal RFC1918 traffic #7985

Closed
rlucassen2 opened this issue Mar 16, 2021 · 10 comments
Labels
kind/bug (A bug in existing code, including security flaws), need/triage (Needs initial labeling and prioritization)

Comments

@rlucassen2

$ ipfs version --all
go-ipfs version: 0.8.0
Repo version: 11
System version: amd64/linux
Golang version: go1.15.8

I'm new to IPFS, and today I built a Debian Bullseye IPFS server using the 0.8.0 tarball. IPFS works well! The host sits in a DMZ (demilitarized zone) with IP 10.239.22.32; ports 4001/udp and 4001/tcp on the external IP are NATed to 10.239.22.32 (although UDP is not mentioned in the docs).

But there is one thing that should not occur and which, IMHO as an IPFS newcomer, is a bug. This is the situation, which is a normal NAT configuration:

internet <-1-> external ip <-2-> NAT firewall <-3-> IPFS in DMZ

As the traffic comes from the internet, I should definitely NOT see any RFC1918 reply traffic at point 3, since these addresses are not routable over the internet. Because I explicitly reject outgoing RFC1918 traffic to the internet, the firewall logs fill up with RFC1918 rejects from time to time. Apparently the ipfs daemon tries to reply to the other nodes' original source address before NAT. Some copy/pasted firewall log entries:

timestamp source addr sport dest addr dport proto
20:58:20 10.239.22.32 4001 172.17.0.1 4001 TCP
20:58:21 10.239.22.32 4001 172.17.0.1 4001 TCP
20:58:21 10.239.22.32 58054 172.31.13.74 20030 TCP
20:58:21 10.239.22.32 4001 10.244.8.113 31026 TCP
20:58:21 10.239.22.32 4001 172.17.0.10 4001 UDP
20:58:21 10.239.22.32 4001 172.17.0.10 4001 TCP
20:58:22 10.239.22.32 4001 172.21.0.2 4001 TCP
20:58:22 10.239.22.32 55744 172.21.0.2 4001 TCP
20:58:22 10.239.22.32 4001 10.0.0.100 36442 TCP
20:58:23 10.239.22.32 44722 172.17.0.1 4001 TCP
20:58:23 10.239.22.32 4001 10.244.8.113 31026 UDP
20:58:24 10.239.22.32 45650 10.0.2.15 42788 TCP
20:58:24 10.239.22.32 4001 10.10.10.100 4001 TCP
20:58:24 10.239.22.32 4001 192.168.0.191 4001 TCP

You can clearly see in this dump that ipfs is trying to set up a connection to 172.19.0.3 on TCP port 4001 (which is rejected by the firewall):

21:05:19.911924 IP 10.239.22.32.4001 > 172.19.0.3.4001: Flags [S], seq 1050858194, win 64240, options [mss 1460,sackOK,TS val 2184226457 ecr 0,nop,wscale 7], length 0

I'm sure the issue described above should not occur: IPFS is replying to a peer's original internal source address instead of the peer's NATed one.

R.

@rlucassen2 rlucassen2 added kind/bug A bug in existing code (including security flaws) need/triage Needs initial labeling and prioritization labels Mar 16, 2021

welcome bot commented Mar 16, 2021

Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review.
In the meantime, please double-check that you have provided all the necessary information to make this process easy! Any information that can help save additional round trips is useful! We currently aim to give initial feedback within two business days. If this does not happen, feel free to leave a comment.
Please keep an eye on how this issue will be labeled, as labels give an overview of priorities, assignments and additional actions requested by the maintainers:

  • "Priority" labels will show how urgent this is for the team.
  • "Status" labels will show if this is ready to be worked on, blocked, or in progress.
  • "Need" labels will indicate if additional input or analysis is required.

Finally, remember to use https://discuss.ipfs.io if you just need general support.

@willscott
Contributor

This traffic is likely IPFS attempting to discover whether other IPFS nodes in the p2p network are on its same local network. This is used to prioritize content lookup and retrieval within the local network, since that will be faster than going out to the internet.

IPFS will ask for the routing table from the OS, and should only make these requests if those local addresses are routable. (e.g. it's only worth trying to open a connection to a peer with a 172.16.x.x address if our local machine also knows how to send packets to a 172.16.x.x address.)
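The heuristic described above can be sketched roughly like this (a minimal Python illustration, not go-ipfs code; `worth_dialing` and the `local_prefixes` argument are hypothetical stand-ins for consulting the OS routing table):

```python
import ipaddress

# Hypothetical sketch: only dial a private (RFC1918) peer address if one
# of our own interface networks actually contains it. "local_prefixes"
# stands in for what would come from the OS routing table.
def worth_dialing(peer_ip: str, local_prefixes: list[str]) -> bool:
    addr = ipaddress.ip_address(peer_ip)
    if not addr.is_private:
        return True  # public addresses are always worth trying
    return any(addr in ipaddress.ip_network(p) for p in local_prefixes)

# A DMZ host on 10.239.22.0/24 should not dial a peer's internal
# 172.19.0.3, but may dial a neighbour on its own subnet.
print(worth_dialing("172.19.0.3", ["10.239.22.0/24"]))   # False
print(worth_dialing("10.239.22.7", ["10.239.22.0/24"]))  # True
```

The complaint in this issue is that, in practice, the dial attempts happen even when no local interface can reach the private address.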

@rlucassen2
Author

rlucassen2 commented Mar 19, 2021

IPFS will ask for the routing table from the OS, and should only make these requests if those local addresses are routable. (e.g. it's only worth trying to open a connection to a peer with a 172.16.x.x address if our local machine also knows how to send packets to a 172.16.x.x address.)

But that sounds to me like a design glitch. Since all networks have a default gateway, all addresses will appear routable, even those that do not exist. A peer cannot simply be aware of the networks behind a gateway, which is always part of the local layer-2 network.

If you're on a 10.0.0.0/24 network and your gateway is 10.0.0.1, there is no way for the ipfs instance to know whether there is another RFC1918 network behind 10.0.0.1. You would have to tell the peer explicitly that such a network exists. That said, and IMHO of course, ipfs should never connect to other RFC1918 networks unless it has been explicitly told to do so.

R.

@RubenKelevra
Contributor

@rlucassen2 Hey, this is a duplicate, see #6932 and libp2p/go-libp2p#436

@RubenKelevra
Contributor

To prevent this from happening, you can add a swarm filter, which forbids IPFS from connecting to those IPs. Just be aware that this also blocks any connection within your private networks.

  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.0.0/ipcidr/29",
      "/ip4/192.0.0.8/ipcidr/32",
      "/ip4/192.0.0.170/ipcidr/32",
      "/ip4/192.0.0.171/ipcidr/32",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/192.168.6.0/ipcidr/24",
      "/ip4/192.168.34.0/ipcidr/24",
      "/ip4/192.168.34.0/ipcidr/24",
      "/ip4/192.168.34.0/ipcidr/24",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ]
  },

If you want to filter the announced IPs to the DHT as well, you can add an announce filter under NoAnnounce too. :)

@Stebalien
Member

Thanks @RubenKelevra!

@rlucassen2
Author

"/ip4/192.0.0.0/ipcidr/24",
"/ip4/192.0.0.0/ipcidr/29",
"/ip4/192.0.0.8/ipcidr/32",
"/ip4/192.0.0.170/ipcidr/32",
"/ip4/192.0.0.171/ipcidr/32",
"/ip4/192.0.2.0/ipcidr/24",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/192.168.6.0/ipcidr/24",
"/ip4/192.168.34.0/ipcidr/24",
"/ip4/192.168.34.0/ipcidr/24",
"/ip4/192.168.34.0/ipcidr/24",

"/ip4/192.0.0.0/ipcidr/24",
covers:
"/ip4/192.0.0.0/ipcidr/29",
"/ip4/192.0.0.8/ipcidr/32",
"/ip4/192.0.0.170/ipcidr/32",
"/ip4/192.0.0.171/ipcidr/32",

and:

"/ip4/192.168.0.0/ipcidr/16",

covers:

"/ip4/192.168.6.0/ipcidr/24",
"/ip4/192.168.34.0/ipcidr/24",
"/ip4/192.168.34.0/ipcidr/24",
"/ip4/192.168.34.0/ipcidr/24",
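(Aside: this subsumption can be checked mechanically with Python's stdlib `ipaddress` module; the prefixes below are taken from the list above, and the snippet is only an illustration, not part of the config.)

```python
import ipaddress

# Each narrower entry is a subnet of the broader one, so dropping the
# narrower entries does not change what the filter blocks.
broad = ipaddress.ip_network("192.0.0.0/24")
for cidr in ("192.0.0.0/29", "192.0.0.8/32",
             "192.0.0.170/32", "192.0.0.171/32"):
    assert ipaddress.ip_network(cidr).subnet_of(broad)

broad = ipaddress.ip_network("192.168.0.0/16")
for cidr in ("192.168.6.0/24", "192.168.34.0/24"):
    assert ipaddress.ip_network(cidr).subnet_of(broad)

print("all narrower entries are covered")
```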

So:

"Swarm": {
"AddrFilters": [
"/ip4/10.0.0.0/ipcidr/8",
"/ip4/100.64.0.0/ipcidr/10",
"/ip4/169.254.0.0/ipcidr/16",
"/ip4/172.16.0.0/ipcidr/12",
"/ip4/192.0.0.0/ipcidr/24",
"/ip4/192.0.2.0/ipcidr/24",
"/ip4/192.168.0.0/ipcidr/16",
"/ip4/198.18.0.0/ipcidr/15",
"/ip4/198.51.100.0/ipcidr/24",
"/ip4/203.0.113.0/ipcidr/24",
"/ip4/240.0.0.0/ipcidr/4",
"/ip6/100::/ipcidr/64",
"/ip6/2001:2::/ipcidr/48",
"/ip6/2001:db8::/ipcidr/32",
"/ip6/fc00::/ipcidr/7",
"/ip6/fe80::/ipcidr/10"
  ]
},

That will do the same job. Thanks!

Richard

@RubenKelevra
Contributor

Yeah, the longer list is mostly there for historical reasons :)

@jcomeauictx

isn't that what ipfs init --profile=server is for?

@RubenKelevra
Contributor

RubenKelevra commented Aug 7, 2024

@jcomeauictx I don't think so. I can't think of a reason why Kubo, in any deployment, should by default try to connect to non-publicly-routable IPs learned through the DHT.
