
switching to alternate exit node breaks dns configuration on home nodes #15

Closed
paidforby opened this issue Feb 14, 2018 · 7 comments

@paidforby

Following the instructions to build your own exit node in https://github.com/sudomesh/exitnode ,
I am able to successfully set up a tunnel broker on an exit node; however, when I try to reconfigure a home node to tunnel through this exit node, DNS either takes very long to begin working or does not work at all.

To reproduce:

  1. Build a new exit node
  2. Replace the list address in /etc/config/tunneldigger with the IP address of your new exit node
  3. Connect a computer to the public network (peoplesopen SSID) and run the following:
traceroute 8.8.8.8

you should see something like

traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  modor_2017_12.peoplesopen.net (100.65.21.65)  17.215 ms  18.235 ms  21.224 ms
 2  100.64.0.42 (100.64.0.42)  84.646 ms  93.128 ms  95.022 ms
 3  <new exit node IP> (<new exit node IP>)  94.106 ms 159.203.56.254 (159.203.56.254)  93.343 ms 159.203.56.253 (159.203.56.253)  94.075 ms
 4  138.197.249.86 (138.197.249.86)  95.144 ms 138.197.249.82 (138.197.249.82)  94.038 ms 138.197.249.90 (138.197.249.90)  93.696 ms
 5  162.243.190.33 (162.243.190.33)  98.874 ms 72.14.219.10 (72.14.219.10)  99.467 ms  99.844 ms
 6  108.170.250.241 (108.170.250.241)  99.008 ms 108.170.250.225 (108.170.250.225)  80.819 ms 108.170.250.241 (108.170.250.241)  80.629 ms
 7  108.170.227.31 (108.170.227.31)  80.884 ms 108.170.227.35 (108.170.227.35)  75.451 ms 108.170.227.43 (108.170.227.43)  77.982 ms
 8  8.8.8.8 (8.8.8.8)  82.522 ms  83.942 ms  86.641 ms

then try

traceroute archlinux.org

which produces the output

archlinux.org: Name or service not known
Cannot handle "host" cmdline arg `archlinux.org' on position 1 (argc 1)
  4. Then try connecting to the private network (admin SSID) and run:
traceroute 8.8.8.8

producing output similar to before,

traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  admin.peoplesopen.net (172.30.0.1)  5.425 ms  6.214 ms  23.473 ms
 2  10.0.0.1 (10.0.0.1)  29.146 ms  32.772 ms  35.885 ms
 3  <private home IP> (<private home IP>)  163.325 ms  113.882 ms  114.843 ms
 4  68.85.100.77 (68.85.100.77)  41.736 ms  43.659 ms  44.476 ms
 5  162.151.78.185 (162.151.78.185)  50.422 ms  64.476 ms  66.902 ms
 6  68.85.154.97 (68.85.154.97)  70.067 ms 68.85.154.241 (68.85.154.241)  31.780 ms 68.85.154.97 (68.85.154.97)  26.905 ms
 7  96.112.146.26 (96.112.146.26)  21.977 ms  35.109 ms 96.112.146.18 (96.112.146.18)  33.971 ms
 8  * * *
 9  209.85.251.8 (209.85.251.8)  41.305 ms 108.170.237.22 (108.170.237.22)  42.023 ms 108.170.237.20 (108.170.237.20)  26.819 ms
10  209.85.240.43 (209.85.240.43)  34.592 ms 108.170.232.83 (108.170.232.83)  31.696 ms 108.170.232.69 (108.170.232.69)  27.659 ms
11  8.8.8.8 (8.8.8.8)  30.159 ms  30.169 ms  24.392 ms

then try resolving a domain name again

traceroute archlinux.org

which produces the same output as before

archlinux.org: Name or service not known
Cannot handle "host" cmdline arg `archlinux.org' on position 1 (argc 1)
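For reference, step 2 above edits the broker list on the home node. On an OpenWrt node running the wlanslovenija tunneldigger client, the relevant /etc/config/tunneldigger stanza looks roughly like the sketch below; the address and uuid are placeholders, and exact option names may vary between firmware builds:

```
config broker
	# address of the tunnel broker, i.e. your new exit node
	list address '203.0.113.10:8942'
	option uuid '00:11:22:33:44:55'
	option interface 'l2tp0'
	option enabled '1'
```

After editing, reload tunneldigger with /etc/init.d/tunneldigger reload so the new broker address takes effect.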

Expected results

First, it is expected that traceroute 8.8.8.8 would return something like,

10  google-public-dns-a.google.com (8.8.8.8)  29.834 ms  35.540 ms  36.695 ms

that is, the domain name in the last line is resolved from the IP.

Second, it is expected that traceroute archlinux.org would resolve the domain to an IP address and then route to that IP address.

Finally, the admin SSID is expected not to be affected by the exit node. This implies that something is wrong with the home node, or that something on the home node is being reconfigured by the new exit node (perhaps dnsmasq?).
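A quick way to separate the two failure modes above, name resolution versus routing, is to test lookups directly; a minimal sketch (getent assumes a glibc-style system, and archlinux.org is just the example host from the reproduction):

```shell
#!/bin/sh
# Check whether the system resolver can look up a name, independent
# of whether raw-IP routing (e.g. traceroute 8.8.8.8) works.
check() {
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "$1: resolves"
    else
        echo "$1: no DNS"
    fi
}

check localhost       # sanity check; should always resolve
check archlinux.org   # fails on the affected home node
```

On a healthy node both lines report "resolves"; on the broken home node the second reports "no DNS" while traceroute 8.8.8.8 still completes.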

@paidforby
Author

paidforby commented Feb 14, 2018

Attempted a fix by manually altering /etc/resolv.conf. On new exit node servers (DigitalOcean droplets), resolv.conf contains the following,

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 127.0.0.1

The production exit node points at 8.8.8.8 (Google's DNS).
Ignoring the warning and changing resolv.conf to

nameserver 8.8.8.8

and then rebooting the exit node fixes name resolution on the private SSID, but appears to break the tunnel, preventing the public SSID from routing anything.
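Note that the glibc resolver tries the nameserver entries in listed order, so with the droplet's default file above, 127.0.0.1 is only consulted if both DigitalOcean resolvers fail. A small sketch extracting that order (sample text copied from the resolv.conf above):

```shell
#!/bin/sh
# Print the nameservers in the order the glibc resolver will try them.
cat <<'EOF' | awk '/^nameserver/ {print $2}'
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 67.207.67.3
nameserver 67.207.67.2
nameserver 127.0.0.1
EOF
```

On a live system, replace the heredoc with `awk '/^nameserver/ {print $2}' /etc/resolv.conf`.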

@paidforby
Author

resolv.conf appears to be set by the config file /etc/network/interfaces.d/50-cloud-init.cfg. Changing this file and then rebooting the droplet results in the same change that I made manually, and produces the same results: the private SSID is now working, but the public SSID no longer tunnels to the broker; that is, the home node does not appear in the broker's routing table and is unable to route any traffic.

@paidforby
Author

paidforby commented Feb 15, 2018

After further investigation, /etc/network/interfaces does appear to be the culprit. Changing interfaces.d/50-cloud-init.cfg to the following resolves the problem,

auto lo
iface lo inet loopback
    dns-nameservers 8.8.8.8 

auto eth0
iface eth0 inet static
    address 159.203.56.129/21
    gateway 159.203.56.1
    dns-nameservers 8.8.8.8 

# control-alias eth0
iface eth0 inet static
    address 10.20.0.5/16
    dns-nameservers 8.8.8.8 

where 159.203.56.129 is the IP of the exit node.
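The same edit can be scripted; the sketch below demonstrates it on a temporary copy (the real target path, /etc/network/interfaces.d/50-cloud-init.cfg, and the GNU `sed -i` append syntax are assumptions about the droplet environment):

```shell
#!/bin/sh
# Demo on a temp file; on the droplet, point cfg at
# /etc/network/interfaces.d/50-cloud-init.cfg and reboot afterwards.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
auto eth0
iface eth0 inet static
    address 159.203.56.129/21
    gateway 159.203.56.1
EOF

# Append a dns-nameservers line after every static iface header.
sed -i '/ inet static$/a\  dns-nameservers 8.8.8.8' "$cfg"

grep -c 'dns-nameservers 8.8.8.8' "$cfg"   # count of stanzas patched
rm -f "$cfg"
```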

/etc/network/interfaces looks like so,

auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*

I'm attempting to integrate this config into the exitnode repo on a fork, https://github.com/paidforby/exitnode. I will merge once I work out the kinks.

@jhpoelen
Contributor

I was just able to reproduce! Thanks. Let me know if you need help testing once you've merged this.

@paidforby
Author

@jhpoelen please confirm that this was addressed in the recent commit to exitnode, sudomesh/exitnode@ab2070a

@jhpoelen
Contributor

Just created a "fresh" droplet, ran create_exitnode.sh, and found the expected results -

# cat 50-cloud-init.cfg 
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
auto lo
iface lo inet loopback
    dns-nameservers 8.8.8.8

auto eth0
iface eth0 inet static
    address 159.89.26.182/20
    dns-nameservers 8.8.8.8
    gateway 159.89.16.1

# control-alias eth0
iface eth0 inet static
    address 10.19.0.5/16
    dns-nameservers 8.8.8.8

@jhpoelen
Contributor

jhpoelen commented Feb 21, 2018

Also, I was able to confirm that after reconfiguring a home node to use the freshly created exit node (edit /etc/config/tunneldigger, then reload tunneldigger with /etc/init.d/tunneldigger reload), it was able to ping sudomesh.org, and traceroute indicated that packets were going through the exit node. Evidence below.

goat:~# ping sudomesh.org
PING sudomesh.org (162.255.119.10): 56 data bytes
64 bytes from 162.255.119.10: seq=0 ttl=47 time=362.271 ms
64 bytes from 162.255.119.10: seq=1 ttl=47 time=360.929 ms
^C
--- sudomesh.org ping statistics ---
3 packets transmitted, 2 packets received, 33% packet loss
round-trip min/avg/max = 360.929/361.600/362.271 ms
root@goat:~# traceroute sudomesh.org
traceroute to sudomesh.org (162.255.119.10), 30 hops max, 38 byte packets
 1  100.64.0.42 (100.64.0.42)  193.638 ms  193.398 ms  193.344 ms
 2  159.89.16.253 (159.89.16.253)  194.080 ms  159.89.16.254 (159.89.16.254)  193.213 ms  159.89.16.253 (159.89.16.253)  193.799 ms
