Weave worker nodes in two different subnets - strange behavior #3297
@paphillon Can you please provide weave logs?
@brb
I find it hard to imagine what could be going wrong - all your nodes seem to be connected.
Here is the test I did, from a host in the 10.50.0.0/22 subnet:
- Access pods by service IP using curl - successful.
- Ping the same pod by its pod IP - it fails.

If I do the same from within a pod running on host 10.50.212.140 (10.50.32.0/22), the reverse happens: the ping is successful, but the service IPs are not reachable.

In addition, yesterday I restarted kubelet on one of the 10.50.x.x nodes; not sure if this could be the cause, but I did check that there is no other service running on port 6784.
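The reachability matrix above can be scripted. A minimal sketch of my own (not part of this issue's setup): it uses plain TCP connects rather than ICMP ping, since ping needs raw-socket privileges, and TCP is what the curl-to-service-IP test exercises anyway. The ports shown are hypothetical placeholders.

```python
# Sketch: check TCP reachability of pod and service IPs from this host.
# The addresses are the ones from this issue; the ports are assumptions.
import socket

def tcp_reachable(ip, port, timeout=2.0):
    """True if a TCP connection to ip:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((ip, port)) == 0

if __name__ == "__main__":
    for ip, port in [("172.32.0.1", 443),     # service IP (the curl test)
                     ("172.200.0.25", 80)]:   # pod IP (the ping test)
        state = "reachable" if tcp_reachable(ip, port) else "unreachable"
        print(ip, state)
```

Running this from a host in each pool, and again from inside a pod in each pool, reproduces the four-way matrix described above.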
Can you show the logs from the Weave pods? Also run:
weave status connections
ip route
Weave logs
Fixed by #3442
**Is this a REQUEST FOR HELP?**
Yes.
In my Kubernetes setup there are two sets of worker nodes, one in the 10.51.0.0 subnet and the other in the 10.50.0.0 subnet. There is no firewall between these nodes, and connectivity, including ping and the Weave Net TCP/UDP ports, checks out OK.
Service IP: 172.32.0.0/24
Cluster IP: 172.200.0.0/24
What happened?
Here is the strange behavior. When I ping a pod running on a 10.51.0.x host by its pod IP, say 172.200.0.25, from a 10.50.0.0 host, I get destination unreachable. I can ping it without problems from a host in the same 10.51.0.x subnet. However, when I curl the service IP 172.32.0.1, it is reachable.
To add to this, when I do all of the above from a container running on 10.50.0.x, the behavior is reversed: I cannot reach the service IP but can reach the pod IP.
What did you expect to happen?
I expected both the pod and service IPs to be reachable from containers as well as from hosts.
Anything else we need to know?
On-prem installation; the only difference I see is the kernel version on one set of worker nodes (10.50.x.x).
Versions:
Logs:
$ kubectl logs -n kube-system weave
$ ip route
default via 10.50.212.1 dev eno16777984
10.50.212.0/22 dev eno16777984 proto kernel scope link src 10.50.212.140
169.254.0.0/16 dev eno16777984 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.200.0.0/24 dev weave proto kernel scope link src 172.200.0.40
From the 2nd pool:
default via 10.51.32.1 dev ens160
10.51.32.0/22 dev ens160 proto kernel scope link src 10.51.35.10
169.254.0.0/16 dev ens160 scope link metric 1002
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.200.0.0/24 dev weave proto kernel scope link src 172.200.0.160
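One thing the route tables make visible: the pod CIDR 172.200.0.0/24 has a kernel route via the weave device on both nodes, but the service range 172.32.0.0/24 has no route at all; service IPs rely on kube-proxy's iptables DNAT, not on routing. A small sketch of my own (not Weave code) of the kernel's longest-prefix match over the first pool's routes:

```python
# Sketch: which kernel route would each destination match?
# Routes taken from the first pool's `ip route` output above.
from ipaddress import ip_address, ip_network

routes = {
    "0.0.0.0/0": "eno16777984 (default via 10.50.212.1)",
    "10.50.212.0/22": "eno16777984",
    "172.17.0.0/16": "docker0",
    "172.200.0.0/24": "weave",
}

def best_route(dst):
    """Longest-prefix match, the same rule the kernel applies."""
    matches = [ip_network(p) for p in routes if ip_address(dst) in ip_network(p)]
    return routes[str(max(matches, key=lambda n: n.prefixlen))]

print(best_route("172.200.0.25"))  # pod IP -> weave
print(best_route("172.32.0.1"))    # service IP -> only the default route;
                                   # it depends on kube-proxy's DNAT rules
```

This is consistent with the symptoms: pod-IP and service-IP traffic take entirely different paths, so one can work while the other fails.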
$ ip -4 -o addr
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: eno16777984 inet 10.50.212.140/22 brd 10.50.215.255 scope global eno16777984\ valid_lft forever preferred_lft forever
3: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
7: weave inet 172.200.0.40/24 brd 172.200.0.255 scope global weave\ valid_lft forever preferred_lft forever
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
2: ens160 inet 10.51.35.10/22 brd 10.51.35.255 scope global ens160\ valid_lft forever preferred_lft forever
3: docker0 inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0\ valid_lft forever preferred_lft forever
6: weave inet 172.200.0.160/24 brd 172.200.0.255 scope global weave\ valid_lft forever preferred_lft forever
$ weave status connections
-> 10.50.212.142:6783 established fastdp 7a:cc:70:08:75:f4(a.corp.com) mtu=1337
<- 10.50.212.141:50919 established fastdp ba:bb:dd:d8:ae:0e(b.corp.com) mtu=1337
<- 10.51.35.10:55966 established fastdp 7a:6e:bb:9b:50:de(c.corp.com) mtu=1337
-> 10.51.35.6:6783 established fastdp 4e:4c:4b:cb:d6:ff(e.corp.com) mtu=1337
<- 10.50.212.140:37708 established fastdp 82:b1:3f:68:58:96(f.corp.com) mtu=1337
-> 10.51.35.8:6783 established fastdp 36:26:8b:a2:44:bd(g.corp.com) mtu=1337
<- 10.51.35.166:59164 established fastdp 2a:f8:1d:6f:b6:ce(h.corp.com) mtu=1337
<- 10.51.35.11:45225 established fastdp a6:7d:e7:8b:e3:20(i.corp.com) mtu=1337
-> 10.51.35.5:6783 failed cannot connect to ourself, retry: never
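The only non-established entry above is the expected "cannot connect to ourself" line for the node's own address. For what it's worth, a quick sketch (my own, not a Weave tool) for scanning this output for non-established peers:

```python
# Sketch: flag `weave status connections` lines that are not "established".
# The sample lines are copied from the output above.
sample = """\
-> 10.50.212.142:6783 established fastdp 7a:cc:70:08:75:f4(a.corp.com) mtu=1337
<- 10.51.35.10:55966 established fastdp 7a:6e:bb:9b:50:de(c.corp.com) mtu=1337
-> 10.51.35.5:6783 failed cannot connect to ourself, retry: never
"""

def unhealthy(output):
    """Yield (peer, state) for connections that are not established."""
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2] != "established":
            yield parts[1], parts[2]

print(list(unhealthy(sample)))  # [('10.51.35.5:6783', 'failed')]
```

Since every real peer here shows established over fastdp, the broken pod-to-pod path is unlikely to be a peering problem.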
$ sudo iptables-save
I did not really find any major difference between the two pools.