
Only one NIC reachable on VM with openvswitch virtual network with IP spoofing #3249

Closed
dann1 opened this issue Apr 19, 2019 · 3 comments

@dann1 (Contributor)

dann1 commented Apr 19, 2019

Description
When creating an LXD container with more than one NIC on an Open vSwitch network with IP spoofing filtering enabled, the container is reachable on only one of those NICs. Whether the extra NIC is attached later or the container is deployed with several NICs from the start, the result is the same.

To Reproduce

  • Set up OVS on the LXD nodes
  • Create a VM with the conditions described above
root@ubuntu1804-lxd-nfs-ovs-61b7c-2:~# lxc list
+-------+---------+------------------------+------+------------+-----------+
| NAME  |  STATE  |          IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-------+---------+------------------------+------+------------+-----------+
| one-0 | RUNNING | 192.168.150.103 (eth1) |      | PERSISTENT | 0         |
|       |         | 192.168.150.100 (eth0) |      |            |           |
+-------+---------+------------------------+------+------------+-----------+
root@ubuntu1804-lxd-nfs-ovs-61b7c-2:~# ping -c 4 192.168.150.100
PING 192.168.150.100 (192.168.150.100) 56(84) bytes of data.
From 192.168.150.3 icmp_seq=1 Destination Host Unreachable
From 192.168.150.3 icmp_seq=2 Destination Host Unreachable
From 192.168.150.3 icmp_seq=3 Destination Host Unreachable
From 192.168.150.3 icmp_seq=4 Destination Host Unreachable

--- 192.168.150.100 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3070ms
pipe 4
root@ubuntu1804-lxd-nfs-ovs-61b7c-2:~# ping -c 4 192.168.150.103
PING 192.168.150.103 (192.168.150.103) 56(84) bytes of data.
64 bytes from 192.168.150.103: icmp_seq=1 ttl=64 time=0.194 ms
64 bytes from 192.168.150.103: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 192.168.150.103: icmp_seq=3 ttl=64 time=0.050 ms
64 bytes from 192.168.150.103: icmp_seq=4 ttl=64 time=0.051 ms

--- 192.168.150.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3078ms
rtt min/avg/max/mdev = 0.047/0.085/0.194/0.063 ms

root@ubuntu1804-lxd-nfs-ovs-61b7c-1:~# lxc list
+-------+---------+------------------------+------+------------+-----------+
| NAME  |  STATE  |          IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+-------+---------+------------------------+------+------------+-----------+
| one-1 | RUNNING | 192.168.150.102 (eth1) |      | PERSISTENT | 0         |
|       |         | 192.168.150.101 (eth0) |      |            |           |
+-------+---------+------------------------+------+------------+-----------+
root@ubuntu1804-lxd-nfs-ovs-61b7c-1:~# ping -c 4 192.168.150.101
PING 192.168.150.101 (192.168.150.101) 56(84) bytes of data.
64 bytes from 192.168.150.101: icmp_seq=1 ttl=64 time=0.295 ms
64 bytes from 192.168.150.101: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 192.168.150.101: icmp_seq=3 ttl=64 time=0.058 ms
64 bytes from 192.168.150.101: icmp_seq=4 ttl=64 time=0.054 ms

--- 192.168.150.101 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3070ms
rtt min/avg/max/mdev = 0.043/0.112/0.295/0.106 ms
root@ubuntu1804-lxd-nfs-ovs-61b7c-1:~# ping -c 4 192.168.150.102
PING 192.168.150.102 (192.168.150.102) 56(84) bytes of data.
From 192.168.150.2 icmp_seq=1 Destination Host Unreachable
From 192.168.150.2 icmp_seq=2 Destination Host Unreachable
From 192.168.150.2 icmp_seq=3 Destination Host Unreachable
From 192.168.150.2 icmp_seq=4 Destination Host Unreachable

--- 192.168.150.102 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3070ms
pipe 4

Expected behavior
The container should be reachable on all of its NICs.

Details

  • Hypervisor: LXD
  • Version: 5.8.0, 5.8.1


Progress Status

  • Branch created
  • Code committed to development branch
  • Testing - QA
  • Documentation
  • Release notes - resolved issues, compatibility, known issues
  • Code committed to upstream release/hotfix branches
  • Documentation committed to upstream release/hotfix branches
@dann1 dann1 added this to the Release 5.8.2 milestone Apr 19, 2019
@dann1 dann1 self-assigned this Apr 19, 2019
@dann1 dann1 removed this from the Release 5.8.2 milestone Apr 23, 2019
@dann1 dann1 changed the title Only one NIC reachable on LXD openvswitch virtual network with IP spoofing Only one NIC reachable on VM with openvswitch virtual network with IP spoofing May 5, 2019
@dann1 (Contributor, Author)

dann1 commented May 5, 2019

This issue applies to KVM as well. The flows applied by the network driver block any packet entering the Open vSwitch bridge if its IP and MAC addresses do not match those of the NIC connected to the switch port.

The following picture shows the container one-10, where eth1 isn't reachable

image

The ping reply is returned via eth0 and is blocked at ovsbr0, because its source address matches eth1 rather than eth0.

To fix the issue, policy-based routing is needed so that replies return through the NIC the packets arrived on. The container one-9, with 3 NICs, was fixed by issuing:

# Register a dedicated routing table for the second NIC (eth1)
echo 200 iface1 >> /etc/iproute2/rt_tables

# Replies sourced from eth1's address are looked up in table iface1
ip rule add from 192.168.150.101 table iface1

# Move eth1's directly-connected route out of the main table into iface1,
# so the main table only answers for eth0
ip route del 192.168.150.0/24 dev eth1 proto kernel scope link src 192.168.150.101
ip route add 192.168.150.0/24 dev eth1 proto kernel scope link src 192.168.150.101 table iface1

# Same steps for the third NIC (eth2)
echo 201 iface2 >> /etc/iproute2/rt_tables

ip rule add from 192.168.150.102 table iface2

ip route del 192.168.150.0/24 dev eth2 proto kernel scope link src 192.168.150.102
ip route add 192.168.150.0/24 dev eth2 proto kernel scope link src 192.168.150.102 table iface2

This creates one routing table per extra NIC and directs traffic matching that NIC's source IP address through its table. The directly-connected route is then moved from the main routing table to the custom table, leaving the default behavior to match only eth0.
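The steps above could be scripted per NIC. The following is a minimal sketch; `gen_pbr` is a hypothetical helper that takes "ifname ip cidr" triples and only prints the commands (dry run), so it can be reviewed before being piped to sh inside the guest. Table ids starting at 200 are an assumption matching the manual example.

```shell
#!/bin/sh
# Hypothetical dry-run generator for the per-NIC policy-based routing fix.
# Usage: gen_pbr IFNAME IP CIDR [IFNAME IP CIDR ...]
gen_pbr() {
    table=200
    idx=1
    while [ "$#" -ge 3 ]; do
        ifname=$1; ip=$2; cidr=$3; shift 3
        # Name a routing table for this NIC
        echo "echo $table iface$idx >> /etc/iproute2/rt_tables"
        # Send replies sourced from this NIC's address through its table
        echo "ip rule add from $ip table iface$idx"
        # Move the connected route from the main table into the NIC's table
        echo "ip route del $cidr dev $ifname proto kernel scope link src $ip"
        echo "ip route add $cidr dev $ifname proto kernel scope link src $ip table iface$idx"
        table=$((table + 1))
        idx=$((idx + 1))
    done
}

gen_pbr eth1 192.168.150.101 192.168.150.0/24 \
        eth2 192.168.150.102 192.168.150.0/24
```

Printing instead of applying keeps the script safe to run anywhere; applying it is `gen_pbr ... | sh` inside the container.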

image

This solution could be automated and handled in the context package, since the fix is applied inside the guest.

@rsmontero (Member)

With the previous comment and proposed solution (networks can also be created with different masks to force routing on each interface), we are closing this issue.

@dann1 (Contributor, Author)

dann1 commented May 6, 2019

Since this has a lot of complexity and is a corner case, the patch could be automated as a script for specific VMs and run as a startup script using contextualization.
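One possible wiring, sketched under the assumption that the one-context package's START_SCRIPT context attribute is used (it runs the given command at boot); the script path `/usr/local/bin/setup-pbr.sh` is a hypothetical helper baked into the image that applies the routing commands above:

```
CONTEXT = [
  NETWORK      = "YES",
  START_SCRIPT = "/usr/local/bin/setup-pbr.sh"
]
```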

rsmontero pushed a commit that referenced this issue Jan 13, 2025