
Epic: Incorporate gvisor into Rancher Desktop's networking stack #3810

Closed
9 of 11 tasks
Nino-K opened this issue Jan 19, 2023 · 1 comment
Assignees
Labels
area/networking kind/epic Umbrella-bug for a group of related issues
Milestone

Comments


Nino-K commented Jan 19, 2023

As part of a continuous effort to make Rancher Desktop's networking stack more robust, we are considering integrating gvisor's networking layer into the project. One of the main goals of this integration is a consistent networking-layer implementation across all supported platforms (Windows, macOS, Linux). It would also give us a more manageable code base and make feature development easier.

Previously, we implemented processes (on WSL) to tackle some of the issues we were seeing with DNS and VPNs. This work architecturally aligns with work done in the past for the Rancher Desktop Host Resolver and Vtunnel. We will be leveraging AF_VSOCK as the main communication bus between the host and the VM.

In this newly proposed architecture, we will have two main processes: one runs in the VM (vm-switch) and the other on the host (host-switch.exe). Both processes will be maintained under the same project (Rancher Desktop Switch or Rancher Desktop Networking; open to other suggestions).

The two processes will communicate over the AF_VSOCK protocol. The VM process (VMSwitch) picks up all traffic destined for a tap device that is created on startup and forwards it as ethernet frames to the host daemon (HostSwitch). The HostSwitch then reconstructs the ethernet frames and hands them off to the operating system via syscalls. The HostSwitch is also responsible for maintaining both internal (host to VM) and external (host to the internet) connections.

Furthermore, the host daemon (HostSwitch) will also act as a DNS server (more precisely, a stub resolver).

The main focus areas for this newly proposed architecture are as follows:

  • DNS (with split functionality over VPN)
  • VPN
  • Proxy
  • Port Forwarding

The diagram below demonstrates the communication flow between the host and the VM:

flowchart  LR;
 subgraph Host["HOST"]
 subgraph hostSwitch["Host Switch"]
 vsockHost{"Host Daemon \nListens for Incoming connection"}
 eth(("reconstruct ETH frames"))
 syscall(("OS syscall"))
 dhcp["DHCP"]
 dns["DNS"]
 api["API"]
 portForwarding["Port Forwarding"]
 vsockHost <----> eth
 eth <----> syscall
 vsockHost ----> dhcp
 vsockHost ----> dns
 vsockHost ----> portForwarding
 vsockHost ----> api
 end
 end
 subgraph VM["VM"]
 subgraph vmSwitch["VM Switch"]
 vsockVM{"VM Daemon"}
 ethVM(("listens for ETH frames\n from TAP Device"))
 tapDevice("eth1")
 tapDevice <----> ethVM
 ethVM <----> vsockVM
 end
 end
 vsockVM  <---> |AF_VSOCK| vsockHost

More up-to-date diagrams can be found here: https://github.com/rancher-sandbox/rancher-desktop-networking

Stories

Acceptance Criteria

TBD

Release notes

TBD

Documentation

Please note that this is an experimental feature and will be available starting with the 1.8.0 release of Rancher Desktop. The feature is currently only available on Windows, and it changes the underlying networking mechanism used by Rancher Desktop.
Once enabled, it addresses some of the historical DNS/routing issues that users observed when running Rancher Desktop behind corporate VPNs.

This feature can be enabled using the rdctl set command, e.g.:

rdctl set --experimental.virtual-machine.networking-tunnel=true

Once the command is executed, settings.json should be populated with the correct setting for this feature:

C:\Users\[UserName]\AppData\Roaming\rancher-desktop\settings.json

The networkingTunnel configuration is currently nested under the experimental.virtualMachine parent object. Below are sample settings that demonstrate networkingTunnel when enabled:

{
  "version": 6,
  "containerEngine": {
    "name": "moby",
    "allowedImages": {
      "enabled": false,
      "locked": false,
      "patterns": []
    }
  },
  "kubernetes": {
    "version": "1.25.6",
    "port": 6443,
    "enabled": true,
    "options": {
      "traefik": true,
      "flannel": true
    }
  },
  "portForwarding": {
    "includeKubernetesServices": false
  },
  "images": {
    "showAll": true,
    "namespace": "k8s.io"
  },
  "diagnostics": {
    "showMuted": false,
    "mutedChecks": {}
  },
  "application": {
    "adminAccess": true,
    "debug": false,
    "pathManagementStrategy": "notset",
    "telemetry": {
      "enabled": true
    },
    "updater": {
      "enabled": false
    },
    "autoStart": false,
    "startInBackground": false,
    "hideNotificationIcon": false,
    "window": {
      "quitOnClose": false
    }
  },
  "virtualMachine": {
    "hostResolver": true,
    "memoryInGB": 2,
    "numberCPUs": 2
  },
  "experimental": {
    "virtualMachine": {
      "networkingTunnel": true, // <--- this is the setting
      "socketVMNet": false,
      "mount": {
        "type": "reverse-sshfs",
        "9p": {
          "securityModel": "none",
          "protocolVersion": "9p2000.L",
          "msizeInKB": 128,
          "cacheMode": "mmap"
        }
      }
    }
  },
  "WSL": {
    "integrations": {}
  },
  "autoStart": false,
  "startInBackground": false,
  "hideNotificationIcon": false,
  "window": {
    "quitOnClose": false
  }
}

When networkingTunnel is enabled, a separate network namespace is created in the rancher-desktop WSL distro. The namespace is configured with a network interface that forwards all traffic destined for eth0 within the namespace to the host, making it appear as though the traffic originated from the host. This allows VPN clients to handle the routing issues that previously occurred when using corporate VPNs.

Important note: for the 1.8 release, when this experimental feature is enabled, port forwarding has to be performed manually. For more details, please take a look at this issue.
For example:

When you need to expose a port:

docker run --name mynginx1 -p 8801:80 -d nginx

you can manually expose it:

rdctl shell curl http://192.168.127.1:80/services/forwarder/expose -X POST -d '{\"local\":\":8801\",\"remote\":\"192.168.127.2:8801\"}'

And unexpose it:

rdctl shell curl http://192.168.127.1:80/services/forwarder/unexpose -X POST -d '{\"local\":\":8801\",\"remote\":\"192.168.127.2:8801\"}'

Furthermore, WSL integration and the additional features offered by the CLI server will not be enabled in this release. However, these features should be re-enabled in upcoming releases.

@Nino-K Nino-K self-assigned this Jan 19, 2023
@jandubois jandubois added the kind/epic Umbrella-bug for a group of related issues label Jan 19, 2023
@Nino-K Nino-K added this to the Next milestone Jan 24, 2023

gaktive commented Feb 13, 2023

We'll need other tickets to expose this as experimental and then handle the network namespacing for WSL. We also need to flesh out the docs side of this some more.
