Add documentation.
rohitagarwal003 committed Jun 27, 2018
1 parent 74e1908 commit 94cfb5e
Showing 2 changed files with 113 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/README.md
@@ -12,6 +12,8 @@

* **Caching Images** ([cache.md](cache.md)): Caching non-minikube images in minikube

* **GPUs** ([gpu.md](gpu.md)): Using NVIDIA GPUs on minikube

### Installation and debugging

* **Driver installation** ([drivers.md](drivers.md)): In depth instructions for installing the various hypervisor drivers
111 changes: 111 additions & 0 deletions docs/gpu.md
@@ -0,0 +1,111 @@
# (Experimental) NVIDIA GPU support in minikube

minikube has experimental support for using GPUs on Linux.

## Using NVIDIA GPUs on minikube on Linux with `--vm-driver=kvm2`

When using NVIDIA GPUs with the kvm2 vm-driver, we pass spare GPUs on the host
through to the minikube VM. Doing so has a few prerequisites:

- Install the kvm2 driver:
https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver

- Your CPU must support IOMMU. Vendors have different names for this technology.
For Intel CPUs it's called Intel VT-d. For AMD, it's called AMD-Vi. Your
motherboard must also support IOMMU.

- You must enable IOMMU in the kernel: add `intel_iommu=on` or `amd_iommu=on`
(depending on your CPU vendor) to the kernel command line. Also add `iommu=pt`
to the kernel command line.

- You must have spare GPUs that are not used on the host and can be passed
through to the VM. These GPUs must not be controlled by the nvidia/nouveau
driver. You can ensure this by either not loading the nvidia/nouveau driver on
the host at all, or by assigning the spare GPU devices to stub kernel modules
like `vfio-pci` or `pci-stub` at system boot time. You can do that by adding
the [vendorId:deviceId](https://pci-ids.ucw.cz/read/PC/10de) of your spare GPU
to the kernel command line. For example, for a Quadro M4000, add
`pci-stub.ids=10de:13f1` to the kernel command line. Note that you will have to
do this for all GPUs you want to pass through to the VM and for all other
devices in the IOMMU group of these GPUs. A concrete example of the resulting
kernel command line is sketched after these steps.

- Once you reboot the system after doing the above, you should be ready to use
GPUs with kvm2. Run the following command to start minikube:
```
minikube start --vm-driver kvm2 --gpu
```
This command will check that all the above conditions are satisfied and pass
through any spare GPUs found on the host to the VM.
If this succeeds, run the following commands:
```
minikube addons enable nvidia-gpu-device-plugin
minikube addons enable nvidia-driver-installer
```

- If everything succeeded, you should see `nvidia.com/gpu` in the node's
capacity:
```
kubectl get nodes -ojson | jq .items[].status.capacity
```
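
As a concrete illustration of the kernel command line configuration from the
steps above, here is a rough sketch assuming an Intel CPU, a GRUB-based distro,
and the Quadro M4000 example; the device ID, file paths and GRUB commands will
vary with your hardware and distro:
```
# Look up the [vendorId:deviceId] of the spare GPU (10de:13f1 is the
# Quadro M4000 used as an example above; yours will differ):
lspci -nn | grep -i nvidia

# On a GRUB-based distro, append the options to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, for example:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt pci-stub.ids=10de:13f1"
# then regenerate the GRUB config and reboot:
sudo update-grub    # on some distros: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot

# After the reboot, IOMMU groups should be populated if IOMMU is enabled:
ls /sys/kernel/iommu_groups/
```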
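
To further verify that the GPU is schedulable, you can run a pod that requests
the `nvidia.com/gpu` resource. This is only a sketch: the pod name, image and
command are placeholders, and whether `nvidia-smi` works inside the container
depends on how the drivers are exposed to it:
```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```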

### Where can I learn more about GPU passthrough?
See the excellent documentation at https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF


### Why are so many manual steps required to use GPUs with kvm2 on minikube?
These steps require elevated privileges, which minikube doesn't run with, and
they are disruptive to the host, so we decided not to perform them automatically.


## Using NVIDIA GPUs on minikube on Linux with `--vm-driver=none`

NOTE: The approach used to expose GPUs here is different from the approach used
to expose GPUs with `--vm-driver=kvm2`. Please don't mix these instructions.

- Install minikube.

- Install the nvidia driver, nvidia-docker and configure docker with nvidia as the default runtime.
See instructions at https://github.com/NVIDIA/nvidia-docker. A sketch of the
resulting docker configuration is included after these steps.

- Start minikube:
```
minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
```

- Install NVIDIA's device plugin:
```
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml
```
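
For reference, configuring docker with nvidia as the default runtime typically
ends up looking something like the following `/etc/docker/daemon.json` after
nvidia-docker 2 is installed. This is only a sketch: the runtime path shown is
the usual default and may differ on your system, and the nvidia-docker
documentation linked above is authoritative:
```
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker
```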


## Why does minikube not support NVIDIA GPUs on macOS?
The VM drivers supported by minikube on macOS don't support GPU passthrough:
- [mist64/xhyve#108](https://github.com/mist64/xhyve/issues/108)
- [moby/hyperkit#159](https://github.com/moby/hyperkit/issues/159)
- [VirtualBox docs](http://www.virtualbox.org/manual/ch09.html#pcipassthrough)

Also:
- For quite a while, all Mac hardware (both laptops and desktops) has come with
Intel or AMD GPUs (and not with NVIDIA GPUs). Recently, Apple added [support
for eGPUs](https://support.apple.com/en-us/HT208544), but even then all the
supported GPUs listed are AMD’s.

- nvidia-docker [doesn't support
macOS](https://github.com/NVIDIA/nvidia-docker/issues/101) either.


## Why does minikube not support NVIDIA GPUs on Windows?
minikube supports Windows hosts through Hyper-V or VirtualBox.

- VirtualBox doesn't support PCI passthrough on a [Windows
host](http://www.virtualbox.org/manual/ch09.html#pcipassthrough).

- Hyper-V supports DDA (discrete device assignment) but [only for Windows Server
2016](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment).

Since the only possibility of supporting GPUs on minikube on Windows is on a
server OS where users don't usually run minikube, we haven't invested time in
trying to support NVIDIA GPUs on minikube on Windows.

Also, nvidia-docker [doesn't support
Windows](https://github.com/NVIDIA/nvidia-docker/issues/197) either.
