Merge branch 'readme' into 'main'
Update README

See merge request nvidia/cloud-native/vgpu-device-manager!3
cdesiniotis committed Jun 27, 2022
Commit 863b212 (2 parents: 01e5784 + 090ed85)
Showing 2 changed files with 79 additions and 27 deletions.
README.md (72 additions, 1 deletion)
@@ -1 +1,72 @@
-# vGPU Device Manager
+# NVIDIA vGPU Device Manager

The `NVIDIA vGPU Device Manager` manages vGPU devices on a GPU node in a Kubernetes cluster.
It defines a schema for declaratively specifying the list of vGPU types one would like to create on the node.
The vGPU Device Manager parses this schema and applies the desired configuration by creating vGPU devices following the steps outlined in the
[NVIDIA vGPU User Guide](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#creating-vgpu-device-red-hat-el-kvm).

As an example, consider the following configuration for a node with NVIDIA A100 PCIe 40GB cards.

```
version: v1
vgpu-configs:
  default:
  - "A100-40C"
  # NVIDIA A100 PCIe 40GB, C-Series
  A100-40C:
  - "A100-40C"
  A100-20C:
  - "A100-20C"
  A100-10C:
  - "A100-10C"
  A100-8C:
  - "A100-8C"
  A100-5C:
  - "A100-5C"
  A100-4C:
  - "A100-4C"
  # Custom configurations
  A100-small:
  - "A100-4C"
  - "A100-5C"
  A100-medium:
  - "A100-8C"
  - "A100-10C"
  A100-large:
  - "A100-20C"
  - "A100-40C"
```

Each of the sections under `vgpu-configs` is user-defined, with custom labels used to refer to them. For example, the `A100-20C` label refers to the vGPU configuration that creates vGPU devices of type `A100-20C` on all GPUs on the node. Likewise, the `A100-4C` label refers to the vGPU configuration that creates vGPU devices of type `A100-4C` on all GPUs on the node.

More than one vGPU type can be associated with a configuration. For example, the `A100-small` label specifies both the `A100-4C` and `A100-5C` vGPU types. If the node has multiple A100 cards, then vGPU devices of both types will be created on the node. More specifically, the vGPU Device Manager selects vGPU types in a round-robin fashion as it creates devices: devices of type `A100-4C` are created on the first card, `A100-5C` on the second, `A100-4C` on the third, and so on.

## Prerequisites

- [NVIDIA vGPU Manager](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#installing-configuring-grid-vgpu) is installed on the system.
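
As a quick sanity check of this prerequisite on KVM-based hypervisors, the vGPU types exposed by the vGPU Manager appear under the host's mdev sysfs tree. The exact sysfs layout can vary with the GPU and driver mode, so this is only an illustrative check, not an official step:

```
# List the vGPU types advertised by the host driver (KVM-based hypervisors)
ls /sys/class/mdev_bus/*/mdev_supported_types/
cat /sys/class/mdev_bus/*/mdev_supported_types/*/name
```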

## Usage

**Note:** Currently this project can only be deployed on Kubernetes, and the only supported deployment method is through the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/overview.html). It is not meant to be run as a standalone component, and no CLI utility exists. The instructions below deploy the vGPU Device Manager as a standalone DaemonSet for development purposes.

First, create a vGPU devices configuration file. The example file in `examples/` can be used as a starting point:

```
wget -O config.yaml https://raw.githubusercontent.com/NVIDIA/vgpu-device-manager/main/examples/config-example.yaml
```

Modify `config.yaml` as needed. Then, create a ConfigMap for it:

```
kubectl create configmap vgpu-devices-config --from-file=config.yaml
```
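
To confirm the ConfigMap contains your configuration before deploying, you can inspect it (run in the same namespace as the command above):

```
kubectl get configmap vgpu-devices-config -o yaml
```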

Deploy the vGPU Device Manager:

```
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/vgpu-device-manager/main/examples/nvidia-vgpu-device-manager-example.yaml
```
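
After the manifest is applied, check that the vGPU Device Manager pod is running. The filter below assumes the resources created by the example manifest include `vgpu-device-manager` in their names; adjust it if you have renamed them:

```
kubectl get daemonsets,pods --all-namespaces | grep -i vgpu-device-manager
```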

The example DaemonSet will apply the `default` vGPU configuration by default. To override and pick a new configuration, label the worker node `nvidia.com/vgpu.config=<config>`, where `<config>` is the name of a valid configuration in `config.yaml`. The vGPU Device Manager continuously watches for changes to this label.
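
For example, to switch a node to the `A100-small` configuration from the sample config shown earlier (substitute your own node name and any configuration label defined in your `config.yaml`):

```
kubectl label node <node-name> nvidia.com/vgpu.config=A100-small --overwrite
```

The `--overwrite` flag lets the same command replace a configuration label that was applied previously.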
examples/nvidia-vgpu-device-manager-example.yaml (7 additions, 26 deletions)
```
@@ -29,44 +29,25 @@ spec:
             fieldRef:
               fieldPath: spec.nodeName
         - name: CONFIG_FILE
-          value: "/vgpu-config/config.yaml"
+          value: "/vgpu-devices-config/config.yaml"
         - name: DEFAULT_VGPU_CONFIG
           value: "default"
         securityContext:
           privileged: true
         volumeMounts:
-        - mountPath: /vgpu-config
-          name: vgpu-config
+        - mountPath: /vgpu-devices-config
+          name: vgpu-devices-config
         - mountPath: /sys
           name: host-sys
       volumes:
-      - name: vgpu-config
+      - name: vgpu-devices-config
         configMap:
-          name: vgpu-config
+          name: vgpu-devices-config
       - name: host-sys
         hostPath:
           path: /sys
           type: Directory

----
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: vgpu-config
-  namespace: default
-data:
-  config.yaml: |
-    version: v1
-    vgpu-configs:
-      default:
-      - A10-4C
-      - A10-8C
-      a10-full-profile:
-      - A10-24C
-      a10-small:
-      - A10-4C
-      a10-custom:
-      - A10-8C
-      - A10-12C
 ---
 apiVersion: v1
 kind: ServiceAccount
```
