diff --git a/cmd/gpu_plugin/README.md b/cmd/gpu_plugin/README.md
index 8eb4cc332..2e5c34ecb 100644
--- a/cmd/gpu_plugin/README.md
+++ b/cmd/gpu_plugin/README.md
@@ -5,30 +5,20 @@ Table of Contents
* [Introduction](#introduction)
* [Modes and Configuration Options](#modes-and-configuration-options)
* [Operation modes for different workload types](#operation-modes-for-different-workload-types)
+* [Installing driver and firmware for Intel GPUs](#installing-driver-and-firmware-for-intel-gpus)
+* [Pre-built Images](#pre-built-images)
* [Installation](#installation)
- * [Prerequisites](#prerequisites)
- * [Drivers for discrete GPUs](#drivers-for-discrete-gpus)
- * [Kernel driver](#kernel-driver)
- * [Intel DKMS packages](#intel-dkms-packages)
- * [Upstream kernel](#upstream-kernel)
- * [GPU Version](#gpu-version)
- * [GPU Firmware](#gpu-firmware)
- * [User-space drivers](#user-space-drivers)
- * [Drivers for older (integrated) GPUs](#drivers-for-older-integrated-gpus)
- * [Pre-built Images](#pre-built-images)
- * [Install to all nodes](#install-to-all-nodes)
- * [Install to nodes with Intel GPUs with NFD](#install-to-nodes-with-intel-gpus-with-nfd)
- * [Install to nodes with NFD, Monitoring and Shared-dev](#install-to-nodes-with-nfd-monitoring-and-shared-dev)
- * [Install to nodes with Intel GPUs with Fractional resources](#install-to-nodes-with-intel-gpus-with-fractional-resources)
- * [Fractional resources details](#fractional-resources-details)
+ * [Install with NFD](#install-with-nfd)
+  * [Install with Operator](#install-with-operator)
+  * [Install alongside GPU Aware Scheduling](#install-alongside-gpu-aware-scheduling)
-* [Verify Plugin Registration](#verify-plugin-registration)
+* [Verify Plugin Installation](#verify-plugin-installation)
* [Testing and Demos](#testing-and-demos)
-* [Labels created by GPU plugin](#labels-created-by-gpu-plugin)
-* [SR-IOV use with the plugin](#sr-iov-use-with-the-plugin)
-* [Issues with media workloads on multi-GPU setups](#issues-with-media-workloads-on-multi-gpu-setups)
+* [Notes](#notes)
+ * [Running GPU plugin as non-root](#running-gpu-plugin-as-non-root)
+ * [Labels created by GPU plugin](#labels-created-by-gpu-plugin)
+ * [SR-IOV use with the plugin](#sr-iov-use-with-the-plugin)
+ * [Issues with media workloads on multi-GPU setups](#issues-with-media-workloads-on-multi-gpu-setups)
* [Workaround for QSV and VA-API](#workaround-for-qsv-and-va-api)
-
## Introduction
Intel GPU plugin facilitates Kubernetes workload offloading by providing access to
@@ -51,7 +41,7 @@ backend libraries can offload compute operations to GPU.
| Flag | Argument | Default | Meaning |
|:---- |:-------- |:------- |:------- |
| -enable-monitoring | - | disabled | Enable 'i915_monitoring' resource that provides access to all Intel GPU devices on the node |
-| -resource-manager | - | disabled | Enable fractional resource management, [see also dependencies](#fractional-resources) |
+| -resource-manager | - | disabled | Enable fractional resource management, see [fractional resources](./fractional.md) |
| -shared-dev-num | int | 1 | Number of containers that can share the same GPU device |
| -allocation-policy | string | none | 3 possible values: balanced, packed, none. For shared-dev-num > 1: _balanced_ mode spreads workloads among GPU devices, _packed_ mode fills one GPU fully before moving to next, and _none_ selects first available device from kubelet. Default is _none_. Allocation policy does not have an effect when resource manager is enabled. |
@@ -60,104 +50,23 @@ Please use the -h option to see the complete list of logging related options.
## Operation modes for different workload types
+
Intel GPU-plugin supports a few different operation modes. Depending on the workloads the cluster is running, some modes make more sense than others. Below is a table that explains the differences between the modes and suggests workload types for each mode. Mode selection applies to the whole GPU plugin deployment, so it is a cluster wide decision.
| Mode | Sharing | Intended workloads | Suitable for time critical workloads |
|:---- |:-------- |:------- |:------- |
| shared-dev-num == 1 | No, 1 container per GPU | Workloads using all GPU capacity, e.g. AI training | Yes |
| shared-dev-num > 1 | Yes, >1 containers per GPU | (Batch) workloads using only part of GPU resources, e.g. inference, media transcode/analytics, or CPU bound GPU workloads | No |
-| shared-dev-num > 1 && resource-management | Yes and no, 1>= containers per GPU | Any. For best results, all workloads should declare their expected GPU resource usage (memory, millicores). Requires [GAS](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling). See also [fractional use](#fractional-resources-details) | Yes. 1000 millicores = exclusive GPU usage. See note below. |
+| shared-dev-num > 1 && resource-management | Depends on resource requests | Any. For requirements and usage, see [fractional resource management](./fractional.md) | Yes. 1000 millicores = exclusive GPU usage. See note below. |
> **Note**: Exclusive GPU usage with >=1000 millicores requires that also *all other GPU containers* specify (non-zero) millicores resource usage.
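+
+As an illustration, a container could claim a whole GPU exclusively with a resource request like the one below. This is a minimal sketch: it assumes resource management (with GAS) is enabled and uses the `gpu.intel.com/millicores` extended resource name from the GAS documentation; the image name is hypothetical.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: exclusive-gpu-example
+spec:
+  containers:
+  - name: workload
+    image: my-gpu-workload:latest # hypothetical image
+    resources:
+      limits:
+        gpu.intel.com/i915: 1
+        gpu.intel.com/millicores: 1000 # 1000 millicores = exclusive GPU usage
+```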
-## Installation
-
-The following sections detail how to obtain, build, deploy and test the GPU device plugin.
-
-Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
-
-### Prerequisites
-
-Access to a GPU device requires firmware, kernel and user-space
-drivers supporting it. Firmware and kernel driver need to be on the
-host, user-space drivers in the GPU workload containers.
-
-Intel GPU devices supported by the current kernel can be listed with:
-```
-$ grep i915 /sys/class/drm/card?/device/uevent
-/sys/class/drm/card0/device/uevent:DRIVER=i915
-/sys/class/drm/card1/device/uevent:DRIVER=i915
-```
-
-#### Drivers for discrete GPUs
-
-> **Note**: Kernel (on host) and user-space drivers (in containers)
-> should be installed from the same repository as there are some
-> differences between DKMS and upstream GPU driver uAPI.
-
-##### Kernel driver
-
-###### Intel DKMS packages
-
-`i915` GPU driver DKMS[^dkms] package is recommended for Intel
-discrete GPUs, until their support in upstream is complete. DKMS
-package(s) can be installed from Intel package repositories for a
-subset of older kernel versions used in enterprise / LTS
-distributions:
-https://dgpu-docs.intel.com/installation-guides/index.html
-
-[^dkms]: [intel-gpu-i915-backports](https://github.com/intel-gpu/intel-gpu-i915-backports).
-
-###### Upstream kernel
-
-Upstream Linux kernel 6.2 or newer is needed for Intel discrete GPU
-support. For now, upstream kernel is still missing support for a few
-of the features available in DKMS kernels (e.g. Level-Zero Sysman API
-GPU error counters).
+## Installing driver and firmware for Intel GPUs
-##### GPU Version
+If your host's operating system lacks driver or firmware support for Intel GPUs, see this page for help: [Drivers for Intel GPUs](./driver-firmware.md)
-PCI IDs for the Intel GPUs on given host can be listed with:
-```
-$ lspci | grep -e VGA -e Display | grep Intel
-88:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
-8d:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
-```
-
-(`lspci` lists GPUs with display support as "VGA compatible controller",
-and server GPUs without display support, as "Display controller".)
-
-Mesa "Iris" 3D driver header provides a mapping between GPU PCI IDs and their Intel brand names:
-https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/include/pci_ids/iris_pci_ids.h
-
-###### GPU Firmware
-
-If your kernel build does not find the correct firmware version for
-a given GPU from the host (see `dmesg | grep i915` output), latest
-firmware versions are available in upstream:
-https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915
-
-##### User-space drivers
-
-Until new enough user-space drivers (supporting also discrete GPUs)
-are available directly from distribution package repositories, they
-can be installed to containers from Intel package repositories. See:
-https://dgpu-docs.intel.com/installation-guides/index.html
-
-Example container is listed in [Testing and demos](#testing-and-demos).
-
-Validation status against *upstream* kernel is listed in the user-space drivers release notes:
-* Media driver: https://github.com/intel/media-driver/releases
-* Compute driver: https://github.com/intel/compute-runtime/releases
-
-#### Drivers for older (integrated) GPUs
-
-For the older (integrated) GPUs, new enough firmware and kernel driver
-are typically included already with the host OS, and new enough
-user-space drivers (for the GPU containers) are in the host OS
-repositories.
-
-### Pre-built Images
+## Pre-built Images
[Pre-built images](https://hub.docker.com/r/intel/intel-gpu-plugin)
of this component are available on the Docker hub. These images are automatically built and uploaded
@@ -165,40 +74,21 @@ to the hub from the latest main branch of this repository.
Release tagged images of the components are also available on the Docker hub, tagged with their
release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
-repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
-
-> **Note**: Replace `` with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
-
-> **Note**: Add ```--dry-run=client -o yaml``` to the ```kubectl``` commands below to visualize the yaml content being applied.
+repository.
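+
+For example, a release-tagged image can be pulled directly (the tag shown is illustrative; any release tag works):
+
+```bash
+$ docker pull intel/intel-gpu-plugin:0.28.0
+```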
See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
-#### Install to all nodes
-
-Simplest option to enable use of Intel GPUs in Kubernetes Pods.
-
-```bash
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref='
-```
-
-#### Install to nodes with Intel GPUs with NFD
-
-Deploying GPU plugin to only nodes that have Intel GPU attached. [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) is required to detect the presence of Intel GPUs.
+## Installation
-```bash
-# Start NFD - if your cluster doesn't have NFD installed yet
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref='
+There are multiple ways to install Intel GPU plugin to a cluster. The most common methods are described below. For alternative methods, see the [advanced install](./advanced-install.md) page.
-# Create NodeFeatureRules for detecting GPUs on nodes
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref='
+> **Note**: Replace `` with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
-# Create GPU plugin daemonset
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/nfd_labeled_nodes?ref='
-```
+> **Note**: Add ```--dry-run=client -o yaml``` to the ```kubectl``` commands below to visualize the yaml content being applied.
-#### Install to nodes with NFD, Monitoring and Shared-dev
+### Install with NFD
-Same as above, but configures GPU plugin with logging, [monitoring and shared-dev](#modes-and-configuration-options) features enabled. This option is useful when there is a desire to retrieve GPU metrics from nodes. For example with [XPU-Manager](https://github.com/intel/xpumanager/) or [collectd](https://github.com/collectd/collectd/tree/collectd-6.0).
+Deploy GPU plugin with the help of NFD ([Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery)). NFD detects the presence of Intel GPUs and labels the nodes accordingly. GPU plugin's node selector is then used to deploy the plugin only to nodes that have such a GPU label.
```bash
# Start NFD - if your cluster doesn't have NFD installed yet
@@ -208,66 +98,20 @@ $ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref='
# Create GPU plugin daemonset
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/monitoring_shared-dev_nfd/?ref='
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/nfd_labeled_nodes?ref='
```
-#### Install to nodes with Intel GPUs with Fractional resources
-
-With the experimental fractional resource feature you can use additional kubernetes extended
-resources, such as GPU memory, which can then be consumed by deployments. PODs will then only
-deploy to nodes where there are sufficient amounts of the extended resources for the containers.
+### Install with Operator
-(For this to work properly, all GPUs in a given node should provide equal amount of resources
-i.e. heteregenous GPU nodes are not supported.)
+GPU plugin can be installed with the Intel Device Plugin Operator. The Operator allows configuring the GPU plugin's parameters without kustomizing the deployment files. The general installation is described in the [install documentation](../operator/README.md#installation). For configuring the GPU Custom Resource (CR), see the [configuration options](#modes-and-configuration-options) and [operation modes](#operation-modes-for-different-workload-types).
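+
+For reference, a GpuDevicePlugin CR could look like the following. This is a sketch modeled on the [sample CR](../../deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml) in this repository; the metadata name and field values are illustrative.
+
+```yaml
+apiVersion: deviceplugin.intel.com/v1
+kind: GpuDevicePlugin
+metadata:
+  name: gpudeviceplugin-sample
+spec:
+  image: intel/intel-gpu-plugin:0.28.0
+  sharedDevNum: 10
+  logLevel: 4
+  enableMonitoring: true
+  nodeSelector:
+    intel.feature.node.kubernetes.io/gpu: "true"
+```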
-Enabling the fractional resource feature isn't quite as simple as just enabling the related
-command line flag. The DaemonSet needs additional RBAC-permissions
-and access to the kubelet podresources gRPC service, plus there are other dependencies to
-take care of, which are explained below. For the RBAC-permissions, gRPC service access and
-the flag enabling, it is recommended to use kustomization by running:
+### Install alongside GPU Aware Scheduling
-```bash
-# Start NFD - if your cluster doesn't have NFD installed yet
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref='
-
-# Create NodeFeatureRules for detecting GPUs on nodes
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref='
-
-# Create GPU plugin daemonset
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/fractional_resources?ref='
-```
+GPU plugin can be installed alongside GPU Aware Scheduling (GAS). It allows scheduling Pods that request, for example, only partial use of a GPU. The installation is described on the [fractional resources](./fractional.md) page.
-##### Fractional resources details
-
-Usage of these fractional GPU resources requires that the cluster has node
-extended resources with the name prefix `gpu.intel.com/`. Those can be created with NFD
-by running the [hook](/cmd/gpu_nfdhook/) installed by the plugin initcontainer. When fractional resources are
-enabled, the plugin lets a [scheduler extender](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling)
-do card selection decisions based on resource availability and the amount of extended
-resources requested in the [pod spec](https://github.com/intel/platform-aware-scheduling/blob/master/gpu-aware-scheduling/docs/usage.md#pods).
-
-The scheduler extender then needs to annotate the pod objects with unique
-increasing numeric timestamps in the annotation `gas-ts` and container card selections in
-`gas-container-cards` annotation. The latter has container separator '`|`' and card separator
-'`,`'. Example for a pod with two containers and both containers getting two cards:
-`gas-container-cards:card0,card1|card2,card3`. Enabling the fractional-resource support
-in the plugin without running such an annotation adding scheduler extender in the cluster
-will only slow down GPU-deployments, so do not enable this feature unnecessarily.
-
-In multi-tile systems, containers can request individual tiles to improve GPU resource usage.
-Tiles targeted for containers are specified to pod via `gas-container-tiles` annotation where the the annotation
-value describes a set of card and tile combinations. For example in a two container pod, the annotation
-could be `gas-container-tiles:card0:gt0+gt1|card1:gt1,card2:gt0`. Similarly to `gas-container-cards`, the container
-details are split via `|`. In the example above, the first container gets tiles 0 and 1 from card 0,
-and the second container gets tile 1 from card 1 and tile 0 from card 2.
-
-> **Note**: It is also possible to run the GPU device plugin using a non-root user. To do this,
-the nodes' DAC rules must be configured to device plugin socket creation and kubelet registration.
-Furthermore, the deployments `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
-
-### Verify Plugin Registration
+## Verify Plugin Installation
-You can verify the plugin has been registered with the expected nodes by searching for the relevant
+You can verify that the plugin has been installed on the expected nodes by searching for the relevant
resource allocation status on the nodes:
```bash
@@ -341,17 +185,27 @@ The GPU plugin functionality can be verified by deploying an [OpenCL image](../.
Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 Insufficient gpu.intel.com/i915.
```
-## Labels created by GPU plugin
+## Notes
+
+### Running GPU plugin as non-root
+
+It is possible to run the GPU device plugin using a non-root user. To do this,
+the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration.
+Furthermore, the deployment's `securityContext` must be configured with appropriate `runAsUser/runAsGroup` values.
+
+More info: https://kubernetes.io/blog/2021/11/09/non-root-containers-and-devices/
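+
+A minimal sketch of such a `securityContext` (the UID/GID values below are assumptions; use IDs that match your nodes' DAC configuration):
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: intel-gpu-plugin
+spec:
+  template:
+    spec:
+      containers:
+      - name: intel-gpu-plugin
+        securityContext:
+          runAsUser: 1000  # assumed non-root UID
+          runAsGroup: 1000 # assumed non-root GID
+```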
+
+### Labels created by GPU plugin
-If installed with NFD and started with resource-management, plugin will export a set of labels for the node. For detailed info, see [labeling documentation](./labels.md).
+If installed with NFD and started with resource management enabled, the plugin exports a set of labels for the node. For details, see the [labeling documentation](./labels.md).
-## SR-IOV use with the plugin
+### SR-IOV use with the plugin
-GPU plugin does __not__ setup SR-IOV. It has to be configured by the cluster admin.
-GPU plugin does however support provisioning Virtual Functions (VFs) to containers for a SR-IOV enabled GPU. When the plugin detects a GPU with SR-IOV VFs configured, it will only provision the VFs and leaves the PF device on the host.
+GPU plugin does __not__ set up SR-IOV. It has to be configured by the cluster admin.
+GPU plugin does, however, support provisioning Virtual Functions (VFs) to containers for an SR-IOV enabled GPU. When the plugin detects a GPU with SR-IOV VFs configured, it will only provision the VFs and leave the PF device on the host.
-## Issues with media workloads on multi-GPU setups
+### Issues with media workloads on multi-GPU setups
OneVPL media API, 3D and compute APIs provide device discovery
functionality for applications and work fine in multi-GPU setups.
@@ -376,7 +230,7 @@ options are documented here:
* QSV: https://github.com/Intel-Media-SDK/MediaSDK/wiki/FFmpeg-QSV-Multi-GPU-Selection-on-Linux
-### Workaround for QSV and VA-API
+#### Workaround for QSV and VA-API
[Render device](render-device.sh) shell script locates and outputs the
correct device file name. It can be added to the container and used
diff --git a/cmd/gpu_plugin/advanced-install.md b/cmd/gpu_plugin/advanced-install.md
new file mode 100644
index 000000000..1b1d98923
--- /dev/null
+++ b/cmd/gpu_plugin/advanced-install.md
@@ -0,0 +1,24 @@
+# Alternative installation methods for Intel GPU plugin
+
+## Install to all nodes
+
+If the target cluster does not have NFD (or you don't want to install it), Intel GPU plugin can be installed to all nodes. Note that this method consumes some unnecessary CPU resources on nodes without Intel GPUs.
+
+```bash
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref='
+```
+
+## Install to nodes via NFD, with Monitoring and Shared-dev
+
+Intel GPU plugin is installed via NFD's labels and node selector. The plugin is configured with monitoring and shared devices enabled. This option is useful when there is a desire to retrieve GPU metrics from nodes, for example with [XPU-Manager](https://github.com/intel/xpumanager/) or [collectd](https://github.com/collectd/collectd/tree/collectd-6.0).
+
+```bash
+# Start NFD - if your cluster doesn't have NFD installed yet
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref='
+
+# Create NodeFeatureRules for detecting GPUs on nodes
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref='
+
+# Create GPU plugin daemonset
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/monitoring_shared-dev_nfd/?ref='
+```
diff --git a/cmd/gpu_plugin/driver-firmware.md b/cmd/gpu_plugin/driver-firmware.md
new file mode 100644
index 000000000..1718ca97f
--- /dev/null
+++ b/cmd/gpu_plugin/driver-firmware.md
@@ -0,0 +1,80 @@
+# Driver and firmware for Intel GPUs
+
+Access to a GPU device requires firmware, kernel and user-space
+drivers supporting it. Firmware and kernel driver need to be on the
+host, user-space drivers in the GPU workload containers.
+
+Intel GPU devices supported by the current kernel can be listed with:
+```
+$ grep i915 /sys/class/drm/card?/device/uevent
+/sys/class/drm/card0/device/uevent:DRIVER=i915
+/sys/class/drm/card1/device/uevent:DRIVER=i915
+```
+
+## Drivers for discrete GPUs
+
+> **Note**: Kernel (on host) and user-space drivers (in containers)
+> should be installed from the same repository as there are some
+> differences between DKMS and upstream GPU driver uAPI.
+
+### Kernel driver
+
+#### Intel DKMS packages
+
+The `i915` GPU driver DKMS[^dkms] package is recommended for Intel
+discrete GPUs, until their support in the upstream kernel is complete. DKMS
+package(s) can be installed from Intel package repositories for a
+subset of older kernel versions used in enterprise / LTS
+distributions:
+https://dgpu-docs.intel.com/installation-guides/index.html
+
+[^dkms]: [intel-gpu-i915-backports](https://github.com/intel-gpu/intel-gpu-i915-backports).
+
+#### Upstream kernel
+
+Support for the first Intel discrete GPUs was added to the upstream Linux kernel in v6.2,
+and expanded in later versions. For now, the upstream kernel is still missing support
+for a few of the features available in DKMS kernels, listed here:
+https://dgpu-docs.intel.com/driver/kernel-driver-types.html
+
+#### GPU Version
+
+PCI IDs for the Intel GPUs on a given host can be listed with:
+```
+$ lspci | grep -e VGA -e Display | grep Intel
+88:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
+8d:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
+```
+
+(`lspci` lists GPUs with display support as "VGA compatible controller",
+and server GPUs without display support, as "Display controller".)
+
+A mapping between GPU PCI IDs and their Intel brand names is available here:
+https://dgpu-docs.intel.com/devices/hardware-table.html
+
+#### GPU Firmware
+
+If your kernel build does not find the correct firmware version for
+a given GPU from the host (see `dmesg | grep i915` output), latest
+firmware versions are available in upstream:
+https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915
+
+### User-space drivers
+
+Until new enough user-space drivers (supporting also discrete GPUs)
+are available directly from distribution package repositories, they
+can be installed to containers from Intel package repositories. See:
+https://dgpu-docs.intel.com/installation-guides/index.html
+
+An example container is listed in [Testing and demos](./README.md#testing-and-demos).
+
+Validation status against the *upstream* kernel is listed in the user-space drivers' release notes:
+* Media driver: https://github.com/intel/media-driver/releases
+* Compute driver: https://github.com/intel/compute-runtime/releases
+
+## Drivers for older (integrated) GPUs
+
+For the older (integrated) GPUs, new enough firmware and kernel driver
+are typically already included with the host OS, and new enough
+user-space drivers (for the GPU containers) are in the host OS
+repositories.
diff --git a/cmd/gpu_plugin/fractional.md b/cmd/gpu_plugin/fractional.md
new file mode 100644
index 000000000..731b636ce
--- /dev/null
+++ b/cmd/gpu_plugin/fractional.md
@@ -0,0 +1,64 @@
+# GPU plugin with GPU Aware Scheduling
+
+This is an experimental feature.
+
+Installing the GPU plugin with [GPU Aware Scheduling](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling) (GAS) enables containers to request partial (fractional) GPU resources. For example, a Pod's container can request GPU millicores or memory and use only a fraction of the GPU. The remaining resources can then be leveraged by another container.
+
+> *NOTE*: For this use case to work properly, all GPUs in a given node should provide an equal amount of resources,
+> i.e. heterogeneous GPU nodes are not supported.
+
+> *NOTE*: Resource values are used only for scheduling workloads to nodes, not for limiting their GPU usage on the nodes. A container requesting 50% of a GPU's resources is not restricted by the kernel driver or firmware from using more than 50% of the resources. A container requesting 1% of the GPU could use 100% of it.
+
+## Install GPU Aware Scheduling
+
+GAS' installation is described in its [README](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling#usage-with-nfd-and-the-gpu-plugin).
+
+## Install GPU plugin with fractional resources
+
+### With YAML deployments
+
+The GPU Plugin DaemonSet needs additional RBAC permissions and access to the kubelet podresources
+gRPC service to function. All the required changes are gathered in the `fractional_resources`
+overlay. Install GPU plugin by running:
+
+```bash
+# Start NFD - if your cluster doesn't have NFD installed yet
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref='
+
+# Create NodeFeatureRules for detecting GPUs on nodes
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref='
+
+# Create GPU plugin daemonset
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/fractional_resources?ref='
+```
+
+### With Device Plugin Operator
+
+Install the Device Plugin Operator according to the [install](../operator/README.md#installation) instructions. When applying the [GPU plugin Custom Resource](../../deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml) (CR), set the `resourceManager` option to `true`. The Operator will install all the required RBAC objects and service accounts.
+
+```yaml
+spec:
+ resourceManager: true
+```
+
+## Details about fractional resources
+
+Use of fractional GPU resources requires that the cluster has node extended resources with the name prefix `gpu.intel.com/`. Those are automatically created by the GPU plugin with the help of NFD. When fractional resources are enabled, the plugin lets GAS make card selection decisions based on resource availability and the amount of extended resources requested in the [pod spec](https://github.com/intel/platform-aware-scheduling/blob/master/gpu-aware-scheduling/docs/usage.md#pods).
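+
+As an illustration, a Pod could request fractional GPU resources as below. This is a sketch: the `gpu.intel.com/millicores` and `gpu.intel.com/memory.max` extended resource names are the ones used in the GAS usage documentation linked above, and the image name is hypothetical.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: fractional-gpu-example
+spec:
+  containers:
+  - name: workload
+    image: my-gpu-workload:latest # hypothetical image
+    resources:
+      limits:
+        gpu.intel.com/i915: 1
+        gpu.intel.com/millicores: 500        # half of one GPU's capacity
+        gpu.intel.com/memory.max: 4294967296 # 4 GiB of GPU memory
+```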
+
+GAS then annotates the pod objects with unique increasing numeric timestamps in the annotation `gas-ts` and container card selections in `gas-container-cards` annotation. The latter has container separator '`|`' and card separator '`,`'. Example for a pod with two containers and both containers getting two cards: `gas-container-cards:card0,card1|card2,card3`.
+
+Enabling the fractional resource support in the plugin without running GAS in the cluster will only slow down GPU-deployments, so do not enable this feature unnecessarily.
+
+## Tile level access and Level Zero workloads
+
+The Level Zero library supports targeting different tiles on a GPU. If the host is equipped with multi-tile GPU devices, and the container requests both `gpu.intel.com/i915` and `gpu.intel.com/tiles` resources, GPU plugin (with GAS) adds an [affinity mask](https://spec.oneapi.io/level-zero/latest/core/PROG.html#affinity-mask) to the container. By default the mask is in the "FLAT" [device hierarchy](https://spec.oneapi.io/level-zero/latest/core/PROG.html#device-hierarchy) format. With the affinity mask, two Level Zero workloads can share a two-tile GPU so that the workloads use one tile each.
+
+If a multi-tile workload is intended to work in "COMPOSITE" hierarchy mode, the container spec environment should include the hierarchy mode variable (`ZE_FLAT_DEVICE_HIERARCHY`) with the "COMPOSITE" value. GPU plugin will then adapt the affinity mask from the default "FLAT" to the "COMPOSITE" format.
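+
+A container spec fragment for the "COMPOSITE" case could look like this (a sketch; only the env entry matters here):
+
+```yaml
+    env:
+    - name: ZE_FLAT_DEVICE_HIERARCHY
+      value: "COMPOSITE"
+```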
+
+If the GPU is a single-tile device, GPU plugin does not set the affinity mask. Exposing the GPU devices is enough in that case.
+
+### Details about tile resources
+
+GAS makes the GPU and tile selection based on the Pod's resource specification. The selection is passed to GPU plugin via the Pod's annotation.
+
+Tiles targeted for containers are specified to the Pod via the `gas-container-tiles` annotation, where the annotation value describes a set of card and tile combinations. For example, in a two container pod, the annotation could be `gas-container-tiles:card0:gt0+gt1|card1:gt1,card2:gt0`. Similarly to `gas-container-cards`, the container details are split via `|`. In the example above, the first container gets tiles 0 and 1 from card 0, and the second container gets tile 1 from card 1 and tile 0 from card 2.
diff --git a/cmd/gpu_plugin/gpu_plugin.go b/cmd/gpu_plugin/gpu_plugin.go
index 50ee1fab9..6f1ad4018 100644
--- a/cmd/gpu_plugin/gpu_plugin.go
+++ b/cmd/gpu_plugin/gpu_plugin.go
@@ -403,6 +403,29 @@ func (dp *devicePlugin) isCompatibleDevice(name string) bool {
return true
}
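+
+// devSpecForDrmFile validates a DRM device file name and, for a real (non-control)
+// device node present under devfs, returns a read-write DeviceSpec for it.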
+func (dp *devicePlugin) devSpecForDrmFile(drmFile string) (devSpec pluginapi.DeviceSpec, devPath string, err error) {
+ if dp.controlDeviceReg.MatchString(drmFile) {
+ // Skipping possible drm control node
+ err = os.ErrInvalid
+
+ return
+ }
+
+ devPath = path.Join(dp.devfsDir, drmFile)
+ if _, err = os.Stat(devPath); err != nil {
+ return
+ }
+
+ // even querying metrics requires device to be writable
+ devSpec = pluginapi.DeviceSpec{
+ HostPath: devPath,
+ ContainerPath: devPath,
+ Permissions: "rw",
+ }
+
+ return
+}
+
func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
files, err := os.ReadDir(dp.sysfsDir)
if err != nil {
@@ -413,6 +436,7 @@ func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
devTree := dpapi.NewDeviceTree()
rmDevInfos := rm.NewDeviceInfoMap()
+ tileCounts := []uint64{}
for _, f := range files {
var nodes []pluginapi.DeviceSpec
@@ -429,25 +453,14 @@ func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
}
isPFwithVFs := pluginutils.IsSriovPFwithVFs(path.Join(dp.sysfsDir, f.Name()))
+ tileCounts = append(tileCounts, labeler.GetTileCount(dp.sysfsDir, f.Name()))
for _, drmFile := range drmFiles {
- if dp.controlDeviceReg.MatchString(drmFile.Name()) {
- //Skipping possible drm control node
+ devSpec, devPath, devSpecErr := dp.devSpecForDrmFile(drmFile.Name())
+ if devSpecErr != nil {
continue
}
- devPath := path.Join(dp.devfsDir, drmFile.Name())
- if _, err := os.Stat(devPath); err != nil {
- continue
- }
-
- // even querying metrics requires device to be writable
- devSpec := pluginapi.DeviceSpec{
- HostPath: devPath,
- ContainerPath: devPath,
- Permissions: "rw",
- }
-
if !isPFwithVFs {
klog.V(4).Infof("Adding %s to GPU %s", devPath, f.Name())
@@ -487,6 +500,7 @@ func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
if dp.resMan != nil {
dp.resMan.SetDevInfos(rmDevInfos)
+ dp.resMan.SetTileCountPerCard(tileCounts)
}
return devTree, nil
diff --git a/cmd/gpu_plugin/gpu_plugin_test.go b/cmd/gpu_plugin/gpu_plugin_test.go
index 707eee438..0277a089f 100644
--- a/cmd/gpu_plugin/gpu_plugin_test.go
+++ b/cmd/gpu_plugin/gpu_plugin_test.go
@@ -61,6 +61,9 @@ func (m *mockResourceManager) GetPreferredFractionalAllocation(*v1beta1.Preferre
return &v1beta1.PreferredAllocationResponse{}, &dpapi.UseDefaultMethodError{}
}
+func (m *mockResourceManager) SetTileCountPerCard(counts []uint64) {
+}
+
func createTestFiles(root string, devfsdirs, sysfsdirs []string, sysfsfiles map[string][]byte) (string, string, error) {
sysfs := path.Join(root, "sys")
devfs := path.Join(root, "dev")
diff --git a/cmd/gpu_plugin/rm/gpu_plugin_resource_manager.go b/cmd/gpu_plugin/rm/gpu_plugin_resource_manager.go
index 56ed3d3fd..491d27fe1 100644
--- a/cmd/gpu_plugin/rm/gpu_plugin_resource_manager.go
+++ b/cmd/gpu_plugin/rm/gpu_plugin_resource_manager.go
@@ -25,6 +25,7 @@ import (
"net"
"net/http"
"os"
+ "slices"
"sort"
"strconv"
"strings"
@@ -43,7 +44,7 @@ import (
pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
"k8s.io/kubernetes/pkg/kubelet/apis/podresources"
- "k8s.io/utils/strings/slices"
+ sslices "k8s.io/utils/strings/slices"
)
const (
@@ -52,6 +53,11 @@ const (
gasTileAnnotation = "gas-container-tiles"
levelZeroAffinityMaskEnvVar = "ZE_AFFINITY_MASK"
+ levelZeroHierarchyEnvVar = "ZE_FLAT_DEVICE_HIERARCHY"
+
+ hierarchyModeComposite = "COMPOSITE"
+ hierarchyModeFlat = "FLAT"
+ hierarchyModeCombined = "COMBINED"
grpcAddress = "unix:///var/lib/kubelet/pod-resources/kubelet.sock"
grpcBufferSize = 4 * 1024 * 1024
@@ -99,6 +105,7 @@ type ResourceManager interface {
CreateFractionalResourceResponse(*pluginapi.AllocateRequest) (*pluginapi.AllocateResponse, error)
GetPreferredFractionalAllocation(*pluginapi.PreferredAllocationRequest) (*pluginapi.PreferredAllocationResponse, error)
SetDevInfos(DeviceInfoMap)
+ SetTileCountPerCard(counts []uint64)
}
type containerAssignments struct {
@@ -124,6 +131,7 @@ type resourceManager struct {
mutex sync.RWMutex // for devTree updates during scan
cleanupMutex sync.RWMutex // for assignment details during cleanup
useKubelet bool
+ tileCountPerCard uint64
}
// NewDeviceInfo creates a new DeviceInfo.
@@ -365,7 +373,7 @@ func (rm *resourceManager) listPodsOnNodeWithStates(states []string) map[string]
for i := range podList.Items {
phase := string(podList.Items[i].Status.Phase)
- if slices.Contains(states, phase) {
+ if sslices.Contains(states, phase) {
key := getPodKey(&podList.Items[i])
pods[key] = &podList.Items[i]
}
@@ -470,7 +478,7 @@ func (rm *resourceManager) GetPreferredFractionalAllocation(request *pluginapi.P
pod := podCandidate.pod
containerIndex := podCandidate.allocatedContainerCount
cards := containerCards(pod, containerIndex)
- affinityMask := containerTileAffinityMask(pod, containerIndex)
+ affinityMask := containerTileAffinityMask(pod, containerIndex, int(rm.tileCountPerCard))
podKey := getPodKey(pod)
creq := request.ContainerRequests[0]
@@ -743,6 +751,25 @@ func (rm *resourceManager) SetDevInfos(deviceInfos DeviceInfoMap) {
rm.deviceInfos = deviceInfos
}
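+
+// SetTileCountPerCard records the common per-card tile count for the node's GPUs.
+// Heterogeneous tile counts are not supported; such input is ignored with a warning.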
+func (rm *resourceManager) SetTileCountPerCard(counts []uint64) {
+ if len(counts) == 0 {
+ return
+ }
+
+ minCount := slices.Min(counts)
+ maxCount := slices.Max(counts)
+
+ if minCount != maxCount {
+ klog.Warningf("Node's GPUs are heterogeneous (min: %d, max: %d tiles)", minCount, maxCount)
+
+ return
+ }
+
+ rm.mutex.Lock()
+ defer rm.mutex.Unlock()
+ rm.tileCountPerCard = maxCount
+}
+
func (rm *resourceManager) createAllocateResponse(deviceIds []string, tileAffinityMask string) (*pluginapi.AllocateResponse, error) {
rm.mutex.Lock()
defer rm.mutex.Unlock()
@@ -835,7 +862,40 @@ func containerCards(pod *v1.Pod, gpuUsingContainerIndex int) []string {
return nil
}
-func convertTileInfoToEnvMask(tileInfo string) string {
+// guessLevelZeroHierarchyMode guesses the Level Zero hierarchy mode for the container.
+// Defaults to the "FLAT" mode if no valid mode is set in the container's env variables.
+func guessLevelZeroHierarchyMode(pod *v1.Pod, containerIndex int) string {
+ klog.V(4).Infof("Checking pod %s envs", pod.Name)
+
+ if containerIndex < len(pod.Spec.Containers) {
+ c := pod.Spec.Containers[containerIndex]
+
+ if c.Env != nil {
+ for _, env := range c.Env {
+ if env.Name == levelZeroHierarchyEnvVar {
+ switch env.Value {
+ // Check that the value is valid.
+ case hierarchyModeComposite:
+ fallthrough
+ case hierarchyModeFlat:
+ fallthrough
+ case hierarchyModeCombined:
+ klog.V(4).Infof("Returning %s hierarchy", env.Value)
+ return env.Value
+ }
+
+ break
+ }
+ }
+ }
+ }
+
+ klog.V(4).Infof("Returning default %s hierarchy", hierarchyModeFlat)
+
+ return hierarchyModeFlat
+}
+
+func convertTileInfoToEnvMask(tileInfo string, tilesPerCard int, hierarchyMode string) string {
cards := strings.Split(tileInfo, ",")
tileIndices := make([]string, len(cards))
@@ -849,7 +909,7 @@ func convertTileInfoToEnvMask(tileInfo string) string {
tiles := strings.Split(cardTileSplit[1], "+")
- var combos []string
+ var maskItems []string
for _, tile := range tiles {
if !strings.HasPrefix(tile, "gt") {
@@ -865,13 +925,22 @@ func convertTileInfoToEnvMask(tileInfo string) string {
return ""
}
- levelZeroCardTileCombo :=
- strconv.FormatInt(int64(i), 10) + "." +
- strconv.FormatInt(tileNo, 10)
- combos = append(combos, levelZeroCardTileCombo)
+ maskItem := ""
+ if hierarchyMode == hierarchyModeComposite {
+ maskItem =
+ strconv.FormatInt(int64(i), 10) + "." +
+ strconv.FormatInt(tileNo, 10)
+ } else {
+ // This handles both FLAT and COMBINED hierarchy.
+ devIndex := i*tilesPerCard + int(tileNo)
+
+ maskItem = strconv.FormatInt(int64(devIndex), 10)
+ }
+
+ maskItems = append(maskItems, maskItem)
}
- tileIndices[i] = strings.Join(combos, ",")
+ tileIndices[i] = strings.Join(maskItems, ",")
}
return strings.Join(tileIndices, ",")
@@ -880,13 +949,15 @@ func convertTileInfoToEnvMask(tileInfo string) string {
// containerTiles returns the tile indices to use for a single container.
// Indices should be passed to level zero env variable to guide execution
// gpuUsingContainerIndex 0 == first gpu-using container in the pod.
+// The affinity mask is not needed for 1-tile GPUs. With 1-tile GPUs, normal
+// GPU exposing is enough to limit a container's access to the targeted devices.
// annotation example:
// gas-container-tiles=card0:gt0+gt1,card1:gt0|card2:gt1+gt2||card0:gt3.
-func containerTileAffinityMask(pod *v1.Pod, gpuUsingContainerIndex int) string {
+func containerTileAffinityMask(pod *v1.Pod, gpuUsingContainerIndex, tilesPerCard int) string {
fullAnnotation := pod.Annotations[gasTileAnnotation]
onlyDividers := strings.Count(fullAnnotation, "|") == len(fullAnnotation)
- if fullAnnotation == "" || onlyDividers {
+ if fullAnnotation == "" || onlyDividers || tilesPerCard <= 1 {
return ""
}
@@ -895,13 +966,13 @@ func containerTileAffinityMask(pod *v1.Pod, gpuUsingContainerIndex int) string {
i := 0
- for _, containerTileInfo := range tileLists {
+ for containerIndex, containerTileInfo := range tileLists {
if len(containerTileInfo) == 0 {
continue
}
if i == gpuUsingContainerIndex {
- return convertTileInfoToEnvMask(containerTileInfo)
+ return convertTileInfoToEnvMask(containerTileInfo, tilesPerCard, guessLevelZeroHierarchyMode(pod, containerIndex))
}
i++
diff --git a/cmd/gpu_plugin/rm/gpu_plugin_resource_manager_test.go b/cmd/gpu_plugin/rm/gpu_plugin_resource_manager_test.go
index 6f53d4455..ae8038da3 100644
--- a/cmd/gpu_plugin/rm/gpu_plugin_resource_manager_test.go
+++ b/cmd/gpu_plugin/rm/gpu_plugin_resource_manager_test.go
@@ -419,6 +419,8 @@ func TestCreateFractionalResourceResponse(t *testing.T) {
for _, tCase := range testCases {
rm := newMockResourceManager(tCase.pods)
+ rm.SetTileCountPerCard([]uint64{1})
+
_, perr := rm.GetPreferredFractionalAllocation(&v1beta1.PreferredAllocationRequest{
ContainerRequests: tCase.prefContainerRequests,
})
@@ -468,6 +470,12 @@ func TestCreateFractionalResourceResponseWithOneCardTwoTiles(t *testing.T) {
"gpu.intel.com/i915": resource.MustParse("1"),
},
},
+ Env: []v1.EnvVar{
+ {
+ Name: levelZeroHierarchyEnvVar,
+ Value: hierarchyModeComposite,
+ },
+ },
},
},
},
@@ -493,6 +501,7 @@ func TestCreateFractionalResourceResponseWithOneCardTwoTiles(t *testing.T) {
}
rm := newMockResourceManager(tCase.pods)
+ rm.SetTileCountPerCard([]uint64{2})
_, perr := rm.GetPreferredFractionalAllocation(&v1beta1.PreferredAllocationRequest{
ContainerRequests: tCase.prefContainerRequests,
@@ -534,6 +543,12 @@ func TestCreateFractionalResourceResponseWithTwoCardsOneTile(t *testing.T) {
"gpu.intel.com/i915": resource.MustParse("2"),
},
},
+ Env: []v1.EnvVar{
+ {
+ Name: levelZeroHierarchyEnvVar,
+ Value: hierarchyModeComposite,
+ },
+ },
},
},
},
@@ -559,6 +574,7 @@ func TestCreateFractionalResourceResponseWithTwoCardsOneTile(t *testing.T) {
}
rm := newMockResourceManager(tCase.pods)
+ rm.SetTileCountPerCard([]uint64{5})
_, perr := rm.GetPreferredFractionalAllocation(&v1beta1.PreferredAllocationRequest{
ContainerRequests: tCase.prefContainerRequests,
@@ -605,6 +621,12 @@ func TestCreateFractionalResourceResponseWithThreeCardsTwoTiles(t *testing.T) {
"gpu.intel.com/i915": resource.MustParse("3"),
},
},
+ Env: []v1.EnvVar{
+ {
+ Name: levelZeroHierarchyEnvVar,
+ Value: hierarchyModeComposite,
+ },
+ },
},
},
},
@@ -630,6 +652,7 @@ func TestCreateFractionalResourceResponseWithThreeCardsTwoTiles(t *testing.T) {
}
rm := newMockResourceManager(tCase.pods)
+ rm.SetTileCountPerCard([]uint64{5})
_, perr := rm.GetPreferredFractionalAllocation(&v1beta1.PreferredAllocationRequest{
ContainerRequests: tCase.prefContainerRequests,
@@ -676,6 +699,12 @@ func TestCreateFractionalResourceResponseWithMultipleContainersTileEach(t *testi
"gpu.intel.com/i915": resource.MustParse("1"),
},
},
+ Env: []v1.EnvVar{
+ {
+ Name: levelZeroHierarchyEnvVar,
+ Value: hierarchyModeComposite,
+ },
+ },
},
{
Resources: v1.ResourceRequirements{
@@ -683,6 +712,12 @@ func TestCreateFractionalResourceResponseWithMultipleContainersTileEach(t *testi
"gpu.intel.com/i915": resource.MustParse("1"),
},
},
+ Env: []v1.EnvVar{
+ {
+ Name: levelZeroHierarchyEnvVar,
+ Value: hierarchyModeComposite,
+ },
+ },
},
},
},
@@ -712,6 +747,7 @@ func TestCreateFractionalResourceResponseWithMultipleContainersTileEach(t *testi
}
rm := newMockResourceManager(tCase.pods)
+ rm.SetTileCountPerCard([]uint64{2})
_, perr := rm.GetPreferredFractionalAllocation(&v1beta1.PreferredAllocationRequest{
ContainerRequests: properPrefContainerRequests,
@@ -738,26 +774,77 @@ func TestCreateFractionalResourceResponseWithMultipleContainersTileEach(t *testi
func TestTileAnnotationParsing(t *testing.T) {
type parseTest struct {
- line string
- result string
- index int
+ line string
+ result string
+ hierarchys []string
+ index int
+ tilesPerCard int
}
parseTests := []parseTest{
{
- line: "card1:gt1",
- index: 0,
- result: "0.1",
+ line: "card1:gt1",
+ index: 0,
+ result: "0.1",
+ hierarchys: []string{"COMPOSITE"},
+ tilesPerCard: 2,
},
{
- line: "card1:gt1+gt2",
- index: 0,
- result: "0.1,0.2",
+ line: "card1:gt0",
+ index: 0,
+ result: "",
+ hierarchys: []string{"COMPOSITE"},
+ tilesPerCard: 1,
},
{
- line: "card1:gt1+gt2,card2:gt0",
- index: 0,
- result: "0.1,0.2,1.0",
+ line: "card1:gt1+gt2",
+ index: 0,
+ result: "0.1,0.2",
+ hierarchys: []string{"COMPOSITE"},
+ tilesPerCard: 3,
+ },
+ // Invalid hierarchy defaults to FLAT
+ {
+ line: "card1:gt1+gt2,card2:gt0",
+ index: 0,
+ result: "1,2,3",
+ hierarchys: []string{"FOOBAR"},
+ tilesPerCard: 3,
+ },
+ {
+ line: "card1:gt1+gt2,card2:gt0",
+ index: 0,
+ result: "1,2,3",
+ hierarchys: []string{"FLAT"},
+ tilesPerCard: 3,
+ },
+ {
+ line: "||card1:gt1+gt2,card2:gt0",
+ index: 0,
+ result: "1,2,3",
+ hierarchys: []string{"", "", "FLAT"},
+ tilesPerCard: 3,
+ },
+ {
+ line: "||||card1:gt3,card5:gt1",
+ index: 0,
+ result: "3,9",
+ hierarchys: []string{"", "", "", "", "FLAT"},
+ tilesPerCard: 8,
+ },
+ {
+ line: "card1:gt1+gt2,card2:gt1",
+ index: 0,
+ result: "1,2,4",
+ hierarchys: []string{"COMBINED"},
+ tilesPerCard: 3,
+ },
+ {
+ line: "card1:gt1,card2:gt1",
+ index: 0,
+ result: "1,3",
+ hierarchys: []string{"COMBINED"},
+ tilesPerCard: 2,
},
{
line: "card1:gt1",
@@ -765,24 +852,31 @@ func TestTileAnnotationParsing(t *testing.T) {
result: "",
},
{
- line: "card1:gt1|card2:gt4",
- index: 1,
- result: "0.4",
+ line: "card1:gt1|card2:gt4",
+ index: 1,
+ result: "4",
+ tilesPerCard: 5,
},
{
- line: "card1:gt1|card2:gt4,card3:gt2",
- index: 1,
- result: "0.4,1.2",
+ line: "card1:gt1|card2:gt4,card3:gt2",
+ index: 1,
+ result: "0.4,1.2",
+ hierarchys: []string{"COMPOSITE", "COMPOSITE"},
+ tilesPerCard: 5,
},
{
- line: "card1:gt1|card2:gt4,card3:gt2|card5:gt0",
- index: 2,
- result: "0.0",
+ line: "card1:gt1|card2:gt4,card3:gt2|card5:gt0",
+ index: 2,
+ result: "0.0",
+ hierarchys: []string{"COMPOSITE", "COMPOSITE", "COMPOSITE"},
+ tilesPerCard: 5,
},
{
- line: "||card5:gt0,card6:gt4||",
- index: 0,
- result: "0.0,1.4",
+ line: "||card5:gt0,card6:gt4||",
+ index: 0,
+ result: "0.0,1.4",
+ hierarchys: []string{"", "", "COMPOSITE"},
+ tilesPerCard: 5,
},
{
line: "||card5:gt0,card6:gt4||",
@@ -815,14 +909,18 @@ func TestTileAnnotationParsing(t *testing.T) {
result: "",
},
{
- line: "card1:gt1||card2:gt4,card3:gt2",
- index: 1,
- result: "0.4,1.2",
+ line: "card1:gt1||card2:gt4,card3:gt2",
+ index: 1,
+ result: "0.4,1.2",
+ hierarchys: []string{"", "", "COMPOSITE"},
+ tilesPerCard: 6,
},
{
- line: "|||card2:gt7",
- index: 0,
- result: "0.7",
+ line: "|||card2:gt7",
+ index: 0,
+ result: "0.7",
+ hierarchys: []string{"", "", "", "COMPOSITE"},
+ tilesPerCard: 8,
},
{
line: "card5",
@@ -831,7 +929,7 @@ func TestTileAnnotationParsing(t *testing.T) {
},
}
- for _, pt := range parseTests {
+ for testIndex, pt := range parseTests {
pod := v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
@@ -839,9 +937,25 @@ func TestTileAnnotationParsing(t *testing.T) {
},
}
- ret := containerTileAffinityMask(&pod, pt.index)
+ if pt.hierarchys != nil {
+ // Create enough containers
+ pod.Spec.Containers = make([]v1.Container, 10)
+
+ for i := range pod.Spec.Containers {
+ if i < len(pt.hierarchys) {
+ pod.Spec.Containers[i].Env = []v1.EnvVar{
+ {
+ Name: levelZeroHierarchyEnvVar,
+ Value: pt.hierarchys[i],
+ },
+ }
+ }
+ }
+ }
+
+ ret := containerTileAffinityMask(&pod, pt.index, max(1, pt.tilesPerCard))
- expectTruef(ret == pt.result, t, pt.line, "resulting mask is wrong. correct: %v, got: %v", pt.result, ret)
+ expectTruef(ret == pt.result, t, pt.line, "resulting mask is wrong (test index=%d). correct: %v, got: %v", testIndex, pt.result, ret)
}
}
diff --git a/cmd/gpu_plugin/usage-scenarios.png b/cmd/gpu_plugin/usage-scenarios.png
new file mode 100644
index 000000000..bd4dda730
Binary files /dev/null and b/cmd/gpu_plugin/usage-scenarios.png differ
diff --git a/cmd/internal/labeler/labeler.go b/cmd/internal/labeler/labeler.go
index 5a827cc17..869bd8da4 100644
--- a/cmd/internal/labeler/labeler.go
+++ b/cmd/internal/labeler/labeler.go
@@ -163,10 +163,10 @@ func fallback() uint64 {
return getEnvVarNumber(memoryOverrideEnv)
}
-func (l *labeler) getMemoryAmount(gpuName string, numTiles uint64) uint64 {
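+// GetMemoryAmount reads the GPU's per-tile local memory size from sysfs and
+// returns the total amount for numTiles tiles, minus the configured reserved amount.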
+func GetMemoryAmount(sysfsDrmDir, gpuName string, numTiles uint64) uint64 {
reserved := getEnvVarNumber(memoryReservedEnv)
- filePath := filepath.Join(l.sysfsDRMDir, gpuName, "lmem_total_bytes")
+ filePath := filepath.Join(sysfsDrmDir, gpuName, "lmem_total_bytes")
dat, err := os.ReadFile(filePath)
if err != nil {
@@ -183,9 +183,9 @@ func (l *labeler) getMemoryAmount(gpuName string, numTiles uint64) uint64 {
return totalPerTile*numTiles - reserved
}
-// getTileCount reads the tile count.
-func (l *labeler) getTileCount(gpuName string) (numTiles uint64) {
- filePath := filepath.Join(l.sysfsDRMDir, gpuName, "gt/gt*")
+// GetTileCount reads the tile count.
+func GetTileCount(sysfsDrmDir, gpuName string) (numTiles uint64) {
+ filePath := filepath.Join(sysfsDrmDir, gpuName, "gt/gt*")
files, _ := filepath.Glob(filePath)
@@ -196,9 +196,9 @@ func (l *labeler) getTileCount(gpuName string) (numTiles uint64) {
return uint64(len(files))
}
-// getNumaNode reads the cards numa node.
-func (l *labeler) getNumaNode(gpuName string) int {
- filePath := filepath.Join(l.sysfsDRMDir, gpuName, "device/numa_node")
+// GetNumaNode reads the card's NUMA node.
+func GetNumaNode(sysfsDrmDir, gpuName string) int {
+ filePath := filepath.Join(sysfsDrmDir, gpuName, "device/numa_node")
data, err := os.ReadFile(filePath)
if err != nil {
@@ -295,14 +295,14 @@ func (l *labeler) createLabels() error {
return errors.Wrap(err, "gpu name parsing error")
}
- numTiles := l.getTileCount(gpuName)
+ numTiles := GetTileCount(l.sysfsDRMDir, gpuName)
tileCount += int(numTiles)
- memoryAmount := l.getMemoryAmount(gpuName, numTiles)
+ memoryAmount := GetMemoryAmount(l.sysfsDRMDir, gpuName, numTiles)
gpuNumList = append(gpuNumList, gpuName[4:])
// get numa node of the GPU
- numaNode := l.getNumaNode(gpuName)
+ numaNode := GetNumaNode(l.sysfsDRMDir, gpuName)
if numaNode >= 0 {
// and store the gpu under that node id
diff --git a/deployments/gpu_plugin/overlays/fractional_resources/add-args.yaml b/deployments/gpu_plugin/overlays/fractional_resources/add-args.yaml
index a438bab4c..033f5ff00 100644
--- a/deployments/gpu_plugin/overlays/fractional_resources/add-args.yaml
+++ b/deployments/gpu_plugin/overlays/fractional_resources/add-args.yaml
@@ -10,3 +10,4 @@ spec:
args:
- "-shared-dev-num=300"
- "-resource-manager"
+ - "-enable-monitoring"
diff --git a/deployments/gpu_plugin/overlays/nfd_labeled_nodes/add-args.yaml b/deployments/gpu_plugin/overlays/nfd_labeled_nodes/add-args.yaml
new file mode 100644
index 000000000..999a6622e
--- /dev/null
+++ b/deployments/gpu_plugin/overlays/nfd_labeled_nodes/add-args.yaml
@@ -0,0 +1,12 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: intel-gpu-plugin
+spec:
+ template:
+ spec:
+ containers:
+ - name: intel-gpu-plugin
+ args:
+ - "-enable-monitoring"
+ - "-v=2"
diff --git a/deployments/gpu_plugin/overlays/nfd_labeled_nodes/kustomization.yaml b/deployments/gpu_plugin/overlays/nfd_labeled_nodes/kustomization.yaml
index 38178d290..b0ea98bea 100644
--- a/deployments/gpu_plugin/overlays/nfd_labeled_nodes/kustomization.yaml
+++ b/deployments/gpu_plugin/overlays/nfd_labeled_nodes/kustomization.yaml
@@ -2,3 +2,4 @@ resources:
- ../../base
patches:
- path: add-nodeselector-intel-gpu.yaml
+ - path: add-args.yaml
diff --git a/deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml b/deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml
index f40706026..d120d7187 100644
--- a/deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml
+++ b/deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml
@@ -6,5 +6,6 @@ spec:
image: intel/intel-gpu-plugin:0.28.0
sharedDevNum: 10
logLevel: 4
+ enableMonitoring: true
nodeSelector:
intel.feature.node.kubernetes.io/gpu: "true"