## Understanding the feature
### What is Uncore Cache
Traditional CPU processors have a single monolithic uncore cache, also referred to as last-level-cache or Level 3 cache, that is shared among all cores on the processors. In order to reduce the distance and latency between the CPU cores and the uncore cache, x86 and ARM based processors have introduced a split uncore cache architecture where the last-level-cache is divided into multiple physical caches that are aligned to specific CPU groupings.
The matrix below shows the [cpu-to-cpu latency](https://github.com/nviennot/core-to-core-latency) measured in nanoseconds (lower is better) when passing a packet between CPUs via its cache coherence protocol on a split uncore cache processor. In this example, the processor consists of 2 uncore caches. Each uncore cache consists of 8 CPUs.
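On Linux, the CPU-to-uncore-cache mapping can be read from sysfs: each CPU's `index3` (last-level cache) directory exposes a `shared_cpu_list` file, for example `/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list`. The sketch below parses that list format into CPU groups; the sample strings are illustrative values matching the 16-CPU, 2-uncore-cache processor described above, not read from a real machine.

```python
def parse_cpu_list(cpu_list: str) -> list[int]:
    """Parse a sysfs CPU list string such as '0-7' or '0-3,8-11'
    (the format used by .../cache/index3/shared_cpu_list)."""
    cpus = []
    for part in cpu_list.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

# Illustrative values for a 16-CPU processor with two uncore caches:
print(parse_cpu_list("0-7"))   # CPUs sharing uncore cache 0
print(parse_cpu_list("8-15"))  # CPUs sharing uncore cache 1
```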
Blue entries in the matrix represent latency between CPUs sharing the same uncore cache, while grey entries indicate latency between CPUs on different uncore caches. Latency between CPUs that belong to different caches is higher than latency between CPUs that share the same cache.
With `prefer-align-cpus-by-uncorecache` enabled, the static CPU Manager will allocate CPU resources for a container such that all CPUs assigned to a container share the same uncore cache. This policy operates on a best-effort basis, aiming to minimize the distribution of a container's CPU resources across uncore caches, based on the container's requirements and allocatable resources on the node.
By concentrating CPU resources within a single uncore cache, or a minimal number of them, applications running on processors with split uncore caches can benefit from reduced cache latency (as seen in the matrix above) and reduced contention against other workloads, resulting in higher throughput.
The diagram below illustrates uncore cache alignment when the feature is enabled.
In the Default Static Policy case, containers are assigned CPU resources in a packed manner. As a result, Container 1 and Container 2 can experience a noisy-neighbor effect due to cache access contention on Uncore Cache 0. Additionally, Container 2 has CPUs distributed across both caches, which can introduce cross-cache latency.
With `prefer-align-cpus-by-uncorecache` enabled, each container is isolated on an individual cache. This resolves the cache contention between the containers and minimizes the cache latency for the CPUs being utilized.
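The best-effort behavior described above can be sketched as a simplified allocator. This is a hypothetical model for illustration, not the kubelet's actual code: try to satisfy the whole request from one uncore cache, and only spill across caches when no single cache has enough free CPUs.

```python
def allocate(request: int, free_by_cache: dict[int, set[int]]) -> list[int]:
    """Best-effort uncore-cache-aligned CPU allocation (simplified sketch).

    free_by_cache maps an uncore cache id to the set of free CPU ids on it.
    """
    # Best fit: among caches that can hold the whole request, pick the one
    # with the fewest free CPUs, keeping larger blocks free for later pods.
    candidates = [c for c, cpus in free_by_cache.items() if len(cpus) >= request]
    if candidates:
        cache = min(candidates, key=lambda c: len(free_by_cache[c]))
        chosen = sorted(free_by_cache[cache])[:request]
        free_by_cache[cache] -= set(chosen)
        return chosen
    # Fallback: no single cache fits, so spread across caches (alignment
    # is best-effort, the request is still satisfied).
    chosen: list[int] = []
    for cpus in free_by_cache.values():
        take = sorted(cpus)[: request - len(chosen)]
        chosen.extend(take)
        cpus -= set(take)
        if len(chosen) == request:
            break
    return chosen

free = {0: set(range(8)), 1: set(range(8, 16))}
print(allocate(6, free))  # -> [0, 1, 2, 3, 4, 5]  (fits entirely in cache 0)
print(allocate(4, free))  # -> [8, 9, 10, 11]      (cache 0 has only 2 left)
```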
## Use cases
Common use cases can include telco applications like vRAN, Mobile Packet Core, and Firewalls. It's important to note that the optimization provided by `prefer-align-cpus-by-uncorecache` can be dependent on the workload. For example, applications that are memory bandwidth bound may not benefit from uncore cache alignment, as utilizing more uncore caches can increase memory bandwidth access.
## Enabling the feature
To enable this feature, set the CPU Manager policy to `static` and add `prefer-align-cpus-by-uncorecache` to the CPU Manager policy options.
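For example, via the kubelet configuration file (a sketch: on releases where the option is still alpha it also sits behind the `CPUManagerPolicyAlphaOptions` feature gate, and the static policy requires some CPUs to be reserved for system daemons; the reserved CPU value below is illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  prefer-align-cpus-by-uncorecache: "true"
# The static policy needs reserved CPUs, e.g. via reservedSystemCPUs
# or the kube-reserved/system-reserved settings (value is illustrative):
reservedSystemCPUs: "0"
```

After changing the configuration, restart the kubelet for the new policy options to take effect.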