Commit 4e98cc8

committed: update uncorecache blog
1 parent 5df71e8 commit 4e98cc8

File tree

4 files changed: +13 -5 lines changed

content/en/blog/_posts/2025-xx-xx-prefer-align-cpus-by-uncore-cache-beta/index.md renamed to content/en/blog/_posts/2025-07-29-prefer-align-cpus-by-uncore-cache-beta/index.md

Lines changed: 13 additions & 5 deletions
@@ -1,7 +1,7 @@
 ---
 layout: blog
 title: 'Kubernetes v1.34: Introducing CPU Manager Static Policy Option for Uncore Cache Alignment'
-date: 2025-xx-xx
+date: 2025-07-29
 slug: prefer-align-by-uncore-cache-cpumanager-static-policy-optimization
 author: Charles Wong (AMD)
 ---
@@ -10,21 +10,29 @@ A new CPU Manager Static Policy Option called `prefer-align-cpus-by-uncorecache`
 
 ## Understanding the feature
 
-### What is Split Uncore Cache Architecture
-Split uncore caches (also referred to as last-level-caches) is an architectural design available for x86 and ARM based processors. Instead of a large monolithic uncore cache uniformly shared across all cores, a split uncore cache architecture divides the cache into separate segments that are aligned with specific CPU groupings.
+### What is Uncore Cache
+Traditional CPU processors have a single monolithic uncore cache, also referred to as last-level cache or Level 3 cache, that is shared among all cores on the processor. To reduce the distance and latency between the CPU cores and the uncore cache, x86 and ARM based processors have introduced a split uncore cache architecture, in which the last-level cache is divided into multiple physical caches that are aligned to specific CPU groupings.
 
 ![architecture_diagram](./mono_vs_split_uncore.png)
 
 ### Benefit of the feature
-The matrix below shows the [cpu-to-cpu latency](https://github.com/nviennot/core-to-core-latency) measured in nanoseconds when passing a packet between CPUs via its cache coherence protocol on a split uncore cache processor.
+The matrix below shows the [cpu-to-cpu latency](https://github.com/nviennot/core-to-core-latency), measured in nanoseconds (lower is better), when passing a packet between CPUs via the cache coherence protocol on a split uncore cache processor. In this example, the processor consists of 2 uncore caches, each shared by 8 CPUs.
 
 ![cpu_to_cpu_latency](./c2c_latency.png)
 Blue entries in the matrix represent latency between CPUs sharing the same uncore cache, while grey entries indicate latency between CPUs on different uncore caches. Latency between CPUs on different caches is higher than latency between CPUs that belong to the same cache.
 
 With `prefer-align-cpus-by-uncorecache` enabled, the static CPU Manager will allocate CPU resources for a container such that all CPUs assigned to the container share the same uncore cache. This policy operates on a best-effort basis, aiming to minimize the distribution of a container's CPU resources across uncore caches, based on the container's requirements and the allocatable resources on the node.
 
 By concentrating CPU resources within a single uncore cache, or a minimal number of them, applications running on processors with split uncore caches can benefit from reduced cache latency (as seen in the matrix above) and reduced contention against other workloads, resulting in higher throughput.
 
+The diagram below illustrates uncore cache alignment when the feature is enabled.
+
+![cache-align-diagram](./cache-align-diagram.png)
+
+In the default static policy case, containers are assigned CPU resources in a packed methodology. As a result, Container 1 and Container 2 can experience a noisy-neighbor impact due to cache access contention on Uncore Cache 0. Additionally, Container 2 has CPUs distributed across both caches, which can introduce cross-cache latency.
+
+With `prefer-align-cpus-by-uncorecache` enabled, each container is isolated on an individual cache. This resolves the cache contention between the containers and minimizes the cache latency for the CPUs being utilized.
+
 ## Use cases
-Common use cases can include telco applications like vRAN, Mobile Packet Core, and Firewalls. It's important to note that the optimization provided by `prefer-align-cpus-by-uncorecache` is dependent on the workload. For example, applications that are memory bandwidth bound may not benefit from uncore cache alignment, as utilizing more uncore caches can increase memory bandwidth access.
+Common use cases include telco applications such as vRAN, Mobile Packet Core, and firewalls. Note that the optimization provided by `prefer-align-cpus-by-uncorecache` can be dependent on the workload. For example, applications that are memory-bandwidth bound may not benefit from uncore cache alignment, since utilizing more uncore caches can increase memory bandwidth access.
 
 ## Enabling the feature
 To enable this feature, set the CPU Manager Policy to `static` and enable the CPU Manager Policy Options with `prefer-align-cpus-by-uncorecache`.
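As a sketch of that enablement step, the corresponding kubelet configuration would look roughly like the following. The `cpuManagerPolicy` and `cpuManagerPolicyOptions` field names come from the standard `KubeletConfiguration` API; depending on the release, a feature gate for beta policy options may also need to be enabled, so treat this fragment as illustrative rather than definitive:

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# Use the static CPU Manager policy so CPUs can be exclusively assigned.
cpuManagerPolicy: "static"
# Opt in to best-effort uncore cache alignment.
cpuManagerPolicyOptions:
  prefer-align-cpus-by-uncorecache: "true"
```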
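Note that the static CPU Manager assigns exclusive CPUs only to containers in the Guaranteed QoS class that request an integer number of CPUs. A workload intended to benefit from uncore cache alignment might therefore look like this hypothetical pod (the name and image are placeholders, and `cpu: "8"` assumes the 8-CPUs-per-cache example above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: uncore-aligned-workload   # hypothetical example name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      # Requests equal limits with integer CPUs => Guaranteed QoS,
      # so the static policy can pin these CPUs, preferring one uncore cache.
      requests:
        cpu: "8"
        memory: "2Gi"
      limits:
        cpu: "8"
        memory: "2Gi"
```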
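To check whether a node actually has a split uncore cache, Linux exposes each CPU's last-level-cache sharing set at `/sys/devices/system/cpu/cpu<N>/cache/index3/shared_cpu_list`. The sketch below groups CPUs into uncore cache domains from those strings; the sample values are hypothetical, mirroring the 16-CPU, 2-cache processor from the latency matrix rather than read from real hardware:

```python
def parse_cpu_list(s):
    """Parse a Linux 'shared_cpu_list' string such as '0-7' or '0-3,8-11'
    into a set of CPU ids."""
    cpus = set()
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Hypothetical sysfs readings: CPUs 0-7 share one uncore cache, CPUs 8-15
# share the other. On a real node, each value would be read from
# /sys/devices/system/cpu/cpu<N>/cache/index3/shared_cpu_list.
shared_lists = ["0-7"] * 8 + ["8-15"] * 8

# Distinct sharing sets correspond to distinct uncore cache domains.
uncore_caches = {frozenset(parse_cpu_list(s)) for s in shared_lists}
print(len(uncore_caches))  # -> 2
```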
