150+ microservices Kubernetes
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}
+
+{{< kubeweekly id="kubeweekly" >}}
diff --git a/content/bn/examples/application/mysql/mysql-deployment.yaml b/content/bn/examples/application/mysql/mysql-deployment.yaml
index 419fbe03d3ff0..4630b4d84abe9 100644
--- a/content/bn/examples/application/mysql/mysql-deployment.yaml
+++ b/content/bn/examples/application/mysql/mysql-deployment.yaml
@@ -25,7 +25,7 @@ spec:
app: mysql
spec:
containers:
- - image: mysql:5.6
+ - image: mysql:9
name: mysql
env:
# Use secret in real usage
diff --git a/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md b/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md
index 442817faa38c4..cd9b49aa67f5d 100644
--- a/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md
+++ b/content/en/blog/_posts/2019-01-15-container-storage-interface-ga.md
@@ -17,7 +17,7 @@ The GA milestone indicates that Kubernetes users may depend on the feature and i
Although prior to CSI Kubernetes provided a powerful volume plugin system, it was challenging to add support for new volume plugins to Kubernetes: volume plugins were “in-tree” meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries—vendors wanting to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in core Kubernetes binaries and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain.
-CSI was developed as a standard for exposing arbitrary block and file storage storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. With the adoption of the Container Storage Interface, the Kubernetes volume layer becomes truly extensible. Using CSI, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This gives Kubernetes users more options for storage and makes the system more secure and reliable.
+CSI was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. With the adoption of the Container Storage Interface, the Kubernetes volume layer becomes truly extensible. Using CSI, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This gives Kubernetes users more options for storage and makes the system more secure and reliable.
## What’s new?
diff --git a/content/en/blog/_posts/XXXXX-pqc-in-k8s.md b/content/en/blog/_posts/2025-07-18-pqc-in-k8s.md
similarity index 99%
rename from content/en/blog/_posts/XXXXX-pqc-in-k8s.md
rename to content/en/blog/_posts/2025-07-18-pqc-in-k8s.md
index db9ede596633e..58de170329bf6 100644
--- a/content/en/blog/_posts/XXXXX-pqc-in-k8s.md
+++ b/content/en/blog/_posts/2025-07-18-pqc-in-k8s.md
@@ -2,10 +2,10 @@
layout: blog
title: "Post-Quantum Cryptography in Kubernetes"
slug: pqc-in-k8s
-date: XXXX
-canonicalUrl: XXXX
+date: 2025-07-18
+canonicalUrl: https://www.kubernetes.dev/blog/2025/07/18/pqc-in-k8s/
author: "Fabian Kammel (ControlPlane)"
-draft: true
+draft: false
---
The world of cryptography is on the cusp of a major shift with the advent of
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index 5d20244bd8569..be8bef544784a 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -296,63 +296,6 @@ the kubelet can use topology hints when making resource assignment decisions.
See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/)
for more information.
-## Swap memory management {#swap-memory}
-
-{{< feature-state feature_gate_name="NodeSwap" >}}
-
-To enable swap on a node, the `NodeSwap` feature gate must be enabled on
-the kubelet (default is true), and the `--fail-swap-on` command line flag or `failSwapOn`
-[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
-must be set to false.
-To allow Pods to utilize swap, `swapBehavior` should not be set to `NoSwap` (which is the default behavior) in the kubelet config.
-
-{{< warning >}}
-When the memory swap feature is turned on, Kubernetes data such as the content
-of Secret objects that were written to tmpfs now could be swapped to disk.
-{{< /warning >}}
-
-A user can also optionally configure `memorySwap.swapBehavior` in order to
-specify how a node will use swap memory. For example,
-
-```yaml
-memorySwap:
- swapBehavior: LimitedSwap
-```
-
-- `NoSwap` (default): Kubernetes workloads will not use swap.
-- `LimitedSwap`: The utilization of swap memory by Kubernetes workloads is subject to limitations.
- Only Pods of Burstable QoS are permitted to employ swap.
-
-If configuration for `memorySwap` is not specified and the feature gate is
-enabled, by default the kubelet will apply the same behaviour as the
-`NoSwap` setting.
-
-With `LimitedSwap`, Pods that do not fall under the Burstable QoS classification (i.e.
-`BestEffort`/`Guaranteed` Qos Pods) are prohibited from utilizing swap memory.
-To maintain the aforementioned security and node health guarantees, these Pods
-are not permitted to use swap memory when `LimitedSwap` is in effect.
-
-Prior to detailing the calculation of the swap limit, it is necessary to define the following terms:
-
-* `nodeTotalMemory`: The total amount of physical memory available on the node.
-* `totalPodsSwapAvailable`: The total amount of swap memory on the node that is available for use by Pods
- (some swap memory may be reserved for system use).
-* `containerMemoryRequest`: The container's memory request.
-
-Swap limitation is configured as:
-`(containerMemoryRequest / nodeTotalMemory) * totalPodsSwapAvailable`.
-
-It is important to note that, for containers within Burstable QoS Pods, it is possible to
-opt-out of swap usage by specifying memory requests that are equal to memory limits.
-Containers configured in this manner will not have access to swap memory.
-
-Swap is supported only with **cgroup v2**, cgroup v1 is not supported.
-
-For more information, and to assist with testing and provide feedback, please
-see the blog-post about [Kubernetes 1.28: NodeSwap graduates to Beta1](/blog/2023/08/24/swap-linux-beta/),
-[KEP-2400](https://github.com/kubernetes/enhancements/issues/4128) and its
-[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
-
## {{% heading "whatsnext" %}}
Learn more about the following:
diff --git a/content/en/docs/concepts/cluster-administration/swap-memory-management.md b/content/en/docs/concepts/cluster-administration/swap-memory-management.md
new file mode 100644
index 0000000000000..16e9a90457339
--- /dev/null
+++ b/content/en/docs/concepts/cluster-administration/swap-memory-management.md
@@ -0,0 +1,402 @@
+---
+title: Swap memory management
+content_type: concept
+weight: 10
+---
+
+
+
+Kubernetes can be configured to use swap memory on a {{< glossary_tooltip text="node" term_id="node" >}},
+allowing the kernel to free up physical memory by swapping out pages to backing storage.
+This is useful for multiple use cases.
+For example, it benefits nodes running workloads with large memory footprints
+that only access a portion of that memory at any given time.
+It also helps prevent Pods from being terminated during memory pressure spikes,
+shields nodes from system-level memory spikes that might compromise their stability,
+allows for more flexible memory management on the node, and much more.
+
+
+
+## How to use it?
+
+### Prerequisites
+
+- Swap must be enabled and provisioned on the node.
+- The node must run a Linux operating system.
+- The node must use cgroup v2. Kubernetes does not support swap on cgroup v1 nodes.
+
+## Enabling swap for Kubernetes workloads
+
+To allow Kubernetes workloads to use swap,
+you must disable the kubelet's default behavior of failing when swap is detected,
+and specify memory-swap behavior as `LimitedSwap`:
+
+**Update kubelet configuration:**
+
+```yaml
+# this fragment goes into the kubelet's configuration file
+failSwapOn: false
+memorySwap:
+  swapBehavior: LimitedSwap
+```
+
+The available choices for `swapBehavior` are:
+- `NoSwap` (default): Kubernetes workloads cannot use swap. However, processes
+ outside of Kubernetes' scope, like system daemons (such as kubelet itself!) can utilize swap.
+ This behavior is beneficial for protecting the node from system-level memory spikes,
+ but it does not safeguard the workloads themselves from such spikes.
+- `LimitedSwap`: Kubernetes workloads can utilize swap memory.
+ The amount of swap available to a Pod is determined automatically.
+ For more details, see the [section below](#how-is-the-swap-limit-being-determined-with-limitedswap).
+
+If configuration for `memorySwap` is not specified,
+by default the kubelet will apply the same behaviour as the `NoSwap` setting.
+
+Bear in mind that the following Pods are excluded from swap access
+(see more info in the [section below](#how-is-the-swap-limit-being-determined-with-limitedswap)):
+- Pods that are not classified as Burstable QoS.
+- High-priority Pods.
+- Containers whose memory limit is equal to their memory request.
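+
+As a rough sketch (the Pod name and image are placeholders, not part of any
+official example), the following Pod would be eligible to use swap under
+`LimitedSwap`: it is Burstable QoS because its memory request is lower than its
+memory limit, and it does not set a high priority:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: swap-eligible-example   # placeholder name
+spec:
+  containers:
+  - name: app
+    image: nginx                # placeholder image
+    resources:
+      requests:
+        memory: "256Mi"
+      limits:
+        memory: "512Mi"         # higher than the request, so the Pod is Burstable QoS
+```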
+
+{{< note >}}
+
+Kubernetes only supports swap for Linux nodes.
+
+{{< /note >}}
+
+## How does it work?
+
+There are a number of ways that swap can be provisioned and used on a node.
+If the kubelet is already running on a node, it needs to be restarted after swap is provisioned in order to detect it.
+
+When kubelet starts on a node in which swap is provisioned and available
+(with the `failSwapOn: false` configuration), kubelet will:
+- Be able to start on this swap-enabled node.
+- Direct the Container Runtime Interface (CRI) implementation, often referred to as the container runtime,
+to allocate zero swap memory to Kubernetes workloads by default.
+
+Swap configuration on a node is exposed to a cluster admin via the
+[`memorySwap` in the KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1).
+As a cluster administrator, you can specify the node's behaviour in the
+presence of swap memory by setting `memorySwap.swapBehavior`.
+
+The kubelet uses the container runtime API, and directs the container runtime to
+apply specific configuration (for example, in the cgroup v2 case, `memory.swap.max`) in a manner that will
+enable the desired swap configuration for a container. For runtimes that use control groups, or cgroups,
+the container runtime is then responsible for writing these settings to the container-level cgroup.
+
+## Observability for swap use
+
+### Node and container level metric statistics
+
+The kubelet collects node- and container-level metric statistics,
+which can be accessed at the `/metrics/resource` endpoint (used mainly by monitoring
+tools like Prometheus) and the `/stats/summary` endpoint (used mainly by Autoscalers).
+This allows clients that can directly query the kubelet to
+monitor swap usage and remaining swap memory when using `LimitedSwap`.
+Additionally, a `machine_swap_bytes` metric has been added to cadvisor to show
+the total physical swap capacity of the machine.
+See [this page](/docs/reference/instrumentation/node-metrics/) for more info.
+
+For example, these `/metrics/resource` metrics are supported:
+- `node_swap_usage_bytes`: Current swap usage of the node in bytes.
+- `container_swap_usage_bytes`: Current amount of the container swap usage in bytes.
+- `container_swap_limit_bytes`: Current amount of the container swap limit in bytes.
+
+### Using `kubectl top --show-swap`
+
+Querying metrics is valuable, but somewhat cumbersome, as these metrics
+are designed to be used by software rather than humans.
+In order to consume this data in a more user-friendly way,
+the `kubectl top` command has been extended to support swap metrics, using the `--show-swap` flag.
+
+In order to receive information about swap usage on nodes, `kubectl top nodes --show-swap` can be used:
+```shell
+kubectl top nodes --show-swap
+```
+
+This will result in an output similar to:
+```
+NAME CPU(cores) CPU(%) MEMORY(bytes) MEMORY(%) SWAP(bytes) SWAP(%)
+node1 1m 10% 2Mi 10% 1Mi 0%
+node2 5m 10% 6Mi 10% 2Mi 0%
+node3 3m 10% 4Mi 10%
+```
+
+In order to receive information about swap usage by Pods, `kubectl top pod --show-swap` can be used:
+```shell
+kubectl top pod -n kube-system --show-swap
+```
+
+This will result in an output similar to:
+```
+NAME CPU(cores) MEMORY(bytes) SWAP(bytes)
+coredns-58d5bc5cdb-5nbk4 2m 19Mi 0Mi
+coredns-58d5bc5cdb-jsh26 3m 37Mi 0Mi
+etcd-node01 51m 143Mi 5Mi
+kube-apiserver-node01 98m 824Mi 16Mi
+kube-controller-manager-node01 20m 135Mi 9Mi
+kube-proxy-ffgs2 1m 24Mi 0Mi
+kube-proxy-fhvwx 1m 39Mi 0Mi
+kube-scheduler-node01 13m 69Mi 0Mi
+metrics-server-8598789fdb-d2kcj 5m 26Mi 0Mi
+```
+
+### Nodes report swap capacity as part of node status
+
+The node status includes a field, `node.status.nodeInfo.swap.capacity`, that reports the swap capacity of a node.
+
+As an example, the following command can be used to retrieve the swap capacity of the nodes in a cluster:
+```shell
+kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}: {{if .status.nodeInfo.swap.capacity}}{{.status.nodeInfo.swap.capacity}}{{else}}<unknown>{{end}}{{"\n"}}{{end}}'
+```
+
+This will result in an output similar to:
+```
+node1: 21474836480
+node2: 42949664768
+node3: <unknown>
+```
+
+{{< note >}}
+
+The `<unknown>` value indicates that the `.status.nodeInfo.swap.capacity` field is not set for that Node.
+This probably means that the node does not have swap provisioned, or less likely,
+that the kubelet is not able to determine the swap capacity of the node.
+
+{{< /note >}}
+
+### Swap discovery using Node Feature Discovery (NFD) {#node-feature-discovery}
+
+[Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery)
+is a Kubernetes addon for detecting hardware features and configuration.
+It can be utilized to discover which nodes are provisioned with swap.
+
+As an example, to figure out which nodes are provisioned with swap,
+use the following command:
+```shell
+kubectl get nodes -o jsonpath='{range .items[?(@.metadata.labels.feature\.node\.kubernetes\.io/memory-swap)]}{.metadata.name}{"\t"}{.metadata.labels.feature\.node\.kubernetes\.io/memory-swap}{"\n"}{end}'
+```
+
+This will result in an output similar to:
+```
+k8s-worker1: true
+k8s-worker2: true
+k8s-worker3: false
+```
+
+In this example, swap is provisioned on nodes `k8s-worker1` and `k8s-worker2`, but not on `k8s-worker3`.
+
+## Risks and caveats
+
+{{< caution >}}
+
+It is strongly encouraged to encrypt the swap space.
+See the [Memory-backed volumes](#memory-backed-volumes) section for more info.
+
+{{< /caution >}}
+
+Having swap available on a system reduces predictability.
+While swap can enhance performance by making more RAM available, swapping data
+back to memory is a heavy operation, sometimes slower by many orders of magnitude,
+which can cause unexpected performance regressions.
+Furthermore, swap changes a system's behaviour under memory pressure.
+Enabling swap increases the risk of noisy neighbors,
+where Pods that frequently use their RAM may cause other Pods to swap.
+In addition, swap allows workloads to use more memory than can be predictably accounted for,
+and the scheduler currently does not account for swap memory usage.
+Combined with unexpected packing configurations, this heightens the risk of noisy neighbors.
+
+The performance of a node with swap memory enabled depends on the underlying physical storage.
+When swap memory is in use, performance will be significantly worse in an I/O
+operations per second (IOPS) constrained environment, such as a cloud VM with
+I/O throttling, when compared to faster storage mediums like solid-state drives
+or NVMe.
+As swap might cause I/O pressure, it is recommended to give a higher I/O latency
+priority to system-critical daemons. See the relevant part of the
+[recommended practices](#good-practice-for-using-swap-in-a-kubernetes-cluster) section below.
+
+### Memory-backed volumes
+
+On Linux nodes, memory-backed volumes (such as [`secret`](/docs/concepts/configuration/secret/)
+volume mounts, or [`emptyDir`](/docs/concepts/storage/volumes/#emptydir) with `medium: Memory`)
+are implemented with a `tmpfs` filesystem.
+The contents of such volumes should remain in memory at all times, and hence should
+not be swapped to disk.
+To ensure that the contents of such volumes remain in memory, the `noswap` tmpfs option
+is used.
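+
+As a reference only, this is a minimal sketch of a Pod that uses such a
+memory-backed volume (all names are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: memory-backed-volume-example   # placeholder name
+spec:
+  containers:
+  - name: app
+    image: nginx                        # placeholder image
+    volumeMounts:
+    - name: scratch
+      mountPath: /scratch
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory    # backed by tmpfs; its contents should never be swapped to disk
+```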
+
+The Linux kernel officially supports the `noswap` option from version 6.3 (more info
+can be found in [Linux Kernel Version Requirements](/docs/reference/node/kernel-version-requirements/#requirements-other)).
+However, distributions often backport this mount option to older
+kernel versions as well.
+
+In order to verify whether the node supports the `noswap` option, the kubelet does the following:
+* If the kernel's version is 6.3 or later, the `noswap` option is assumed to be supported.
+* Otherwise, the kubelet tries to mount a dummy tmpfs with the `noswap` option at startup.
+  * If the mount fails with an error indicating an unknown option, `noswap` is assumed
+    to not be supported and is not used. The kubelet emits a warning log entry that
+    memory-backed volumes might swap to disk, then continues its execution.
+  * If the mount succeeds, the dummy tmpfs is deleted and the `noswap` option is used.
+
+Note that handling encrypted swap is not within the scope of the kubelet;
+rather, it is a general OS configuration concern and should be addressed at that level.
+It is the administrator's responsibility to provision encrypted swap to mitigate the risk
+of memory-backed volume contents being written to disk unencrypted.
+
+### Evictions
+
+Configuring memory eviction thresholds for swap-enabled nodes can be tricky.
+
+With swap being disabled, it is reasonable to configure kubelet's eviction thresholds
+to be a bit lower than the node's memory capacity.
+The rationale is that we want Kubernetes to start evicting Pods before the node runs out of memory
+and invokes the Out Of Memory (OOM) killer, since the OOM killer is not Kubernetes-aware and
+therefore does not consider things like QoS, Pod priority, or other Kubernetes-specific factors.
+
+With swap enabled, the situation is more complex.
+In Linux, the `vm.min_free_kbytes` parameter defines the memory threshold for the kernel
+to start aggressively reclaiming memory, which includes swapping out pages.
+If the kubelet's eviction thresholds are set in a way that eviction would take place
+before the kernel starts reclaiming memory, it could lead to workloads never
+being able to swap out during node memory pressure.
+However, setting the eviction thresholds too high could result in the node running out of memory
+and invoking the OOM killer, which is not ideal either.
+
+To address this, it is recommended to set the kubelet's eviction thresholds
+to be slightly lower than the `vm.min_free_kbytes` value.
+This way, the node can start swapping before kubelet would start evicting Pods,
+allowing workloads to swap out unused data and preventing evictions from happening.
+On the other hand, since it is just slightly lower, kubelet is likely to start evicting Pods
+before the node runs out of memory, thus avoiding the OOM killer.
+
+The value of `vm.min_free_kbytes` can be determined by running the following command on the node:
+```shell
+cat /proc/sys/vm/min_free_kbytes
+```
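+
+As a sketch only, assuming purely for illustration that `vm.min_free_kbytes` on
+the node corresponds to roughly 100 MiB, a kubelet configuration fragment that
+follows the guidance above could set the memory eviction threshold slightly
+below that value:
+
+```yaml
+# fragment of the kubelet's configuration file;
+# the value is an illustrative assumption, not a recommendation
+evictionHard:
+  memory.available: "90Mi"
+```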
+
+### Unutilized swap space
+
+Under the `LimitedSwap` behavior, the amount of swap available to a Pod is determined automatically,
+based on the proportion of the memory requested relative to the node's total memory
+(for more details, see the [section below](#how-is-the-swap-limit-being-determined-with-limitedswap)).
+
+This design means that some portion of the swap space will usually remain
+unavailable to Kubernetes workloads.
+For example, since Guaranteed QoS Pods are currently not permitted to use swap,
+the amount of swap that is proportional to their memory requests remains unused
+by Kubernetes workloads.
+
+This behavior carries some risk in a situation where many pods are not eligible for swapping.
+On the other hand, it effectively keeps some system-reserved amount of swap memory that can be used by processes
+outside of Kubernetes' scope, such as system daemons and even kubelet itself.
+
+## Good practice for using swap in a Kubernetes cluster
+
+### Disable swap for system-critical daemons
+
+During the testing phase and based on user feedback, it was observed that the performance
+of system-critical daemons and services might degrade.
+This implies that system daemons, including the kubelet, could operate slower than usual.
+If this issue is encountered, it is advisable to configure the cgroup of the system slice
+to prevent swapping (i.e., set `memory.swap.max=0`).
+
+### Protect system-critical daemons for I/O latency
+
+Swap can increase the I/O load on a node.
+When memory pressure causes the kernel to rapidly swap pages in and out,
+system-critical daemons and services that rely on I/O operations may
+experience performance degradation.
+
+To mitigate this, it is recommended for systemd users to prioritize the system slice in terms of I/O latency.
+For non-systemd users,
+setting up a dedicated cgroup for system daemons and processes and prioritizing I/O latency in the same way is advised.
+This can be achieved by setting `io.latency` for the system slice,
+thereby granting it higher I/O priority.
+See [cgroup's documentation](https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst) for more info.
+
+### Swap and control plane nodes
+
+The Kubernetes project recommends running control plane nodes without any swap space configured.
+The control plane primarily hosts Guaranteed QoS Pods, so swap can generally be disabled.
+The main concern is that swapping critical services on the control plane could negatively impact performance.
+
+### Use of a dedicated disk for swap
+
+The Kubernetes project recommends using encrypted swap whenever you run nodes with swap enabled.
+If swap resides on a partition or on the root filesystem, workloads may interfere
+with system processes that need to write to disk.
+When they share the same disk, processes can overwhelm swap,
+disrupting the I/O of the kubelet, the container runtime, and systemd, which would impact other workloads.
+Since swap space is located on a disk, it is crucial to ensure the disk is fast enough for the intended use cases.
+Alternatively, one can configure I/O priorities between different mapped areas of a single backing device.
+
+### Swap-aware scheduling
+
+Kubernetes {{< skew currentVersion >}} does not support allocating Pods to nodes in a way that accounts
+for swap memory usage. The scheduler typically uses _requests_ for infrastructure resources
+to guide Pod placement, and Pods do not request swap space; they just request `memory`.
+This means that the scheduler does not consider swap memory when making scheduling decisions.
+While this is something we are actively working on, it is not yet implemented.
+
+To ensure that Pods are not scheduled onto nodes with swap memory unless they
+are specifically intended to use it, administrators can taint the nodes that
+have swap available.
+Taints ensure that workloads which do not tolerate swap will not spill onto
+swap-enabled nodes under load.
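+
+The following is a minimal sketch of this pattern; the taint key is a
+hypothetical example, not a standard Kubernetes key. The first fragment shows a
+taint on a swap-enabled Node, and the second shows the matching toleration on a
+workload that is intended to use swap:
+
+```yaml
+# fragment of a Node object: taint applied to a swap-enabled node
+spec:
+  taints:
+  - key: example.com/swap-enabled   # hypothetical taint key
+    value: "true"
+    effect: NoSchedule
+---
+# fragment of a Pod spec: toleration for a workload intended to use swap
+tolerations:
+- key: example.com/swap-enabled     # must match the taint key above
+  operator: Equal
+  value: "true"
+  effect: NoSchedule
+```
+
+The same taint could also be applied with
+`kubectl taint nodes <node-name> example.com/swap-enabled=true:NoSchedule`.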
+
+### Selecting storage for optimal performance
+
+The storage device designated for swap space is critical to maintaining system responsiveness
+during high memory usage.
+Rotational hard disk drives (HDDs) are ill-suited for this task as their mechanical nature introduces significant latency,
+leading to severe performance degradation and system thrashing.
+For modern performance needs, a device such as a Solid State Drive (SSD) is probably the appropriate choice for swap,
+as its low-latency electronic access minimizes the slowdown.
+
+
+## Swap behavior details
+
+### How is the swap limit being determined with LimitedSwap?
+
+The configuration of swap memory, including its limitations, presents a significant
+challenge. Not only is it prone to misconfiguration, but as a system-level property, any
+misconfiguration could potentially compromise the entire node rather than just a specific
+workload. To mitigate this risk and ensure the health of the node, swap is implemented
+with automatic configuration of limitations.
+
+With `LimitedSwap`, Pods that do not fall under the Burstable QoS classification (i.e.
+`BestEffort`/`Guaranteed` QoS Pods) are prohibited from utilizing swap memory.
+`BestEffort` QoS Pods exhibit unpredictable memory consumption patterns and lack
+information regarding their memory usage, making it difficult to determine a safe
+allocation of swap memory.
+Conversely, `Guaranteed` QoS Pods are typically employed for applications that rely on the
+precise allocation of resources specified by the workload, with memory being immediately available.
+To maintain the aforementioned security and node health guarantees,
+these Pods are not permitted to use swap memory when `LimitedSwap` is in effect.
+In addition, high-priority Pods are not permitted to use swap, in order to ensure that the memory
+they consume is always resident in RAM and hence ready to use.
+
+Prior to detailing the calculation of the swap limit, it is necessary to define the following terms:
+* `nodeTotalMemory`: The total amount of physical memory available on the node.
+* `totalPodsSwapAvailable`: The total amount of swap memory on the node that is available for use by Pods (some swap memory may be reserved for system use).
+* `containerMemoryRequest`: The container's memory request.
+
+Swap limitation is configured as:
+( `containerMemoryRequest` / `nodeTotalMemory` ) × `totalPodsSwapAvailable`
+
+In other words, the amount of swap that a container is able to use is proportionate to its
+memory request, the node's total physical memory and the total amount of swap memory on
+the node that is available for use by Pods.
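+
+For example, on a node with 64 GiB of physical memory and 16 GiB of swap
+available for use by Pods (values chosen purely for illustration), a container
+that requests 8 GiB of memory would get a swap limit of
+( 8 GiB / 64 GiB ) × 16 GiB = 2 GiB.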
+
+It is important to note that, for containers within Burstable QoS Pods, it is possible to
+opt-out of swap usage by specifying memory requests that are equal to memory limits.
+Containers configured in this manner will not have access to swap memory.
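+
+As a sketch, a container configured like the following fragment opts out of swap
+under `LimitedSwap`, because its memory request equals its memory limit:
+
+```yaml
+# container fragment within a Burstable QoS Pod
+resources:
+  requests:
+    memory: "512Mi"
+  limits:
+    memory: "512Mi"   # equal to the request, so this container gets no swap
+```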
+
+
+## {{% heading "whatsnext" %}}
+
+- You can check out a [blog post about Kubernetes and swap](/blog/2025/03/25/swap-linux-improvements/)
+- For more information, please see the original KEP, [KEP-2400](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2400-node-swap),
+and its [design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
diff --git a/content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md b/content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md
index 6ab7edb885c5d..822ed5778f6f5 100644
--- a/content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md
+++ b/content/en/docs/concepts/scheduling-eviction/dynamic-resource-allocation.md
@@ -30,165 +30,319 @@ api_metadata:
{{< feature-state feature_gate_name="DynamicResourceAllocation" >}}
-Dynamic resource allocation is an API for requesting and sharing resources
-between pods and containers inside a pod. It is a generalization of the
-persistent volumes API for generic resources. Typically those resources
-are devices like GPUs.
-
-Third-party resource drivers are
-responsible for tracking and preparing resources, with allocation of
-resources handled by Kubernetes via _structured parameters_ (introduced in Kubernetes 1.30).
-Different kinds of resources support arbitrary parameters for defining requirements and
-initialization.
-
-Kubernetes v1.26 through to 1.31 included an (alpha) implementation of _classic DRA_,
-which is no longer supported. This documentation, which is for Kubernetes
-v{{< skew currentVersion >}}, explains the current approach to dynamic resource
-allocation within Kubernetes.
-
-## {{% heading "prerequisites" %}}
-
-Kubernetes v{{< skew currentVersion >}} includes cluster-level API support for
-dynamic resource allocation, but it [needs to be enabled](#enabling-dynamic-resource-allocation)
-explicitly. You also must install a resource driver for specific resources that
-are meant to be managed using this API. If you are not running Kubernetes
-v{{< skew currentVersion>}}, check the documentation for that version of Kubernetes.
+This page describes _dynamic resource allocation (DRA)_ in Kubernetes.
-## API
+## About DRA {#about-dra}
-The `resource.k8s.io/v1beta1` and `resource.k8s.io/v1beta2`
-{{< glossary_tooltip text="API groups" term_id="api-group" >}} provide these types:
+{{< glossary_definition prepend="DRA is" term_id="dra" length="all" >}}
-ResourceClaim
-: Describes a request for access to resources in the cluster,
- for use by workloads. For example, if a workload needs an accelerator device
- with specific properties, this is how that request is expressed. The status
- stanza tracks whether this claim has been satisfied and what specific
- resources have been allocated.
+Allocating resources with DRA is a similar experience to
+[dynamic volume provisioning](/docs/concepts/storage/dynamic-provisioning/), in
+which you use PersistentVolumeClaims to claim storage capacity from storage
+classes and request the claimed capacity in your Pods.
-ResourceClaimTemplate
-: Defines the spec and some metadata for creating
- ResourceClaims. Created by a user when deploying a workload.
- The per-Pod ResourceClaims are then created and removed by Kubernetes
- automatically.
+### Benefits of DRA {#dra-benefits}
-DeviceClass
-: Contains pre-defined selection criteria for certain devices and
- configuration for them. DeviceClasses are created by a cluster administrator
- when installing a resource driver. Each request to allocate a device
- in a ResourceClaim must reference exactly one DeviceClass.
+DRA provides a flexible way to categorize, request, and use devices in your
+cluster. Using DRA provides benefits like the following:
-ResourceSlice
-: Used by DRA drivers to publish information about resources (typically devices)
- that are available in the cluster.
+* **Flexible device filtering**: use common expression language (CEL) to perform
+ fine-grained filtering for specific device attributes.
+* **Device sharing**: share the same resource with multiple containers or Pods
+ by referencing the corresponding resource claim.
+* **Centralized device categorization**: device drivers and cluster admins can
+ use device classes to provide app operators with hardware categories that are
+ optimized for various use cases. For example, you can create a cost-optimized
+ device class for general-purpose workloads, and a high-performance device
+ class for critical jobs.
+* **Simplified Pod requests**: with DRA, app operators don't need to specify
+ device quantities in Pod resource requests. Instead, the Pod references a
+ resource claim, and the device configuration in that claim applies to the Pod.
+
+These benefits provide significant improvements in the device allocation
+workflow when compared to
+[device plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/),
+which require per-container device requests, don't support device sharing, and
+don't support expression-based device filtering.
+
+### Types of DRA users {#dra-user-types}
+
+The workflow of using DRA to allocate devices involves the following types of
+users:
-DeviceTaintRule
-: Used by admins or control plane components to add device taints
- to the devices described in ResourceSlices.
+* **Device owner**: responsible for devices. Device owners might be commercial
+ vendors, the cluster operator, or another entity. To use DRA, devices must
+ have DRA-compatible drivers that do the following:
-All parameters that select devices are defined in the ResourceClaim and
-DeviceClass with in-tree types. Configuration parameters can be embedded there.
-Which configuration parameters are valid depends on the DRA driver -- Kubernetes
-only passes them through without interpreting them.
+ * Create ResourceSlices that provide Kubernetes with information about
+ nodes and resources.
+ * Update ResourceSlices when resource capacity in the cluster changes.
+ * Optionally, create DeviceClasses that workload operators can use to
+ claim devices.
-The `core/v1` `PodSpec` defines ResourceClaims that are needed for a Pod in a
-`resourceClaims` field. Entries in that list reference either a ResourceClaim
-or a ResourceClaimTemplate. When referencing a ResourceClaim, all Pods using
-this PodSpec (for example, inside a Deployment or StatefulSet) share the same
-ResourceClaim instance. When referencing a ResourceClaimTemplate, each Pod gets
-its own instance.
+* **Cluster admin**: responsible for configuring clusters and nodes,
+ attaching devices, installing drivers, and similar tasks. To use DRA,
+ cluster admins do the following:
-The `resources.claims` list for container resources defines whether a container gets
-access to these resource instances, which makes it possible to share resources
-between one or more containers.
+ * Attach devices to nodes.
+ * Install device drivers that support DRA.
+ * Optionally, create DeviceClasses that workload operators can use to claim
+ devices.
-Here is an example for a fictional resource driver. Two ResourceClaim objects
-will get created for this Pod and each container gets access to one of them.
+* **Workload operator**: responsible for deploying and managing workloads in the
+ cluster. To use DRA to allocate devices to Pods, workload operators do the
+ following:
+
+ * Create ResourceClaims or ResourceClaimTemplates to request specific
+ configurations within DeviceClasses.
+ * Deploy workloads that use specific ResourceClaims or ResourceClaimTemplates.
+
+## DRA terminology {#terminology}
+
+DRA uses the following Kubernetes API kinds to provide the core allocation
+functionality. All of these API kinds are included in the
+`resource.k8s.io/v1beta1`
+{{< glossary_tooltip text="API group" term_id="api-group" >}}.
+
+DeviceClass
+: Defines a category of devices that can be claimed and how to select specific
+ device attributes in claims. The DeviceClass parameters can match zero or
+ more devices in ResourceSlices. To claim devices from a DeviceClass,
+ ResourceClaims select specific device attributes.
+
+ResourceClaim
+: Describes a request for access to attached resources, such as
+ devices, in the cluster. ResourceClaims provide Pods with access to
+ a specific resource. ResourceClaims can be created by workload operators
+ or generated by Kubernetes based on a ResourceClaimTemplate.
+
+ResourceClaimTemplate
+: Defines a template that Kubernetes uses to create per-Pod
+ ResourceClaims for a workload. ResourceClaimTemplates provide Pods with
+ access to separate, similar resources. Each ResourceClaim that Kubernetes
+ generates from the template is bound to a specific Pod. When the Pod
+ terminates, Kubernetes deletes the corresponding ResourceClaim.
+
+ResourceSlice
+: Represents one or more resources that are attached to nodes, such as devices.
+ Drivers create and manage ResourceSlices in the cluster. When a ResourceClaim
+ is created and used in a Pod, Kubernetes uses ResourceSlices to find nodes
+ that have access to the claimed resources. Kubernetes allocates resources to
+ the ResourceClaim and schedules the Pod onto a node that can access the
+ resources.
+
+### DeviceClass {#deviceclass}
+
+A DeviceClass lets cluster admins or device drivers define categories of devices
+in the cluster. DeviceClasses tell operators what devices they can request and
+how they can request those devices. You can use
+[common expression language (CEL)](https://cel.dev) to select devices based on
+specific attributes. A ResourceClaim that references the DeviceClass can then
+request specific configurations within the DeviceClass.
+
+To create a DeviceClass, see
+[Set Up DRA in a Cluster](/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster).
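+
+For illustration, a minimal DeviceClass could look like the following sketch.
+The DeviceClass name is a placeholder, and the driver name reuses the fictional
+`resource-driver.example.com` driver from the ResourceSlice example later on
+this page:
+
+```yaml
+apiVersion: resource.k8s.io/v1beta1
+kind: DeviceClass
+metadata:
+  name: example-device-class      # placeholder name
+spec:
+  selectors:
+  - cel:
+      # Match every device published by the fictional example driver
+      expression: device.driver == "resource-driver.example.com"
+```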
+
+### ResourceClaims and ResourceClaimTemplates {#resourceclaims-templates}
+
+A ResourceClaim defines the resources that a workload needs. Every ResourceClaim
+has _requests_ that reference a DeviceClass and select devices from that
+DeviceClass. ResourceClaims can also use _selectors_ to filter for devices that
+meet specific requirements, and can use _constraints_ to limit the devices that
+can satisfy a request. ResourceClaims can be created by workload operators or
+can be generated by Kubernetes based on a ResourceClaimTemplate. A
+ResourceClaimTemplate defines a template that Kubernetes can use to
+auto-generate ResourceClaims for Pods.
+
+#### Use cases for ResourceClaims and ResourceClaimTemplates {#when-to-use-rc-rct}
+
+The method that you use depends on your requirements, as follows:
+
+* **ResourceClaim**: you want multiple Pods to share access to specific
+ devices. You manually manage the lifecycle of ResourceClaims that you create.
+* **ResourceClaimTemplate**: you want Pods to have independent access to
+ separate, similarly-configured devices. Kubernetes generates ResourceClaims
+ from the specification in the ResourceClaimTemplate. The lifetime of each
+ generated ResourceClaim is bound to the lifetime of the corresponding Pod.
+
+When you define a workload, you can use
+{{< glossary_tooltip term_id="cel" text="Common Expression Language (CEL)" >}}
+to filter for specific device attributes or capacity. The available parameters
+for filtering depend on the device and the drivers.
+
+If you directly reference a specific ResourceClaim in a Pod, that ResourceClaim
+must already exist in the same namespace as the Pod. If the ResourceClaim
+doesn't exist in the namespace, the Pod won't schedule. This behavior is similar
+to how a PersistentVolumeClaim must exist in the same namespace as a Pod that
+references it.
+
+You can reference an auto-generated ResourceClaim in a Pod, but this isn't
+recommended because auto-generated ResourceClaims are bound to the lifetime of
+the Pod that triggered the generation.
+
+To learn how to claim resources using one of these methods, see
+[Allocate Devices to Workloads with DRA](/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/).
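+
+As a sketch of the ResourceClaimTemplate approach, the following manifests reuse
+the fictional driver and the placeholder DeviceClass sketched earlier on this
+page; all names and attribute values are illustrative only:
+
+```yaml
+apiVersion: resource.k8s.io/v1beta1
+kind: ResourceClaimTemplate
+metadata:
+  name: black-cat-claim-template    # placeholder name
+spec:
+  spec:
+    devices:
+      requests:
+      - name: req-0
+        deviceClassName: example-device-class
+        selectors:
+        - cel:
+            # Filter for a device attribute published by the fictional driver
+            expression: device.attributes["resource-driver.example.com"].color == "black"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-with-cats               # placeholder name
+spec:
+  containers:
+  - name: app
+    image: registry.k8s.io/pause:3.9   # placeholder image
+    resources:
+      claims:
+      - name: cat-0
+  resourceClaims:
+  - name: cat-0
+    resourceClaimTemplateName: black-cat-claim-template
+```
+
+Because the Pod references a ResourceClaimTemplate, Kubernetes generates a
+dedicated ResourceClaim for this Pod and deletes it when the Pod terminates.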
+
+### ResourceSlice {#resourceslice}
+
+Each ResourceSlice represents one or more
+{{< glossary_tooltip term_id="device" text="devices" >}} in a pool. The pool is
+managed by a device driver, which creates and manages ResourceSlices. The
+resources in a pool might be represented by a single ResourceSlice or span
+multiple ResourceSlices.
+
+ResourceSlices provide useful information to device users and to the scheduler,
+and are crucial for dynamic resource allocation. Every ResourceSlice must include
+the following information:
+
+* **Resource pool**: a group of one or more resources that the driver manages.
+ The pool can span more than one ResourceSlice. Changes to the resources in a
+ pool must be propagated across all of the ResourceSlices in that pool. The
+ device driver that manages the pool is responsible for ensuring that this
+ propagation happens.
+* **Devices**: devices in the managed pool. A ResourceSlice can list every
+ device in a pool or a subset of the devices in a pool. The ResourceSlice
+ defines device information like attributes, versions, and capacity. Device
+ users can select devices for allocation by filtering for device information
+ in ResourceClaims or in DeviceClasses.
+* **Nodes**: the nodes that can access the resources. Drivers can choose which
+ nodes can access the resources, whether that's all of the nodes in the
+ cluster, a single named node, or nodes that have specific node labels.
+
+Drivers use a {{< glossary_tooltip text="controller" term_id="controller" >}} to
+reconcile ResourceSlices in the cluster with the information that the driver has
+to publish. This controller overwrites any manual changes, such as cluster users
+creating or modifying ResourceSlices.
+
+Consider the following example ResourceSlice:
```yaml
-apiVersion: resource.k8s.io/v1beta2
-kind: DeviceClass
-metadata:
- name: resource.example.com
-spec:
- selectors:
- - cel:
- expression: device.driver == "resource-driver.example.com"
----
-apiVersion: resource.k8s.io/v1beta2
-kind: ResourceClaimTemplate
-metadata:
- name: large-black-cat-claim-template
-spec:
- spec:
- devices:
- requests:
- - name: req-0
- exactly:
- deviceClassName: resource.example.com
- selectors:
- - cel:
- expression: |-
- device.attributes["resource-driver.example.com"].color == "black" &&
- device.attributes["resource-driver.example.com"].size == "large"
----
-apiVersion: v1
-kind: Pod
+apiVersion: resource.k8s.io/v1beta1
+kind: ResourceSlice
metadata:
- name: pod-with-cats
+ name: cat-slice
spec:
- containers:
- - name: container0
- image: ubuntu:20.04
- command: ["sleep", "9999"]
- resources:
- claims:
- - name: cat-0
- - name: container1
- image: ubuntu:20.04
- command: ["sleep", "9999"]
- resources:
- claims:
- - name: cat-1
- resourceClaims:
- - name: cat-0
- resourceClaimTemplateName: large-black-cat-claim-template
- - name: cat-1
- resourceClaimTemplateName: large-black-cat-claim-template
+ driver: "resource-driver.example.com"
+ pool:
+ generation: 1
+ name: "black-cat-pool"
+ resourceSliceCount: 1
+ # The allNodes field defines whether any node in the cluster can access the device.
+ allNodes: true
+ devices:
+ - name: "large-black-cat"
+ basic:
+ attributes:
+ color:
+ string: "black"
+ size:
+ string: "large"
+ cat:
+ boolean: true
```
+This ResourceSlice is managed by the `resource-driver.example.com` driver in the
+`black-cat-pool` pool. The `allNodes: true` field indicates that any node in the
+cluster can access the devices. There's one device in the ResourceSlice, named
+`large-black-cat`, with the following attributes:
+
+* `color`: `black`
+* `size`: `large`
+* `cat`: `true`
+
+A DeviceClass could select this ResourceSlice by using these attributes, and a
+ResourceClaim could filter for specific devices in that DeviceClass.
+
+## How resource allocation with DRA works {#how-it-works}
+
+The following sections describe the workflow for the various
+[types of DRA users](#dra-user-types) and for the Kubernetes system during
+dynamic resource allocation.
+
+### Workflow for users {#user-workflow}
+
+1. **Driver creation**: device owners or third-party entities create drivers
+ that can create and manage ResourceSlices in the cluster. These drivers
+ optionally also create DeviceClasses that define a category of devices and
+ how to request them.
+1. **Cluster configuration**: cluster admins create clusters, attach devices to
+ nodes, and install the DRA device drivers. Cluster admins optionally create
+ DeviceClasses that define categories of devices and how to request them.
+1. **Resource claims**: workload operators create ResourceClaimTemplates or
+ ResourceClaims that request specific device configurations within a
+ DeviceClass. In the same step, workload operators modify their Kubernetes
+ manifests to request those ResourceClaimTemplates or ResourceClaims.
+
+### Workflow for Kubernetes {#kubernetes-workflow}
+
+1. **ResourceSlice creation**: drivers in the cluster create ResourceSlices that
+ represent one or more devices in a managed pool of similar devices.
+1. **Workload creation**: the cluster control plane checks new workloads for
+ references to ResourceClaimTemplates or to specific ResourceClaims.
+
+   * If the workload uses a ResourceClaimTemplate, a controller named
+ `resourceclaim-controller` generates ResourceClaims for every Pod in the
+ workload.
+ * If the workload uses a specific ResourceClaim, Kubernetes checks whether
+ that ResourceClaim exists in the cluster. If the ResourceClaim doesn't
+ exist, the Pods won't deploy.
+
+1. **ResourceSlice filtering**: for every Pod, Kubernetes checks the
+ ResourceSlices in the cluster to find a device that satisfies all of the
+ following criteria:
+
+ * The nodes that can access the resources are eligible to run the Pod.
+ * The ResourceSlice has unallocated resources that match the requirements of
+ the Pod's ResourceClaim.
+
+1. **Resource allocation**: after finding an eligible ResourceSlice for a
+ Pod's ResourceClaim, the Kubernetes scheduler updates the ResourceClaim
+ with the allocation details.
+1. **Pod scheduling**: when resource allocation is complete, the scheduler
+ places the Pod on a node that can access the allocated resource. The device
+ driver and the kubelet on that node configure the device and the Pod's access
+ to the device.
+
+## Observability of dynamic resources {#observability-dynamic-resources}
+
+You can check the status of dynamically allocated resources by using any of the
+following methods:
+
+* [kubelet device metrics](#monitoring-resources)
+* [ResourceClaim status](#resourceclaim-device-status)
+
+### kubelet device metrics {#monitoring-resources}
+
+The `PodResourcesLister` kubelet gRPC service lets you monitor in-use devices.
+The `DynamicResource` message provides information that's specific to dynamic
+resource allocation, such as the device name and the claim name. For details,
+see
+[Monitoring device plugin resources](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
+
+### ResourceClaim device status {#resourceclaim-device-status}
-## Scheduling
-
-The scheduler is responsible for allocating resources to a ResourceClaim whenever a pod needs
-them. It does so by retrieving the full list of available resources from
-ResourceSlice objects, tracking which of those resources have already been
-allocated to existing ResourceClaims, and then selecting from those resources
-that remain.
-
-The only kind of supported resources at the moment are devices. A device
-instance has a name and several attributes and capacities. Devices get selected
-through CEL expressions which check those attributes and capacities. In
-addition, the set of selected devices also can be restricted to sets which meet
-certain constraints.
+{{< feature-state feature_gate_name="DRAResourceClaimDeviceStatus" >}}
-The chosen resource is recorded in the ResourceClaim status together with any
-vendor-specific configuration, so when a pod is about to start on a node, the
-resource driver on the node has all the information it needs to prepare the
-resource.
+DRA drivers can report driver-specific
+[device status](/docs/concepts/overview/working-with-objects/#object-spec-and-status)
+data for each allocated device in the `status.devices` field of a ResourceClaim.
+For example, the driver might list the IP addresses that are assigned to a
+network interface device.
-By using structured parameters, the scheduler is able to reach a decision
-without communicating with any DRA resource drivers. It is also able to
-schedule multiple pods quickly by keeping information about ResourceClaim
-allocations in memory and writing this information to the ResourceClaim objects
-in the background while concurrently binding the pod to a node.
+The accuracy of the information that a driver adds to a ResourceClaim
+`status.devices` field depends on the driver. Evaluate drivers to decide whether
+you can rely on this field as the only source of device information.
-## Monitoring resources
+If you disable the `DRAResourceClaimDeviceStatus`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/), the
+`status.devices` field automatically gets cleared when storing the ResourceClaim.
+ResourceClaim device status is supported when the DRA driver is able to update
+an existing ResourceClaim in which the `status.devices` field is set.
-The kubelet provides a gRPC service to enable discovery of dynamic resources of
-running Pods. For more information on the gRPC endpoints, see the
-[resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
+For details about the `status.devices` field, see the
+{{< api-reference page="workload-resources/resource-claim-v1beta1" anchor="ResourceClaimStatus" text="ResourceClaim" >}} API reference.
## Pre-scheduled Pods
@@ -225,7 +379,17 @@ spec:
You may also be able to mutate the incoming Pod, at admission time, to unset
the `.spec.nodeName` field and to use a node selector instead.
-## Admin access
+## DRA alpha features {#alpha-features}
+
+The following sections describe DRA features that are available in the Alpha
+[feature stage](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages).
+To use any of these features, you must also set up DRA in your clusters by
+enabling the DynamicResourceAllocation feature gate and the DRA
+{{< glossary_tooltip text="API groups" term_id="api-group" >}}. For more
+information, see
+[Set up DRA in the cluster](/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/).
+
+### Admin access {#admin-access}
{{< feature-state feature_gate_name="DRAAdminAccess" >}}
@@ -258,26 +422,9 @@ multi-tenant clusters. Starting with Kubernetes v1.33, only users authorized to
create ResourceClaim or ResourceClaimTemplate objects in namespaces labeled with
`resource.k8s.io/admin-access: "true"` (case-sensitive) can use the
`adminAccess` field. This ensures that non-admin users cannot misuse the
-feature.
-
-## ResourceClaim Device Status
-
-{{< feature-state feature_gate_name="DRAResourceClaimDeviceStatus" >}}
-
-The drivers can report driver-specific device status data for each allocated device
-in a resource claim. For example, IPs assigned to a network interface device can be
-reported in the ResourceClaim status.
-
-The drivers setting the status, the accuracy of the information depends on the implementation
-of those DRA Drivers. Therefore, the reported status of the device may not always reflect the
-real time changes of the state of the device.
+feature.
-When the feature is disabled, that field automatically gets cleared when storing the ResourceClaim.
-
-A ResourceClaim device status is supported when it is possible, from a DRA driver, to update an
-existing ResourceClaim where the `status.devices` field is set.
-
-## Prioritized List
+### Prioritized list {#prioritized-list}
{{< feature-state feature_gate_name="DRAPrioritizedList" >}}
@@ -321,7 +468,11 @@ spec:
count: 2
```
-## Partitionable Devices
+Prioritized list is an *alpha feature* and only enabled when the
+`DRAPrioritizedList` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+is enabled in the kube-apiserver and kube-scheduler.
+
+### Partitionable devices {#partitionable-devices}
{{< feature-state feature_gate_name="DRAPartitionableDevices" >}}
@@ -374,7 +525,12 @@ spec:
value: 6Gi
```
-## Device taints and tolerations
+Partitionable devices is an *alpha feature* and only enabled when the
+`DRAPartitionableDevices`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+is enabled in the kube-apiserver and kube-scheduler.
+
+### Device taints and tolerations {#device-taints-and-tolerations}
{{< feature-state feature_gate_name="DRADeviceTaints" >}}
@@ -408,15 +564,22 @@ Allocating a device with admin access (described [above](#admin-access))
is not exempt either. An admin using that mode must explicitly tolerate all taints
to access tainted devices.
-Taints can be added to devices in two different ways:
+Device taints and tolerations is an *alpha feature* and only enabled when the
+`DRADeviceTaints` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+is enabled in the kube-apiserver, kube-controller-manager and kube-scheduler.
+To use DeviceTaintRules, the `resource.k8s.io/v1alpha3` API version must be
+enabled.
+
+You can add taints to devices in the following two ways:
-### Taints set by the driver
+#### Taints set by the driver
A DRA driver can add taints to the device information that it publishes in ResourceSlices.
Consult the documentation of a DRA driver to learn whether the driver uses taints and what
their keys and values are.
-### Taints set by an admin
+#### Taints set by an admin
An admin or a control plane component can taint devices without having to tell
the DRA driver to include taints in its device information in ResourceSlices. They do that by
@@ -463,84 +626,10 @@ spec:
effect: NoExecute
```
-## Enabling dynamic resource allocation
-
-Dynamic resource allocation is a *beta feature* which is off by default and only enabled when the
-`DynamicResourceAllocation` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-and the `resource.k8s.io/v1beta1` and `resource.k8s.io/v1beta2` {{< glossary_tooltip text="API groups" term_id="api-group" >}}
-are enabled. For details on that, see the `--feature-gates` and `--runtime-config`
-[kube-apiserver parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
-kube-scheduler, kube-controller-manager and kubelet also need the feature gate.
-
-When a resource driver reports the status of the devices, then the
-`DRAResourceClaimDeviceStatus` feature gate has to be enabled in addition to
-`DynamicResourceAllocation`.
-
-A quick check whether a Kubernetes cluster supports the feature is to list
-DeviceClass objects with:
-
-```shell
-kubectl get deviceclasses
-```
-
-If your cluster supports dynamic resource allocation, the response is either a
-list of DeviceClass objects or:
-
-```
-No resources found
-```
-
-If not supported, this error is printed instead:
-
-```
-error: the server doesn't have a resource type "deviceclasses"
-```
-
-The default configuration of kube-scheduler enables the "DynamicResources"
-plugin if and only if the feature gate is enabled and when using
-the v1 configuration API. Custom configurations may have to be modified to
-include it.
-
-In addition to enabling the feature in the cluster, a resource driver also has to
-be installed. Please refer to the driver's documentation for details.
-
-### Enabling admin access
-
-[Admin access](#admin-access) is an *alpha feature* and only enabled when the
-`DRAAdminAccess` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled in the kube-apiserver and kube-scheduler.
-
-### Enabling Device Status
-
-[ResourceClaim Device Status](#resourceclaim-device-status) is an *alpha feature*
-and only enabled when the `DRAResourceClaimDeviceStatus`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled in the kube-apiserver.
-
-### Enabling Prioritized List
-
-[Prioritized List](#prioritized-list)) is an *alpha feature* and only enabled when the
-`DRAPrioritizedList` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled in the kube-apiserver and kube-scheduler. It also requires that the
-`DynamicResourceAllocation` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled.
-
-### Enabling Partitionable Devices
-
-[Partitionable Devices](#partitionable-devices) is an *alpha feature*
-and only enabled when the `DRAPartitionableDevices`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled in the kube-apiserver and kube-scheduler.
-
-### Enabling device taints and tolerations
-
-[Device taints and tolerations](#device-taints-and-tolerations) is an *alpha feature* and only enabled when the
-`DRADeviceTaints` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-is enabled in the kube-apiserver, kube-controller-manager and kube-scheduler. To use DeviceTaintRules, the
-`resource.k8s.io/v1alpha3` API version must be enabled.
-
## {{% heading "whatsnext" %}}
+- [Set Up DRA in a Cluster](/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster/)
+- [Allocate devices to workloads using DRA](/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra/)
- For more information on the design, see the
[Dynamic Resource Allocation with Structured Parameters](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/4381-dra-structured-parameters)
- KEP.
+ KEP.
\ No newline at end of file
diff --git a/content/en/docs/concepts/security/linux-security.md b/content/en/docs/concepts/security/linux-security.md
new file mode 100644
index 0000000000000..34895768fe834
--- /dev/null
+++ b/content/en/docs/concepts/security/linux-security.md
@@ -0,0 +1,29 @@
+---
+reviewers:
+- lmktfy
+title: Security For Linux Nodes
+content_type: concept
+weight: 40
+---
+
+
+
+This page describes security considerations and best practices specific to the Linux operating system.
+
+
+
+## Protection for Secret data on nodes
+
+On Linux nodes, memory-backed volumes (such as [`secret`](/docs/concepts/configuration/secret/)
+volume mounts, or [`emptyDir`](/docs/concepts/storage/volumes/#emptydir) with `medium: Memory`)
+are implemented with a `tmpfs` filesystem.
+
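+For illustration, the following Pod sketch (names are hypothetical) uses an `emptyDir` volume
+with `medium: Memory`; on Linux nodes its contents live in `tmpfs` (RAM):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: memory-backed-demo   # hypothetical name
+spec:
+  containers:
+  - name: app
+    image: registry.k8s.io/pause:3.10
+    volumeMounts:
+    - name: scratch
+      mountPath: /scratch
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory   # backed by tmpfs; see the swap caveats below
+```
+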
+If you have swap configured and use an older Linux kernel (or a current kernel with an
+unsupported Kubernetes configuration), data from **memory-backed** volumes can be written
+to persistent storage.
+
+The Linux kernel officially supports the `noswap` option from version 6.3.
+If swap is enabled on the node, it is therefore recommended to use kernel version 6.3 or later,
+or a kernel that supports the `noswap` option via a backport.
+
+Read [swap memory management](/docs/concepts/cluster-administration/swap-memory-management/#memory-backed-volumes)
+for more info.
\ No newline at end of file
diff --git a/content/en/docs/concepts/security/secrets-good-practices.md b/content/en/docs/concepts/security/secrets-good-practices.md
index 0075fa9ebec77..bb42897eee865 100644
--- a/content/en/docs/concepts/security/secrets-good-practices.md
+++ b/content/en/docs/concepts/security/secrets-good-practices.md
@@ -87,6 +87,11 @@ the data.
For a list of supported providers, refer to
[Providers for the Secret Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/concepts.html#provider-for-the-secrets-store-csi-driver).
+## Good practices for using swap memory
+
+For best practices for setting swap memory for Linux nodes, please refer to
+[swap memory management](/docs/concepts/cluster-administration/swap-memory-management/#good-practice-for-using-swap-in-a-kubernetes-cluster).
+
## Developers
This section provides good practices for developers to use to improve the
diff --git a/content/en/docs/contribute/generate-ref-docs/kubectl.md b/content/en/docs/contribute/generate-ref-docs/kubectl.md
index f0221c651d985..e1544c726a396 100644
--- a/content/en/docs/contribute/generate-ref-docs/kubectl.md
+++ b/content/en/docs/contribute/generate-ref-docs/kubectl.md
@@ -94,7 +94,7 @@ git pull https://github.com/kubernetes/kubernetes {{< skew prevMinorVersion >}}.
```
If you do not need to edit the `kubectl` source code, follow the instructions for
-[Setting build variables](#setting-build-variables).
+[Setting build variables](#set-build-variables).
## Edit the kubectl source code
diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md
index 4938d4819a553..2ddc9b7f90975 100644
--- a/content/en/docs/reference/access-authn-authz/authentication.md
+++ b/content/en/docs/reference/access-authn-authz/authentication.md
@@ -699,12 +699,8 @@ jwt:
1. Egress selector configuration is not supported for calls to `issuer.url` and `issuer.discoveryURL`.
Kubernetes does not provide an OpenID Connect Identity Provider.
-You can use an existing public OpenID Connect Identity Provider (such as Google, or
-[others](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers)).
-Or, you can run your own Identity Provider, such as [dex](https://dexidp.io/),
-[Keycloak](https://github.com/keycloak/keycloak),
-CloudFoundry [UAA](https://github.com/cloudfoundry/uaa), or
-Tremolo Security's [OpenUnison](https://openunison.github.io/).
+You can use an existing public OpenID Connect Identity Provider or run your own Identity Provider
+that supports the OpenID Connect protocol.
For an identity provider to work with Kubernetes it must:
@@ -719,20 +715,11 @@ For an identity provider to work with Kubernetes it must:
1. Have a CA signed certificate (even if the CA is not a commercial CA or is self signed)
A note about requirement #3 above, requiring a CA signed certificate. If you deploy your own
-identity provider (as opposed to one of the cloud providers like Google or Microsoft) you MUST
-have your identity provider's web server certificate signed by a certificate with the `CA` flag
-set to `TRUE`, even if it is self signed. This is due to GoLang's TLS client implementation
-being very strict to the standards around certificate validation. If you don't have a CA handy,
-you can use the [gencert script](https://github.com/dexidp/dex/blob/master/examples/k8s/gencert.sh)
-from the Dex team to create a simple CA and a signed certificate and key pair. Or you can use
-[this similar script](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh)
-that generates SHA256 certs with a longer life and larger key size.
-
-Refer to setup instructions for specific systems:
-
-- [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html)
-- [Dex](https://dexidp.io/docs/kubernetes/)
-- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/)
+identity provider you MUST have your identity provider's web server certificate signed by a
+certificate with the `CA` flag set to `TRUE`, even if it is self signed. This is due to GoLang's
+TLS client implementation being very strict to the standards around certificate validation. If you
+don't have a CA handy, you can create a simple CA and a signed certificate and key pair using
+standard certificate generation tools.
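+
+For example, a minimal sketch using `openssl` (file names and the `idp.example.com` host are
+hypothetical; adjust them for your environment):
+
+```shell
+# Create a CA key and a self-signed CA certificate with the CA flag set to TRUE
+openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
+  -keyout ca.key -out ca.crt -subj "/CN=example-ca" \
+  -addext "basicConstraints=critical,CA:TRUE"
+
+# Create a key and a certificate signing request for the identity provider
+openssl req -newkey rsa:4096 -sha256 -nodes \
+  -keyout idp.key -out idp.csr -subj "/CN=idp.example.com"
+
+# Sign the identity provider certificate with the CA, including a SAN for the IdP host
+cat > san.cnf <<EOF
+subjectAltName=DNS:idp.example.com
+EOF
+openssl x509 -req -in idp.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
+  -days 365 -sha256 -out idp.crt -extfile san.cnf
+```
+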
#### Using kubectl
diff --git a/content/en/docs/reference/glossary/cel.md b/content/en/docs/reference/glossary/cel.md
new file mode 100644
index 0000000000000..3e24ac5ee644b
--- /dev/null
+++ b/content/en/docs/reference/glossary/cel.md
@@ -0,0 +1,24 @@
+---
+title: Common Expression Language
+id: cel
+date: 2025-06-04
+full_link: https://cel.dev
+short_description: >
+ An expression language that's designed to be safe for executing user code.
+tags:
+- extension
+- fundamental
+aka:
+- CEL
+---
+ A general-purpose expression language that's designed to be fast, portable, and
+safe to execute.
+
+
+
+In Kubernetes, CEL can be used to run queries and perform fine-grained
+filtering. For example, you can use CEL expressions with
+[dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
+to filter for specific fields in requests, and with
+[dynamic resource allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation)
+to select resources based on specific attributes.
diff --git a/content/en/docs/reference/glossary/device.md b/content/en/docs/reference/glossary/device.md
new file mode 100644
index 0000000000000..d52015d4109e7
--- /dev/null
+++ b/content/en/docs/reference/glossary/device.md
@@ -0,0 +1,23 @@
+---
+title: Device
+id: device
+date: 2025-05-13
+short_description: >
+ Any resource that's directly or indirectly attached to your cluster's nodes, like
+ GPUs or circuit boards.
+
+tags:
+- extension
+- fundamental
+---
+ One or more
+{{< glossary_tooltip text="infrastructure resources" term_id="infrastructure-resource" >}}
+that are directly or indirectly attached to your
+{{< glossary_tooltip text="nodes" term_id="node" >}}.
+
+
+
+Devices might be commercial products like GPUs, or custom hardware like
+[ASIC boards](https://en.wikipedia.org/wiki/Application-specific_integrated_circuit).
+Attached devices usually require device drivers that let Kubernetes
+{{< glossary_tooltip text="Pods" term_id="pod" >}} access the devices.
diff --git a/content/en/docs/reference/glossary/deviceclass.md b/content/en/docs/reference/glossary/deviceclass.md
new file mode 100644
index 0000000000000..70b0280327874
--- /dev/null
+++ b/content/en/docs/reference/glossary/deviceclass.md
@@ -0,0 +1,23 @@
+---
+title: DeviceClass
+id: deviceclass
+date: 2025-05-26
+full_link: /docs/concepts/scheduling-eviction/dynamic-resource-allocation/#deviceclass
+short_description: >
+ A category of devices in the cluster. Users can claim specific
+ devices in a DeviceClass.
+tags:
+- extension
+---
+ A category of {{< glossary_tooltip text="devices" term_id="device" >}} in the
+ cluster that can be used with dynamic resource allocation (DRA).
+
+
+
+Administrators or device owners use DeviceClasses to define a set of devices
+that can be claimed and used in workloads. Devices are claimed by creating
+{{< glossary_tooltip text="ResourceClaims" term_id="resourceclaim" >}}
+that filter for specific device parameters in a DeviceClass.
+
+For more information, see
+[Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#deviceclass).
diff --git a/content/en/docs/reference/glossary/dra.md b/content/en/docs/reference/glossary/dra.md
new file mode 100644
index 0000000000000..e334690e073e7
--- /dev/null
+++ b/content/en/docs/reference/glossary/dra.md
@@ -0,0 +1,25 @@
+---
+title: Dynamic Resource Allocation
+id: dra
+date: 2025-05-13
+full_link: /docs/concepts/scheduling-eviction/dynamic-resource-allocation/
+short_description: >
+ A Kubernetes feature for requesting and sharing resources, like hardware
+ accelerators, among Pods.
+
+aka:
+- DRA
+tags:
+- extension
+---
+ A Kubernetes feature that lets you request and share resources among Pods.
+These resources are often attached
+{{< glossary_tooltip text="devices" term_id="device" >}} like hardware
+accelerators.
+
+
+
+With DRA, device drivers and cluster admins define device _classes_ that are
+available to _claim_ in workloads. Kubernetes allocates matching devices to
+specific claims and places the corresponding Pods on nodes that can access the
+allocated devices.
diff --git a/content/en/docs/reference/glossary/resourceclaim.md b/content/en/docs/reference/glossary/resourceclaim.md
new file mode 100644
index 0000000000000..ae83cb88b5901
--- /dev/null
+++ b/content/en/docs/reference/glossary/resourceclaim.md
@@ -0,0 +1,23 @@
+---
+title: ResourceClaim
+id: resourceclaim
+date: 2025-05-26
+full_link: /docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates
+short_description: >
+ Describes the resources that a workload needs, such as devices. ResourceClaims
+ can request devices from DeviceClasses.
+
+tags:
+- workload
+---
+ Describes the resources that a workload needs, such as
+{{< glossary_tooltip text="devices" term_id="device" >}}. ResourceClaims are
+used in
+[dynamic resource allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
+to provide Pods with access to a specific resource.
+
+
+
+ResourceClaims can be created by workload operators or generated by Kubernetes
+based on a
+{{< glossary_tooltip text="ResourceClaimTemplate" term_id="resourceclaimtemplate" >}}.
diff --git a/content/en/docs/reference/glossary/resourceclaimtemplate.md b/content/en/docs/reference/glossary/resourceclaimtemplate.md
new file mode 100644
index 0000000000000..a79f1d0a2c91a
--- /dev/null
+++ b/content/en/docs/reference/glossary/resourceclaimtemplate.md
@@ -0,0 +1,24 @@
+---
+title: ResourceClaimTemplate
+id: resourceclaimtemplate
+date: 2025-05-26
+full_link: /docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaims-templates
+short_description: >
+ Defines a template for Kubernetes to create ResourceClaims. Used to provide
+ per-Pod access to separate, similar resources.
+
+tags:
+- workload
+---
+ Defines a template that Kubernetes uses to create
+{{< glossary_tooltip text="ResourceClaims" term_id="resourceclaim" >}}.
+ResourceClaimTemplates are used in
+[dynamic resource allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)
+to provide _per-Pod access to separate, similar resources_.
+
+
+
+When a ResourceClaimTemplate is referenced in a workload specification,
+Kubernetes automatically creates ResourceClaim objects based on the template.
+Each ResourceClaim is bound to a specific Pod. When the Pod terminates,
+Kubernetes deletes the corresponding ResourceClaim.
diff --git a/content/en/docs/reference/glossary/resourceslice.md b/content/en/docs/reference/glossary/resourceslice.md
new file mode 100644
index 0000000000000..e73fb9203af27
--- /dev/null
+++ b/content/en/docs/reference/glossary/resourceslice.md
@@ -0,0 +1,24 @@
+---
+title: ResourceSlice
+id: resourceslice
+date: 2025-05-26
+full_link: /docs/reference/kubernetes-api/workload-resources/resource-slice-v1beta1/
+short_description: >
+ Represents one or more infrastructure resources, like devices, in a pool of
+ similar resources.
+
+tags:
+- workload
+---
+ Represents one or more infrastructure resources, such as
+{{< glossary_tooltip text="devices" term_id="device" >}}, that are attached to
+nodes. Drivers create and manage ResourceSlices in the cluster. ResourceSlices
+are used for
+[dynamic resource allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
+
+
+
+When a {{< glossary_tooltip text="ResourceClaim" term_id="resourceclaim" >}} is
+created, Kubernetes uses ResourceSlices to find nodes that have access to
+resources that can satisfy the claim. Kubernetes allocates resources to the
+ResourceClaim and schedules the Pod onto a node that can access the resources.
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index d21603d809ea7..0a7701d04e104 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -1391,7 +1391,11 @@ Example: `service.kubernetes.io/service-proxy-name: "foo-bar"`
Used on: Service
-The kube-proxy has this label for custom proxy, which delegates service control to custom proxy.
+Setting a value for this label tells kube-proxy to ignore this service for proxying purposes.
+This allows the use of alternative proxy implementations for this service (for example, running
+a DaemonSet that manages nftables in its own way). Multiple alternative proxy implementations
+can be active simultaneously by using this label, for example by assigning each alternative
+proxy implementation a unique label value so that each one handles its respective services.
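+
+As an illustrative sketch (names are hypothetical), a Service handled by an alternative proxy
+might carry the label like this:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: example-service
+  labels:
+    # tells kube-proxy to skip this Service; "my-custom-proxy" is a hypothetical proxy name
+    service.kubernetes.io/service-proxy-name: my-custom-proxy
+spec:
+  selector:
+    app: example
+  ports:
+  - port: 80
+```
+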
### experimental.windows.kubernetes.io/isolation-type (deprecated) {#experimental-windows-kubernetes-io-isolation-type}
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index 0003b11d1795b..2cc6934ccaa41 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -204,7 +204,10 @@ On Windows the default CRI endpoint is `npipe://./pipe/containerd-containerd`.
#### Configuring the `systemd` cgroup driver {#containerd-systemd}
-To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`, set
+To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
+set the following config based on your Containerd version:
+
+Containerd versions 1.x:
```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
@@ -213,6 +216,15 @@ To use the `systemd` cgroup driver in `/etc/containerd/config.toml` with `runc`,
SystemdCgroup = true
```
+Containerd versions 2.x:
+
+```
+[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
+ ...
+ [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
+ SystemdCgroup = true
+```
+
The `systemd` cgroup driver is recommended if you use [cgroup v2](/docs/concepts/architecture/cgroups).
{{< note >}}
diff --git a/content/en/docs/tasks/configure-pod-container/assign-resources/_index.md b/content/en/docs/tasks/configure-pod-container/assign-resources/_index.md
new file mode 100644
index 0000000000000..5189c397e30f5
--- /dev/null
+++ b/content/en/docs/tasks/configure-pod-container/assign-resources/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Assign Devices to Pods and Containers"
+description: Assign infrastructure resources to your Kubernetes workloads.
+weight: 30
+---
+
diff --git a/content/en/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra.md b/content/en/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra.md
new file mode 100644
index 0000000000000..8dee1f8a569b7
--- /dev/null
+++ b/content/en/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra.md
@@ -0,0 +1,186 @@
+---
+title: Allocate Devices to Workloads with DRA
+content_type: task
+min-kubernetes-server-version: v1.32
+weight: 20
+---
+{{< feature-state feature_gate_name="DynamicResourceAllocation" >}}
+
+
+
+This page shows you how to allocate devices to your Pods by using
+_dynamic resource allocation (DRA)_. These instructions are for workload
+operators. Before reading this page, familiarize yourself with how DRA works and
+with DRA terminology like
+{{< glossary_tooltip text="ResourceClaims" term_id="resourceclaim" >}} and
+{{< glossary_tooltip text="ResourceClaimTemplates" term_id="resourceclaimtemplate" >}}.
+For more information, see
+[Dynamic Resource Allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
+
+
+
+## About device allocation with DRA {#about-device-allocation-dra}
+
+As a workload operator, you can _claim_ devices for your workloads by creating
+ResourceClaims or ResourceClaimTemplates. When you deploy your workload,
+Kubernetes and the device drivers find available devices, allocate them to your
+Pods, and place the Pods on nodes that can access those devices.
+
+
+
+## {{% heading "prerequisites" %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+* Ensure that your cluster admin has set up DRA, attached devices, and installed
+ drivers. For more information, see
+ [Set Up DRA in a Cluster](/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster).
+
+
+
+## Identify devices to claim {#identify-devices}
+
+Your cluster administrator or the device drivers create
+_{{< glossary_tooltip term_id="deviceclass" text="DeviceClasses" >}}_ that
+define categories of devices. You can claim devices by using
+{{< glossary_tooltip term_id="cel" >}} to filter for specific device properties.
+
+Get a list of DeviceClasses in the cluster:
+
+```shell
+kubectl get deviceclasses
+```
+The output is similar to the following:
+
+```
+NAME AGE
+driver.example.com 16m
+```
+If you get a permission error, you might not have access to get DeviceClasses.
+Check with your cluster administrator or with the driver provider for available
+device properties.
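+
+If listing works, you can also inspect a DeviceClass to see which CEL selectors it uses. For
+example, using the hypothetical DeviceClass name from the previous output:
+
+```shell
+kubectl get deviceclass driver.example.com -o yaml
+```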
+
+## Claim resources {#claim-resources}
+
+You can request resources from a DeviceClass by using
+{{< glossary_tooltip text="ResourceClaims" term_id="resourceclaim" >}}. To
+create a ResourceClaim, do one of the following:
+
+* Manually create a ResourceClaim if you want multiple Pods to share access to
+ the same devices, or if you want a claim to exist beyond the lifetime of a
+ Pod.
+* Use a
+ {{< glossary_tooltip text="ResourceClaimTemplate" term_id="resourceclaimtemplate" >}}
+ to let Kubernetes generate and manage per-Pod ResourceClaims. Create a
+ ResourceClaimTemplate if you want every Pod to have access to separate devices
+ that have similar configurations. For example, you might want simultaneous
+ access to devices for Pods in a Job that uses
+ [parallel execution](/docs/concepts/workloads/controllers/job/#parallel-jobs).
+
+If you directly reference a specific ResourceClaim in a Pod, that ResourceClaim
+must already exist in the cluster. If a referenced ResourceClaim doesn't exist,
+the Pod remains in a pending state until the ResourceClaim is created. You can
+reference an auto-generated ResourceClaim in a Pod, but this isn't recommended
+because auto-generated ResourceClaims are bound to the lifetime of the Pod that
+triggered the generation.
+
+To create a workload that claims resources, select one of the following options:
+
+{{< tabs name="claim-resources" >}}
+{{% tab name="ResourceClaimTemplate" %}}
+
+Review the following example manifest:
+
+{{% code_sample file="dra/resourceclaimtemplate.yaml" %}}
+
+This manifest creates a ResourceClaimTemplate that requests devices in the
+`example-device-class` DeviceClass that match both of the following parameters:
+
+ * Devices that have a `driver.example.com/type` attribute with a value of
+ `gpu`.
+ * Devices that have `64Gi` of capacity.
+
+To create the ResourceClaimTemplate, run the following command:
+
+```shell
+kubectl apply -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml
+```
+
+{{% /tab %}}
+{{% tab name="ResourceClaim" %}}
+
+Review the following example manifest:
+
+{{% code_sample file="dra/resourceclaim.yaml" %}}
+
+This manifest creates a ResourceClaim that requests devices in the
+`example-device-class` DeviceClass that match both of the following parameters:
+
+ * Devices that have a `driver.example.com/type` attribute with a value of
+ `gpu`.
+ * Devices that have `64Gi` of capacity.
+
+To create the ResourceClaim, run the following command:
+
+```shell
+kubectl apply -f https://k8s.io/examples/dra/resourceclaim.yaml
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+## Request devices in workloads using DRA {#request-devices-workloads}
+
+To request device allocation, specify a ResourceClaim or a ResourceClaimTemplate
+in the `resourceClaims` field of the Pod specification. Then, request a specific
+claim by name in the `resources.claims` field of a container in that Pod.
+You can specify multiple entries in the `resourceClaims` field and use specific
+claims in different containers.
+
+1. Review the following example Job:
+
+ {{% code_sample file="dra/dra-example-job.yaml" %}}
+
+ Each Pod in this Job has the following properties:
+
+ * Makes a ResourceClaimTemplate named `separate-gpu-claim` and a
+ ResourceClaim named `shared-gpu-claim` available to containers.
+ * Runs the following containers:
+ * `container0` requests the devices from the `separate-gpu-claim`
+ ResourceClaimTemplate.
+ * `container1` and `container2` share access to the devices from the
+ `shared-gpu-claim` ResourceClaim.
+
+1. Create the Job:
+
+ ```shell
+ kubectl apply -f https://k8s.io/examples/dra/dra-example-job.yaml
+ ```
+
+## Clean up {#clean-up}
+
+To delete the Kubernetes objects that you created in this task, follow these
+steps:
+
+1. Delete the example Job:
+
+ ```shell
+ kubectl delete -f https://k8s.io/examples/dra/dra-example-job.yaml
+ ```
+
+1. To delete your resource claims, run one of the following commands:
+
+ * Delete the ResourceClaimTemplate:
+
+ ```shell
+ kubectl delete -f https://k8s.io/examples/dra/resourceclaimtemplate.yaml
+ ```
+ * Delete the ResourceClaim:
+
+ ```shell
+ kubectl delete -f https://k8s.io/examples/dra/resourceclaim.yaml
+ ```
+
+## {{% heading "whatsnext" %}}
+
+* [Learn more about DRA](/docs/concepts/scheduling-eviction/dynamic-resource-allocation)
\ No newline at end of file
diff --git a/content/en/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster.md b/content/en/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster.md
new file mode 100644
index 0000000000000..1d8c895f279d1
--- /dev/null
+++ b/content/en/docs/tasks/configure-pod-container/assign-resources/set-up-dra-cluster.md
@@ -0,0 +1,189 @@
+---
+title: "Set Up DRA in a Cluster"
+content_type: task
+min-kubernetes-server-version: v1.32
+weight: 10
+---
+{{< feature-state feature_gate_name="DynamicResourceAllocation" >}}
+
+
+
+This page shows you how to configure _dynamic resource allocation (DRA)_ in a
+Kubernetes cluster by enabling API groups and configuring classes of devices.
+These instructions are for cluster administrators.
+
+
+
+## About DRA {#about-dra}
+
+{{< glossary_definition term_id="dra" length="all" >}}
+
+Ensure that you're familiar with how DRA works and with DRA terminology like
+{{< glossary_tooltip text="DeviceClasses" term_id="deviceclass" >}},
+{{< glossary_tooltip text="ResourceClaims" term_id="resourceclaim" >}}, and
+{{< glossary_tooltip text="ResourceClaimTemplates" term_id="resourceclaimtemplate" >}}.
+For details, see
+[Dynamic Resource Allocation (DRA)](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
+
+
+
+## {{% heading "prerequisites" %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+* Directly or indirectly attach devices to your cluster. To avoid potential
+ issues with drivers, wait until you set up the DRA feature for your
+ cluster before you install drivers.
+
+
+
+## Enable the DRA API groups {#enable-dra}
+
+To let Kubernetes allocate resources to your Pods with DRA, complete the
+following configuration steps:
+
+1. Enable the `DynamicResourceAllocation`
+ [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+ on all of the following components:
+
+ * `kube-apiserver`
+ * `kube-controller-manager`
+ * `kube-scheduler`
+ * `kubelet`
+
+1. Enable the following
+ {{< glossary_tooltip text="API groups" term_id="api-group" >}}:
+
+ * `resource.k8s.io/v1beta1`: required for DRA to function.
+ * `resource.k8s.io/v1beta2`: optional, recommended improvements to the user
+ experience.
+
+ For more information, see
+ [Enabling or disabling API groups](/docs/reference/using-api/#enabling-or-disabling).
+
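+As an illustration only, the relevant component flags might look like the following; how you
+set them depends on how your control plane and nodes are deployed:
+
+```
+kube-apiserver --feature-gates=DynamicResourceAllocation=true \
+  --runtime-config=resource.k8s.io/v1beta1=true,resource.k8s.io/v1beta2=true
+kube-controller-manager --feature-gates=DynamicResourceAllocation=true
+kube-scheduler --feature-gates=DynamicResourceAllocation=true
+kubelet --feature-gates=DynamicResourceAllocation=true
+```
+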
+## Verify that DRA is enabled {#verify}
+
+To verify that the cluster is configured correctly, try to list DeviceClasses:
+
+```shell
+kubectl get deviceclasses
+```
+If the component configuration was correct, the output is similar to the
+following:
+
+```
+No resources found
+```
+
+If DRA isn't correctly configured, the output of the preceding command is
+similar to the following:
+
+```
+error: the server doesn't have a resource type "deviceclasses"
+```
+Try the following troubleshooting steps:
+
+1. Ensure that the `kube-scheduler` component has the `DynamicResourceAllocation`
+ feature gate enabled *and* uses the
+ [v1 configuration API](/docs/reference/config-api/kube-scheduler-config.v1/).
+ If you use a custom configuration, you might need to perform additional steps
+ to enable the `DynamicResources` plugin (see the sketch after this list).
+1. Restart the `kube-apiserver` component and the `kube-controller-manager`
+ component to propagate the API group changes.
+
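+If you use a custom scheduler configuration, enabling the plugin might look similar to this
+sketch (the profile name is illustrative):
+
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1
+kind: KubeSchedulerConfiguration
+profiles:
+- schedulerName: default-scheduler
+  plugins:
+    multiPoint:
+      enabled:
+      - name: DynamicResources
+```
+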
+## Install device drivers {#install-drivers}
+
+After you enable DRA for your cluster, you can install the drivers for your
+attached devices. For instructions, check the documentation of your device
+owner or the project that maintains the device drivers. The drivers that you
+install must be compatible with DRA.
+
+To verify that your installed drivers are working as expected, list
+ResourceSlices in your cluster:
+
+```shell
+kubectl get resourceslices
+```
+The output is similar to the following:
+
+```
+NAME NODE DRIVER POOL AGE
+cluster-1-device-pool-1-driver.example.com-lqx8x cluster-1-node-1 driver.example.com cluster-1-device-pool-1-r1gc 7s
+cluster-1-device-pool-2-driver.example.com-29t7b cluster-1-node-2 driver.example.com cluster-1-device-pool-2-446z 8s
+```
+
+## Create DeviceClasses {#create-deviceclasses}
+
+You can define categories of devices that your application operators can
+claim in workloads by creating
+{{< glossary_tooltip text="DeviceClasses" term_id="deviceclass" >}}. Some device
+driver providers might also instruct you to create DeviceClasses during driver
+installation.
+
+The ResourceSlices that your driver publishes contain information about the
+devices that the driver manages, such as capacity, metadata, and attributes. You
+can use {{< glossary_tooltip term_id="cel" >}} to filter for properties in your
+DeviceClasses, which can make finding devices easier for your workload
+operators.
+
+1. To find the device properties that you can select in DeviceClasses by using
+ CEL expressions, get the specification of a ResourceSlice:
+
+ ```shell
+ kubectl get resourceslice -o yaml
+ ```
+ The output is similar to the following:
+
+ ```yaml
+ apiVersion: resource.k8s.io/v1beta1
+ kind: ResourceSlice
+ # lines omitted for clarity
+ spec:
+ devices:
+ - basic:
+ attributes:
+ type:
+ string: gpu
+ capacity:
+ memory:
+ value: 64Gi
+ name: gpu-0
+ - basic:
+ attributes:
+ type:
+ string: gpu
+ capacity:
+ memory:
+ value: 64Gi
+ name: gpu-1
+ driver: driver.example.com
+ nodeName: cluster-1-node-1
+ # lines omitted for clarity
+ ```
+ You can also check the driver provider's documentation for available
+ properties and values.
+
+1. Review the following example DeviceClass manifest, which selects any device
+ that's managed by the `driver.example.com` device driver:
+
+ {{% code_sample file="dra/deviceclass.yaml" %}}
+
+1. Create the DeviceClass in your cluster:
+
+ ```shell
+ kubectl apply -f https://k8s.io/examples/dra/deviceclass.yaml
+ ```
+
+## Clean up {#clean-up}
+
+To delete the DeviceClass that you created in this task, run the following
+command:
+
+```shell
+kubectl delete -f https://k8s.io/examples/dra/deviceclass.yaml
+```
+
+## {{% heading "whatsnext" %}}
+
+* [Learn more about DRA](/docs/concepts/scheduling-eviction/dynamic-resource-allocation)
+* [Allocate Devices to Workloads with DRA](/docs/tasks/configure-pod-container/assign-resources/allocate-devices-dra)
\ No newline at end of file
diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md
index 7e46633d73942..d004152fcab6c 100644
--- a/content/en/docs/tasks/configure-pod-container/security-context.md
+++ b/content/en/docs/tasks/configure-pod-container/security-context.md
@@ -288,7 +288,7 @@ See the Pod's status:
kubectl get pod security-context-demo -o yaml
```
-You can see that the `status.containerStatuses[].user.linux` field exposes the process identitiy
+You can see that the `status.containerStatuses[].user.linux` field exposes the process identity
attached to the first container process.
```none
diff --git a/content/en/docs/tasks/network/reconfigure-default-service-ip-ranges.md b/content/en/docs/tasks/network/reconfigure-default-service-ip-ranges.md
index cfd0e68d7f406..e5cbe8bd0ca7d 100644
--- a/content/en/docs/tasks/network/reconfigure-default-service-ip-ranges.md
+++ b/content/en/docs/tasks/network/reconfigure-default-service-ip-ranges.md
@@ -66,7 +66,7 @@ We can categorize Service CIDR reconfiguration into the following scenarios:
replacing the default ServiceCIDR is a complex operation. If the new
ServiceCIDR does not overlap with the existing one, [it will require
renumbering all existing Services and changing the `kubernetes.default`
- service](#Illustrative Reconfiguration Steps). The case where the primary IP
+ service](#illustrative-reconfiguration-steps). The case where the primary IP
family also changes is even more complicated, and may require to change
multiple cluster components (kubelet, network plugins, etc.) to match the new
primary IP family.
diff --git a/content/en/docs/tutorials/configuration/provision-swap-memory.md b/content/en/docs/tutorials/configuration/provision-swap-memory.md
new file mode 100644
index 0000000000000..f5906bb490b97
--- /dev/null
+++ b/content/en/docs/tutorials/configuration/provision-swap-memory.md
@@ -0,0 +1,131 @@
+---
+reviewers:
+- lmktfy
+title: Configuring swap memory on Kubernetes nodes
+content_type: tutorial
+weight: 35
+min-kubernetes-server-version: "1.33"
+---
+
+
+
+This page provides an example of how to provision and configure swap memory on a Kubernetes node using kubeadm.
+
+
+
+## {{% heading "objectives" %}}
+
+* Provision swap memory on a Kubernetes node using kubeadm.
+* Learn to configure both encrypted and unencrypted swap.
+* Learn to enable swap on boot.
+
+## {{% heading "prerequisites" %}}
+
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+You need at least one worker node in your cluster, and that node must run a Linux operating system.
+This demo requires that the kubeadm tool is installed, following the steps outlined in the
+[kubeadm installation guide](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm).
+
+On each worker node where you will configure swap use, you need:
+* `fallocate`
+* `mkswap`
+* `swapon`
+
+For encrypted swap space (recommended), you also need:
+* `cryptsetup`
+
+
+
+
+## Install a swap-enabled cluster with kubeadm
+
+### Create a swap file and turn swap on
+
+If swap is not already enabled, you need to provision swap on the node.
+The following sections demonstrate how to create 4GiB of swap, in both the encrypted and unencrypted cases.
+
+{{< tabs name="Create a swap file and turn swap on" >}}
+
+{{% tab name="Setting up encrypted swap" %}}
+An encrypted swap file can be set up as follows.
+Bear in mind that this example uses the `cryptsetup` binary (which is available
+on most Linux distributions).
+
+```bash
+# Allocate storage and restrict access
+fallocate --length 4GiB /swapfile
+chmod 600 /swapfile
+
+# Create an encrypted device backed by the allocated storage
+cryptsetup --type plain --cipher aes-xts-plain64 --key-size 256 -d /dev/urandom open /swapfile cryptswap
+
+# Format the swap space
+mkswap /dev/mapper/cryptswap
+
+# Activate the swap space for paging
+swapon /dev/mapper/cryptswap
+```
+
+{{% /tab %}}
+
+{{% tab name="Setting up unencrypted swap" %}}
+An unencrypted swap file can be set up as follows.
+
+```bash
+# Allocate storage and restrict access
+fallocate --length 4GiB /swapfile
+chmod 600 /swapfile
+
+# Format the swap space
+mkswap /swapfile
+
+# Activate the swap space for paging
+swapon /swapfile
+```
+
+{{% /tab %}}
+
+{{< /tabs >}}
+
+#### Verify that swap is enabled
+
+You can verify that swap is enabled by using either the `swapon -s` command or the `free` command.
+
+Using `swapon -s`:
+```
+Filename Type Size Used Priority
+/dev/dm-0 partition 4194300 0 -2
+```
+
+Using `free -h`:
+```
+ total used free shared buff/cache available
+Mem: 3.8Gi 1.3Gi 249Mi 25Mi 2.5Gi 2.5Gi
+Swap: 4.0Gi 0B 4.0Gi
+```
+
+#### Enable swap on boot
+
+After setting up swap, to start the swap file at boot time,
+you typically either set up a systemd unit to activate (encrypted) swap, or you
+add a line similar to `/swapfile swap swap defaults 0 0` into `/etc/fstab`.
+
+Using systemd for swap activation allows the system to delay kubelet start until swap is available,
+if that is something you want to ensure.
+In a similar way, using systemd allows your server to leave swap active until kubelet
+(and, typically, your container runtime) have shut down.
+
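+For the encrypted case, one possible sketch (file and mapping names match the example above)
+combines an `/etc/crypttab` entry with an `/etc/fstab` entry:
+
+```
+# /etc/crypttab: re-create the encrypted mapping with a fresh random key at each boot
+cryptswap /swapfile /dev/urandom swap,cipher=aes-xts-plain64,size=256
+
+# /etc/fstab: activate the mapped device as swap
+/dev/mapper/cryptswap none swap defaults 0 0
+```
+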
+### Set up kubelet configuration
+
+After enabling swap on the node, the kubelet needs to be configured as follows:
+
+```yaml
+ # this fragment goes into the kubelet's configuration file
+ failSwapOn: false
+ memorySwap:
+ swapBehavior: LimitedSwap
+```
+
+For this configuration to take effect, the kubelet needs to be restarted.
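+
+For example, if the kubelet runs as a systemd service (as it typically does with kubeadm),
+you might restart it with:
+
+```shell
+sudo systemctl restart kubelet
+```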
diff --git a/content/en/docs/tutorials/kubernetes-basics/_index.md b/content/en/docs/tutorials/kubernetes-basics/_index.md
index 0885da9b50282..2edb6729d32ad 100644
--- a/content/en/docs/tutorials/kubernetes-basics/_index.md
+++ b/content/en/docs/tutorials/kubernetes-basics/_index.md
@@ -35,49 +35,45 @@ container orchestration, combined with best-of-breed ideas from the community.
## Kubernetes Basics Modules
-
-
+
+{{< tutorials/modules >}}
+ {{< tutorials/module
+ path="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347"
+ alt="Module 1"
+ title="1. Create a Kubernetes cluster" >}}
-
+ {{< tutorials/module
+ path="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347"
+ alt="Module 2"
+ title="2. Deploy an app" >}}
+
+ {{< tutorials/module
+ path="/docs/tutorials/kubernetes-basics/explore/explore-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347"
+ alt="Module 3"
+ title="3. Explore your app" >}}
+
+ {{< tutorials/module
+ path="/docs/tutorials/kubernetes-basics/expose/expose-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347"
+ alt="Module 4"
+ title="4. Expose your app publicly" >}}
+
+ {{< tutorials/module
+ path="/docs/tutorials/kubernetes-basics/scale/scale-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347"
+ alt="Module 5"
+ title="5. Scale up your app" >}}
+
+ {{< tutorials/module
+ path="/docs/tutorials/kubernetes-basics/update/update-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347"
+ alt="Module 6"
+ title="6. Update your app" >}}
+{{< /tutorials/modules >}}
## {{% heading "whatsnext" %}}
-* Tutorial [Using Minikube to Create a
-Cluster](/docs/tutorials/kubernetes-basics/create-cluster/)
\ No newline at end of file
+* Tutorial [Using Minikube to Create a Cluster](/docs/tutorials/kubernetes-basics/create-cluster/)
diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.md b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.md
index 2b2f5b65b614b..8162bfd97dace 100644
--- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.md
+++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.md
@@ -44,18 +44,14 @@ kubectl expose deployment/kubernetes-bootcamp --type="LoadBalancer" --port 8080
## Scaling overview
-
-
-
-
-
-
-
-
-
-
-
-
+{{< tutorials/carousel id="myCarousel" interval="3000" >}}
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_05_scaling1.svg"
+ active="true" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_05_scaling2.svg" >}}
+{{< /tutorials/carousel >}}
{{% alert %}}
_Scaling is accomplished by changing the number of replicas in a Deployment._
@@ -114,6 +110,7 @@ Two important columns of this output are:
* _DESIRED_ displays the desired number of replicas of the application, which you
define when you create the Deployment. This is the desired state.
* _CURRENT_ displays how many replicas are currently running.
+
Next, let’s scale the Deployment to 4 replicas. We’ll use the `kubectl scale` command,
followed by the Deployment type, name and desired number of instances:
@@ -229,4 +226,4 @@ This confirms that 2 Pods were terminated.
* Tutorial
[Performing a Rolling Update](/docs/tutorials/kubernetes-basics/update/update-intro/).
* Learn more about [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/).
-* Learn more about [Autoscaling](/docs/concepts/workloads/autoscaling/).
\ No newline at end of file
+* Learn more about [Autoscaling](/docs/concepts/workloads/autoscaling/).
diff --git a/content/en/docs/tutorials/kubernetes-basics/update/update-intro.md b/content/en/docs/tutorials/kubernetes-basics/update/update-intro.md
index 4ac55d9387cd7..9be42abcc4958 100644
--- a/content/en/docs/tutorials/kubernetes-basics/update/update-intro.md
+++ b/content/en/docs/tutorials/kubernetes-basics/update/update-intro.md
@@ -31,24 +31,20 @@ versioned and any Deployment update can be reverted to a previous (stable) versi
## Rolling updates overview
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+{{< tutorials/carousel id="myCarousel" interval="3000" >}}
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates1.svg"
+ active="true" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates2.svg" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates3.svg" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates4.svg" >}}
+{{< /tutorials/carousel >}}
{{% alert %}}
_If a Deployment is exposed publicly, the Service will load-balance the traffic
@@ -212,4 +208,4 @@ kubectl delete deployments/kubernetes-bootcamp services/kubernetes-bootcamp
## {{% heading "whatsnext" %}}
-* Learn more about [Deployments](/docs/concepts/workloads/controllers/deployment/).
\ No newline at end of file
+* Learn more about [Deployments](/docs/concepts/workloads/controllers/deployment/).
diff --git a/content/en/examples/dra/deviceclass.yaml b/content/en/examples/dra/deviceclass.yaml
new file mode 100644
index 0000000000000..dcad8e488bdfe
--- /dev/null
+++ b/content/en/examples/dra/deviceclass.yaml
@@ -0,0 +1,9 @@
+apiVersion: resource.k8s.io/v1beta2
+kind: DeviceClass
+metadata:
+ name: example-device-class
+spec:
+ selectors:
+ - cel:
+ expression: |-
+ device.driver == "driver.example.com"
diff --git a/content/en/examples/dra/dra-example-job.yaml b/content/en/examples/dra/dra-example-job.yaml
new file mode 100644
index 0000000000000..4548406277228
--- /dev/null
+++ b/content/en/examples/dra/dra-example-job.yaml
@@ -0,0 +1,34 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: example-dra-job
+spec:
+ completions: 10
+ parallelism: 2
+ template:
+ spec:
+ restartPolicy: Never
+ containers:
+ - name: container0
+ image: ubuntu:24.04
+ command: ["sleep", "9999"]
+ resources:
+ claims:
+ - name: separate-gpu-claim
+ - name: container1
+ image: ubuntu:24.04
+ command: ["sleep", "9999"]
+ resources:
+ claims:
+ - name: shared-gpu-claim
+ - name: container2
+ image: ubuntu:24.04
+ command: ["sleep", "9999"]
+ resources:
+ claims:
+ - name: shared-gpu-claim
+ resourceClaims:
+ - name: separate-gpu-claim
+ resourceClaimTemplateName: example-resource-claim-template
+ - name: shared-gpu-claim
+ resourceClaimName: example-resource-claim
diff --git a/content/en/examples/dra/resourceclaim.yaml b/content/en/examples/dra/resourceclaim.yaml
new file mode 100644
index 0000000000000..88c031d5a0d52
--- /dev/null
+++ b/content/en/examples/dra/resourceclaim.yaml
@@ -0,0 +1,16 @@
+apiVersion: resource.k8s.io/v1beta2
+kind: ResourceClaim
+metadata:
+ name: example-resource-claim
+spec:
+ devices:
+ requests:
+ - name: single-gpu-claim
+ exactly:
+ deviceClassName: example-device-class
+ allocationMode: All
+ selectors:
+ - cel:
+ expression: |-
+ device.attributes["driver.example.com"].type == "gpu" &&
+ device.capacity["driver.example.com"].memory == quantity("64Gi")
diff --git a/content/en/examples/dra/resourceclaimtemplate.yaml b/content/en/examples/dra/resourceclaimtemplate.yaml
new file mode 100644
index 0000000000000..83e9c63b1b627
--- /dev/null
+++ b/content/en/examples/dra/resourceclaimtemplate.yaml
@@ -0,0 +1,16 @@
+apiVersion: resource.k8s.io/v1beta2
+kind: ResourceClaimTemplate
+metadata:
+ name: example-resource-claim-template
+spec:
+ spec:
+ devices:
+ requests:
+ - name: gpu-claim
+ exactly:
+ deviceClassName: example-device-class
+ selectors:
+ - cel:
+ expression: |-
+ device.attributes["driver.example.com"].type == "gpu" &&
+ device.capacity["driver.example.com"].memory == quantity("64Gi")
diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go
index b3597346470ef..109192a58aaa2 100644
--- a/content/en/examples/examples_test.go
+++ b/content/en/examples/examples_test.go
@@ -600,6 +600,12 @@ func TestExampleObjectSchemas(t *testing.T) {
"node-problem-detector-configmap": {&apps.DaemonSet{}},
"termination": {&api.Pod{}},
},
+ "dra": {
+ "deviceclass": {&resource.DeviceClass{}},
+ "resourceclaim": {&resource.ResourceClaim{}},
+ "resourceclaimtemplate": {&resource.ResourceClaimTemplate{}},
+ "dra-example-job": {&batch.Job{}},
+ },
"pods": {
"commands": {&api.Pod{}},
"image-volumes": {&api.Pod{}},
diff --git a/content/hi/docs/setup/production-environment/container-runtimes.md b/content/hi/docs/setup/production-environment/container-runtimes.md
index 73d9d437fd0f0..361a9d8e8b925 100644
--- a/content/hi/docs/setup/production-environment/container-runtimes.md
+++ b/content/hi/docs/setup/production-environment/container-runtimes.md
@@ -386,6 +386,6 @@ cgroup_manager = "cgroupfs"
### मिरांटिस कंटेनर रनटाइम {#mcr}
-[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) एक व्यावसायिक रूप से है उपलब्ध कंटेनर रनटाइम जिसे पहले डॉकर एंटरप्राइज एडिशन के नाम से जाना जाता था।
+[Mirantis Container Runtime](https://docs.mirantis.com/mcr/25.0/overview.html) (MCR) एक व्यावसायिक रूप से है उपलब्ध कंटेनर रनटाइम जिसे पहले डॉकर एंटरप्राइज एडिशन के नाम से जाना जाता था।
आप खुले स्रोत का उपयोग करके कुबेरनेट्स के साथ मिरांटिस कंटेनर रनटाइम का उपयोग कर सकते हैं [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd) घटक, MCR के साथ शामिल है।
\ No newline at end of file
diff --git a/content/ja/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/ja/docs/concepts/scheduling-eviction/node-pressure-eviction.md
index 4575930118a3e..15ac7d408859c 100644
--- a/content/ja/docs/concepts/scheduling-eviction/node-pressure-eviction.md
+++ b/content/ja/docs/concepts/scheduling-eviction/node-pressure-eviction.md
@@ -6,9 +6,8 @@ weight: 100
{{}}
-{{< feature-state feature_gate_name="KubeletSeparateDiskGC" >}}
-
{{}}
+{{< feature-state feature_gate_name="KubeletSeparateDiskGC" >}}
_分割イメージファイルシステム_ 機能は、`containerfs`ファイルシステムのサポートを有効にし、いくつかの新しい退避シグナル、閾値、メトリクスを追加します。
`containerfs`を使用するには、Kubernetesリリース v{{< skew currentVersion >}}で`KubeletSeparateDiskGC`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にする必要があります。
現在、`containerfs`ファイルシステムのサポートを提供しているのはCRI-O(v1.29以降)のみです。
diff --git a/content/pl/_index.html b/content/pl/_index.html
index e5b49e2651dc5..5d33ee980f838 100644
--- a/content/pl/_index.html
+++ b/content/pl/_index.html
@@ -6,31 +6,31 @@
priority: 1.0
---
-{{< blocks/section id="oceanNodes" >}}
-{{% blocks/feature image="flower" %}}
+{{< blocks/section class="k8s-overview" >}}
+{{% blocks/feature image="flower" id="feature-primary" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), znany też jako K8s, to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach.
Kubernetes grupuje kontenery, które są częścią jednej aplikacji, w logicznie grupy, ułatwiając ich odnajdywanie i zarządzanie nimi. Korzysta z [piętnastoletniego doświadczenia Google w uruchamianiu wielkoskalowych serwisów](https://queue.acm.org/detail.cfm?id=2898444) i łączy je z najlepszymi pomysłami i praktykami wypracowanymi przez społeczność.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
-#### W skali globalnej
+#### W skali globalnej {#planet-scale}
Zaprojektowany według tych samych zasad, które pozwalają Google uruchamiać miliardy kontenerów każdego tygodnia, Kubernetes może skalować się bez konieczności powiększania zespołu administratorów.
{{% /blocks/feature %}}
{{% blocks/feature image="blocks" %}}
-#### Nigdy za mały
+#### Nigdy za mały {#never-outgrow}
Niezależnie, czy prowadzisz tylko testy, czy globalny koncern, dzięki elastyczności Kubernetesa Twoje aplikacje mogą być instalowane w łatwy i nieprzerwany sposób, bez względu na to, jak bardzo skomplikowane są Twoje wymagania.
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
-#### K8s działa w każdym środowisku
+#### K8s działa w każdym środowisku {#run-k8s-anywhere}
-Kubernetes jako projekt open-source daje Ci wolność wyboru ⏤ skorzystaj z prywatnego centrum danych, infrastruktury hybrydowej lub chmury publicznej. Bez wysiłku możesz przenieść swoje aplikacje tam, gdzie są najbardziej potrzebne.
+Kubernetes jako projekt open-source daje Ci wolność wyboru - skorzystaj z prywatnego centrum danych, infrastruktury hybrydowej lub chmury publicznej. Bez wysiłku możesz przenieść swoje aplikacje tam, gdzie są najbardziej potrzebne.
Żeby pobrać Kubernetesa, odwiedź sekcję [pobierania](/releases/download/).
@@ -45,11 +45,9 @@
The Challenges of Migrating 150+ Microservices to Kubernetes
diff --git a/content/pl/docs/concepts/overview/working-with-objects/field-selectors.md b/content/pl/docs/concepts/overview/working-with-objects/field-selectors.md
new file mode 100644
index 0000000000000..80c663fd1df16
--- /dev/null
+++ b/content/pl/docs/concepts/overview/working-with-objects/field-selectors.md
@@ -0,0 +1,83 @@
+---
+title: Selektory pól
+content_type: concept
+weight: 70
+---
+
+Selektory pól (_Field selectors_) pozwalają na wybór {{< glossary_tooltip text="obiektów" term_id="object" >}}
+Kubernetesa na podstawie wartości jednego lub kilku pól zasobów. Oto kilka przykładów zapytań z użyciem selektora pól:
+
+* `metadata.name=my-service`
+* `metadata.namespace!=default`
+* `status.phase=Pending`
+
+Polecenie `kubectl` wybiera wszystkie Pody, dla których wartość pola [`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) to `Running`:
+
+```shell
+kubectl get pods --field-selector status.phase=Running
+```
+
+{{< note >}}
+Selektory pól to zasadniczo *filtry* zasobów. Domyślnie nie stosuje się żadnych selektorów/filtrów, co oznacza, że wszystkie zasoby określonego typu są wybierane. Dzięki temu zapytania `kubectl` `kubectl get pods` i `kubectl get pods --field-selector ""` są równoważne.
+{{< /note >}}
+
+## Obsługiwane pola {#supported-fields}
+
+Obsługiwane selektory pól różnią się w zależności od typu zasobu Kubernetesa. Wszystkie typy zasobów obsługują pola `metadata.name` oraz `metadata.namespace`. Użycie nieobsługiwanych selektorów pól skutkuje błędem. Na przykład:
+
+```shell
+kubectl get ingress --field-selector foo.bar=baz
+```
+```
+Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.bar" is not a known field selector: only "metadata.name", "metadata.namespace"
+```
+
+### Lista obsługiwanych pól {#list-of-supported-fields}
+
+| Rodzaj | Pola |
+| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Pod | `spec.nodeName` `spec.restartPolicy` `spec.schedulerName` `spec.serviceAccountName` `spec.hostNetwork` `status.phase` `status.podIP` `status.nominatedNodeName` |
+| Event | `involvedObject.kind` `involvedObject.namespace` `involvedObject.name` `involvedObject.uid` `involvedObject.apiVersion` `involvedObject.resourceVersion` `involvedObject.fieldPath` `reason` `reportingComponent` `source` `type` |
+| Secret | `type` |
+| Namespace | `status.phase` |
+| ReplicaSet | `status.replicas` |
+| ReplicationController | `status.replicas` |
+| Job | `status.successful` |
+| Node | `spec.unschedulable` |
+| CertificateSigningRequest | `spec.signerName` |
+
+### Pola zasobów niestandardowych {#custom-resources-fields}
+
+Wszystkie niestandardowe typy zasobów obsługują pola `metadata.name` oraz `metadata.namespace`.
+
+Dodatkowo, pole `spec.versions[*].selectableFields` w {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}} określa,
+które inne pola w zasobie niestandardowym mogą być używane w selektorach pól. Zobacz
+[selectable fields for custom resources](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#crd-selectable-fields) aby uzyskać więcej informacji o tym, jak używać selektorów pól z CustomResourceDefinitions.
+
+## Obsługiwane operatory {#supported-operators}
+
+Możesz używać operatorów `=`, `==` i `!=` z selektorami pól (`=` i `==` oznaczają to samo). Na przykład ta komenda `kubectl` wybiera wszystkie usługi Kubernetesa, które nie znajdują się w przestrzeni nazw `default`:
+
+```shell
+kubectl get services --all-namespaces --field-selector metadata.namespace!=default
+```
+{{< note >}}
+Operatory dla zbiorów ([Set-based operators](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement))
+(`in`, `notin`, `exists`) nie są obsługiwane dla selektorów pól.
+{{< /note >}}
+
+## Złożone selektory {#chained-selectors}
+
+Podobnie jak [etykieta](/docs/concepts/overview/working-with-objects/labels) i inne selektory, selektory pól mogą być łączone w postaci listy rozdzielanej przecinkami. To polecenie `kubectl` wybiera wszystkie Pody, dla których `status.phase` nie jest równe `Running`, a pole `spec.restartPolicy` jest równe `Always`:
+
+```shell
+kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
+```
+
+## Wiele typów zasobów {#multiple-resource-types}
+
+Możesz używać selektorów pól w różnych typach zasobów. To polecenie `kubectl` wybiera wszystkie obiekty typu Statefulset i Service, które nie znajdują się w przestrzeni nazw `default`:
+
+```shell
+kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default
+```
diff --git a/content/pl/docs/contribute/generate-ref-docs/kubectl.md b/content/pl/docs/contribute/generate-ref-docs/kubectl.md
index 698790865ab51..90a5f3c38f8fd 100644
--- a/content/pl/docs/contribute/generate-ref-docs/kubectl.md
+++ b/content/pl/docs/contribute/generate-ref-docs/kubectl.md
@@ -94,7 +94,7 @@ git pull https://github.com/kubernetes/kubernetes {{< skew prevMinorVersion >}}.
```
Jeśli nie musisz edytować kodu źródłowego `kubectl`, postępuj zgodnie z
-instrukcjami dotyczącymi [Ustawiania zmiennych kompilacji](#setting-build-variables).
+instrukcjami dotyczącymi [Ustawiania zmiennych kompilacji](#set-build-variables).
## Edytowanie kodu źródłowego kubectl {#edit-the-kubectl-source-code}
diff --git a/content/pl/docs/tutorials/kubernetes-basics/_index.md b/content/pl/docs/tutorials/kubernetes-basics/_index.md
index 750c86d6e6feb..815a71c1f60ba 100644
--- a/content/pl/docs/tutorials/kubernetes-basics/_index.md
+++ b/content/pl/docs/tutorials/kubernetes-basics/_index.md
@@ -35,50 +35,45 @@ nagromadzonego przez Google doświadczenia w zarządzaniu kontenerami, w połąc
## Podstawy Kubernetesa - Moduły {#kubernetes-basics-modules}
-
-``
+
+{{< tutorials/modules >}}
+ {{< tutorials/module
+ path="/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347"
+ alt="Moduł 1"
+ title="1. Jak użyć Minikube do stworzenia klastra" >}}
-
+ {{< tutorials/module
+ path="/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347"
+ alt="Moduł 2"
+ title="2. Jak użyć kubectl do tworzenia Deploymentu" >}}
+ {{< tutorials/module
+ path="/pl/docs/tutorials/kubernetes-basics/explore/explore-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347"
+ alt="Moduł 3"
+ title="3. Pody i Węzły" >}}
+
+ {{< tutorials/module
+ path="/pl/docs/tutorials/kubernetes-basics/expose/expose-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347"
+ alt="Moduł 4"
+ title="4. Jak używać Service do udostępniania aplikacji" >}}
+
+ {{< tutorials/module
+ path="/pl/docs/tutorials/kubernetes-basics/scale/scale-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347"
+ alt="Moduł 5"
+ title="5. Uruchamianie wielu instancji aplikacji" >}}
+
+ {{< tutorials/module
+ path="/pl/docs/tutorials/kubernetes-basics/update/update-intro/"
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347"
+ alt="Moduł 6"
+ title="6. Aktualizacje Rolling Update" >}}
+{{< /tutorials/modules >}}
## {{% heading "whatsnext" %}}
* Samouczek [Jak użyć Minikube do stworzenia klastra](/pl/docs/tutorials/kubernetes-basics/create-cluster/)
-
\ No newline at end of file
diff --git a/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.md b/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.md
index d284f43354ef8..02984c46f6b0b 100644
--- a/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.md
+++ b/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.md
@@ -61,7 +61,7 @@ omawiają skalowanie i aktualizowanie Deploymentów.
Na potrzeby pierwszej instalacji użyjesz aplikacji hello-node zapakowaną w kontener Docker-a,
która korzysta z NGINXa i powtarza wszystkie wysłane do niej zapytania. (Jeśli jeszcze nie
próbowałeś stworzyć aplikacji hello-node i uruchomić za pomocą kontenerów, możesz spróbować
-teraz, kierując się instrukcjami samouczka [samouczku Hello Minikube](/docs/tutorials/hello-minikube/).
+teraz, kierując się instrukcjami samouczka [samouczku Hello Minikube](/docs/tutorials/hello-minikube/).)
Musisz mieć zainstalowane narzędzie kubectl. Jeśli potrzebujesz
go zainstalować, odwiedź [install tools](/docs/tasks/tools/#kubectl).
diff --git a/content/pl/docs/tutorials/kubernetes-basics/scale/scale-intro.md b/content/pl/docs/tutorials/kubernetes-basics/scale/scale-intro.md
index 71df034bf9856..1c5a2396cdf99 100644
--- a/content/pl/docs/tutorials/kubernetes-basics/scale/scale-intro.md
+++ b/content/pl/docs/tutorials/kubernetes-basics/scale/scale-intro.md
@@ -44,18 +44,14 @@ kubectl expose deployment/kubernetes-bootcamp --type="LoadBalancer" --port 8080
## Ogólnie o skalowaniu {#scaling-overview}
-
-
-
-
-
-
-
-
-
-
-
-
+{{< tutorials/carousel id="myCarousel" interval="3000" >}}
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_05_scaling1.svg"
+ active="true" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_05_scaling2.svg" >}}
+{{< /tutorials/carousel >}}
{{% alert %}}
_Skalowanie polega na zmianie liczby replik w ramach Deploymentu._
@@ -113,9 +109,10 @@ Dwie istotne kolumny tego wyniku to:
* _DESIRED_ pokazuje żądaną liczbę replik aplikacji, którą
określasz podczas tworzenia Deploymentu. Jest to pożądany stan.
-* _CURRENT_ pokazuje, ile replik obecnie działa. Następnie
- skalujemy Deployment do 4 replik. Użyjemy polecenia `kubectl scale`,
- po którym podajemy typ Deployment, nazwę i pożądaną liczbę instancji:
+* _CURRENT_ pokazuje, ile replik obecnie działa.
+
+Następnie skalujemy Deployment do 4 replik. Użyjemy polecenia
+`kubectl scale`, po którym podajemy typ Deployment, nazwę i pożądaną liczbę instancji:
```shell
kubectl scale deployments/kubernetes-bootcamp --replicas=4
@@ -229,4 +226,4 @@ To potwierdza, że 2 Pody zostały zakończone.
* Samouczek [Aktualizacje Rolling Update](/docs/tutorials/kubernetes-basics/update/update-intro/).
* Dowiedz się więcej o [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/).
-* Dowiedz się więcej o [Autoskalowaniu](/docs/concepts/workloads/autoscaling/).
\ No newline at end of file
+* Dowiedz się więcej o [Autoskalowaniu](/docs/concepts/workloads/autoscaling/).
diff --git a/content/pl/docs/tutorials/kubernetes-basics/update/update-intro.md b/content/pl/docs/tutorials/kubernetes-basics/update/update-intro.md
index 9fff76ded2301..a560a4fa6d99d 100644
--- a/content/pl/docs/tutorials/kubernetes-basics/update/update-intro.md
+++ b/content/pl/docs/tutorials/kubernetes-basics/update/update-intro.md
@@ -31,24 +31,20 @@ aktualizacja ma nadany numer wersji i każdy Deployment może być wycofany do w
## Ogólnie o Rolling updates {#rolling-updates-overview}
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+{{< tutorials/carousel id="myCarousel" interval="3000" >}}
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates1.svg"
+ active="true" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates2.svg" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates3.svg" >}}
+
+ {{< tutorials/carousel-item
+ image="/docs/tutorials/kubernetes-basics/public/images/module_06_rollingupdates4.svg" >}}
+{{< /tutorials/carousel >}}
{{% alert %}}
_Jeśli Deployment jest udostępniony publicznie, Serwis będzie kierował ruch w trakcie aktualizacji tylko do Podów, które są aktualnie dostępne._
@@ -212,4 +208,4 @@ kubectl delete deployments/kubernetes-bootcamp services/kubernetes-bootcamp
## {{% heading "whatsnext" %}}
-* Dowiedz się więcej o [Deploymentach](/docs/concepts/workloads/controllers/deployment/).
\ No newline at end of file
+* Dowiedz się więcej o [Deploymentach](/docs/concepts/workloads/controllers/deployment/).
diff --git a/content/pt-br/docs/tasks/run-application/run-replicated-stateful-application.md b/content/pt-br/docs/tasks/run-application/run-replicated-stateful-application.md
new file mode 100644
index 0000000000000..d50d1354214ba
--- /dev/null
+++ b/content/pt-br/docs/tasks/run-application/run-replicated-stateful-application.md
@@ -0,0 +1,498 @@
+---
+title: Execute uma Aplicação Com Estado e Replicada
+content_type: tutorial
+weight: 30
+---
+
+
+
+Esta página mostra como executar uma aplicação com estado e replicada usando um
+{{< glossary_tooltip term_id="statefulset" >}}.
+Esta aplicação é um banco de dados MySQL replicado. A topologia de exemplo possui
+um único servidor primário e múltiplas réplicas, utilizando replicação assíncrona
+baseada em linhas.
+
+{{< note >}}
+**Esta não é uma configuração para produção**. As configurações do MySQL permanecem nos padrões inseguros
+para manter o foco nos padrões gerais de execução de aplicações com estado no Kubernetes.
+{{< /note >}}
+
+## {{% heading "prerequisites" %}}
+
+- {{< include "task-tutorial-prereqs.md" >}}
+- {{< include "default-storage-class-prereqs.md" >}}
+- Este tutorial assume que você está familiarizado com
+ [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
+ e [StatefulSets](/docs/concepts/workloads/controllers/statefulset/),
+ assim como outros conceitos centrais como [Pods](/docs/concepts/workloads/pods/),
+ [Services](/docs/concepts/services-networking/service/) e
+ [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
+- Algum conhecimento prévio de MySQL ajuda, mas este tutorial busca apresentar
+ padrões gerais que devem ser úteis para outros sistemas.
+- Você está utilizando o namespace padrão ou outro namespace que não contenha objetos conflitantes.
+- Você precisa ter uma CPU compatível com AMD64.
+
+## {{% heading "objectives" %}}
+
+- Implantar uma topologia MySQL replicada com um StatefulSet.
+- Enviar tráfego de cliente MySQL.
+- Observar a resistência a indisponibilidades.
+- Escalonar o StatefulSet para mais ou para menos réplicas.
+
+
+
+## Implantar o MySQL
+
+A instalação de exemplo do MySQL consiste em um ConfigMap, dois Services
+e um StatefulSet.
+
+### Criar um ConfigMap {#configmap}
+
+Crie o ConfigMap a partir do seguinte arquivo de configuração YAML:
+
+{{% code_sample file="application/mysql/mysql-configmap.yaml" %}}
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/mysql/mysql-configmap.yaml
+```
+
+Este ConfigMap fornece substituições para o `my.cnf` que permitem controlar independentemente
+a configuração no servidor MySQL primário e em suas réplicas.
+Neste caso, você deseja que o servidor primário possa disponibilizar logs de replicação para as réplicas
+e que as réplicas rejeitem qualquer escrita que não venha por meio da replicação.
+
+Não há nada de especial no próprio ConfigMap que faça com que diferentes
+partes sejam aplicadas a diferentes Pods.
+Cada Pod decide qual parte utilizar durante sua inicialização,
+com base nas informações fornecidas pelo controlador StatefulSet.
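+
+Se quiser conferir o conteúdo aplicado, um comando como o seguinte deve exibir as
+seções `primary.cnf` e `replica.cnf` definidas no arquivo acima:
+
+```shell
+kubectl get configmap mysql -o yaml
+```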
+
+### Criar Services {#services}
+
+Crie os Services a partir do seguinte arquivo de configuração YAML:
+
+{{% code_sample file="application/mysql/mysql-services.yaml" %}}
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml
+```
+
+O Service headless fornece um local para as entradas de DNS que o
+{{< glossary_tooltip text="controlador" term_id="controller" >}} do StatefulSet cria para cada
+Pod que faz parte do conjunto.
+Como o Service headless se chama `mysql`, os Pods são acessíveis por meio da resolução de `<nome-do-pod>.mysql`
+a partir de qualquer outro Pod no mesmo cluster e namespace do Kubernetes.
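+
+Se quiser verificar essa resolução de DNS depois que o StatefulSet (criado mais adiante)
+estiver em execução, um esboço possível é executar um Pod temporário no mesmo namespace
+com uma imagem que inclua `nslookup` (aqui, `busybox:1.28`, apenas como exemplo):
+
+```shell
+kubectl run dns-test --image=busybox:1.28 -i --rm --restart=Never -- nslookup mysql-0.mysql
+```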
+
+O Service de cliente, chamado `mysql-read`, é um Service normal com seu próprio IP de cluster,
+que distribui as conexões entre todos os Pods MySQL que estejam prontos (Ready).
+O conjunto de endpoints potenciais inclui o servidor MySQL primário e todas as réplicas.
+
+Observe que apenas consultas de leitura podem utilizar o Service de cliente com balanceamento de carga.
+Como existe apenas um servidor MySQL primário, os clientes devem se conectar diretamente
+ao Pod MySQL primário (por meio de sua entrada DNS no Service headless) para executar operações de escrita.
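+
+Depois que o StatefulSet (criado a seguir) estiver em execução, essa distinção fica
+assim, por exemplo (usando a mesma imagem de cliente `mysql:5.7` dos passos adiante):
+
+```shell
+# Consultas de leitura podem usar o Service mysql-read (qualquer instância pronta)
+kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
+  mysql -h mysql-read -e "SELECT @@server_id"
+
+# Operações de escrita devem se conectar diretamente ao primário (mysql-0.mysql);
+# aqui apenas confirmamos a conexão consultando o server-id do primário
+kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
+  mysql -h mysql-0.mysql -e "SELECT @@server_id"
+```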
+
+### Criar o StatefulSet {#statefulset}
+
+Por fim, crie o StatefulSet a partir do seguinte arquivo de configuração YAML:
+
+{{% code_sample file="application/mysql/mysql-statefulset.yaml" %}}
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/mysql/mysql-statefulset.yaml
+```
+
+Você pode acompanhar o progresso da inicialização executando:
+
+```shell
+kubectl get pods -l app=mysql --watch
+```
+
+Após algum tempo, você deverá ver os 3 Pods com o status `Running`:
+
+```
+NAME READY STATUS RESTARTS AGE
+mysql-0 2/2 Running 0 2m
+mysql-1 2/2 Running 0 1m
+mysql-2 2/2 Running 0 1m
+```
+
+Pressione **Ctrl+C** para cancelar o watch.
+
+{{< note >}}
+Se você não observar nenhum progresso, certifique-se de que há um provisionador dinâmico
+de PersistentVolume habilitado, conforme mencionado nos [pré-requisitos](#antes-de-você-começar).
+{{< /note >}}
+
+Este manifesto utiliza diversas técnicas para gerenciar Pods com estado como parte de um StatefulSet.
+A próxima seção destaca algumas dessas técnicas para explicar o que acontece à medida que o StatefulSet cria os Pods.
+
+## Entendendo a inicialização de Pods com estado
+
+O controlador de StatefulSet inicia os Pods um de cada vez, na ordem do seu índice ordinal.
+Ele aguarda até que cada Pod reporte estar Ready antes de iniciar o próximo.
+
+Além disso, o controlador atribui a cada Pod um nome único e estável no formato
+`<nome-do-statefulset>-<índice-ordinal>`, o que resulta em Pods chamados `mysql-0`, `mysql-1` e `mysql-2`.
+
+O template de Pod no manifesto do StatefulSet acima aproveita essas propriedades
+para realizar a inicialização ordenada da replicação do MySQL.
+
+### Gerando configuração
+
+Antes de iniciar qualquer um dos contêineres especificados no Pod, o Pod executa primeiro todos os
+[contêineres de inicialização](/docs/concepts/workloads/pods/init-containers/) na ordem definida.
+
+O primeiro contêiner de inicialização, chamado `init-mysql`, gera arquivos de configuração
+especiais do MySQL com base no índice ordinal.
+
+O script determina seu próprio índice ordinal extraindo-o do final do nome do Pod, que é retornado pelo comando `hostname`.
+Em seguida, ele salva o ordinal (com um deslocamento numérico para evitar valores reservados)
+em um arquivo chamado `server-id.cnf` no diretório `conf.d` do MySQL.
+Isso traduz a identidade única e estável fornecida pelo StatefulSet
+para o domínio dos IDs de servidor do MySQL, que exigem as mesmas propriedades.
+
+O script no contêiner `init-mysql` também aplica `primary.cnf` ou
+`replica.cnf` do ConfigMap, copiando o conteúdo para o diretório `conf.d`.
+Como a topologia de exemplo consiste em um único servidor MySQL primário e qualquer número de réplicas,
+o script atribui o ordinal `0` como o servidor primário, e todos os demais como réplicas.
+Combinado com a [garantia de ordem de implantação](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees)
+do controlador StatefulSet, isso garante que o servidor MySQL primário esteja Ready antes de criar as réplicas,
+para que elas possam começar a replicar.
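+
+Uma forma de conferir o resultado, supondo que os Pods já estejam em execução, é
+listar os arquivos gerados no diretório `conf.d` de uma réplica:
+
+```shell
+# Em mysql-1 (uma réplica), deve aparecer replica.cnf e server-id.cnf
+kubectl exec mysql-1 -c mysql -- ls /etc/mysql/conf.d
+# O server-id deve ser 101 (100 + índice ordinal)
+kubectl exec mysql-1 -c mysql -- cat /etc/mysql/conf.d/server-id.cnf
+```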
+
+### Clonando dados existentes
+
+De modo geral, quando um novo Pod entra no conjunto como réplica,
+ele deve assumir que o servidor MySQL primário pode já conter dados.
+Também deve considerar que os logs de replicação podem não cobrir todo o histórico desde o início.
+Essas suposições conservadoras são fundamentais para permitir que um StatefulSet em execução
+possa ser escalonado para mais ou para menos ao longo do tempo, em vez de ficar limitado ao seu tamanho inicial.
+
+O segundo contêiner de inicialização, chamado `clone-mysql`, realiza uma operação de clonagem em um Pod réplica
+na primeira vez que ele é iniciado em um PersistentVolume vazio.
+Isso significa que ele copia todos os dados existentes de outro Pod em execução,
+de modo que seu estado local fique consistente o suficiente para começar a replicar a partir do servidor primário.
+
+O próprio MySQL não fornece um mecanismo para isso, então o exemplo utiliza uma ferramenta
+open source popular chamada Percona XtraBackup.
+Durante a clonagem, o servidor MySQL de origem pode sofrer redução de desempenho.
+Para minimizar o impacto no servidor MySQL primário, o script instrui cada Pod a clonar a partir do Pod cujo índice ordinal é um a menos.
+Isso funciona porque o controlador do StatefulSet sempre garante que o Pod `N` esteja Ready antes de iniciar o Pod `N+1`.
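+
+Se quiser acompanhar essa etapa, os logs do contêiner de inicialização `clone-mysql`
+de uma réplica recém-criada devem mostrar a clonagem a partir do peer anterior:
+
+```shell
+kubectl logs mysql-2 -c clone-mysql
+```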
+
+### Iniciando a replicação
+
+Após a conclusão bem-sucedida dos contêineres de inicialização, os contêineres regulares são executados.
+Os Pods MySQL consistem em um contêiner `mysql`, que executa o servidor `mysqld`,
+e um contêiner `xtrabackup`, que atua como um [sidecar](/blog/2015/06/the-distributed-system-toolkit-patterns).
+
+O sidecar `xtrabackup` analisa os arquivos de dados clonados e determina se
+é necessário inicializar a replicação do MySQL na réplica.
+Se for o caso, ele aguarda o `mysqld` estar pronto e então executa os comandos
+`CHANGE MASTER TO` e `START SLAVE` com os parâmetros de replicação extraídos dos arquivos clonados pelo XtraBackup.
+
+Assim que uma réplica inicia a replicação, ela memoriza seu servidor MySQL primário e
+reconecta-se automaticamente caso o servidor reinicie ou a conexão seja perdida.
+Além disso, como as réplicas procuram o servidor primário pelo seu nome DNS estável
+(`mysql-0.mysql`), elas o encontram automaticamente mesmo que ele receba um novo
+IP de Pod devido a um reagendamento.
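+
+Para verificar o estado da replicação em uma réplica, um comando como o seguinte deve
+mostrar `mysql-0.mysql` como servidor de origem no campo `Master_Host`:
+
+```shell
+kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G"
+```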
+
+Por fim, após iniciar a replicação, o contêiner `xtrabackup` fica aguardando conexões de outros
+Pods que solicitam a clonagem de dados.
+Esse servidor permanece ativo indefinidamente caso o StatefulSet seja escalonado para mais réplicas,
+ou caso o próximo Pod perca seu PersistentVolumeClaim e precise refazer a clonagem.
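+
+Os logs do contêiner `xtrabackup` podem ajudar a confirmar essa etapa; como o script
+usa `set -ex`, o comando `ncat --listen` na porta 3307 deve aparecer na saída:
+
+```shell
+kubectl logs mysql-1 -c xtrabackup
+```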
+
+## Enviando tráfego de cliente
+
+Você pode enviar consultas de teste para o servidor MySQL primário (hostname `mysql-0.mysql`)
+executando um contêiner temporário com a imagem `mysql:5.7` e utilizando o cliente `mysql`.
+
+```shell
+kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
+  mysql -h mysql-0.mysql <<EOF
+CREATE DATABASE test;
+CREATE TABLE test.messages (message VARCHAR(250));
+INSERT INTO test.messages VALUES ('hello');
+EOF
+```
+
+## Simulando indisponibilidade do Nó
+
+Para simular a indisponibilidade de um Nó, drene o Nó em que o Pod `mysql-2` está sendo
+executado, substituindo `<node-name>` pelo nome do Nó que você encontrou no passo anterior
+(por exemplo, com `kubectl get pod mysql-2 -o wide`).
+
+{{< caution >}}
+Drenar um Nó pode impactar outras cargas de trabalho e aplicações
+em execução no mesmo nó. Execute o passo a seguir apenas em um cluster de testes.
+{{< /caution >}}
+
+```shell
+# Veja o aviso acima sobre o impacto em outras cargas de trabalho
+kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets
+```
+
+Agora você pode observar o Pod sendo realocado em outro Nó:
+
+```shell
+kubectl get pod mysql-2 -o wide --watch
+```
+
+Deverá se parecer com isto:
+
+```
+NAME READY STATUS RESTARTS AGE IP NODE
+mysql-2 2/2 Terminating 0 15m 10.244.1.56 kubernetes-node-9l2t
+[...]
+mysql-2 0/2 Pending 0 0s kubernetes-node-fjlm
+mysql-2 0/2 Init:0/2 0 0s kubernetes-node-fjlm
+mysql-2 0/2 Init:1/2 0 20s 10.244.5.32 kubernetes-node-fjlm
+mysql-2 0/2 PodInitializing 0 21s 10.244.5.32 kubernetes-node-fjlm
+mysql-2 1/2 Running 0 22s 10.244.5.32 kubernetes-node-fjlm
+mysql-2 2/2 Running 0 30s 10.244.5.32 kubernetes-node-fjlm
+```
+
+E novamente, você deverá ver o ID de servidor `102` desaparecer da saída do loop do
+`SELECT @@server_id` por um tempo e depois retornar.
+
+Agora, remova o isolamento do Nó para retorná-lo ao estado normal:
+
+```shell
+kubectl uncordon <node-name>
+```
+
+## Escalonando o número de réplicas
+
+Ao utilizar replicação MySQL, você pode aumentar a capacidade de consultas de leitura adicionando réplicas.
+Para um StatefulSet, isso pode ser feito com um único comando:
+
+```shell
+kubectl scale statefulset mysql --replicas=5
+```
+
+Acompanhe a criação dos novos Pods executando:
+
+```shell
+kubectl get pods -l app=mysql --watch
+```
+
+Assim que estiverem ativos, você deverá ver os IDs de servidor `103` e `104` começarem a
+aparecer na saída do loop do `SELECT @@server_id`.
+
+Você também pode verificar se esses novos servidores possuem os dados que você adicionou
+antes de eles existirem:
+
+```shell
+kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
+ mysql -h mysql-3.mysql -e "SELECT * FROM test.messages"
+```
+
+```
+Waiting for pod default/mysql-client to be running, status is Pending, pod ready: false
++---------+
+| message |
++---------+
+| hello |
++---------+
+pod "mysql-client" deleted
+```
+
+Reduzir o número de réplicas também é um processo transparente:
+
+```shell
+kubectl scale statefulset mysql --replicas=3
+```
+
+{{< note >}}
+Embora o escalonamento para cima crie novos PersistentVolumeClaims automaticamente,
+o escalonamento para baixo não exclui esses PVCs automaticamente.
+
+Isso lhe dá a opção de manter esses PVCs inicializados para tornar o escalonamento
+para cima mais rápido, ou extrair os dados antes de excluí-los.
+{{< /note >}}
+
+Você pode ver isso executando:
+
+```shell
+kubectl get pvc -l app=mysql
+```
+
+O que mostra que todos os 5 PVCs ainda existem, apesar de o StatefulSet ter sido reduzido para 3 réplicas:
+
+```
+NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
+data-mysql-0 Bound pvc-8acbf5dc-b103-11e6-93fa-42010a800002 10Gi RWO 20m
+data-mysql-1 Bound pvc-8ad39820-b103-11e6-93fa-42010a800002 10Gi RWO 20m
+data-mysql-2 Bound pvc-8ad69a6d-b103-11e6-93fa-42010a800002 10Gi RWO 20m
+data-mysql-3 Bound pvc-50043c45-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m
+data-mysql-4 Bound pvc-500a9957-b1c5-11e6-93fa-42010a800002 10Gi RWO 2m
+```
+
+Se você não pretende reutilizar os PVCs extras, pode excluí-los:
+
+```shell
+kubectl delete pvc data-mysql-3
+kubectl delete pvc data-mysql-4
+```
+
+## {{% heading "cleanup" %}}
+
+1. Cancele o loop `SELECT @@server_id` pressionando **Ctrl+C** no terminal correspondente,
+ ou executando o seguinte comando em outro terminal:
+
+ ```shell
+ kubectl delete pod mysql-client-loop --now
+ ```
+
+1. Exclua o StatefulSet. Isso também inicia a finalização dos Pods.
+
+ ```shell
+ kubectl delete statefulset mysql
+ ```
+
+1. Verifique se os Pods desapareceram.
+ Eles podem levar algum tempo para serem finalizados.
+
+ ```shell
+ kubectl get pods -l app=mysql
+ ```
+
+ Você saberá que os Pods foram finalizados quando o comando acima retornar:
+
+ ```
+ No resources found.
+ ```
+
+1. Exclua o ConfigMap, os Services e os PersistentVolumeClaims.
+
+ ```shell
+ kubectl delete configmap,service,pvc -l app=mysql
+ ```
+
+1. Se você provisionou PersistentVolumes manualmente, também será necessário excluí-los manualmente, assim como liberar os recursos subjacentes.
+ Se você utilizou um provisionador dinâmico, ele exclui automaticamente os PersistentVolumes ao detectar que você excluiu os PersistentVolumeClaims.
+ Alguns provisionadores dinâmicos (como os de EBS e PD) também liberam os recursos subjacentes ao excluir os PersistentVolumes.
+
+## {{% heading "whatsnext" %}}
+
+- Saiba mais sobre [escalonar um StatefulSet](/docs/tasks/run-application/scale-stateful-set/).
+- Saiba mais sobre [depurar um StatefulSet](/docs/tasks/debug/debug-application/debug-statefulset/).
+- Saiba mais sobre [excluir um StatefulSet](/docs/tasks/run-application/delete-stateful-set/).
+- Saiba mais sobre [forçar a exclusão de Pods de um StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/).
+- Consulte o [repositório de Helm Charts](https://artifacthub.io/) para outros exemplos de aplicações com estado.
\ No newline at end of file
diff --git a/content/pt-br/examples/application/mysql/mysql-configmap.yaml b/content/pt-br/examples/application/mysql/mysql-configmap.yaml
new file mode 100644
index 0000000000000..1c40dc9e6e51f
--- /dev/null
+++ b/content/pt-br/examples/application/mysql/mysql-configmap.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: mysql
+ labels:
+ app: mysql
+ app.kubernetes.io/name: mysql
+data:
+ primary.cnf: |
+ # Aplique esta configuração apenas no primário.
+ [mysqld]
+ log-bin
+ replica.cnf: |
+ # Aplique esta configuração apenas nas réplicas.
+ [mysqld]
+ super-read-only
+
diff --git a/content/pt-br/examples/application/mysql/mysql-services.yaml b/content/pt-br/examples/application/mysql/mysql-services.yaml
new file mode 100644
index 0000000000000..4691c4ae91897
--- /dev/null
+++ b/content/pt-br/examples/application/mysql/mysql-services.yaml
@@ -0,0 +1,32 @@
+# Service headless para entradas DNS estáveis dos membros do StatefulSet.
+apiVersion: v1
+kind: Service
+metadata:
+ name: mysql
+ labels:
+ app: mysql
+ app.kubernetes.io/name: mysql
+spec:
+ ports:
+ - name: mysql
+ port: 3306
+ clusterIP: None
+ selector:
+ app: mysql
+---
+# Client service para conectar a qualquer instância MySQL para leituras.
+# Para escritas, é necessário conectar-se ao primário: mysql-0.mysql.
+apiVersion: v1
+kind: Service
+metadata:
+ name: mysql-read
+ labels:
+ app: mysql
+ app.kubernetes.io/name: mysql
+ readonly: "true"
+spec:
+ ports:
+ - name: mysql
+ port: 3306
+ selector:
+ app: mysql
diff --git a/content/pt-br/examples/application/mysql/mysql-statefulset.yaml b/content/pt-br/examples/application/mysql/mysql-statefulset.yaml
new file mode 100644
index 0000000000000..d1361911613f9
--- /dev/null
+++ b/content/pt-br/examples/application/mysql/mysql-statefulset.yaml
@@ -0,0 +1,168 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: mysql
+spec:
+ selector:
+ matchLabels:
+ app: mysql
+ app.kubernetes.io/name: mysql
+ serviceName: mysql
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: mysql
+ app.kubernetes.io/name: mysql
+ spec:
+ initContainers:
+ - name: init-mysql
+ image: mysql:5.7
+ command:
+ - bash
+ - "-c"
+ - |
+ set -ex
+ # Gerar o server-id do MySQL a partir do índice ordinal do pod.
+ [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
+ ordinal=${BASH_REMATCH[1]}
+ echo [mysqld] > /mnt/conf.d/server-id.cnf
+ # Adicione um deslocamento (offset) para evitar o valor reservado server-id=0.
+ echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
+ # Copie os arquivos conf.d apropriados do config-map para o emptyDir.
+ if [[ $ordinal -eq 0 ]]; then
+ cp /mnt/config-map/primary.cnf /mnt/conf.d/
+ else
+ cp /mnt/config-map/replica.cnf /mnt/conf.d/
+ fi
+ volumeMounts:
+ - name: conf
+ mountPath: /mnt/conf.d
+ - name: config-map
+ mountPath: /mnt/config-map
+ - name: clone-mysql
+ image: gcr.io/google-samples/xtrabackup:1.0
+ command:
+ - bash
+ - "-c"
+ - |
+ set -ex
+ # Pule a clonagem se os dados já existirem.
+ [[ -d /var/lib/mysql/mysql ]] && exit 0
+ # Pule a clonagem no primário (índice ordinal 0).
+ [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
+ ordinal=${BASH_REMATCH[1]}
+ [[ $ordinal -eq 0 ]] && exit 0
+ # Clone os dados do peer anterior.
+ ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
+ # Prepare o backup.
+ xtrabackup --prepare --target-dir=/var/lib/mysql
+ volumeMounts:
+ - name: data
+ mountPath: /var/lib/mysql
+ subPath: mysql
+ - name: conf
+ mountPath: /etc/mysql/conf.d
+ containers:
+ - name: mysql
+ image: mysql:5.7
+ env:
+ - name: MYSQL_ALLOW_EMPTY_PASSWORD
+ value: "1"
+ ports:
+ - name: mysql
+ containerPort: 3306
+ volumeMounts:
+ - name: data
+ mountPath: /var/lib/mysql
+ subPath: mysql
+ - name: conf
+ mountPath: /etc/mysql/conf.d
+ resources:
+ requests:
+ cpu: 500m
+ memory: 1Gi
+ livenessProbe:
+ exec:
+ command: ["mysqladmin", "ping"]
+ initialDelaySeconds: 30
+ periodSeconds: 10
+ timeoutSeconds: 5
+ readinessProbe:
+ exec:
+ # Verifique se é possível executar consultas via TCP (skip-networking está desativado).
+ command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
+ initialDelaySeconds: 5
+ periodSeconds: 2
+ timeoutSeconds: 1
+ - name: xtrabackup
+ image: gcr.io/google-samples/xtrabackup:1.0
+ ports:
+ - name: xtrabackup
+ containerPort: 3307
+ command:
+ - bash
+ - "-c"
+ - |
+ set -ex
+ cd /var/lib/mysql
+
+ # Determine a posição do binlog dos dados clonados, se houver.
+          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
+            # O XtraBackup já gerou uma consulta parcial "CHANGE MASTER TO"
+            # porque estamos clonando a partir de uma réplica existente.
+            # (É preciso remover o ponto e vírgula final!)
+            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
+ # Ignore o xtrabackup_binlog_info neste caso (não é útil).
+ rm -f xtrabackup_slave_info xtrabackup_binlog_info
+ elif [[ -f xtrabackup_binlog_info ]]; then
+ # Estamos clonando diretamente do primário. Interprete a posição do binlog.
+ [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
+ rm -f xtrabackup_binlog_info xtrabackup_slave_info
+ echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
+ MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
+ fi
+
+ # Verifique se é necessário completar a clonagem iniciando a replicação.
+ if [[ -f change_master_to.sql.in ]]; then
+ echo "Waiting for mysqld to be ready (accepting connections)"
+ until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
+
+ echo "Initializing replication from clone position"
+ mysql -h 127.0.0.1 \
+                  -e "$(<change_master_to.sql.in), \
+                          MASTER_HOST='mysql-0.mysql', \
+                          MASTER_USER='root', \
+                          MASTER_PASSWORD='', \
+                          MASTER_CONNECT_RETRY=10; \
+                        START SLAVE;" || exit 1
+            # Em caso de reinício do contêiner, tente isto no máximo uma vez.
+            mv change_master_to.sql.in change_master_to.sql.orig
+          fi
+
+          # Inicie um servidor para enviar backups quando solicitado por peers.
+          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
+            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"