Commit 2979ae7

author dinghaiyang committed
Replace limits with requests in scheduler documentation.
Due to kubernetes#11713
1 parent 9ad982e commit 2979ae7

File tree

2 files changed: +6 -6 lines changed


docs/devel/scheduler.md

File mode changed: 100644 → 100755
Lines changed: 3 additions & 3 deletions
@@ -42,13 +42,13 @@ indicating where the Pod should be scheduled.
 
 The scheduler tries to find a node for each Pod, one at a time, as it notices
 these Pods via watch. There are three steps. First it applies a set of "predicates" that filter out
-inappropriate nodes. For example, if the PodSpec specifies resource limits, then the scheduler
+inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler
 will filter out nodes that don't have at least that much resources available (computed
-as the capacity of the node minus the sum of the resource limits of the containers that
+as the capacity of the node minus the sum of the resource requests of the containers that
 are already running on the node). Second, it applies a set of "priority functions"
 that rank the nodes that weren't filtered out by the predicate check. For example,
 it tries to spread Pods across nodes while at the same time favoring the least-loaded
-nodes (where "load" here is sum of the resource limits of the containers running on the node,
+nodes (where "load" here is sum of the resource requests of the containers running on the node,
 divided by the node's capacity).
 Finally, the node with the highest priority is chosen
 (or, if there are multiple such nodes, then one of them is chosen at random). The code
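To make the filter-then-rank flow described in this hunk concrete, here is a minimal, self-contained Go sketch. The types and helpers (`podSpec`, `nodeInfo`, `predicateFn`, `priorityFn`, `schedule`) are illustrative assumptions, not the kube-scheduler's actual interfaces; the sketch only mirrors the steps the documentation describes: apply every predicate, sum the priority scores of the surviving nodes, and pick the best one, breaking ties at random.

```go
// Illustrative sketch only: these types and the schedule function are
// assumptions made for this example, not the kube-scheduler's real interfaces.
package scheduler

import (
	"errors"
	"math/rand"
)

type podSpec struct {
	CPURequest int64 // total millicores requested by the Pod's containers
	MemRequest int64 // total bytes requested by the Pod's containers
}

type nodeInfo struct {
	Name         string
	CPUCapacity  int64
	MemCapacity  int64
	CPURequested int64 // sum of requests of Pods already running on the node
	MemRequested int64
}

type predicateFn func(p podSpec, n nodeInfo) bool   // step 1: filter
type priorityFn func(p podSpec, n nodeInfo) float64 // step 2: rank

// schedule applies every predicate to every node, sums the priority scores of
// the surviving nodes, and returns the best one (ties broken at random).
func schedule(p podSpec, nodes []nodeInfo, predicates []predicateFn, priorities []priorityFn) (string, error) {
	var feasible []nodeInfo
	for _, n := range nodes {
		fits := true
		for _, pred := range predicates {
			if !pred(p, n) {
				fits = false
				break
			}
		}
		if fits {
			feasible = append(feasible, n)
		}
	}
	if len(feasible) == 0 {
		return "", errors.New("no node satisfies the predicates")
	}

	bestScore := -1.0
	var best []string
	for _, n := range feasible {
		score := 0.0
		for _, prio := range priorities {
			score += prio(p, n)
		}
		switch {
		case score > bestScore:
			bestScore, best = score, []string{n.Name}
		case score == bestScore:
			best = append(best, n.Name)
		}
	}
	return best[rand.Intn(len(best))], nil
}
```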

docs/devel/scheduler_algorithm.md

File mode changed: 100644 → 100755
Lines changed: 3 additions & 3 deletions
@@ -37,10 +37,10 @@ For each unscheduled Pod, the Kubernetes scheduler tries to find a node across t
 
 ## Filtering the nodes
 
-The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including:
+The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including:
 
 - `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted.
-- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node.
+- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../proposals/resource-qos.md).
 - `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
 - `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
 - `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use `nodeSelector` field).
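The free-resource arithmetic that the updated `PodFitsResources` text refers to, capacity minus the sum of requests of the Pods already on the node, can be sketched as below; the type and function names are hypothetical and do not reflect the real predicate implementation.

```go
// Hypothetical helper showing the arithmetic behind the PodFitsResources
// predicate described above; names are illustrative, not the real code.
package scheduler

type resourceList struct {
	MilliCPU int64 // CPU in millicores
	Memory   int64 // memory in bytes
}

type nodeStatus struct {
	Capacity    resourceList
	PodRequests []resourceList // requests of Pods already scheduled onto the node
}

// podFitsResources reports whether the node's free resources (capacity minus
// the sum of requests of the Pods already on it) cover the new Pod's request.
func podFitsResources(podRequest resourceList, n nodeStatus) bool {
	var used resourceList
	for _, r := range n.PodRequests {
		used.MilliCPU += r.MilliCPU
		used.Memory += r.Memory
	}
	freeCPU := n.Capacity.MilliCPU - used.MilliCPU
	freeMem := n.Capacity.Memory - used.Memory
	return podRequest.MilliCPU <= freeCPU && podRequest.Memory <= freeMem
}
```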
@@ -58,7 +58,7 @@ After the scores of all nodes are calculated, the node with highest score is cho
 
 Currently, Kubernetes scheduler provides some practical priority functions, including:
 
-- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of limits of all Pods already on the node - limit of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.
+- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.
 - `CalculateNodeLabelPriority`: Prefer nodes that have the specified label.
 - `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed.
 - `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node.
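As a rough illustration of the `LeastRequestedPriority` formula quoted in the hunk above, (capacity - sum of requests of all Pods already on the node - request of the Pod being scheduled) / capacity, averaged over CPU and memory, here is a small Go sketch; the function names are assumptions, and the real priority function's score scaling is not reproduced here.

```go
// Back-of-the-envelope version of the LeastRequestedPriority formula quoted
// above; function names are assumptions and the real implementation's score
// scaling is not reproduced here.
package scheduler

// freeFraction returns (capacity - requested - podRequest) / capacity,
// clamped at zero, for a single resource.
func freeFraction(capacity, requested, podRequest int64) float64 {
	if capacity == 0 {
		return 0
	}
	free := capacity - requested - podRequest
	if free < 0 {
		free = 0
	}
	return float64(free) / float64(capacity)
}

// leastRequestedPriority averages the CPU and memory free fractions so that
// the two resources are equally weighted, as the documentation describes.
func leastRequestedPriority(cpuCap, cpuReq, podCPU, memCap, memReq, podMem int64) float64 {
	return (freeFraction(cpuCap, cpuReq, podCPU) + freeFraction(memCap, memReq, podMem)) / 2
}
```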
