spread OSDs across zones uniformly

issue:
- consider three nodes in a zone where one node (bigger), currently
  cordoned, runs 5 OSDs and the two other nodes (smaller) run none
- assume the region has three such zones in the same configuration
- if we now evict the OSDs from the cordoned node while the TSC
  (topology spread constraint) is at hostname level, satisfying the
  constraint requires all OSDs to run on one of the smaller nodes,
  which isn't possible due to insufficient resources
- because of this we can never evict pods from the bigger node as long
  as the TSC takes cordoned nodes into account; the sketch below walks
  through the skew arithmetic
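A minimal sketch of the skew arithmetic at play, assuming illustrative node names and OSD counts (the real scheduler evaluates skew per candidate node; this simplifies it to max minus min across topology domains):

```go
package main

import "fmt"

// skew is a simplified stand-in for the pod topology spread check: the
// difference between the most and least loaded topology domains.
func skew(perDomain map[string]int) int {
	max, min := 0, int(^uint(0)>>1)
	for _, n := range perDomain {
		if n > max {
			max = n
		}
		if n < min {
			min = n
		}
	}
	return max - min
}

func main() {
	// Hostname-level domains in one zone: the cordoned bigger node still
	// counts as a domain, so the only way to keep skew within 1 is to pack
	// all OSDs onto the smaller nodes, which lack the resources.
	perHost := map[string]int{"bigger-node": 5, "smaller-1": 0, "smaller-2": 0}
	fmt.Println(skew(perHost)) // 5, far above MaxSkew: 1
}
```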

rc (root cause):
- we have no way to take node taints into account in TSC skew
  calculations until k8s 1.26 [0]; a sketch of the new field follows
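For reference, a sketch of what [0] enables from k8s 1.26 onward, assuming the NodeTaintsPolicy field and NodeInclusionPolicyHonor constant from newer k8s.io/api releases (not usable by this codebase at commit time; the label selector is illustrative, not the one in this file):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Honor: exclude nodes whose taints the pod does not tolerate from
	// skew calculations; a cordoned node carries the
	// node.kubernetes.io/unschedulable taint, so it would stop counting.
	honorTaints := corev1.NodeInclusionPolicyHonor

	tsc := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/hostname",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		NodeTaintsPolicy:  &honorTaints,
		// Illustrative selector for OSD pods.
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "rook-ceph-osd"},
		},
	}
	fmt.Println(tsc.TopologyKey, *tsc.NodeTaintsPolicy)
}
```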

fix:
- set the TSC at zone level, which effectively counts the number of
  OSDs running per zone even with cordoned nodes
- as a result a zone can keep its 5 OSDs running irrespective of
  bigger/smaller nodes, as the sketch below shows
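With the zone-level key, the same simplified arithmetic stays within bounds (counts again illustrative), so draining the cordoned node no longer violates the constraint:

```go
package main

import "fmt"

func main() {
	// Zone-level domains: each zone keeps its 5 OSDs no matter which node
	// inside the zone runs them, so pods evicted from the cordoned node
	// may land anywhere in the same zone.
	perZone := map[string]int{"zone-a": 5, "zone-b": 5, "zone-c": 5}
	max, min := 0, int(^uint(0)>>1)
	for _, n := range perZone {
		if n > max {
			max = n
		}
		if n < min {
			min = n
		}
	}
	fmt.Println(max - min) // 0, within MaxSkew: 1
}
```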

[0]: kubernetes/enhancements/pull/3105

Signed-off-by: Leela Venkaiah G <lgangava@redhat.com>
leelavg committed Feb 12, 2023
1 parent baba52e commit 51e8f65
Showing 1 changed file with 2 additions and 2 deletions.
templates/providerstoragecluster.go:

```diff
@@ -35,7 +35,7 @@ const (
 
 var commonTSC corev1.TopologySpreadConstraint = corev1.TopologySpreadConstraint{
 	MaxSkew:           1,
-	TopologyKey:       "kubernetes.io/hostname",
+	TopologyKey:       "topology.kubernetes.io/zone",
 	WhenUnsatisfiable: corev1.DoNotSchedule,
 	LabelSelector: &metav1.LabelSelector{
 		MatchExpressions: []metav1.LabelSelectorRequirement{
@@ -50,7 +50,7 @@ var commonTSC corev1.TopologySpreadConstraint = corev1.TopologySpreadConstraint{
 
 var preparePlacementTSC corev1.TopologySpreadConstraint = corev1.TopologySpreadConstraint{
 	MaxSkew:           1,
-	TopologyKey:       "topology.kubernetes.io/zone",
+	TopologyKey:       "kubernetes.io/hostname",
 	WhenUnsatisfiable: corev1.DoNotSchedule,
 	LabelSelector: &metav1.LabelSelector{
 		MatchExpressions: []metav1.LabelSelectorRequirement{
```
