Topology spread constraints on zones and anti-affinity for receivers and dispatchers #2092

Merged
data-plane/config/broker/500-dispatcher.yaml (17 additions, 0 deletions)

@@ -33,6 +33,23 @@ spec:
         app: kafka-broker-dispatcher
         kafka.eventing.knative.dev/release: devel
     spec:
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-broker-dispatcher
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app: kafka-broker-dispatcher
+                topologyKey: kubernetes.io/hostname
+              weight: 100
Comment on lines +36 to +52
@aavarghese (Contributor) commented on Apr 13, 2022:

Do you know if this combination is functionally better than having two pod topology spread constraints - one for zone and one for node? (Was trying to read about this...)

Contributor:

Since we don't have any specific anti-affinity rules for nodes, we could instead have multiple pod topology spread constraints?

      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
      topologySpreadConstraints:
        - maxSkew: 2
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: kafka-broker-dispatcher
        - maxSkew: 2
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: kafka-broker-dispatcher

This should also satisfy the existing anti-affinity rule for preferredDuringSchedulingIgnoredDuringExecution.

On the other hand, our existing dispatchers already had podAntiAffinity for nodes, so if we want to make minimal changes, that is understandable
cc: @pierDipi was this your thinking too?

@pierDipi (Member, Author):

Yes, AFAIK topologySpreadConstraints with maxSkew: 1 and whenUnsatisfiable: ScheduleAnyway should be equivalent to our existing antiAffinity rule. I'm OK with migrating the antiAffinity rules to use topologySpreadConstraints + maxSkew: 1 + ScheduleAnyway.
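
For reference, the migration being discussed would look roughly like this — a sketch only, not part of this PR, assuming the broker dispatcher's app: kafka-broker-dispatcher label:

```yaml
# Hypothetical equivalent of the node-level podAntiAffinity rule,
# expressed as a second topology spread constraint instead.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: kafka-broker-dispatcher
```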

Contributor:

OK, I think making this equivalent change separately, in another PR, is fine too.

       serviceAccountName: knative-kafka-broker-data-plane
       securityContext:
         runAsNonRoot: true
data-plane/config/broker/500-receiver.yaml (11 additions, 4 deletions)

@@ -33,10 +33,14 @@ spec:
         app: kafka-broker-receiver
         kafka.eventing.knative.dev/release: devel
     spec:
-      serviceAccountName: knative-kafka-broker-data-plane
-      securityContext:
-        runAsNonRoot: true
-      # To avoid node becoming SPOF, spread our replicas to different nodes.
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-broker-receiver
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -46,6 +50,9 @@ spec:
                     app: kafka-broker-receiver
                 topologyKey: kubernetes.io/hostname
               weight: 100
+      serviceAccountName: knative-kafka-broker-data-plane
+      securityContext:
+        runAsNonRoot: true
       containers:
         - name: kafka-broker-receiver
           image: ${KNATIVE_KAFKA_RECEIVER_IMAGE}
data-plane/config/brokerv2/500-dispatcher.yaml (17 additions, 0 deletions)

@@ -36,6 +36,23 @@ spec:
         app.kubernetes.io/component: kafka-dispatcher
         kafka.eventing.knative.dev/release: devel
     spec:
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-broker-dispatcher
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app: kafka-broker-dispatcher
+                topologyKey: kubernetes.io/hostname
+              weight: 100
       serviceAccountName: knative-kafka-broker-data-plane
       securityContext:
         runAsNonRoot: true
data-plane/config/channel/500-dispatcher.yaml (17 additions, 0 deletions)

@@ -33,6 +33,23 @@ spec:
         app: kafka-channel-dispatcher
         kafka.eventing.knative.dev/release: devel
     spec:
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-channel-dispatcher
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app: kafka-channel-dispatcher
+                topologyKey: kubernetes.io/hostname
+              weight: 100
       serviceAccountName: knative-kafka-channel-data-plane
       securityContext:
         runAsNonRoot: true
data-plane/config/channel/500-receiver.yaml (11 additions, 4 deletions)

@@ -33,10 +33,14 @@ spec:
         app: kafka-channel-receiver
         kafka.eventing.knative.dev/release: devel
     spec:
-      serviceAccountName: knative-kafka-channel-data-plane
-      securityContext:
-        runAsNonRoot: true
-      # To avoid node becoming SPOF, spread our replicas to different nodes.
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-channel-receiver
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -46,6 +50,9 @@ spec:
                     app: kafka-channel-receiver
                 topologyKey: kubernetes.io/hostname
               weight: 100
+      serviceAccountName: knative-kafka-channel-data-plane
+      securityContext:
+        runAsNonRoot: true
       containers:
         - name: kafka-channel-receiver
           image: ${KNATIVE_KAFKA_RECEIVER_IMAGE}
data-plane/config/channelv2/500-dispatcher.yaml (17 additions, 0 deletions)

@@ -36,6 +36,23 @@ spec:
         app.kubernetes.io/component: kafka-dispatcher
         kafka.eventing.knative.dev/release: devel
     spec:
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-channel-dispatcher
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app: kafka-channel-dispatcher
+                topologyKey: kubernetes.io/hostname
+              weight: 100
       serviceAccountName: knative-kafka-channel-data-plane
       securityContext:
         runAsNonRoot: true
data-plane/config/sink/500-receiver.yaml (11 additions, 4 deletions)

@@ -33,10 +33,14 @@ spec:
         app: kafka-sink-receiver
         kafka.eventing.knative.dev/release: devel
     spec:
-      serviceAccountName: knative-kafka-sink-data-plane
-      securityContext:
-        runAsNonRoot: true
-      # To avoid node becoming SPOF, spread our replicas to different nodes.
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-sink-receiver
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -46,6 +50,9 @@ spec:
                     app: kafka-sink-receiver
                 topologyKey: kubernetes.io/hostname
               weight: 100
+      serviceAccountName: knative-kafka-sink-data-plane
+      securityContext:
+        runAsNonRoot: true
       containers:
         - name: kafka-sink-receiver
           image: ${KNATIVE_KAFKA_RECEIVER_IMAGE}
data-plane/config/source/500-dispatcher.yaml (17 additions, 0 deletions)

@@ -33,6 +33,23 @@ spec:
         app: kafka-source-dispatcher
         kafka.eventing.knative.dev/release: devel
     spec:
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-source-dispatcher
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app: kafka-source-dispatcher
+                topologyKey: kubernetes.io/hostname
+              weight: 100
       serviceAccountName: knative-kafka-source-data-plane
       securityContext:
         runAsNonRoot: true
data-plane/config/sourcev2/500-dispatcher.yaml (17 additions, 0 deletions)

@@ -36,6 +36,23 @@ spec:
         app.kubernetes.io/component: kafka-dispatcher
         kafka.eventing.knative.dev/release: devel
     spec:
+      # To avoid node becoming SPOF, spread our replicas to different nodes and zones.
+      topologySpreadConstraints:
+        - maxSkew: 2
+          topologyKey: topology.kubernetes.io/zone
+          whenUnsatisfiable: ScheduleAnyway
+          labelSelector:
+            matchLabels:
+              app: kafka-source-dispatcher
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+            - podAffinityTerm:
+                labelSelector:
+                  matchLabels:
+                    app: kafka-source-dispatcher
+                topologyKey: kubernetes.io/hostname
+              weight: 100
       serviceAccountName: knative-kafka-source-data-plane
       securityContext:
         runAsNonRoot: true