Proposal

KEDA currently uses the pick_first load-balancing policy for gRPC connections, which is the gRPC library default. This works in most setups because of the default CoreDNS configuration. However, when those DNS defaults are customized, KEDA should perform client-side round-robin load balancing for gRPC connections as well.

Use-Case
Some Kubernetes environments customize the default DNS configuration; for example:

The change above means DNS requests will always return results in the same order rather than randomizing them; for example, with 2 pods:
In this case, KEDA should round-robin load balance gRPC connections; otherwise it will always talk to the same pod and never send any requests to the second one. This is a simple one-line configuration change built into the gRPC library and shouldn't affect existing setups.
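The "one-line" change referred to above is the gRPC service config selecting the round_robin policy in place of the default pick_first. A minimal sketch of how this could look at a dial site (the target address here is a placeholder, not KEDA's actual code; round_robin also needs the dns:/// resolver and a headless Service so DNS returns every pod IP):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// roundRobinServiceConfig selects the round_robin load-balancing
// policy instead of the gRPC default, pick_first.
const roundRobinServiceConfig = `{"loadBalancingConfig": [{"round_robin":{}}]}`

func main() {
	// The dns:/// scheme makes the gRPC resolver return all A records,
	// giving round_robin the full set of pod IPs to balance across
	// (this assumes a headless Service so DNS resolves to individual pods).
	// "example-grpc-server.keda.svc.cluster.local:9666" is a placeholder target.
	conn, err := grpc.Dial(
		"dns:///example-grpc-server.keda.svc.cluster.local:9666",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(roundRobinServiceConfig),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}
```

Because the policy is applied via `WithDefaultServiceConfig`, it only takes effect when no service config is supplied by the resolver, which is why existing setups should be unaffected.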
Is this a feature you are interested in implementing yourself?
Yes
Anything else?
No response