Loadbalancer targets all nodes in the cluster, not just ones with targets #222
Hi @jacksontj. What you are requesting should already be possible by setting the upstream Service field externalTrafficPolicy (https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip) to Local (at the price of possibly imbalanced workloads). An excellent blog post describing the different trade-offs is available here: https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies
Would that serve your needs?
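For reference, the field mentioned above lives on the Service spec. A minimal sketch, assuming a standard LoadBalancer Service on DOKS (the name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app           # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
  # Route external traffic only to nodes hosting a pod for this Service;
  # nodes without such pods will fail the load balancer's health checks.
  externalTrafficPolicy: Local
```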
I actually have externalTrafficPolicy set to Local already, yet I still have all nodes in the cluster as targets for the LB.
The nodes that do not host pods targeted by your Service should fail their health checks, however, and thus not be part of the LB's rotation, so they never receive traffic. The UX around that could certainly be improved, but it should hopefully yield the result you're looking for. Let me know in case I missed something.
Today I have exactly that situation (all nodes in the LB, with the nodes lacking pods failing the health check). I was hoping to get the controller to add only the nodes with pods, instead of sending health checks to nodes we know can't serve the traffic.
@jacksontj to better understand: what's your exact concern about the status quo? Is it about usability, or something different? AFAICS, the health checks should not be harmful or lead to actual traffic being sent the wrong way. I'd be happy to discuss potential improvements once I understand the impact of the problem as it manifests for you.
To clarify, I'm not seeing traffic go to all nodes in the cluster (assuming the health check is configured properly). The main thing I see is that my load balancer has N targets (all of the nodes in my k8s cluster), so as this cluster scales I'll likely end up with significantly more nodes in the target group failing the health check than passing it. That makes it effectively impossible to set up any sort of alerting around healthy/unhealthy targets on the LB.
Additionally (granted, likely a much smaller concern), for each LB some number of health checks is sent to every node, which is not a great scaling mechanism.
If the ability to include all nodes in the target group is still desired, then I think we'd just want to add an annotation that makes the controller add only nodes with pods on them.
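A hypothetical shape for such an annotation is sketched below. The annotation key is purely illustrative; nothing like it exists in the controller as of this discussion:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
  annotations:
    # Hypothetical opt-in: register only nodes hosting at least one pod
    # backing this Service, instead of every node in the cluster.
    service.beta.kubernetes.io/do-loadbalancer-nodes-with-pods-only: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
```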
I am new to this topic and got bitten by this today. I installed Ambassador according to the DO tutorial, which (correctly, for performance reasons) specifies to use externalTrafficPolicy: Local. Since the DO UI displays both of my load balancers as having all nodes in the cluster assigned, the one using the Local policy looks half broken: every node without an Ambassador pod shows up as unhealthy.
My proposal is to adjust the DO UI to only list nodes that are actually part of the load balancer in all cases, perhaps indicating the traffic policy somewhere in the UI. This would avoid displaying an LB as half broken in cases where it is working perfectly well.
thanks,
The current behavior for load balancers is to add all nodes in the cluster as droplets of the LoadBalancer. This is pretty inefficient, as the service isn't necessarily running on all of those nodes. Through the Service object in k8s we can determine which nodes have endpoints on them and set the droplets in the LB accordingly. This would avoid the unnecessary hops from non-service droplets to the ones that actually run the service.
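The node-filtering step proposed above can be sketched as a small pure function. The struct here is a simplified, hypothetical stand-in for the Kubernetes EndpointAddress type (which carries a NodeName per endpoint), not the controller's actual code:

```go
package main

import "fmt"

// EndpointAddress is a simplified model of a Kubernetes Endpoints
// address: the pod IP plus the name of the node hosting the pod.
type EndpointAddress struct {
	IP       string
	NodeName string
}

// nodesWithEndpoints returns the unique node names that host at least
// one endpoint for a Service, preserving first-seen order. Only these
// nodes would need to be registered as LB droplets.
func nodesWithEndpoints(addrs []EndpointAddress) []string {
	seen := map[string]bool{}
	var nodes []string
	for _, a := range addrs {
		if a.NodeName != "" && !seen[a.NodeName] {
			seen[a.NodeName] = true
			nodes = append(nodes, a.NodeName)
		}
	}
	return nodes
}

func main() {
	// Two pods on node-a, one on node-c; node-b hosts none and is omitted.
	addrs := []EndpointAddress{
		{IP: "10.0.0.4", NodeName: "node-a"},
		{IP: "10.0.0.5", NodeName: "node-a"},
		{IP: "10.0.0.9", NodeName: "node-c"},
	}
	fmt.Println(nodesWithEndpoints(addrs)) // [node-a node-c]
}
```

In the real controller this address list would come from the Service's Endpoints object; the point of the sketch is only that the droplet set can be derived from endpoint node names rather than from the full node list.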