
Update Kubernetes Service Discovery Provider to use pods endpoints #1134

Merged · 3 commits · Apr 11, 2020

Conversation

@ussamoo (Contributor) commented on Feb 16, 2020

Fixes #1126 #1129

Proposed Changes

Updated the KubeProvider to build the list of services from the endpoints of a given Kubernetes service in a specific namespace.

This fixes the load-balancing issues, since Ocelot will send traffic directly to the pod endpoints instead of to the service.
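For context, a minimal ocelot.json sketch of how the Kubernetes provider is typically wired up; the namespace, service name, and paths are placeholders, the top-level route key may be `ReRoutes` or `Routes` depending on the Ocelot version, and the provider type string should be checked against the docs for your version:

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/values",
      "UpstreamHttpMethod": [ "Get" ],
      "DownstreamPathTemplate": "/api/values",
      "DownstreamScheme": "http",
      "ServiceName": "downstream-service"
    }
  ],
  "GlobalConfiguration": {
    "ServiceDiscoveryProvider": {
      "Namespace": "dev",
      "Type": "Kube"
    }
  }
}
```

With this PR, the provider resolves `ServiceName` in the configured namespace to the pod addresses listed in the service's Endpoints object rather than to the service's cluster address.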

@ussamoo ussamoo changed the title Update Kubernetes Service Discovery Provider to use pod endpoints Update Kubernetes Service Discovery Provider to use pods endpoints Feb 16, 2020
@ussamoo ussamoo requested a review from TomPallister February 22, 2020 15:31
@enriko-riba commented on Mar 28, 2020

👍 This is a must-have, because currently there is no load balancing between multiple pods (unless the HTTP connection resets): one pod behind the service receives all the traffic.

@TomPallister (Member) commented
@ussamoo sorry it has taken me so long to merge this.

Thanks for your help with Ocelot!!!!!

@chazt3n commented on May 24, 2021

Are we sure this is right? We are seeing the exact behavior this change was supposed to fix.

The k8s service endpoint exists specifically to distribute load. We're just trying to understand, because we've looked up and down the issues/docs here and can't tell

a) what Ocelot is actually supposed to do, and
b) why we are routing traffic to the same pod during burst-traffic scenarios (the only time we care about balancing load).

@enriko-riba commented

> Are we sure this is right? We are seeing the exact behavior this change was supposed to fix.

Do you think the implementation is wrong, or are you just asking about the behavior?
The original load-balancing issue was that once a client connects and starts sending requests, all requests are delivered to the same pod because of the persistent HTTP connection.
Persistent connections are the default since HTTP/1.1, and even HTTP/1.0 has a "keep-alive" option, so there is no way around it except closing the connection client-side after every request. Closing the connection severely impacts performance, since more time is spent establishing connections than transferring data, while leaving it open causes a single pod to serve all traffic from a given client.
This PR is supposed to enable load balancing across all endpoints behind a service object, as returned by the K8s API.

I haven't tested the new behavior but please share your observations and how you tested.
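For reference, this is roughly the shape of the core/v1 Endpoints object the provider reads; the service name, namespace, pod IPs, and port below are made-up placeholders:

```json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "downstream-service", "namespace": "dev" },
  "subsets": [
    {
      "addresses": [ { "ip": "10.1.0.11" }, { "ip": "10.1.0.12" } ],
      "ports": [ { "name": "http", "port": 80 } ]
    }
  ]
}
```

Each entry in `addresses` is a ready pod, so the provider can hand Ocelot's load balancer one downstream host per pod instead of the single service address.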

@chazt3n commented on Jun 7, 2021

Hey there, we found that we had to add the load balancer options and specify RoundRobin on each route to get load balancing; after that, it does appear to be working.

Previously it did some load balancing, but during burst traffic it would single out a pod. I hope this helps.

We tested in AWS EKS and used CloudWatch logs to see which pods were accepting the requests. Pretty low tech, but that's how we spotted the issue, and it was good enough.
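For anyone landing here later, a sketch of the per-route option described above (paths and service name are placeholders):

```json
{
  "UpstreamPathTemplate": "/values",
  "UpstreamHttpMethod": [ "Get" ],
  "DownstreamPathTemplate": "/api/values",
  "DownstreamScheme": "http",
  "ServiceName": "downstream-service",
  "LoadBalancerOptions": {
    "Type": "RoundRobin"
  }
}
```

Per the observation above, without an explicit `LoadBalancerOptions` the discovered pod endpoints were not rotated evenly under burst traffic.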

@raman-m added the labels "Service Discovery" (Ocelot feature: Service Discovery) and "Kubernetes" (Service discovery by Kubernetes) on Feb 10, 2024