Question about Ocelot LoadBalancer and Kubernetes Service Discovery Provider #1126
I know this is old and you already provided a PR, but just to clarify the issue: there is nothing wrong with sending the traffic to the virtual service IP, but above the TCP/IP routing sits the HTTP layer with its connection concept. One additional layer is controlled via the HTTP header "Connection: keep-alive", which is the default. You can test this with Postman or a similar tool: send traffic to a LoadBalancer service and you will see that only one pod consumes all the traffic. If you change the Connection header to "close", all pods will receive traffic in random order (not round robin). This needs to be taken into account for the endpoint implementation. |
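The behaviour described above can be reproduced from the command line with curl instead of Postman. This is a sketch, not a runnable test: `$SVC` is a hypothetical URL for a LoadBalancer service in your own cluster, and the `/api/whoami` path is assumed to be an endpoint that identifies the responding pod.

```shell
# Hypothetical LoadBalancer service URL; replace with one from your cluster.
SVC=http://my-service.example.local

# Default keep-alive: curl reuses one TCP connection per invocation, and
# kube-proxy's connection-based routing keeps hitting the same pod.
for i in 1 2 3; do curl -s "$SVC/api/whoami"; done

# Forcing "Connection: close" opens a fresh connection per request, so
# different pods answer (in random order, not round robin).
for i in 1 2 3; do curl -s -H "Connection: close" "$SVC/api/whoami"; done
```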
Thanks @enriko-riba, so what do you recommend? |
I am by no means an expert in this area and I have not had time to look at the Ocelot implementation. I assume an HttpClient is created via IHttpClientFactory, or maybe SocketsHttpHandler?
Given that both maintain an underlying connection pool, I would try the following:
1. Create a watch on the Kubernetes API to receive endpoint changes (so we don't need to poll the K8s API per request).
2. On route init or on endpoint change: update the endpoint list per service.
3. On incoming request: pick the next endpoint and create an HttpClient instance via IHttpClientFactory (it will reuse an existing HttpMessageHandler from the pool).
Maybe you implemented it that way already?
The connection pool is shared between HttpClient instances, so it should be large enough to accommodate the sum of all endpoints. I can't speculate what number would be a good guess for the pool size.
HTH
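The selection step in the list above can be sketched as a thread-safe round-robin picker over a mutable endpoint list. This is a minimal illustration in Python (Ocelot itself is C#); the `RoundRobinBalancer` class and the sample endpoint strings are hypothetical, and the `update` method stands in for whatever callback the Kubernetes watch would invoke on an Endpoints change.

```python
import threading

class RoundRobinBalancer:
    """Thread-safe round-robin picker over a mutable endpoint list."""

    def __init__(self, endpoints):
        self._lock = threading.Lock()
        self._endpoints = list(endpoints)
        self._index = 0

    def update(self, endpoints):
        # Would be called from the Kubernetes watch when the
        # service's Endpoints object changes.
        with self._lock:
            self._endpoints = list(endpoints)
            self._index = 0

    def next(self):
        # Pick the next endpoint, wrapping around the list.
        with self._lock:
            ep = self._endpoints[self._index % len(self._endpoints)]
            self._index += 1
            return ep

lb = RoundRobinBalancer(["10.0.0.1:80", "10.0.0.2:80"])
picks = [lb.next() for _ in range(4)]
```

Because the pick happens per incoming request rather than per connection, the keep-alive stickiness described earlier no longer pins all traffic to one pod.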
|
fixed by #1134 |
Expected Behavior / New Feature
When using Kubernetes service discovery and a LoadBalancer of type RoundRobin, Ocelot should balance traffic between running pods.
Actual Behavior / Motivation for New Feature
Ocelot sends all traffic to the same pod without balancing.
Steps to Reproduce the Problem
{
  "GlobalConfiguration": {
    "ServiceDiscoveryProvider": {
      "Namespace": "my-service-name",
      "Type": "Kube"
    }
  },
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/{everything}",
      "DownstreamScheme": "http",
      "ServiceName": "my-service-name",
      "UpstreamPathTemplate": "/api/{everything}",
      "LoadBalancerOptions": {
        "Type": "RoundRobin"
      },
      "UpstreamHttpMethod": []
    }
  ]
}
Specifications
Ocelot's KubeProvider correctly gets the information (IP and port) about the service named "my-service-name" and sends traffic TO THE SERVICE using that IP/port.
The first issue is that the service always sends traffic to the same pod; the second is that the load balancing is not controlled by Ocelot.
Wouldn't it be better to discover the pod endpoints behind the service name and send traffic directly to a pod endpoint?
I tested this scenario with Ocelot and it works well, but I'm not sure if it's a good idea.
Doing so would mean the load balancing can be controlled by Ocelot.
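Discovering pod endpoints behind a service name comes down to reading the service's v1 Endpoints object (GET /api/v1/namespaces/{namespace}/endpoints/{service}) and flattening its subsets into address/port pairs. The sketch below shows only the flattening step in Python as a hedged illustration; `pod_endpoints` and the sample payload are hypothetical, and a real implementation would fetch the object from the API server or a watch stream.

```python
def pod_endpoints(endpoints_obj):
    """Flatten a Kubernetes v1 Endpoints object into (ip, port) pairs."""
    pairs = []
    for subset in endpoints_obj.get("subsets", []):
        for addr in subset.get("addresses", []):
            for port in subset.get("ports", []):
                pairs.append((addr["ip"], port["port"]))
    return pairs

# Abbreviated sample of what the API server returns for an Endpoints object.
sample = {
    "subsets": [
        {
            "addresses": [{"ip": "10.1.0.5"}, {"ip": "10.1.0.6"}],
            "ports": [{"port": 8080}],
        }
    ]
}
```

With the per-pod list in hand, Ocelot's own RoundRobin load balancer can rotate over pod IPs directly instead of handing every request to the single virtual service IP.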