
Question about Ocelot LoadBalancer and Kubernetes Service Discovery Provider #1126

Closed
ussamoo opened this issue Feb 10, 2020 · 4 comments

@ussamoo
Contributor

ussamoo commented Feb 10, 2020

Expected Behavior / New Feature

When using the Kubernetes service discovery provider and a LoadBalancer of type RoundRobin, Ocelot should balance traffic between the running pods.

Actual Behavior / Motivation for New Feature

Ocelot sends all traffic to the same pod without balancing.

Steps to Reproduce the Problem

  1. Use the configuration below:

     {
       "GlobalConfiguration": {
         "ServiceDiscoveryProvider": {
           "Namespace": "my-service-name",
           "Type": "Kube"
         }
       },
       "ReRoutes": [
         {
           "DownstreamPathTemplate": "/{everything}",
           "DownstreamScheme": "http",
           "ServiceName": "my-service-name",
           "UpstreamPathTemplate": "/api/{everything}",
           "LoadBalancerOptions": {
             "Type": "RoundRobin"
           },
           "UpstreamHttpMethod": []
         }
       ]
     }
  2. Run 3 instances of a pod named "my-pod-name".
  3. Create a service named "my-service-name" whose selector matches the "my-pod-name" pods (a sketch of both objects follows this list).
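
For reference, a minimal sketch of the objects described in steps 2 and 3 might look like the following (names, labels and image are illustrative placeholders, not taken from the original report):

     # Deployment running 3 replicas of the pod (illustrative only)
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: my-pod-name
     spec:
       replicas: 3
       selector:
         matchLabels:
           app: my-pod-name
       template:
         metadata:
           labels:
             app: my-pod-name
         spec:
           containers:
             - name: my-pod-name
               image: example/my-service:latest   # placeholder image
               ports:
                 - containerPort: 80
     ---
     # Service selecting those pods; Ocelot resolves "my-service-name"
     # through the Kube service discovery provider
     apiVersion: v1
     kind: Service
     metadata:
       name: my-service-name
     spec:
       selector:
         app: my-pod-name
       ports:
         - port: 80
           targetPort: 80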

Specifications

  • Version: 14.0.5
  • Platform: Linux
  • Subsystem:

Ocelot's KubeProvider correctly retrieves the information (IP and port) for the service named "my-service-name" and sends traffic TO THE SERVICE using that IP/port.

The first issue here is that the service always sends traffic to the same pod; the second issue is that the load balancing is not controlled by Ocelot.

Wouldn't it be better to discover the pod endpoints behind the service name and send traffic directly to those endpoints?

I tested this scenario with Ocelot and it works well, but I'm not sure whether it's a good idea.

Doing so would, however, mean that the load balancing can be controlled by Ocelot.
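
To make the distinction concrete (assuming a standard kubectl setup; these commands are an illustration, not part of the original report): the Service exposes a single virtual ClusterIP, while the Endpoints object behind it lists the individual pod IP:port pairs that a gateway could round-robin across directly.

    # Shows the single virtual ClusterIP that the provider currently resolves
    kubectl get service my-service-name

    # Shows the individual pod IP:port pairs behind the service, i.e. the
    # addresses Ocelot would need in order to load-balance itself
    kubectl get endpoints my-service-name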

@enriko-riba

I know this is old and you already provided a PR, but just to clarify the issue:

There is nothing wrong with sending the traffic to the virtual service IP, but above the TCP/IP routing sits the HTTP layer with its connection concept. Basically, once you establish an HTTP connection with an endpoint you are stuck with that connection/endpoint. The connection is kept alive until the pod resets it (which is not going to happen) or the connection times out (HTTP linger, keep-alive). IIRC the timeout is about 4 minutes.

There is one additional layer controlled via the HTTP header "Connection: keep-alive", which is the default. You can test this with Postman or a similar tool by sending traffic to a LoadBalancer service and observing that only one pod consumes all the traffic. If you change the Connection header to "close", all pods will receive traffic, in random order (not round robin).
This comes at the cost of every request having to renegotiate the connection. The TCP handshake takes roughly as long as the whole following request/response, so congrats, we just doubled the latency or halved the throughput (for small requests without a body, like GET).

This is something that needs to be taken into account when implementing routing to pod endpoints.
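
A rough way to see this from .NET code (a sketch using placeholder names and URLs, not something from the original comment): HttpClient pools and reuses connections by default, so repeated requests to the service's virtual IP ride the same TCP connection and hit the same pod, while setting ConnectionClose forces a new connection, and a new handshake, per request.

    // Minimal sketch: default keep-alive vs. "Connection: close".
    // The URL is a placeholder for the service's virtual IP / DNS name.
    using System.Net.Http;
    using System.Threading.Tasks;

    class ConnectionDemo
    {
        static async Task Main()
        {
            var client = new HttpClient();
            var url = "http://my-service-name/api/values"; // placeholder

            // Default: keep-alive, so the pooled connection (and pod) is reused.
            await client.GetAsync(url);

            // Forcing the connection to close makes the next request open a new
            // TCP connection, which kube-proxy may route to a different pod,
            // at the cost of an extra handshake per request.
            using var request = new HttpRequestMessage(HttpMethod.Get, url);
            request.Headers.ConnectionClose = true;
            await client.SendAsync(request);
        }
    }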

@ussamoo
Contributor Author

ussamoo commented Apr 3, 2020

Thanks @enriko-riba, so what do you recommend?

@enriko-riba

enriko-riba commented Apr 3, 2020 via email

@TomPallister
Member

Fixed by #1134.
