Kong Upstream Time Drastically Improves When Not Using Ingress Controller #7240
Replies: 3 comments
-
@probably-not Would you be able to compare the difference of one service/route/upstream/target set from
-
I only have one plugin (prometheus, as a global plugin). That's actually where I'm getting the p99 data, from the latency buckets reported by the prometheus plugin. I can try to compare the differences, although adding the ingresses back via the ingress controller is potentially damaging, since it would put traffic back on the slower ingresses; so I'll have to wait until tomorrow when we have a lower-traffic window, and I will update when I have the numbers. The main difference that I know of is the number of upstream targets. When using the ingress controller, it defines each pod individually as a target, so there are between 150 (at low traffic) and 600 (at high traffic) targets. When I switched to a single target pointing at the Kubernetes host name, there was only the one target. Could that lead to an issue in the speed?
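For reference, the target-count difference can be inspected directly via the Admin API. A minimal sketch, assuming the Admin API listens on localhost:8001 and a hypothetical upstream name (the real ingress-controller-generated name will differ); note the targets endpoint is paginated, so this counts only the first page:

```shell
# Count the targets Kong currently holds for one upstream.
# NOTE: upstream name is a hypothetical placeholder; the Admin API
# paginates, so this counts only the first page of results.
curl -s http://localhost:8001/upstreams/my-service.default.80.svc/targets \
  | jq '.data | length'
```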
-
I unfortunately can't provide the information from the DB, since rolling back to the ingress-controller-managed ingresses is a problem for us (it affects our latencies and uptime). As a side note: if you could break down the latency metrics into more than just "proxy"/"upstream"/"request", so that we know which steps are actually part of each section, that would help tremendously. That way we could determine whether the time is going to retries, DNS resolution, target resolution, etc.
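Until a finer breakdown exists, the p99 mentioned in this thread can be derived from the prometheus plugin's histogram buckets. A sketch of the query, assuming an older-style metric name (`kong_latency_bucket` with a `type` label; metric names differ across Kong versions) and a Prometheus server reachable on localhost:9090:

```shell
# p99 of Kong's reported upstream latency over the last 5 minutes.
# NOTE: metric and label names are an assumption for older Kong
# versions; check /metrics on the node for the exact names.
curl -s http://localhost:9090/api/v1/query \
  --data-urlencode 'query=histogram_quantile(0.99, sum(rate(kong_latency_bucket{type="upstream"}[5m])) by (le))'
```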
-
Summary
I have a GKE cluster using Kong as the Ingress Controller. Until now, I've been using the Kubernetes Ingress Controller to manage routes and services, everything according to the documentation.
Yesterday, we decided to switch 50% of our traffic from the ingress-controller-managed services/routes/upstreams/targets to a service defined directly in the Admin API, using an upstream that targets the Kubernetes host name of the service instead of letting the ingress controller define targets for the pods directly. Miraculously, this resulted in a 10x improvement in the p99 of the upstream time, going from 100ms to 10ms. There were no changes to the service itself; the only change was moving Kong from the ingress-controller-managed services/routes/upstreams/targets to an upstream with a single target, as opposed to the many per-pod targets set by the controller.
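The manual setup described above can be sketched as plain Admin API calls. All names here (service, route, upstream, Kubernetes DNS name, path) are hypothetical placeholders, and the Admin API is assumed to be on localhost:8001:

```shell
# One upstream whose single target is the Kubernetes Service DNS name,
# so kube-proxy balances across pods instead of Kong tracking each pod.
curl -s -X POST http://localhost:8001/upstreams \
  -d name=my-api-upstream
curl -s -X POST http://localhost:8001/upstreams/my-api-upstream/targets \
  -d target=my-api.default.svc.cluster.local:80

# A service/route pair that proxies through that upstream.
curl -s -X POST http://localhost:8001/services \
  -d name=my-api -d host=my-api-upstream -d port=80
curl -s -X POST http://localhost:8001/services/my-api/routes \
  -d name=my-api-route -d 'paths[]=/my-api'
```

With this layout Kong's load balancer effectively has one entry and the actual balancing happens in kube-proxy, which matches the behavior change described in the summary.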
Steps To Reproduce
Additional Details & Logs