Current Behavior

Installing a controller+Kong Deployment with the default LoadBalancer type proxy Service, in a cluster that does not actually provide LoadBalancers, prevents the controller from ingesting configuration: Ingresses do not result in routes, etc.
Expected Behavior
The controller proceeds as usual and starts handling configuration even though it cannot determine the status of the LoadBalancer.
Steps To Reproduce
1. Start a KIND cluster with no arguments (so no LB provisioning)
2. `helm install trk /tmp/symkong --set ingressController.admissionWebhook.enabled=true --set ingressController.env.anonymous_reports=false`
3. Create a basic Ingress, e.g. https://docs.konghq.com/kubernetes-ingress-controller/2.0.x/guides/getting-started/
4. Check the admin API route list. There will be no routes.
5. Edit the proxy Service and change its type to NodePort.
6. Check routes again.
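For step 5, the relevant change is only the Service type; a minimal sketch of the edited spec, assuming the Helm release name `trk` produces a proxy Service named `trk-kong-proxy` (the actual name may differ, check `kubectl get svc` first):

```yaml
# Hypothetical Service name derived from the release name above.
apiVersion: v1
kind: Service
metadata:
  name: trk-kong-proxy
spec:
  type: NodePort  # previously LoadBalancer, which a bare KIND cluster never provisions
```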
Is there an existing issue for this?

Kong Ingress Controller version

Kubernetes version

No response

Anything else?

Initially I thought this was directly related to this line in the status updater:

kubernetes-ingress-controller/internal/ctrlutils/ingress-status.go
Line 54 in 7939ad9

log.V(util.InfoLevel).Info("LoadBalancer type Service for the Kong proxy is not yet provisioned, waiting...", "service", publishService)

which spams constantly with this configuration. That call sits inside the status update loop, however, which should be separate from the rest of the controller and should not obviously block config updates. The logs nonetheless clearly show that the regular config sync message does not appear until after editing the Service type:

logs.txt
cmds.txt

We probably need to do a bit more digging to see where this blocks (unless deleting that block from status does magically resolve the issue).

Although the status code above was blocking here, it should not have worked previously. The failure to process configuration occurs because sendconfig blocks on sending to the status update channel, and the status subsystem does not start receiving on that channel until after its setup finishes. Prior to #1931, we simply failed immediately if LB IP info was not available, and we would not start receiving. After it, we wait indefinitely for an LB IP to become available, and if that never happens we never start receiving.