Replies: 10 comments
-
Hi @spbkelt This is expected behavior when
-
Hi @pleshakov Thank you for shedding some light on this. |
-
@spbkelt With |
-
@pleshakov thank you. |
-
Hi @spbkelt Setting the |
-
Hi @pleshakov, I'm facing exactly the same issue, but in my case I only have a single worker node.
Every nginx-ingress resource deploys successfully, but the NLB target group's /healthz endpoint returns 503. Any idea what I'm missing? Thanks
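For anyone else hitting this: a common cause is the Service's external traffic policy. With `externalTrafficPolicy: Local` (which ingress controller charts often use to preserve client IPs), the NLB health check probes a per-node kube-proxy endpoint that only reports healthy if an ingress pod is actually running on that node. A minimal sketch of the relevant Service fields, assuming the Service was created by the Helm chart (field names are from the Kubernetes Service spec; the name and port value are illustrative):

```yaml
# Sketch of the LoadBalancer Service the chart creates (values illustrative).
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  # Local: NLB health checks only pass on nodes that run an ingress pod.
  externalTrafficPolicy: Local
  # kube-proxy serves /healthz on this node port for the NLB to probe;
  # Kubernetes allocates it automatically (the value here is made up).
  healthCheckNodePort: 32518
```

If the pod is running on the node but the check still fails, it is worth confirming the target group is actually pointed at this health-check node port rather than the service node port.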
-
It is VERY annoying to see a software upgrade that makes things harder. The NLB with the new ingress controller is a complete disaster, and it is even more painful when adding an AWS ACM certificate to the NLB to terminate HTTPS traffic. Yet we are told by AWS not to use the legacy Classic ELB, while this NLB is always an extra pain to use.
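For reference, terminating TLS at the NLB with an ACM certificate is typically configured through annotations on the controller's Service. A hedged sketch, assuming the nginx-ingress Helm chart's `controller.service.annotations` values path and the annotation keys used by the AWS cloud provider / AWS Load Balancer Controller (the ARN is a placeholder, not a real certificate):

```yaml
controller:
  service:
    annotations:
      # Placeholder ARN; substitute your own ACM certificate ARN.
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
      # Terminate TLS on the NLB for port 443 only.
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      # Speak plain TCP to the backend pods after TLS termination.
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
```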
-
I actually have the controller deployed as a DaemonSet, and I'm still seeing this. Ingress controller pods are indeed running on all nodes, but querying |
-
If it is still relevant: when a health check on the node returns 503, what is the output of the |
-
I had this problem using the default HTTP /healthz target. Switching the health check to TCP seems to work well for me.
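In case anyone wants to try the same workaround: the health check protocol can usually be switched via a Service annotation. A sketch, assuming the nginx-ingress chart's `controller.service.annotations` values path and the annotation key used by the AWS Load Balancer Controller for NLBs:

```yaml
controller:
  service:
    annotations:
      # Use a plain TCP connect check instead of HTTP GET on /healthz.
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "tcp"
```

Note that a TCP check only verifies the port accepts connections; it hides the underlying reason the HTTP /healthz check was failing rather than fixing it.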
-
I've created an AWS NLB via a Helm chart with two listeners (ports 80 and 443). Then I deployed the chart with the latest ingress controller chart version:
helm upgrade --install ***-external-nlb -f values.yml nginx-stable/nginx-ingress --version 0.6.1 --namespace nginx-ingress
It ran successfully.
In the AWS console I see that Helm provisioned two target groups (it seems one per NLB listener) with 4 instances in each, and only one node is healthy according to the health check probes.
So I checked this endpoint from a separate VM in the same VPC.
First, one that is working and healthy:
Now an unhealthy one:
So this request returns the correct HTML body content and a 503 at the same time, which is pretty weird.
I tried to grab some logs from the Nginx Ingress Controller pod.
No issues or errors show up there.
The content of values.yml is the following:
(Company name references are masked with stars: ***.)
So in the end we have these resources created in the Kubernetes cluster.
I don't understand how to troubleshoot this issue any deeper.
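A sketch of how one could dig further (all names in angle brackets are placeholders; the release/namespace names are assumptions based on the Helm command above, and kube-proxy's /healthz reply typically includes a `localEndpoints` count):

```
# Hypothetical names; adjust to your release and namespace.
# 1. With externalTrafficPolicy: Local, kube-proxy exposes a dedicated
#    health-check node port that the NLB target group probes.
kubectl -n nginx-ingress get svc <release>-nginx-ingress \
  -o jsonpath='{.spec.externalTrafficPolicy} {.spec.healthCheckNodePort}'

# 2. Probe that port from a VM in the same VPC; the JSON reply reports
#    how many local ingress pods back the Service on that node.
curl -s http://<node-private-ip>:<health-check-node-port>/healthz

# 3. Cross-check with what AWS itself sees for the target group.
aws elbv2 describe-target-health --target-group-arn <target-group-arn>
```

If step 2 returns 503 with zero local endpoints on a node that the NLB still targets, that mismatch between target registration and pod placement is the thing to chase.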