NGINX Ingress controller version: 0.27.1
Kubernetes version (use kubectl version): 1.14.8
Environment:
Cloud provider or hardware configuration: Azure (AKS)
OS (e.g. from /etc/os-release): aks-ubuntu
Kernel (e.g. uname -a): Linux aks-apps-35501747-vmss00000V 4.15.0-1064-azure #69-Ubuntu SMP Tue Nov 19 16:58:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Others:
What happened:
nginx-ingress 0.27.1 with an HPA based on CPU shows strange behavior in AKS 1.14.8: the HPA keeps creating and terminating pods.
What you expected to happen:
The HPA should hold a stable replica count under the same workload, as it did with 0.25.1.
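For context, this is the standard CPU-based autoscaling/v1 HPA; a minimal sketch of the kind of object in play (names, namespace, and thresholds here are illustrative, not copied from our manifests):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller   # illustrative name
  namespace: ingress-nginx         # illustrative namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50  # scale when average CPU passes 50% of requests
```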
How to reproduce it:
It is easy to reproduce with this workload (we saw the same behavior in other production environments): with this version, the HPA is constantly creating and terminating pods.
We saw the SIGTERM signal in the logs, and in the seconds before it the pod was hitting its CPU limit.
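The flapping is visible with standard kubectl checks; a sketch (namespace and resource names are illustrative):

```sh
# Watch the HPA scale up and down (for us, roughly every 8 minutes)
kubectl -n ingress-nginx get hpa -w

# Per-pod CPU usage; pods sit at their CPU limit just before termination
kubectl -n ingress-nginx top pods

# Scaling events on the HPA, and the SIGTERM in the controller logs
kubectl -n ingress-nginx describe hpa nginx-ingress-controller
kubectl -n ingress-nginx logs <terminating-pod-name> | grep -i sigterm
```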
We rolled back to 0.25.1; as you can see in the previous picture, the HPA comes back to normal, without pods scaling up and down every 8 minutes (the rollback is sketched below).
Anything else we need to know:
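The rollback amounts to pinning the controller image back to 0.25.1; a minimal sketch, assuming the stock deployment and container names (yours may differ):

```sh
kubectl -n ingress-nginx set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
```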
/kind bug