KEDA Operator crash with SO option restoreToOriginalReplicaCount #2872
Comments
Thanks for notifying it, @karimrut!

Hi @karimrut, can you run:
kubectl get deploy keda-operator -n KEDA-NAMESPACE -o jsonpath="{.spec.template.spec.containers[0].image}"

Hi @JorTurFer! I tried to do it from scratch without our CI/CD. I just installed KEDA directly and got the same result, in 2 different EKS clusters. SO yaml:
Installing KEDA.
Creating SO only.

Thanks for reporting, I will check that @JorTurFer!

@karimrut good catch. It was a corner case not covered properly. The referenced PR fixes this. Thanks for reporting.

Thanks a lot for the help, @zroubalik!
Report
We are having an issue with KEDA and the advanced option "restoreToOriginalReplicaCount".
If the SO is created with an existing deployment as the scaleTargetRef, there is no issue.
Sometimes, however, the SO is created when the deployment no longer exists. Creating the SO by itself is not a problem, but if we delete the SO before any deployment has started to use it, the keda-operator crashes with:
panic: runtime error: invalid memory address or nil pointer dereference
Every time this happens, the last debug log before the crash is:
DEBUG scalehandler ScaleObject was not found in controller cache
The SO deletion then hangs: kubectl reports “scaledobject.keda.sh "cron-scaledobject" deleted” even though the object is still there.
To resolve the issue you have to edit the SO and remove the two lines “finalizers:” and “- finalizer.keda.sh”.
KEDA will then run as normal.
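As a sketch of the same workaround done non-interactively (object name and namespace taken from this report; adjust for your cluster), the finalizers can be cleared with kubectl patch:

```shell
kubectl patch scaledobject cron-scaledobject -n default \
  --type merge -p '{"metadata":{"finalizers":null}}'
```

Once the finalizer entry is gone, Kubernetes completes the pending deletion on its own.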
If you do not use the “restoreToOriginalReplicaCount” and you delete the SO there is no problem.
If the KEDA operator is not running it’s also not an issue even if the restoreToOriginalReplicaCount is set to true.
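For reference, a minimal cron ScaledObject of the shape described might look like the following; the deployment name, schedule, and replica count are illustrative assumptions, not the reporter's actual manifest:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment        # hypothetical; the bug triggers when this target does not exist
  advanced:
    restoreToOriginalReplicaCount: true
  triggers:
    - type: cron
      metadata:
        timezone: Etc/UTC
        start: 0 8 * * *
        end: 0 18 * * *
        desiredReplicas: "2"
```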
Expected Behavior
Expect the operator to keep running as normal.
Actual Behavior
The operator is crashing with:
panic: runtime error: invalid memory address or nil pointer dereference
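This kind of panic is consistent with dereferencing a scale target that was never resolved because the deployment no longer exists. A schematic Go sketch of the missing nil guard (illustrative only, not KEDA's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// Deployment stands in for the scale target; nil means it was never found.
type Deployment struct {
	Replicas int32
}

// restoreReplicas sketches the corner case: when restoreToOriginalReplicaCount
// is set but the target deployment no longer exists, the pointer is nil and
// must be checked before use, otherwise the dereference panics with
// "invalid memory address or nil pointer dereference".
func restoreReplicas(target *Deployment, original int32) error {
	if target == nil {
		return errors.New("scale target not found, skipping replica restore")
	}
	target.Replicas = original
	return nil
}

func main() {
	// Target exists: restore succeeds.
	d := &Deployment{Replicas: 0}
	fmt.Println(restoreReplicas(d, 3), d.Replicas)

	// Target missing: a guarded error is returned instead of panicking.
	fmt.Println(restoreReplicas(nil, 3))
}
```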
Steps to Reproduce the Problem
1. Create a ScaledObject with restoreToOriginalReplicaCount: true whose scaleTargetRef deployment does not exist.
2. Delete the SO before any deployment has used it:
kubectl delete so cron-scaledobject -n default
The operator will then crash.

Logs from KEDA operator
KEDA Version
2.6.1
Kubernetes Version
1.21
Platform
Amazon Web Services
Scaler Details
Cron
Anything else?
No response