fix: unnecessary HPA updates when cpu utilization trigger is used #5822
To address the continuous updates triggered by discrepancies between the ScaledObject and the Kubernetes HPA v2 object when CPU utilization triggers are used, I have implemented logic similar to the conversion logic used for HPA v1.
This change makes the handling of CPU utilization triggers consistent and prevents the KEDA operator from repeatedly detecting differences and updating the HPA. The solution mimics the behavior observed when converting from HPA v1, maintaining stability and the expected behavior within the system.
However, on newer clusters (Kubernetes >= 1.27) the generated HPA may not match user expectations: when CPU utilization triggers are configured, the CPU utilization metric is placed last, and if multiple CPU utilization triggers are configured, only the first one takes effect. A sketch of this normalization follows.
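To illustrate the idea, here is a minimal sketch (not the actual PR code) of how the desired metric specs could be normalized before comparing them with the live HPA. The helper name `moveCPUUtilizationLast` is hypothetical; the types come from the upstream `k8s.io/api/autoscaling/v2` and `k8s.io/api/core/v1` packages:

```go
package normalize

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
)

// moveCPUUtilizationLast is a hypothetical helper illustrating the approach:
// it reorders the desired metric specs so that the first CPU utilization
// Resource metric ends up at the end of the slice, matching the ordering
// Kubernetes produces when round-tripping through the HPA v1 API. Any
// additional CPU utilization metrics are dropped, since only one can be
// represented in v1.
func moveCPUUtilizationLast(metrics []autoscalingv2.MetricSpec) []autoscalingv2.MetricSpec {
	var cpuUtilization *autoscalingv2.MetricSpec
	out := make([]autoscalingv2.MetricSpec, 0, len(metrics))

	for i := range metrics {
		m := metrics[i]
		isCPUUtilization := m.Type == autoscalingv2.ResourceMetricSourceType &&
			m.Resource != nil &&
			m.Resource.Name == corev1.ResourceCPU &&
			m.Resource.Target.Type == autoscalingv2.UtilizationMetricType

		if isCPUUtilization {
			if cpuUtilization == nil {
				// Keep only the first CPU utilization trigger.
				cpuUtilization = &m
			}
			continue
		}
		out = append(out, m)
	}

	// Append the CPU utilization metric last so the generated spec matches
	// what the API server reports, avoiding spurious diffs on reconcile.
	if cpuUtilization != nil {
		out = append(out, *cpuUtilization)
	}
	return out
}
```

With the desired metrics normalized this way, a plain comparison against the HPA returned by the API server no longer reports a difference on every reconcile loop, so the operator stops issuing redundant updates.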
Checklist
Fixes #5821